2305.00579
RAPID: Autonomous Multi-Agent Racing using Constrained Potential Dynamic Games
In this work, we consider the problem of autonomous racing with multiple agents where agents must interact closely and influence each other to compete. We model interactions among agents through a game-theoretical framework and propose an efficient algorithm for tractably solving the resulting game in real time. More specifically, we capture interactions among multiple agents through a constrained dynamic game. We show that the resulting dynamic game is an instance of a simple-to-analyze class of games. Namely, we show that our racing game is an instance of a constrained dynamic potential game. An important and appealing property of dynamic potential games is that a generalized Nash equilibrium of the underlying game can be computed by solving a single constrained optimal control problem instead of multiple coupled constrained optimal control problems. Leveraging this property, we show that the problem of autonomous racing is greatly simplified and develop RAPID (autonomous multi-agent RAcing using constrained PotentIal Dynamic games), a racing algorithm that can be solved tractably in real-time. Through simulation studies, we demonstrate that our algorithm outperforms the state-of-the-art approach. We further show the real-time capabilities of our algorithm in hardware experiments.
Yixuan Jia, Maulik Bhatt, Negar Mehr
2023-04-30T21:22:04Z
http://arxiv.org/abs/2305.00579v1
# RAPID: Autonomous Multi-Agent Racing using Constrained Potential Dynamic Games

###### Abstract

In this work, we consider the problem of autonomous racing with multiple agents where agents must interact closely and influence each other to compete. We model interactions among agents through a game-theoretical framework and propose an efficient algorithm for tractably solving the resulting game in real time. More specifically, we capture interactions among multiple agents through a constrained dynamic game. We show that the resulting dynamic game is an instance of a simple-to-analyze class of games. Namely, we show that our racing game is an instance of a constrained dynamic potential game. An important and appealing property of dynamic potential games is that a generalized Nash equilibrium of the underlying game can be computed by solving a single constrained optimal control problem instead of multiple coupled constrained optimal control problems. Leveraging this property, we show that the problem of autonomous racing is greatly simplified and develop RAPID (autonomous multi-agent RAcing using constrained PotentIal Dynamic games), a racing algorithm that can be solved tractably in real-time. Through simulation studies, we demonstrate that our algorithm outperforms the state-of-the-art approach. We further show the real-time capabilities of our algorithm in hardware experiments.

## I Introduction

Autonomous racing is gaining popularity because of its broad applicability in various competitive and non-cooperative motion planning scenarios. Multi-agent autonomous racing is a highly challenging motion planning task, requiring multiple nonlinear agents to plan their motions in real time while operating at their limits. Furthermore, they must account for other agents with conflicting objectives and ensure safety constraints, such as avoiding collisions and staying on the track. This results in a set of coupled motion planning problems that are highly nonlinear and complex. Such complexities require efficient motion planning algorithms for real-time capabilities and to ensure safety while generating competitive trajectories.

One of the earlier approaches to autonomous racing in the context of RC cars was [1], where the authors employed an optimization-based model predictive controller to maximize progress on the track, subject to safety requirements. However, due to the reactive nature of such approaches, they do not generate competitive trajectories. Learning-based approaches for autonomous racing were studied in [2, 3, 4, 5]. However, these methods mainly focus on the path-planning aspect of racing instead of interactions among the agents and may fail to generate competitive behaviors such as blocking and overtaking. Due to the interactive nature of racing, where each agent has to account for other agents' decisions, interactions among agents can naturally be captured in a game-theoretic framework. Game-theoretic planning has been extensively used in non-cooperative motion planning for multiple agents [6, 7, 8, 9, 10, 11]. Due to its success in motion planning, game theory has also recently been applied in autonomous racing. Non-cooperative game-theoretic planners for autonomous racing involving two agents were studied in [12, 13]. However, these works apply to only two-agent settings. Going beyond two agents, it is often computationally challenging to account for interactions among multiple agents due to the nonlinearities inherent in racing.
In [14], a game-theoretic planner was proposed for racing among multiple agents using sensitivity-based analysis. However, the proposed algorithm does not scale well with the number of agents. A game-theoretic planner with data-driven identification of vehicle models was developed for head-to-head autonomous racing in [15]. This work uses Stackelberg strategies, which are not generally suitable for multi-agent racing as they normally involve a leader-follower structure.

In this work, we present RAPID, an autonomous multi-agent racing algorithm that uses constrained dynamic potential games. We pose the racing problem as a non-cooperative constrained general-sum dynamic game and seek the Nash equilibria of the game. We show that although, in general, finding generalized Nash equilibria -- Nash equilibria of constrained dynamic games -- is very challenging, the equilibria of our racing game can be found tractably and efficiently. Our key insight is that our racing game is an instance of a _dynamic potential game_ for which equilibria always exist and can be found by solving a single constrained optimal control problem. The advantage of formulating the problem as a dynamic potential game is that it is generally more efficient to solve the underlying multivariate optimal control problem than to solve a set of coupled optimal control problems [16, 17]. We leverage this property and develop a tractable planning algorithm for racing among multiple agents. We compare our method with the state-of-the-art and demonstrate that our method beats the existing work in terms of both the computation time and the quality of the generated trajectories. We further demonstrate the real-time capabilities of our method in a hardware racing experiment involving three quadcopters.

Fig. 1: Visualizations of two of the hardware experiments using two Crazyflies. As can be seen in the left figure, even when the ego drone starts behind, it overtakes the opponent and ultimately blocks the opponent to avoid being overtaken. On the right, it can be seen that when the ego drone starts ahead of the opponent, it generates blocking behavior twice to avoid being overtaken. No collisions occurred in the hardware experiments, and the ego drone won the race in both instances.

## II Problem Formulation

### _Notations_

We consider the problem of autonomous racing in discrete time. Let \(T\in\mathbb{N}\) denote the number of time steps in the problem, where \(\mathbb{N}\) denotes the set of natural numbers. For any natural number \(n\), let \([n]\coloneqq\{1,\ldots,n\}\) be the set of all natural numbers smaller than or equal to \(n\). Let \([N]\), where \(N\in\mathbb{N}\), denote the set of agents' indices, where each index corresponds to an agent. For each agent \(i\in[N]\), agent \(i\)'s state is denoted as \(x^{i}\in\mathcal{X}^{i}\subset\mathbb{R}^{n_{i}}\), where \(\mathcal{X}^{i}\) is the state space for agent \(i\) and \(n_{i}\in\mathbb{N}\) is the dimension of the state space of agent \(i\). Similarly, the control action for each agent \(i\in[N]\) is denoted as \(u^{i}\in\mathcal{U}^{i}\subset\mathbb{R}^{m_{i}}\), where \(\mathcal{U}^{i}\) is the action space for agent \(i\) and \(m_{i}\in\mathbb{N}\) is the dimension of the control space of agent \(i\). Let \(\mathcal{X}\coloneqq\mathcal{X}^{1}\times\ldots\times\mathcal{X}^{N}\) denote the entire state space of the system and let \(\mathcal{U}\coloneqq\mathcal{U}^{1}\times\ldots\times\mathcal{U}^{N}\) be the action space of all agents participating in the race.
We denote the state of all agents at time step \(t\) as \(x_{t}\coloneqq(x_{t}^{1^{\top}},\ldots,x_{t}^{N^{\top}})^{\top}\in\mathcal{X}\subset\mathbb{R}^{n}\), and the control action of all agents at time \(t\) as \(u_{t}\coloneqq(u_{t}^{1^{\top}},\ldots,u_{t}^{N^{\top}})^{\top}\in\mathcal{U}\subset\mathbb{R}^{m}\), where \(n\coloneqq\sum_{i\in[N]}n_{i}\) and \(m\coloneqq\sum_{i\in[N]}m_{i}\). Furthermore, we use \(u_{t}^{-i}:=(u_{t}^{1^{\top}},\ldots,u_{t}^{i-1^{\top}},u_{t}^{i+1^{\top}},\ldots,u_{t}^{N^{\top}})^{\top}\in\mathcal{U}^{-i}\), where \(\mathcal{U}^{-i}:=\Pi_{j\neq i}\mathcal{U}^{j}\), to denote the control actions of all agents at time step \(t\) except for agent \(i\). With a slight abuse of notation, we can write \(u_{t}=(u_{t}^{i^{\top}},u_{t}^{-i^{\top}})^{\top}\). We assume that for each agent \(i\in[N]\), its state at every time step \(x_{t}^{i}\) evolves according to the discrete dynamics \(x_{t+1}^{i}=f^{i}(x_{t}^{i},u_{t}^{i},t)\), where \(f^{i}:\mathcal{X}^{i}\times\mathcal{U}^{i}\times\{0,\ldots,T-1\}\rightarrow\mathbb{R}^{n_{i}}\), and \(x_{t}^{i}\), \(u_{t}^{i}\) denote the state and action of agent \(i\) at time step \(t\). We define the overall system dynamics as \(f\coloneqq\left(f^{1},\ldots,f^{N}\right)\). Then, the evolution of the joint state of the system is described by:

\[x_{t+1}=f(x_{t},u_{t},t). \tag{1}\]

Fig. 2: Experiment snapshots of the race among three quadrotors. Green is used to represent the ego quadrotor, which uses our trajectory planner RAPID and exhibits an overtaking maneuver around the two quadrotors that use reactive MPC. The top row consists of top views that are plotted with the actual hardware experiment data. The second row shows snapshots of the actual hardware experiment. For visualization purposes, the approximated boundaries of the track are marked by the white lines in the second row. We see that the ego (green) is trying to block the agent marked in blue. Videos can be found at: [https://youtu.be/85PYCj6vUd4](https://youtu.be/85PYCj6vUd4).

While racing against each other, each agent has to satisfy some constraints, such as staying on the race track and avoiding collisions with obstacles and other agents. We denote the constraints of the race track through the function

\[h(x_{t}^{i})\leq 0,\forall i\in[N],\forall t\in\{0,1,\ldots,T-1\}. \tag{2}\]

We consider collision avoidance constraints, which are defined as follows for every pair of agents:

\[d(x_{t}^{i},x_{t}^{j})>d_{min},\quad\forall i<j,\ i,j\in[N], \tag{3}\]

where \(d(x_{t}^{i},x_{t}^{j})\) is the Euclidean distance between agents \(i\) and \(j\) at time step \(t\) and \(d_{min}\) is the minimum distance below which two agents are considered to have collided. We also consider upper and lower bounds on the control inputs of each agent. For each agent \(i\in[N]\), we have

\[u_{min}^{i}\leq u_{t}^{i}\leq u_{max}^{i},\forall t\in\{0,1,\ldots,T-1\}, \tag{4}\]

where \(u_{min}^{i}\) and \(u_{max}^{i}\) are the vectors of minimum and maximum control inputs for agent \(i\), respectively. Let \(g^{i}(x,u,t)\leq 0\) denote the concatenated vector of the constraints in (2), (3) and (4) for agent \(i\), where \(g^{i}:\mathcal{X}\times\mathcal{U}\times\{0,\ldots,T\}\rightarrow\mathbb{R}^{c_{i}}\) is a vector-valued function and \(c_{i}\in\mathbb{N}\) is the number of constraints of agent \(i\). Note that the inequality is to be understood element-wise.
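To make this constraint structure concrete, the following minimal Python sketch stacks per-agent constraints of the form (2)-(4) into one combined vector \(g\leq 0\) as in (5). The ring-shaped track bound `h_track` and all numeric values are illustrative placeholders, not the paper's actual track or parameters:

```python
import numpy as np

D_MIN = 0.3                                        # placeholder d_min
U_MIN, U_MAX = np.array([-2.0, -1.0]), np.array([2.0, 1.0])

def h_track(x):
    """Hypothetical track constraint (2): keep (px, py) inside a ring."""
    r = np.hypot(x[0], x[1])
    return np.array([r - 3.0, 2.0 - r])            # h(x) <= 0 on the ring

def g_combined(xs, us):
    """Stack constraints (2)-(4) for all agents into one vector g <= 0."""
    terms = []
    for i in range(len(xs)):
        terms.append(h_track(xs[i]))               # stay on the track (2)
        terms.append(U_MIN - us[i])                # input lower bound (4)
        terms.append(us[i] - U_MAX)                # input upper bound (4)
        for j in range(i + 1, len(xs)):            # collision pairs (3)
            d = np.linalg.norm(xs[i][:2] - xs[j][:2])
            terms.append(np.array([D_MIN - d]))    # d > d_min <=> d_min - d < 0
    return np.concatenate(terms)

xs = [np.array([2.5, 0.0, 1.0, 0.0]), np.array([0.0, 2.5, 1.0, 1.57])]
us = [np.zeros(2), np.zeros(2)]
print(np.all(g_combined(xs, us) <= 0))  # True: this joint state-action is feasible
```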
It should also be noted that the constraint function of agent \(i\), in general, depends on the states and control actions of _all agents_. This results in a coupling between agents' decisions, since one agent cannot satisfy their constraint function in isolation. Hence, agents must account for such couplings while choosing their control actions. We can collect the constraints of all agents and define \(g:=({g^{1}}^{\top},\ldots,{g^{N}}^{\top})^{\top}:\mathcal{X}\times\mathcal{U}\times\{0,\ldots,T\}\rightarrow\mathbb{R}^{c}\), where \(c:=\sum_{i\in[N]}c_{i}\). Then the combined constraint function for the entire system is:

\[g(x_{t},u_{t},t)\leq 0. \tag{5}\]

We define \(\mathcal{C}_{t}\) to be the set of all feasible states and actions at time \(t\), i.e., the constrained subset of \(\mathcal{X}\times\mathcal{U}\) which satisfies all the constraints (5) at time step \(t\). We can define \(\mathcal{C}_{0}=(\mathcal{X}\times\mathcal{U})\cap\{(x_{0},u_{0}):g(x_{0},u_{0},0)\leq 0\}\) and \(\mathcal{C}_{t}=\big(\{x_{t}\in\mathcal{X}:x_{t}=f(x_{t-1},u_{t-1},t-1)\}\times\mathcal{U}\big)\cap\{(x_{t},u_{t}):g(x_{t},u_{t},t)\leq 0\}\) for \(t\in\{1,\ldots,T-1\}\). At the terminal time step \(T\), no action is taken and, hence, the constraint function is only a function of the states \((g(x_{T},(\cdot),T))\), and the corresponding constraint set is \(\mathcal{C}_{T}=\mathcal{X}\cap\{x_{T}:g(x_{T},(\cdot),T)\leq 0\}\).

### _Agents' Strategies and Objectives_

Each agent wants to choose actions sequentially to maximize their chances of winning the race. The policy through which each agent chooses its actions is called its strategy. A strategy can have various forms. In the present scenario, we consider open-loop strategies for the agents, which means that the strategy of each agent depends only on the initial state of the system and time.\({}^{1}\) Let \(\mathcal{T}=\{0,1,\ldots,T\}\). For any agent \(i\in[N]\), we define the open-loop strategy of agent \(i\), \(\gamma^{i}:\mathcal{X}\times\mathcal{T}\rightarrow\mathcal{U}^{i}\), as follows:

Footnote 1: We acknowledge that considering open-loop strategies may generate different trajectories in contrast to choosing closed-loop strategies, which can be a function of the system's state at every time step. However, we implement the open-loop strategies in a receding horizon fashion to account for the new information at each time step and mimic feedback strategies. We show empirically that this is a reasonable approximation for practical purposes in motion planning.

\[\gamma^{i}(x_{0},t):=u_{t}^{i}.\]

In other words, at each time step \(t\), given the initial state of the system \(x_{0}\), \(\gamma^{i}(x_{0},t)\) is the control action chosen by agent \(i\). We use \(\Gamma^{i}\) to denote the space of all possible strategies for agent \(i\). We define \(\gamma:=({\gamma^{1}}^{\top},\ldots,{\gamma^{N}}^{\top})^{\top}\in\Gamma\), where \(\Gamma:=\Gamma^{1}\times\ldots\times\Gamma^{N}\) is the combined strategy space of the system. Let \(\Gamma^{-i}:=\Pi_{j\neq i}\Gamma^{j}\) denote the strategy space of all agents except agent \(i\). As before, we use \(\gamma^{-i}:=({\gamma^{1}}^{\top},\ldots,{\gamma^{i-1}}^{\top},{\gamma^{i+1}}^{\top},\ldots,{\gamma^{N}}^{\top})^{\top}\in\Gamma^{-i}\) to denote the strategies of all agents except agent \(i\).
It should be noted that open-loop strategies provide an equivalence between strategies and control actions at all time steps (\(\gamma\equiv\{u_{t}\}_{t\in\{0,\ldots,T-1\}}\)), which in turn determines the states of the whole system at all time steps given the initial state. When racing, each agent has their own objective of finishing the race before the opponents while satisfying constraints such as staying on the race track and avoiding collisions with other agents. As discussed before, let these constraints be denoted by \(g(x_{t},u_{t},t)\leq 0\). The agents' objectives should incentivize behaviors such as moving as fast as possible along the race track and blocking the other agents if they are trying to overtake. Let the objective function of each agent be denoted by a function \(J^{i}:\mathcal{X}\times\Gamma\rightarrow\mathbb{R}\), which is a function of the initial state of the game and the strategies of all the agents. Taking inspiration from [14], for each agent \(i\), we choose \(J^{i}\) to be of the form

\[J^{i}(x_{0},\gamma)=-r(x_{T}^{i})+\alpha\sum_{t=0}^{T-1}\sum_{\begin{subarray}{c}j\in[N]\\ j\neq i\end{subarray}}d(x_{t}^{i},x_{t}^{j})^{2}, \tag{6}\]

where \(r(\cdot)\) is the path covered along the race track, and \(\alpha\) is a hyperparameter that determines the amount of weight given to generating blocking behavior relative to moving along the track as fast as possible. We choose the objective functions to be of the form in (6) with the following motivations:

* For a fixed planning horizon \(T\), each agent wants to maximize the path covered along the race track, which corresponds to minimizing \(-r(x_{T}^{i})\).
* We take motivation from the sensitivity terms introduced in [14] to incentivize competitive behaviors. Each agent wants to prevent other agents from overtaking them and to remain in close proximity to opponents who are ahead of them. Our method incentivizes blocking behaviors if the agent using our method is ahead of other agents. This is because, while the first term in (6) encourages the agent to progress along the track, the second term encourages being in proximity to other agents, which generates blocking behavior if the agent using our method is ahead. On the other hand, if the opponents are ahead, minimizing the distance to opponents is equivalent to catching up. Furthermore, in such cases, due to the first term, our method will incentivize the agent not only to catch up but also to overtake.
* Note that the \(\alpha\) term in (6) indicates the aggressiveness of agents, as a larger value of \(\alpha\) will incentivize agents to be in proximity with other agents. It should also be noted that, due to the collision constraints, the distance between each pair of agents has a lower bound, which guarantees the safety of the generated trajectories.

When racing, each agent wants to win the race, which translates to minimizing its objective. However, each agent cannot minimize its objective and satisfy its constraints in isolation, as the value of its objective depends on the actions and states of other agents as well. Therefore, none of the agents can independently minimize their objectives, as the objectives and constraints are inherently coupled. Consequently, to capture the dependence of agents upon one another, they must seek equilibria of the dynamic game underlying their interactions.
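As a concrete reading of (6), the following Python sketch evaluates agent \(i\)'s cost from a joint trajectory. The progress function `progress` and the random trajectory are hypothetical stand-ins (we also assume the first two state components are the planar position), not the paper's implementation:

```python
import numpy as np

def progress(x):
    """Hypothetical progress measure r(x): here simply the px coordinate."""
    return x[0]

def cost_i(i, traj, alpha):
    """Cost (6): J^i = -r(x_T^i) + alpha * sum_{t<T} sum_{j!=i} d(x_t^i, x_t^j)^2.

    traj has shape (T+1, N, state_dim); traj[t, k] is agent k's state at t.
    """
    T, N = traj.shape[0] - 1, traj.shape[1]
    proximity = sum(
        float(np.sum((traj[t, i, :2] - traj[t, j, :2]) ** 2))
        for t in range(T) for j in range(N) if j != i
    )
    return -progress(traj[T, i]) + alpha * proximity

rng = np.random.default_rng(0)
traj = rng.normal(size=(6, 2, 4))   # T = 5 steps, N = 2 agents, 4-dim states
print(cost_i(0, traj, alpha=0.1))
```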
We denote such a non-cooperative general-sum constrained dynamic game in compact form as \(\mathcal{G}:=\left([N],\{\Gamma^{i}\}_{i\in[N]},\{J^{i}\}_{i\in[N]},\{\mathcal{C}_{k}\}_{k\in\{0,\ldots,T\}},f,x_{0}\right)\). We model the outcome of this competitive racing scenario by open-loop constrained Nash equilibria, also known as generalized Nash equilibria, of the constrained dynamic game \(\mathcal{G}\).

**Definition 1**: _A set of strategies \(\gamma^{*}\) is a generalized Nash equilibrium of the game \(\mathcal{G}=\left([N],\{\Gamma^{i}\}_{i\in[N]},\{J^{i}\}_{i\in[N]},\{\mathcal{C}_{k}\}_{k\in\{0,\ldots,T\}},f,x_{0}\right)\) if the following holds for each agent \(i\in[N]\):_

\[J^{i}(x_{0},\gamma^{*})\leq J^{i}(x_{0},\gamma^{i},\gamma^{-i^{*}})\]
\[\forall\left(x_{t},\gamma^{i}(x_{0},t),\gamma^{-i^{*}}(x_{0},t)\right)\in\mathcal{C}_{t},t\in\{0,\ldots,T-1\},\]
\[\forall\ x_{T}\in\mathcal{C}_{T}. \tag{7}\]

This definition essentially means that no agent would want to deviate from their equilibrium strategy to any other feasible strategy, as doing so would result in incurring a higher cost. It is important to notice that finding a solution to (7) requires solving a set of \(N\) coupled constrained optimal control problems, which is challenging to do tractably in practice. In the next section, we discuss how we can efficiently find the equilibria of the game in real time.

## III Dynamic Potential Games

Dynamic potential games are a class of games for which Nash equilibria can be found by solving a single optimal control problem instead of having to solve several coupled optimal control problems. If a game is a potential game, there exists a potential function \(P\), and the Nash equilibria of the game can be computed by minimizing this potential function. This largely simplifies the computation of Nash equilibria. This property of potential games has recently been leveraged in the context of multi-agent navigation and has been shown to simplify the problem of trajectory planning significantly [18, 19]. We formally define dynamic potential games as follows:

**Definition 2**: _A non-cooperative constrained dynamic game \(\mathcal{G}:=\left([N],\{\Gamma^{i}\}_{i\in[N]},\{J^{i}\}_{i\in[N]},\{\mathcal{C}_{k}\}_{k\in\{0,\ldots,T\}},f,x_{0}\right)\) is a constrained potential dynamic game if there exists a potential function \(P:\mathcal{X}\times\Gamma\rightarrow\mathbb{R}\) such that for every agent \(i\in[N]\) and every pair of strategies \(\gamma^{i}\in\Gamma^{i},\nu^{i}\in\Gamma^{i}\), once we fix the set of strategies \(\gamma^{-i}\in\Gamma^{-i}\), we have:_

\[J^{i}(x_{0},\gamma^{i},\gamma^{-i})-J^{i}(x_{0},\nu^{i},\gamma^{-i})\]
\[\qquad=P(x_{0},\gamma^{i},\gamma^{-i})-P(x_{0},\nu^{i},\gamma^{-i}). \tag{8}\]

Definition 2 essentially states that a game \(\mathcal{G}\) is a constrained dynamic potential game if there exists a global potential function that captures the change in the cost of any agent when they change their strategy while the other agents keep their strategies fixed.

**Proposition 1**: _If a game is a constrained dynamic potential game, the generalized open-loop Nash equilibria of the dynamic potential game \(\mathcal{G}\) can be computed by solving the following multivariate optimal control problem:_

\[\underset{\gamma\in\Gamma}{\text{minimize}}\quad P(x_{0},\gamma)\]
\[\text{subject to}\quad x_{k+1}=f(x_{k},u_{k},k),\ x_{0}\ \text{given},\]
\[\qquad\qquad\quad g(x_{k},u_{k},k)\leq 0. \tag{9}\]

Proof: See Theorem 1 in [20].
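To illustrate Definition 2, here is a small self-contained Python check, on a toy quadratic game of our own construction (not the racing game itself), that a unilateral change in one player's action changes that player's cost by exactly the change in the potential, as required by (8):

```python
import numpy as np

# Toy two-player potential game with scalar actions u1, u2 and a shared
# coupling term c; then P = f1 + f2 + c is a potential function.
def c(u1, u2):  return (u1 - u2) ** 2
def J1(u1, u2): return u1 ** 2 + c(u1, u2)          # f1(u1) + c
def J2(u1, u2): return (u2 - 1) ** 2 + c(u1, u2)    # f2(u2) + c
def P(u1, u2):  return u1 ** 2 + (u2 - 1) ** 2 + c(u1, u2)

rng = np.random.default_rng(1)
u1, v1, u2 = rng.normal(size=3)      # two candidate actions for player 1
lhs = J1(u1, u2) - J1(v1, u2)        # unilateral cost change, property (8)
rhs = P(u1, u2) - P(v1, u2)          # corresponding potential change
print(np.isclose(lhs, rhs))          # True
```

The same cancellation is what Theorem 1 in the next section exploits: terms of \(J^{i}\) that depend only on agent \(i\)'s own trajectory appear once in \(P\), and the shared pairwise distance terms appear once per pair.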
## IV RAPID: Scalable Planner for Racing

In this section, we show that our proposed multi-agent racing game is an instance of a constrained dynamic potential game. We prove that under the cost structure (6), the resulting constrained dynamic game is a dynamic potential game whose generalized Nash equilibria can be found by solving one single constrained optimal control problem. The following theorem characterizes the cost structures under which the game \(\mathcal{G}\) is a constrained dynamic potential game.

**Theorem 1**: _A game \(\mathcal{G}\) where every agent optimizes (6) with the constraints given in (2), (3) and (4) is a constrained dynamic potential game with the potential function_

\[P(x_{0},\gamma)=-\sum_{i=1}^{N}r(x_{T}^{i})+\alpha\sum_{t=0}^{T-1}\sum_{i=1}^{N}\sum_{j=i+1}^{N}d(x_{t}^{i},x_{t}^{j})^{2}. \tag{10}\]

_A generalized Nash equilibrium of \(\mathcal{G}\) can be computed by solving the following single optimal control problem:_

\[\underset{\gamma\in\Gamma}{\text{minimize}}\quad P(x_{0},\gamma) \tag{11}\]
\[\text{subject to}\quad u_{min}\leq u_{t}^{i}\leq u_{max},\quad\forall i\in[N],\forall t\in[T-1];\]
\[\qquad\qquad\quad d(x_{t}^{i},x_{t}^{j})>d_{min},\quad\forall i\neq j\in[N],\forall t\in[T];\]
\[\qquad\qquad\quad h(x_{t}^{i})\leq 0,\quad\forall i\in[N],\forall t\in[T]. \tag{12}\]

Proof: See Appendix A.

Note that Theorem 1 indicates that, to find the equilibria of the game, it suffices to solve (11). Since (11) is a single constrained optimal control problem, one can use any existing constrained trajectory optimizer to solve it. Our overall algorithm is described in Algorithm 1, where ego denotes the agent that uses our algorithm. Note that while the \(\alpha\) term in Algorithm 1 stays the same during each planning horizon, its value can differ between planning horizons. Since the \(\alpha\) value indicates the aggressiveness of agents, we introduce the idea of _adaptive aggressiveness scaling_, which scales \(\alpha\) depending on the average distance of ego to the other agents. In line 1 of Algorithm 1, we sum the squared distances of the ego agent to the other agents. When the average squared distance of ego to the other agents is above some threshold \(D\), we use a smaller \(\alpha\) value (\(\alpha_{inactive}\) in line 2). Otherwise, a larger value (\(\alpha_{active}\) in line 4) is used when ego is in proximity to other agents. When ego is far from the others, the notion of being "aggressive" is not relevant, and thus the aggressiveness term is inactive. When ego is in proximity to other agents, the larger \(\alpha\) value indicates that the aggressiveness term is active, and ego will try to catch up with the agent ahead or block the agent behind, depending on the relative positions, which generates overtaking and blocking maneuvers. A minimal sketch of this scaling logic is given after the listing below.

**Input:** all agents' current states \(\{x^{i}\}_{i\in[N]}\), active distance threshold \(D\), alpha inactive \(\alpha_{inactive}\), alpha active \(\alpha_{active}\).

```
1: if \(\sum_{i\in[N]}\|d(x^{ego},x^{i})\|^{2}>(N-1)D\) then
2:     \(\alpha=\alpha_{inactive}\)
3: else
4:     \(\alpha=\alpha_{active}\)
5: end if
6: solve (11) with \(\alpha\).
```

**Algorithm 1** RAPID with Adaptive Aggressiveness Scaling
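The following Python sketch mirrors Algorithm 1; the parameter values and the `solve_ocp` stub are placeholders for any constrained trajectory optimizer applied to (11), not the paper's implementation:

```python
import numpy as np

def solve_ocp(alpha):
    """Stub standing in for a constrained trajectory optimizer for (11)."""
    return alpha  # a real implementation would return planned trajectories

def rapid_step(x_ego, x_others, D=1.0, alpha_inactive=0.01, alpha_active=0.1):
    """One planning step of RAPID with adaptive aggressiveness scaling.

    D and the alpha values are illustrative placeholders, not the paper's tuning.
    """
    # Line 1 of Algorithm 1: the ego's own term d(x^ego, x^ego) = 0 vanishes,
    # so summing over the other agents and comparing with (N - 1) * D is the
    # same as comparing the average squared distance against the threshold D.
    total_sq_dist = sum(float(np.sum((x_ego - x) ** 2)) for x in x_others)
    if total_sq_dist > len(x_others) * D:
        alpha = alpha_inactive   # far from everyone: aggressiveness inactive
    else:
        alpha = alpha_active     # in proximity: catch up or block
    return solve_ocp(alpha)
```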
## V Simulation Results

In this section, we demonstrate the performance of our algorithm by considering two agents racing against each other. We use a discrete variation of the Dubins car model for each agent. The state vector of each agent \(i\in[N]\) at time step \(t\) is:

\[x^{i}_{t}=\begin{bmatrix}p^{i}_{x,t},p^{i}_{y,t},v^{i}_{t},\theta^{i}_{t}\end{bmatrix} \tag{13}\]

where \(p^{i}_{x,t},p^{i}_{y,t}\) represent the \(x,y\) coordinates of the agent in the world frame and \(v^{i}_{t},\theta^{i}_{t}\) represent its speed and orientation. Each agent's dynamics are given by:

\[x^{i}_{t+1}=\begin{bmatrix}p^{i}_{x,t}+v^{i}_{t}\cos(\theta^{i}_{t})\delta t\\ p^{i}_{y,t}+v^{i}_{t}\sin(\theta^{i}_{t})\delta t\\ v^{i}_{t}+a^{i}_{t}\delta t\\ \theta^{i}_{t}+\omega^{i}_{t}\delta t\end{bmatrix}\]

where \(a^{i}_{t},\omega^{i}_{t}\) are the acceleration and angular velocity (control inputs) of agent \(i\) at time step \(t\), and \(\delta t\) represents the time step length. At the start of each planning horizon, all agents have access to the ground truth of their own current states as well as the current states of the other agents. For simplicity, we call the agent that uses our algorithm the _ego_. We compare our algorithm with the racing algorithm developed in [14], which we call the _baseline_. The baseline algorithm uses iterative best responses and was shown to be able to generate competitive behaviors. We solve both the ego's optimization problem and the baseline's optimization problem with do-mpc [21], which models the problems symbolically with CasADi [22] and solves them with IPOPT [23]. We choose the planning horizon to be 5 time steps for each agent, where each time step corresponds to \(0.1s\). The race track that we used is shown in Fig. 3. The start and finish lines are marked with brown and gold, respectively.

Fig. 3: The top view of the customized track. The brown line is the starting line, and the gold line is the finish line.

We run the experiments between the ego and the baseline for 50 random initial positions. The initial positions are randomly sampled within a fixed area around the starting line so that the initial distance between the two agents is within \([1.0,1.5]\,m\). The agent with a leading starting position is given a maximum speed of \(2.4\ m/s\), and the other agent is given a maximum speed of \(2.5\ m/s\). Note that we chose this setup to ensure enough interactions between the two agents. The results are shown in Table I, where the solve time represents the time that it takes an agent to compute its action for one time step.

TABLE I: Simulation Results for the Customized Track

| | **Average Solve Time (s)** | **Wins (of 50)** |
| --- | --- | --- |
| **Ego** | \(0.051\pm 0.010\) | 36 |
| **Baseline** | \(0.215\pm 0.084\) | 14 |

As shown in Table I, our algorithm has a significant advantage over the baseline in terms of both the solve time and the quality of the generated trajectories. Our algorithm is about 4 times faster than the baseline while winning the race more frequently. We find that the ego is able to push the baseline against the outer boundary of the track during the first corner, and the baseline remains stuck at the boundary until the ego passes. This blocking behavior is plotted in Fig. 4.

Fig. 4: The ego (plotted in green) pushes the baseline (plotted in blue) against the boundary.

To test the scalability of our approach, we ran our algorithm and the baseline algorithm against the reactive MPC algorithm described in Equation (15), respectively. We ran simulations for various numbers of agents, where one of them used our algorithm or the baseline algorithm, while the others used the reactive MPC algorithm. The results are recorded in Table II. As indicated by the results, our algorithm scales better than the baseline.

TABLE II: Mean Solve Time (in seconds) for Different Numbers of Agents

| **Number of Agents** | **2** | **5** | **10** |
| --- | --- | --- | --- |
| **RAPID** | \(\mathbf{0.078\pm 0.046}\) | \(\mathbf{0.229\pm 0.535}\) | \(\mathbf{0.548\pm 0.400}\) |
| **Baseline** | \(0.217\pm 0.061\) | \(0.637\pm 0.110\) | \(1.456\pm 0.187\) |
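For reference, the discrete Dubins update used in these simulations can be written in a few lines of Python; this is a minimal sketch with placeholder inputs, not the simulation code itself:

```python
import numpy as np

def dubins_step(x, u, dt=0.1):
    """Discrete Dubins car update: x = [px, py, v, theta], u = [a, omega]."""
    px, py, v, th = x
    a, om = u
    return np.array([px + v * np.cos(th) * dt,
                     py + v * np.sin(th) * dt,
                     v + a * dt,
                     th + om * dt])

x = np.array([0.0, 0.0, 1.0, 0.0])
for _ in range(5):                    # a 5-step horizon, as in Section V
    x = dubins_step(x, np.array([0.5, 0.2]))
print(x)
```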
## VI Hardware Experiments

To further evaluate the real-time capabilities of our framework, we set up hardware experiments using Crazyflie 2.0 quadcopters within the Robot Operating System (ROS) framework [24]. We used a Vicon motion capture system to collect the state information of the agents. We also used Crazyswarm [25] to send waypoint commands to the Crazyflies. We then used a low-level controller to follow the waypoints. The state vector of each quadrotor \(i\in[N]\), at each time step \(t\), is:

\[x_{t}^{i}=[p_{x,t}^{i},\ p_{y,t}^{i},\ p_{z,t}^{i},\ \phi_{t}^{i},\ \theta_{t}^{i},\ \psi_{t}^{i}], \tag{14}\]

which consists of its position (\([p_{x,t}^{i},\ p_{y,t}^{i},\ p_{z,t}^{i}]\)) and orientation (\([\phi_{t}^{i},\ \theta_{t}^{i},\ \psi_{t}^{i}]\)). We model the quadrotor dynamics as \(\dot{x}_{t}^{i}=u_{t}^{i}\), where \(u_{t}^{i}=[u_{1,t}^{i},\ u_{2,t}^{i},\ u_{3,t}^{i},\ u_{4,t}^{i},\ u_{5,t}^{i},\ u_{6,t}^{i}]\) is the control input. The first three entries of \(u_{t}^{i}\) correspond to the linear velocities, and the last three entries correspond to the angular velocities. We use this simplified model since we can send waypoints directly as commands to the quadcopters through the Crazyswarm framework [25], and an in-place low-level controller will be used to track the waypoints. We also implemented a model predictive controller (MPC) for comparison purposes. This MPC planner is in fact a reactive planner that treats other agents as obstacles that need to be avoided. We call this planner _reactive MPC_. When running reactive MPC, each agent aims to maximize its progress along the track while satisfying input constraints, collision constraints, and track constraints. The reactive MPC predicts the positions of other agents by assuming that they move with constant velocity during each planning horizon, where the constant velocity comes from a finite difference estimation. For each agent \(l\in[N]\), the reactive MPC solves the following optimization problem:

\[\underset{\gamma^{l}\in\Gamma^{l}}{\text{minimize}}\quad-r(x_{T}^{l})\]
\[\text{subject to}\quad u_{min}\leq\|u_{t}^{l}\|\leq u_{max},\quad\forall t\in[T];\]
\[\qquad\qquad\quad d(x_{t}^{l},x_{t}^{j})>d_{min},\quad\forall j\in[N]\setminus\{l\},\forall t\in[T];\]
\[\qquad\qquad\quad h(x_{t}^{l})\leq 0,\quad\forall t\in[T]. \tag{15}\]

### _Two-agent Race_

We begin by considering a race between two agents, where one agent uses our algorithm while the other uses the reactive MPC algorithm. The track is shown in Fig. 1. The initial positions are chosen with some randomness. The quadrotor that starts ahead is given a maximum speed of \(1.5\ m/s\), and the one that starts behind is given a maximum speed of \(1.8\ m/s\). The results are displayed in Table III, where _Overtakings_ records the number of times the ego agent started behind but reached the finish line first, and _Defendings_ records the number of times it started ahead and reached the finish line first.

TABLE III: Results for Two-agent Hardware Experiments. Our method was able to overtake 10 times out of the 10 trials when it started behind, and it was able to defend its leading position 8 times out of the 10 trials when it started ahead.

| **Overtakings** | **Defendings** |
| --- | --- |
| 10/10 | 8/10 |

As shown by the results, the ego agent significantly outperforms the reactive MPC by winning most of the races. Fig. 1 shows both agents' trajectories in two of the hardware experiments. As we can see, the ego quadrotor is able to overtake the opponent (the quadrotor that uses reactive MPC) when it starts behind and then block the opponent when the opponent has a possibility to overtake.
In both cases, the ego is able to secure a leading position without causing any collision. This also demonstrates that our algorithm is able to exploit the trade-off between making progress and blocking opponents without causing constraint violations.

### _Three-agent Race_

We also test our algorithm in a more complicated setting with three agents and a narrower track. One of the agents uses our algorithm, and the other two use the reactive MPC algorithm. The track is similar to the one used in the two-agent race but is made narrower in order to test our algorithm in a more competitive scenario. The initial conditions are selected such that the quadrotors engage in interactions and have enough space for maneuvering. The speed limit for the leading quadrotor was set to \(1.5\ m/s\), the middle quadrotor \(1.65\ m/s\), and the quadrotor behind had a speed limit of \(1.8\ m/s\). We performed the experiments 15 times, where the ego quadrotor starts first 5 times, starts second 5 times, and starts third 5 times. The results are recorded in Table IV. As we can see, the ego was able to win most of the races, even when it started behind, or started ahead but with a lower speed limit. We observe that our algorithm is able to take advantage of the collision avoidance constraints of the other agents to generate overtaking or blocking maneuvers, and it does so without causing any collision. Fig. 2 shows an instance of such a blocking maneuver during a three-agent race. As we can see, the ego (represented in green) intentionally moves toward agent 1 (represented in blue) to block it. As a result, the ego is able to make more progress than agent 1 despite being given a lower speed limit. Note that the experimental snapshots are not taken from a top view, so the relative positions of the agents may look different from the top-view plots. For more details on the hardware experiments with three drones, we recommend that interested readers check out the videos.

## VII Conclusion and Future Work

In this work, we presented RAPID, an efficient algorithm for trajectory planning in autonomous racing. Our method exploits the special cost structure under which the race can be formulated as a potential game, which enables us to obtain a generalized Nash equilibrium of the game by solving a single constrained optimal control problem. We demonstrated the performance of the algorithm through both simulation studies and hardware experiments. We showed that RAPID can generate delicate interactive maneuvers such as overtaking and blocking while avoiding collisions with other agents.
Currently, some parameters used in RAPID, such as the active distance threshold and \(\alpha_{inactive},\alpha_{active}\) in Algorithm 1, need to be determined empirically. In future work, we wish to provide theoretical results on the effects of these parameters.
2309.13808
Asynchronous Muddy Children Puzzle (work in progress)
In this work-in-progress paper we explore using the recently introduced VLSM formalism to define and reason about the dynamics of agent-based systems. To this aim we use VLSMs to formally present several possible approaches to modeling the interactions in the Muddy Children Puzzle as protocols that reach consensus asynchronously.
Dafina Trufaş, Ioan Teodorescu, Denisa Diaconescu, Traian Şerbănuţă, Vlad Zamfir
2023-09-25T01:24:21Z
http://arxiv.org/abs/2309.13808v1
# Asynchronous Muddy Children Puzzle

###### Abstract

In this work-in-progress paper we explore using the recently introduced VLSM formalism to define and reason about the dynamics of agent-based systems. To this aim we use VLSMs to formally present several possible approaches to modeling the interactions in the Muddy Children Puzzle as protocols that reach consensus asynchronously.

## 1 Introduction

Formally modeling and reasoning about distributed systems with faults is a challenging task [3]. We recently proposed the theory of _Validating Labeled State transition and Message production systems (VLSMs)_ [10] as a general approach to modeling and verifying distributed protocols executing in the presence of faults. The theory of VLSMs has its roots in the work on verification of the CBC Casper protocol [11, 6] and follows the correct-by-construction methodology for design and development. Even though the theory of VLSMs was primarily designed for applications in faulty distributed systems, and in blockchains in particular, the framework is general and flexible enough to capture various types of problems in distributed systems. As an illustration, in this paper we show how we can use VLSMs to model and solve (an asynchronous variant of) the epistemic Muddy Children Puzzle [2].

## 2 Classical Muddy Children Puzzle

Let us begin by recalling the statement of the Muddy Children Puzzle and a classical epistemic logic approach to solving it [2]. There are \(n\) children playing together. It happens during their play that \(k\) of the children get mud on their foreheads. Their father comes and says: "At least one of you has mud on your forehead." (if \(k>1\), this expresses a fact known to each of them before he spoke). The father then asks the following question, repeatedly: "Does any of you know whether you have mud on your own forehead?". The initial assumptions are expressed in terms of common knowledge. Hence, we shall assume that it is common knowledge that the father is truthful, that all the children can and do hear the father, that all the children can and do see which of the other children besides themselves have muddy foreheads, and that all the children are truthful and intelligent enough to make all the necessary deductions during the game.

**The solution.** Let us first consider the situation before the father speaks. We model the problem by a Kripke structure \(M=(S,\models,\mathcal{K}_{1},\ldots,\mathcal{K}_{n})\) over \(\Phi\), where:

* \(\Phi=\{p_{1},\ldots,p_{n},p\}\) (\(p_{i}\) = "child \(i\) has a muddy forehead" and \(p\) = "at least one child has a muddy forehead"). Note that \(p\) could be obtained as the disjunction of all \(p_{i}\)'s; however, for simplicity one can consider it a primitive (albeit non-atomic) predicate.
* \(S\) is a set of \(2^{n}\) states (corresponding to all the possible configurations of clean and muddy children, represented as binary tuples).
* \((M,(x_{1},\ldots,x_{n}))\models p_{i}\) iff \(x_{i}=1\), and \((M,(x_{1},\ldots,x_{n}))\models p\) iff \(x_{i}=1\) for some \(i\).
* \((s,t)\in\mathcal{K}_{i}\) iff \(s\) and \(t\) are identical in all components except possibly the \(i^{th}\) one.

It is crucial to remark that, in the absence of the father's initial announcement, the fact that "there is at least one muddy child" is not common knowledge and the state of knowledge never changes, no matter how many rounds we take into account.
Indeed, after the first question, all the children will certainly answer "No", since they all consider possible the situation in which they themselves do not have mud on their forehead. No information is gained from this round, and the situation remains the same after each of the following ones, because each child considers possible a state in which they are clean. Now let us analyze how the epistemic context changes after the father speaks: as mentioned above, the common knowledge is now larger (even though, in the case with \(k\geq 2\) muddy children, \(p\) was already common knowledge), because of the _public_ nature of the announcement. Let us consider how the children reason after all of them answered "No" in the first round: it is obvious that all of them eliminate the states containing exactly one muddy child, since the others could not have all answered "No" otherwise. Continuing inductively, we obtain that after \(k\) rounds in which all the children answer "No", we can eliminate from the problem graph all the nodes corresponding to states with at most \(k\) muddy children. An immediate consequence of this is that after \(k-1\) rounds, it becomes common knowledge that there are at least \(k\) muddy children. Hence, the muddy children, who each see only \(k-1\) muddy children, will conclude that they are muddy and answer "Yes". There are multiple formalizations of this puzzle in the literature [9, 5, 7, 8, 3]; indeed, it seems that each new formalism for reasoning about knowledge includes a modeling of this puzzle as a basic example of the expressiveness of the formalism.

## 3 VLSMs - basic notions

We give a high-level presentation of the theory of VLSMs. More details can be found in [10].

**Definition 1** (VLSM): _A Validating Labeled State transition and Message production system (VLSM, for short) is a structure of the form \(\mathcal{V}=(L,S,S_{0},M,M_{0},\tau,\beta)\), where \(L\) is a set of labels, \((S_{0}\subseteq)\) \(S\) is a non-empty set of (initial) states, \((M_{0}\subseteq)\) \(M\) is a set of (initial) messages, \(\tau:L\times S\times M?\to S\times M?\) is a transition function which takes as arguments a label, a state, and possibly a message, and outputs a state and possibly a message, while \(\beta\) is a validity constraint on the inputs of the transition function.\({}^{1}\)_

Footnote 1: For any set \(M\) of messages, let \(M?=M\cup\{\varnothing\}\) be the extension of \(M\) with \(\varnothing\), where \(\varnothing\) stands for _no-message_.

The transition function in a VLSM is total; however, it indirectly becomes a partial function since the validity constraint can filter out some of its inputs. The set of labels in a VLSM can be used to model non-determinism: it is possible to have multiple parallel transitions from a state using the same input message, each with its own label.

**Validity.** A transition is called _constrained_ if the validity constraint holds on its input. We denote a constrained transition \(\tau(l,s,m)=(s^{\prime},m^{\prime})\) by \(s\xrightarrow[m\to m^{\prime}]{}s^{\prime}\). A _constrained trace_ is a sequence of constrained transitions that starts in an initial state. A _valid trace_ is inductively defined as a constrained trace in which the input of each transition can be emitted from a valid trace. A state is _constrained/valid_ if there is a constrained/valid trace leading to it. Similarly, a message is _constrained/valid_ if it is produced on a constrained/valid trace (we also consider the no-message to be valid).
A transition is called _valid_ if it is a constrained transition that uses only valid states and messages; thus a valid trace is a sequence of valid transitions starting in an initial state.

**Equivocation.** In the literature concerning fault-tolerance in distributed systems, equivocation models the fact that certain agents can claim to be in different states to different parties. VLSMs allow modeling such behavior by specifying that, in a valid trace, the valid input messages can be produced on different (though still valid) traces.

**Composition.** A single VLSM can represent the local point of view of a component in a distributed system. We can obtain the global point of view by composing multiple VLSMs and lifting the local validity constraint of each component. Designers of systems can impose additional restrictions, which are stronger than the ones that can be specified locally on individual components because they can be stated in terms of the global composite state. We capture this phenomenon by the notion of _composition constraint_.

**Definition 2** (Composition): _Let \(\{\mathcal{V}_{i}\}_{i=1}^{n}\) be an indexed set of VLSMs over the same set of messages \(M\). The constrained composition under a composition constraint \(\varphi\) is the VLSM \(\mathcal{V}=\left(\sum_{i=1}^{n}\mathcal{V}_{i}\right)\Big|_{\varphi}=(L,S,S_{0},M,M_{0},\tau,\beta\wedge\varphi)\), where \(L\) is the disjoint union of labels, the (initial) states are the product of (initial) states of the components, the transition function \(\tau\) and the constraint predicate \(\beta\) are defined component-wise, while the composition constraint \(\varphi\subseteq L\times S\times M?\) is an additional predicate that filters the inputs for the transition function._

When a composition constraint is trivial, i.e., it is the set \(L\times S\times M?\), we refer to the composition as the _free composition_ and drop the subscript \(\varphi\) in the notation.

**No Equivocation constraint.** As mentioned above, VLSMs implicitly allow equivocation. Nevertheless, a truthful behavior of the agents in a composition can still be enforced by means of a _no equivocation_ constraint, which does not allow receiving in a (composite) state a message from a component if that component could not have emitted the message in a trace leading to the current state.

## 4 Asynchronous Muddy Children Puzzle as a VLSM -- take 1

We would like to model the Muddy Children Puzzle as a collective effort of a group of agents (the children) to reach a common goal (knowing whether they are muddy or not) by exchanging messages which reveal as little as possible of their personal (initial) knowledge about the problem (a.k.a. which children they see as muddy). To this aim, there are several characteristics which are common to both of the models we present:

* Each child needs to know (at all times) which of the other children are muddy (the child's initial knowledge);
* Every message needs to have a sender;
* Every message has to report on the epistemic status of its sender at the time the message was sent;
* Decisions are final: once a child has decided upon their own status (clean/muddy), they will not receive additional messages, as these would not bring new knowledge.

In our models we choose to index the children with natural numbers and to represent a child's initial knowledge as a set of indices of the muddy children they see (note that the index of the child cannot appear in this set). We will also use these indices to identify the sender of a message.
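As a concrete rendering of these representation choices, here is a minimal Python sketch (our own illustration, not part of the paper's formalism) of the message and state data, using the three status values introduced next:

```python
from dataclasses import dataclass
from typing import FrozenSet, Optional

Status = str  # one of "u" (unknown), "m" (muddy), "c" (clean); see below

@dataclass(frozen=True)
class Message:
    sender: int     # index of the emitting child
    round: int      # the round number perceived by the sender
    status: Status  # the sender's epistemic status when emitting

@dataclass(frozen=True)
class ChildState:
    obs: FrozenSet[int]            # indices of the muddy children this child sees
    round: Optional[int] = None    # None in an initial state <Obs>
    status: Optional[Status] = None
```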
For the epistemic status, we will use three possible values:

* \(u\) stands for _"the child doesn't know their status"_;
* \(m\) stands for _"the child knows they have mud on their forehead"_;
* \(c\) stands for _"the child knows they don't have mud on their forehead"_.

For our first modeling attempt, we let the children maintain and communicate a number reflecting "the round number" (from the original solution) at which they perceive themselves to be. Let us start formalizing this setting using VLSMs. We represent each child as a VLSM of the form \(\mathcal{C}_{i}=(L_{i},S_{i},S_{0,i},M,M_{0,i},\tau_{i},\beta_{i})\). The states of \(\mathcal{C}_{i}\) are either initial states of the form \(\langle Obs\rangle\), where \(Obs\) represents the children seen as muddy by child \(i\), or running states of the form \(\langle Obs,r,status\rangle\), where additionally \(r\) is the round perceived by child \(i\), and \(status\) represents the epistemic status of child \(i\).

* \(L_{i}=\{init,emit,receive\}\) reflects the three types of transitions (_initialization_, corresponding to the father's announcement, and _emitting_/_receiving_ messages)
* \(M_{i}=\{\langle j,r,status\rangle\mid j\in\{1,\ldots,n\},\ r\in\mathbb{N},\ \textit{status}\in\{u,m,c\}\}\) -- messages also communicate the round number
* \(M_{0,i}=\emptyset\)
* \(S_{i}=\{\langle Obs\rangle\mid Obs\subseteq\{1,\ldots,n\}\}\cup\{\langle Obs,r,s\rangle\mid Obs\subseteq\{1,\ldots,n\},\ r\in\mathbb{N},\ s\in\{u,m,c\}\}\)
* \(S_{0,i}=\{\langle Obs\rangle\mid Obs\subseteq\{1,\ldots,n\}\}\)

The invariant we would like to maintain is that a child at round \(r\) knows (from the messages exchanged with their peers) that there are more than \(r\) muddy children. Assuming that such an invariant holds for all accessible states, when a child (say \(i\)) sends a message containing their round number (say \(k\)) and the fact that they still cannot determine their status, a child receiving such a message (say \(j\)) can derive from it that \(i\) knows that there are more than \(k\) muddy children other than \(i\) themselves (since otherwise \(i\) would know their status). Moreover, since the receiving child knows whether the sender is muddy or not, if \(j\) sees \(i\) as muddy, \(j\) can infer that there are actually more than \(k+1\) muddy children. If the current round of \(j\) is smaller than the number inferred (either \(k\), if \(i\) is clean, or \(k+1\), if \(i\) is muddy), then \(j\) can update their current round to that number. Suppose that happened, and say \(r\) is the new current round number. If \(r\) is less than the number of muddy children that \(j\) sees, then the information provided by \(r\) is not yet useful enough to draw a conclusion about the status, so the status will stay unknown. However, if \(r\) ever becomes equal to the number of muddy children that \(j\) sees, then the child knows that there are more than \(r\) muddy children, and since they can only see \(r\) muddy children, they will conclude they have to be muddy. An interesting fact holds for clean children. Note first that they see all the children who are muddy (say \(N\)), so for them the number of muddy children they see is larger by \(1\) than the number of muddy children seen by any muddy child.
Hence, they cannot infer their status using the reasoning above, because for that they would have to receive a message from a muddy child, say \(i\), with an unknown status at round \(N-1\); but that is precisely the number of muddy children that child \(i\) sees, so at that round child \(i\) would already be able to infer their status. Let us assume a child \(i\) is clean. If there are enough muddy children, \(i\) can receive a message from a muddy child with round \(N-2\), and update their round to \(N-1\) (using the reasoning above) while maintaining their status as unknown (since the child still sees more muddy children). Note that \(N-1\) is their maximal round according to the invariant proposed above. But, at the same time, one of the muddy children, say \(j\), can receive the same message and update their round to \(N-1\), which coincides with the number of muddy children they see, so they would change their status to knowing that they are muddy. Now if \(j\) sends a message with their new round and status and \(i\) receives it, then \(i\) would know they must be clean, but the question is: what would happen with the child's round? If \(i\) keeps the same round (so as not to violate the invariant), then they would have two statuses at the same round. To avoid that, we decide to break the invariant in this case and let a clean child advance to round \(N\) when they infer they are clean, thus staying in sync with the traditional solution to the puzzle. We can therefore rephrase the first invariant as: "a message \(\langle j,\,r,\,status\rangle\) with status different than clean guarantees that \(j\) knows that there are more than \(r\) muddy children", and add a second property, saying that a message \(\langle j,\,r,\,status\rangle\) with status different than unknown guarantees that \(j\) sees precisely \(r\) muddy children. In the sequel, we propose a transition function (and constraint predicate) realizing the proposal above.

**Init**: From the initial state, each child takes one (silent) transition, analyzing their current knowledge and initializing the dynamic part of the state accordingly.
* The round number is initialized with \(0\) * If the set of muddy children the child sees is empty, then the knowledge flag is set to muddy (since at least one must be muddy); otherwise to unknown \(\tau_{i}(init,\langle Obs\rangle,\boldsymbol{\kappa})=(\langle Obs,0,u \rangle,\boldsymbol{\kappa})\), if \(Obs\neq\emptyset\)\(\tau_{i}(init,\langle Obs\rangle,\boldsymbol{\kappa})=(\langle Obs,0,m \rangle,\boldsymbol{\kappa})\), if \(Obs=\emptyset\)\(\beta_{i}(init,\langle Obs\rangle,m)=(m=\boldsymbol{\kappa})\) \[\tau_{i}(\mathit{receive},\langle Obs,r,\mathit{status}\rangle, \langle j,r^{\prime},\mathit{status}^{\prime}\rangle)= (\langle Obs,r,\mathit{status}\rangle,\textbf{\emph{\#}}), \text{if }\mathit{status}\in\{m,c\}\] \[(\langle Obs,r^{\prime},c\rangle,\textbf{\emph{\#}}), \text{if }\mathit{status}=u\text{ and }\mathit{status}^{\prime}=c\] \[\text{and }j\notin Obs\text{ and }r^{\prime}=|Obs|\] \[(\langle Obs,r^{\prime}-1,m\rangle,\textbf{\emph{\#}}), \text{if }\mathit{status}=u\text{ and }\mathit{status}^{\prime}=c\] \[\text{and }j\notin Obs\text{ and }r^{\prime}=|Obs|+1\] \[(\langle Obs,r^{\prime},m\rangle,\textbf{\emph{\#}}), \text{if }\mathit{status}=u\text{ and }\mathit{status}^{\prime}=m\] \[\text{and }j\in Obs\text{ and }r^{\prime}=|Obs|\] \[(\langle Obs,r^{\prime}+1,c\rangle,\textbf{\emph{\#}}), \text{if }\mathit{status}=u\text{ and }\mathit{status}^{\prime}=m\] \[\text{and }j\in Obs\text{ and }r^{\prime}=|Obs|-1\] \[(\langle Obs,r,\mathit{status}\rangle,\textbf{\emph{\#}}), \text{if }\mathit{status}=u\text{ and }\mathit{status}^{\prime}=u\] \[\text{and }j\in Obs\text{ and }r^{\prime}<r\] \[(\langle Obs,r^{\prime}+1,\mathit{status}\rangle,\textbf{\emph{ \#}}), \text{if }\mathit{status}=u\text{ and }\mathit{status}^{\prime}=u\] \[\text{and }j\in Obs\text{ and }r\leq r^{\prime}<|Obs|-1\] \[(\langle Obs,r^{\prime}+1,m\rangle,\textbf{\emph{\#}}), \text{if }\mathit{status}=u\text{ and }\mathit{status}^{\prime}=u\] \[\text{and }j\in Obs\text{ and }r^{\prime}=|Obs|-1\] \[(\langle Obs,r,\mathit{status}\rangle,\textbf{\emph{\#}}), \text{if }\mathit{status}=u\text{ and }\mathit{status}^{\prime}=u\] \[\text{and }j\notin Obs\text{ and }r^{\prime}\leq r\] \[(\langle Obs,r^{\prime},\mathit{status}\rangle,\textbf{\emph{\#}}), \text{if }\mathit{status}=u\text{ and }\mathit{status}^{\prime}=u\] \[\text{and }j\notin Obs\text{ and }r<r^{\prime}<|Obs|\] \[(\langle Obs,r^{\prime},\mathit{m}\rangle,\textbf{\emph{\#}}), \text{if }\mathit{status}=u\text{ and }\mathit{status}^{\prime}=u\] \[\text{and }j\notin Obs\text{ and }r^{\prime}=|Obs|\] \[\text{and }j\notin Obs\text{ and }r^{\prime}=|Obs|\] \[(\langle Obs,r^{\prime},\mathit{m}\rangle,\textbf{\emph{\#}}), \text{if }\mathit{status}=u\text{ and }\mathit{status}^{\prime}=u\] \[\text{and }j\notin Obs\text{ and }r^{\prime}=|Obs|\] \[\text{for the cases not treated above}\] **Emit**: From any non-initial state, a child can emit a message consisting of their identifier, current round number and epistemic status, without changing state. 
\(\tau_{i}(\mathit{emit},\langle Obs,r,\mathit{status}\rangle,\textbf{\emph{ \#}})=(\langle Obs,r,\mathit{status}\rangle,\langle i,\,r,\mathit{status} \rangle)\) \(\beta_{i}(\mathit{emit},\langle Obs,r,\mathit{status}\rangle,m)=(m=\textbf{ \emph{\#}})\) **Receive**: To update their state, whenever receiving a message \(\langle j,r^{\prime},\mathit{status}^{\prime}\rangle\) in a state \(\langle Obs,r,\mathit{status}\rangle\), the child does the following: * If their current \(\mathit{status}\) is not \(u\), they ignore the message (decisions are final). \(\tau_{i}(\mathit{receive},\langle Obs,r,\mathit{status}\rangle,\langle j,r^{ \prime},\mathit{status}^{\prime}\rangle)=(\langle Obs,r,\mathit{status}\rangle, \textbf{\emph{\#}})\), if \(\mathit{status}\in\{m,c\}\) * If message status (\(\mathit{status}^{\prime}\)) is \(c\), and \(j\) is not known to be muddy (\(j\not\in Obs\)), then from the property above \(r^{\prime}\) must represent the actual number of muddy children. Hence: Figure 1: The transition function for the _receive_ label * If \(r^{\prime}=|Obs|\), then child \(i\) can update their round to the received message's round and then conclude that they are clean. \(\tau_{i}(\mathit{receive},\langle Obs,r,\mathit{status}\rangle,\langle j,\,r^{ \prime},\mathit{status}^{\prime}\rangle)\) = \((\langle Obs,r^{\prime},c\rangle,\textbf{x})\) * If \(r^{\prime}=|Obs|+1\), then child \(i\) can update their round to the round before received message's round and then conclude that they are muddy. \(\tau_{i}(\mathit{receive},\langle Obs,r,\mathit{status}\rangle,\langle j,\,r^{ \prime},\mathit{status}^{\prime}\rangle)\) = \((\langle Obs,r^{\prime}-1,m\rangle,\textbf{x})\) * If message status (\(\mathit{status}^{\prime}\)) is \(m\) then it must be that \(j\) is known as muddy (\(j\in Obs\)); then, from the property above, \(r^{\prime}+1\) must represent the actual number of muddy children. Hence: * If \(r^{\prime}=|Obs|\), then child \(i\) can update their round to the received message's round and then conclude that they are muddy. \(\tau_{i}(\mathit{receive},\langle Obs,r,\mathit{status}\rangle,\langle j,\,r^{ \prime},\mathit{status}^{\prime}\rangle)\) = \((\langle Obs,r^{\prime}+1,c\rangle,\textbf{x})\) * If message status (\(\mathit{status}^{\prime}\)) is \(u\), and \(j\) is known as muddy (\(j\in Obs\)), then \(i\) can infer that \(j\) knows that there are more than \(r^{\prime}\) muddy children, and therefore infers that there are more than \(r^{\prime}+1\) muddy children. Hence: * If \(r^{\prime}<r\), then child \(i\) can ignore the message (it brings nothing new). \(\tau_{i}(\mathit{receive},\langle Obs,r,\mathit{status}\rangle,\langle j,\,r^{ \prime},\mathit{status}^{\prime}\rangle)\) = \((\langle Obs,r,\mathit{status}\rangle,\textbf{x})\) * If \(r\leq r^{\prime}<|Obs|-1\), then child \(i\) can update their round to \(r^{\prime}+1\), but their status will remain unknown (they already know there are at least \(|Obs|\) muddy children). \(\tau_{i}(\mathit{receive},\langle Obs,r,\mathit{status}\rangle,\langle j,\,r^{ \prime},\mathit{status}^{\prime}\rangle)\) = \((\langle Obs,r^{\prime}+1,\mathit{status}\rangle,\textbf{x})\) * If \(r^{\prime}=|Obs|-1\), then child \(j\) sees at least \(|Obs|\) muddy children. Since the sender is known as muddy, there are at least \(|Obs|+1\) muddy children. On the other hand, child \(i\) knows there are at most \(|Obs|+1\) muddy children. 
Combining the two inequalities, we get that there are precisely \(|Obs|+1\) muddy children, so child \(i\) can advance to round \(r^{\prime}+1\) and knows they are muddy. \(\tau_{i}(\mathit{receive},\langle Obs,r,\mathit{status}\rangle,\langle j,r^{\prime},\mathit{status}^{\prime}\rangle)=(\langle Obs,r^{\prime}+1,m\rangle,\boldsymbol{\kappa})\)
* If the message status (\(\mathit{status}^{\prime}\)) is \(u\), and \(j\) is not known as muddy (\(j\notin Obs\)), then \(i\) can infer that \(j\) knows that there are more than \(r^{\prime}\) muddy children, and therefore infers (only) that there are more than \(r^{\prime}\) muddy children. Hence:
  * If \(r^{\prime}\leq r\), then child \(i\) can ignore the message. \(\tau_{i}(\mathit{receive},\langle Obs,r,\mathit{status}\rangle,\langle j,r^{\prime},\mathit{status}^{\prime}\rangle)=(\langle Obs,r,\mathit{status}\rangle,\boldsymbol{\kappa})\)
  * If \(r<r^{\prime}<|Obs|\), then child \(i\) can update their round to \(r^{\prime}\), but their status remains unknown (they already know there are at least \(|Obs|\) muddy children). \(\tau_{i}(\mathit{receive},\langle Obs,r,\mathit{status}\rangle,\langle j,r^{\prime},\mathit{status}^{\prime}\rangle)=(\langle Obs,r^{\prime},\mathit{status}\rangle,\boldsymbol{\kappa})\)
  * If \(r^{\prime}=|Obs|\), we reason analogously to the similar case above: child \(j\) knows that there are at least \(|Obs|+1\) muddy children. Adding the fact that child \(i\) knows there are at most \(|Obs|+1\) muddy children, we get that there are precisely \(|Obs|+1\) muddy children, so child \(i\) knows they are muddy.
\(\tau_{i}(\mathit{receive},\langle Obs,r,\mathit{status}\rangle,\langle j,r^{\prime},\mathit{status}^{\prime}\rangle)=(\langle Obs,r^{\prime},m\rangle,\boldsymbol{\kappa})\)

**Composition.** Note that the notion of VLSM composition in general allows arbitrary initial states for the components. However, in this setting, we need to ensure that the sets of children known by each child to be muddy are _consistent_. Formally, given a child-state \(s\), let \(Obs(s)\) be the observation-set associated to \(s\). Then, for any composite state \(\sigma=\langle\sigma_{1},\ldots,\sigma_{n}\rangle\), let \(\textbf{consistent}(\sigma)\) be the predicate defined by

* \(M\neq\emptyset\) (there should be at least one muddy child)
* \(Obs(\sigma_{i})=M\setminus\{i\}\), for any \(i\) (each child sees all other muddy children)

where \(M=\bigcup_{i=1}^{n}Obs(\sigma_{i})\). Finally, the game flow can be formalized as a constrained VLSM composition:

\[\mathcal{M}uddy\mathcal{P}uzzle=\Big{(}\mathcal{C}_{1}+\ldots+\mathcal{C}_{n}\Big{)}\Big{|}_{\varphi}\]

where \(\varphi\) specifies the following composition constraint:

**init**: At the first transition from an initial state we check that the observation sets corresponding to each component are consistent:
\[\varphi((i,\mathit{init}),(\langle Obs_{1}\rangle,\ldots,\langle Obs_{n}\rangle),m)=\textbf{consistent}(\langle Obs_{1}\rangle,\ldots,\langle Obs_{n}\rangle)\]

**receive**: We must enforce a no-equivocation constraint to ensure the truthfulness of the participants:
\[\varphi((i,\mathit{receive}),\langle\sigma_{1},\ldots,\sigma_{n}\rangle,\langle j,r^{\prime},\mathit{status}^{\prime}\rangle)=(\mathit{status}^{\prime}=\mathit{status}_{j}\wedge r^{\prime}=r_{j})\vee(\mathit{status}^{\prime}=u\wedge r^{\prime}<r_{j}),\]
where \(\sigma_{j}=\langle Obs_{j},r_{j},\mathit{status}_{j}\rangle\).

### Correctness of the protocol

In the following, we give a justification of the fact that the above described protocol is correct, in the sense that it converges to a solution. The valid states of the protocol (\(S_{V}\)) correspond to the composite states which are VLSM-valid in the constrained composition described above. We define the set of non-initial valid states \(S_{V}^{*}=S_{V}\setminus S_{0}\) and the set of final states \(S_{F}=\{\langle\sigma_{1},\ldots,\sigma_{n}\rangle\in S_{V}\mid\forall i\in\{1,\ldots,n\},\ \mathit{status}(\sigma_{i})\neq u\}\). It can be easily checked that the consistency predicate holds for any \(\sigma\in S_{V}^{*}\):

**Remark 1**.: _For each \(\sigma\in S_{V}^{*}\), \(\textbf{consistent}(\sigma)\) holds._

Another VLSM-related property, which we state without proof, is the fact that the constraint for _receive_ transitions is indeed a no-equivocation constraint:

**Remark 2** (No Equivocation).: _For any valid trace \(tr\) leading from an initial state \(\sigma_{0}\) to a state \(\sigma\in S_{V}^{*}\) and for any input message \(m\) which is valid for \(\sigma\), there exists a valid trace starting in \(\sigma_{0}\), of length less than that of \(tr\), which emits \(m\)._
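For readers who prefer code to case tables, the following is a minimal executable sketch of a single child's _receive_ transition from Figure 1, with states as \((Obs,r,\mathit{status})\) triples and messages as \((j,r^{\prime},\mathit{status}^{\prime})\) triples; returning `None` models the cases where \(\beta_{i}\) does not hold. The type names and function name are illustrative, not part of the formal VLSM model.

```python
from typing import FrozenSet, Optional, Tuple

State = Tuple[FrozenSet[int], int, str]  # (Obs, round, status), status in {"u", "m", "c"}
Message = Tuple[int, int, str]           # (sender id, round, status)

def receive(state: State, msg: Message) -> Optional[State]:
    """One child's receive transition; None means beta_i does not hold."""
    obs, r, status = state
    j, rp, sp = msg
    if status in ("m", "c"):                      # decisions are final: ignore
        return (obs, r, status)
    if sp == "c" and j not in obs:                # sender is clean, so rp = N
        if rp == len(obs):
            return (obs, rp, "c")
        if rp == len(obs) + 1:
            return (obs, rp - 1, "m")
    if sp == "m" and j in obs:                    # sender is muddy, so rp = N - 1
        if rp == len(obs):
            return (obs, rp, "m")
        if rp == len(obs) - 1:
            return (obs, rp + 1, "c")
    if sp == "u" and j in obs:                    # at least rp + 1 muddy children
        if rp < r:
            return (obs, r, status)
        if rp < len(obs) - 1:
            return (obs, rp + 1, "u")
        if rp == len(obs) - 1:
            return (obs, rp + 1, "m")
    if sp == "u" and j not in obs:                # at least rp muddy children
        if rp <= r:
            return (obs, r, status)
        if rp < len(obs):
            return (obs, rp, "u")
        if rp == len(obs):
            return (obs, rp, "m")
    return None                                   # no valid transition for this input
```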
Let us now show that the invariant that we stated about the dynamics of the protocol is indeed preserved during a (valid) protocol run.

**Lemma 1** (Invariant Preservation).: _For any \(\sigma\in S_{V}\), if \(\sigma\not\in S_{0}\), then, for any component \(i\) of \(\sigma\) of the form \(\sigma_{i}=\langle Obs_{i},r_{i},\mathit{status}_{i}\rangle\):_

* _If_ \(\mathit{status}_{i}=u\)_, then_ \(r_{i}<|Obs_{i}|\) _(and_ \(|Obs_{i}|\leq N\)_)_
* _If_ \(\mathit{status}_{i}=m\)_, then_ \(r_{i}=N-1=|Obs_{i}|\)
* _If_ \(\mathit{status}_{i}=c\)_, then_ \(r_{i}=N=|Obs_{i}|\)

Proof.: Note that it is common knowledge that \(|Obs_{i}|\leq N\leq|Obs_{i}|+1\) for any \(i\). We prove the invariant by induction on the length of a valid trace leading to \(\sigma\). The property trivially holds for \(\sigma\in S_{0}\). For the induction case, we consider the final (valid) transition leading to \(\sigma\), say \(\sigma^{\prime}\xrightarrow[m\to m^{\prime}]{(i,l)}\sigma\), and we assume that the invariant holds for \(\sigma^{\prime}\). We proceed by case analysis on the label of the transition.

\(l=\mathit{init}\)**:**: This transition obviously preserves the invariant, because of the father's statement.

\(l=\mathit{emit}\)**:**: In the case of an _emit_ transition, the conclusion is also immediate, since the state remains unchanged.

\(l=\mathit{receive}\)**:**: From Remark 2, there is a composite valid state from which the input message can be emitted, which has the same observation sets as \(\sigma^{\prime}\) (since both are reachable from the same initial state) and for which we can apply the induction hypothesis and thus assume that the invariant holds.

* \(\tau_{i}(\mathit{receive},\langle Obs_{i},r,m\rangle,\langle j,r^{\prime},\mathit{status}^{\prime}\rangle)\): the message is ignored and the state remains unchanged, so the conclusion is immediate.
* \(\tau_{i}(\mathit{receive},\langle Obs_{i},r,c\rangle,\langle j,r^{\prime},\mathit{status}^{\prime}\rangle)\): we proceed analogously to the previous case.
* \(\tau_{i}(\mathit{receive},\langle Obs_{i},r,u\rangle,\langle j,r^{\prime},c\rangle)\) and \(j\not\in Obs_{i}\) and \(r^{\prime}=|Obs_{i}|\): By applying the induction hypothesis to the state from which the message is obtained, we have \(r^{\prime}=N=|Obs_{j}|\). The resulting state \(\langle Obs_{i},r^{\prime},c\rangle\) preserves the invariant, because \(r^{\prime}=N=|Obs_{i}|\).
* \(\tau_{i}(\mathit{receive},\langle Obs_{i},r,u\rangle,\langle j,r^{\prime},c\rangle)\) and \(j\not\in Obs_{i}\) and \(r^{\prime}=|Obs_{i}|+1\): By applying the induction hypothesis to the state from which the message is obtained, we have \(r^{\prime}=N=|Obs_{j}|\). The resulting state \(\langle Obs_{i},r^{\prime}-1,m\rangle\) preserves the invariant, because \(r^{\prime}-1=N-1=|Obs_{i}|\).
* \(\tau_{i}(\mathit{receive},\langle Obs_{i},r,u\rangle,\langle j,r^{\prime},m\rangle)\) and \(j\in Obs_{i}\) and \(r^{\prime}=|Obs_{i}|\): By applying the induction hypothesis to the state from which the message is obtained, we have \(r^{\prime}=N-1=|Obs_{j}|\). The resulting state \(\langle Obs_{i},r^{\prime},m\rangle\) preserves the invariant, because \(r^{\prime}=N-1=|Obs_{i}|\).
* \(\tau_{i}(\mathit{receive},\langle Obs_{i},r,u\rangle,\langle j,r^{\prime},m\rangle)\) and \(j\in Obs_{i}\) and \(r^{\prime}=|Obs_{i}|-1\): By applying the induction hypothesis to the state from which the message is obtained, we have \(r^{\prime}=N-1=|Obs_{j}|\). The resulting state \(\langle Obs_{i},r^{\prime}+1,c\rangle\) preserves the invariant, because \(r^{\prime}+1=N=|Obs_{i}|\).
* \(\tau_{i}(\mathit{receive},\langle Obs_{i},r,u\rangle,\langle j,r^{\prime},u\rangle)\) and \(j\in Obs_{i}\) and \(r^{\prime}<r\): The state after applying the transition remains unchanged, so the conclusion is immediate.
* \(\tau_{i}(\mathit{receive},\langle Obs_{i},r,u\rangle,\langle j,r^{\prime},u\rangle)\) and \(j\in Obs_{i}\) and \(r\leq r^{\prime}<|Obs_{i}|-1\): The resulting state \(\langle Obs_{i},r^{\prime}+1,u\rangle\) obviously preserves the invariant, since \(r^{\prime}+1<|Obs_{i}|\).
* \(\tau_{i}(\mathit{receive},\langle Obs_{i},r,u\rangle,\langle j,r^{\prime},u\rangle)\) and \(j\in Obs_{i}\) and \(r^{\prime}=|Obs_{i}|-1\): We have \(r^{\prime}+1=|Obs_{i}|\geq N-1\), so \(r^{\prime}\geq N-2\). On the other hand, applying the induction hypothesis, we have that \(r^{\prime}<|Obs_{j}|\), and since child \(j\) is muddy, \(|Obs_{j}|=N-1\); combining these we get \(r^{\prime}<|Obs_{j}|=N-1\), which implies \(r^{\prime}\leq N-2\). We can conclude that \(r^{\prime}=N-2\), so the resulting state \(\langle Obs_{i},r^{\prime}+1,m\rangle\) preserves the invariant, because \(r^{\prime}+1=|Obs_{i}|=N-1\).
* \(\tau_{i}(\mathit{receive},\langle Obs_{i},r,u\rangle,\langle j,r^{\prime},u\rangle)\) and \(j\notin Obs_{i}\) and \(r^{\prime}\leq r\): The state after applying the transition remains unchanged, so the conclusion is immediate.
* \(\tau_{i}(\mathit{receive},\langle Obs_{i},r,u\rangle,\langle j,r^{\prime},u\rangle)\) and \(j\notin Obs_{i}\) and \(r<r^{\prime}<|Obs_{i}|\): The resulting state \(\langle Obs_{i},r^{\prime},\mathit{status}\rangle\) preserves the invariant, since \(r^{\prime}<|Obs_{i}|\).
* \(\tau_{i}(\mathit{receive},\langle Obs_{i},r,u\rangle,\langle j,r^{\prime},u\rangle)\) and \(j\notin Obs_{i}\) and \(r^{\prime}=|Obs_{i}|\): By applying the induction hypothesis to the state from which the message is obtained, we have \(r^{\prime}<|Obs_{j}|\), and since child \(j\) is clean, \(|Obs_{j}|=N\), so we get \(r^{\prime}<N\); combining this with the last condition of the transition, we immediately obtain that \(|Obs_{i}|<N\). But since it is common knowledge that \(|Obs_{i}|\geq N-1\), it must be that \(|Obs_{i}|=N-1\). We can conclude that \(r^{\prime}=|Obs_{i}|=N-1\), so the resulting state \(\langle Obs_{i},r^{\prime},m\rangle\) preserves the invariant.
* For all the cases not treated above, the child's \(\beta\) predicate does not hold, so there cannot be any transition.

**Theorem 1**.: _From any initial consistent state, there is a path leading to a final state in which each child's status is consistent with the instance of the problem._

Proof sketch.: The result can be obtained from the following properties:

**Progress**: From any valid non-final state \(\sigma\), there is a valid transition on some component \(i\) leading to a state \(\sigma^{\prime}\) such that \(round(\sigma^{\prime}_{i})>round(\sigma_{i})\), where we let \(round(\langle Obs\rangle)\) be \(-1\). If there are any component initial states left, we advance one of them. If some child already knows their status, any child with unknown status can receive that child's message and thereby determine their own status (there must be at least one child with unknown status, since the state is not final). If no child knows their status yet, then there must be at least two muddy children. Then take the one among them with the minimum round number (if their round numbers are equal, take any of them) and let them receive the message corresponding to the current state of the other. This will surely increase their current round number.

**Invariant Preservation**: The invariant from Lemma 1 holds.
In particular, if a final state is reached, each child's status will be consistent with the current instance of the problem. **Termination**: No child can increase their round number past the number of muddy children they see. Easy to prove by induction and analysis on the cases of the transition function. ### Possible optimization If we analyze the dynamics of the solution proposed above, we notice that the only relevant exchange of messages happens in the final stages of the protocol. Indeed, assuming a child sees \(n\) muddy children, they know that any other child sees at least \(n-1\) muddy children; therefore no message it would send with round less than \(n-1\) can help any other child determine their status. Hence, the child can "jump" during the initialization phase directly to the round \(n-1\). Formally, we add a new label **jump** and replace the **Init** transition with the following: From any state with status unknown, a child can take one (silent) transition "jumping" to a future state it knows it can reach based on existing knowledge, where the round number becomes the number preceding the number of muddy children the child sees and the child's knowledge flag remains unknown. \[\tau_{i}(jump,\langle Obs,r,status\rangle,m)=(\langle Obs,|Obs|-1,u\rangle, \textbf{{x}})\] \[\beta_{i}(jump,\langle Obs,r,status\rangle,m)=(status=u) \wedge(m=\textbf{{x}})\wedge(r<|Obs|-1)\] After such a jump one can find a very short path to a solution: 1. a child sends a message after the jump with the unknown status 2. another (muddy) child receives that message and discovers they are muddy and sends a message with this discovery 3. all other children (including the first) receive this second message and discover their status ### Discussion The above proposed solution seems to satisfy our initial guidelines for a good asynchronous solution. Nevertheless, it is far from being perfect, as illustrated by the following example. **Example 1** (Information leak).: _Assume there are \(5\) children, \(4\) of whom are muddy and one is not. Then a muddy child (say, child \(1\)) can discover that they are muddy by only exchanging messages with the clean child (say, child \(5\)) with the following scenario:_ 1. \(1\) _at round_ \(0\) _sends message_ \(m_{0}^{1}\) _that they do not know_ 2. \(5\) _at round_ \(0\) _receives message_ \(m_{0}^{1}\)_, advances to round_ \(1\)_, and sends message_ \(m_{1}^{5}\) _that they do not know_ 3. \(1\) _at round_ \(0\) _receives the message_ \(m_{1}^{5}\)_, advances to round_ \(1\)_, and sends message_ \(m_{1}^{1}\) _that they do not know_ 4. \(5\) _at round_ \(1\) _receives message_ \(m_{1}^{1}\)_, advances to round_ \(2\)_, and sends message_ \(m_{2}^{5}\) _that they do not know_ 5. \(1\) _at round_ \(1\) _receives the message_ \(m_{2}^{5}\)_, advances to round_ \(2\)_, and sends message_ \(m_{2}^{1}\) _that they do not know_ 6. \(5\) _at round_ \(2\) _receives message_ \(m_{2}^{1}\)_, advances to round_ \(3\)_, and sends message_ \(m_{3}^{5}\) _that they do not know_ 7. 
\(1\) _at round_ \(2\) _receives the message_ \(m_{3}^{5}\)_, advances to round_ \(3\) _which equals the number of muddy children they see and changes their status to knowing they are muddy (and sends message_ \(m_{3}^{1}\) _with status muddy)_ _Moreover, once \(1\) knows they are muddy and sends a message about it, upon receiving that message all the other children will know their status._ There are at least two issues about the above example: (1) that a muddy child needs no information from other muddy children to discover that they are muddy; and (2) that once a child knows their status, all other children immediately know their status, even if they have not taken part in the conversation so far. It thus seems that, although sharing the perceived round as part of the message helps with making the discovery process asynchronous, it also alters it more than expected in terms of what becomes inferable during an exchange of information. The following section proposes a solution we believe to be closer to the original formulation and intended dynamics, while staying asynchronous. ## 5 Solving the puzzle by extracting information from messages To alleviate the issues identified with the solution in Section 4, we propose a new solution, which follows more closely an epistemic logic point of view, in the sense that the messages exchanged between children only carry information that the child has previously seen. This guarantees that deduction can only be done according to information that is (was) publicly available. To do that, we define the child \(i\) as the following VLSM, resembling the previous encoding, but using _message histories_ (lists of messages previously received) instead of round numbers: * \(L_{i}=\{\mathit{init},\mathit{emit},\mathit{receive}\}\) * \(M=\{\langle j,s,h\rangle\mid j\in\{1,\ldots,n\},\ s\in\{u,m,c\},\ h\in M^{*}\}\) * \(M_{0,i}=\emptyset\) * \(S_{i}=\{\langle Obs\rangle\mid Obs\subseteq\{1,\ldots,n\}\}\cup\{\langle Obs,s,h\rangle\mid Obs\subseteq\{1,\ldots,n\},\ s\in\{u,m,c\},\ h\in M^{*}\}\) * \(S_{0,i}=\{\langle Obs\rangle\mid Obs\subseteq\{1,\ldots,n\}\}\), Let us now describe the dynamics of a child's interaction with the others. **Init**: The initialization phase will remain basically the same as for the previous solution, except that, instead of round \(0\), now the children will have the empty history: \[\tau_{i}(init,\langle Obs\rangle,\boldsymbol{x})=\left\{\begin{array}{ll}( \langle Obs,u,\,[\!]\rangle,\boldsymbol{x}),&\mbox{if }Obs\neq\emptyset\\ (\langle Obs,m,\,[\!]\rangle,\boldsymbol{x}),&\mbox{if }Obs=\emptyset\end{array}\right.\] \(\beta_{i}(init,\langle Obs\rangle,msg)=(msg=\boldsymbol{x})\) **Emit**: Similarly, the emit phase is basically the same, except that, instead of sending their round number, the children will now send their current history: \[\tau_{i}(emit,\langle Obs,s,h\rangle,\boldsymbol{x})=(\langle Obs,s,h\rangle,\,\langle i,s,h\rangle)\] \(\beta_{i}(emit,\langle Obs,s,h\rangle,msg)=(msg=\boldsymbol{x})\) **Receive**: Upon receiving a message (in a not yet known status) the child will aggregate the message received with the information they already had (append it to its history) and compute its new status based on its _Observation_ set and its new history. 
\[\tau_{i}(\mathit{receive},\langle Obs,s,h\rangle,\langle j,s_{j},h_{j}\rangle)=\left\{\begin{array}{ll}(\langle Obs,s,h\rangle,\boldsymbol{x}),&\mbox{if }s\in\{c,m\}\\ (\langle Obs,\mathit{compute}(Obs,h+\!\!+[\langle j,s_{j},h_{j}\rangle]),\,h+\!\!+[\langle j,s_{j},h_{j}\rangle]\rangle,\boldsymbol{x}),&\mbox{if }s=u\end{array}\right.\]

_// we compute the new status_
\[\mathit{compute}(Obs,h)=\left\{\begin{array}{ll}m,&\mbox{if }\min_{k\in Obs}|\mathit{groupSimilar}_{i}(\mathit{unknown}_{k}(\mathit{flatten}(h)))|\geq|Obs|\\ c,&\mbox{if the first rule did not apply and }|\mathit{muddy}(\mathit{flatten}(h))|=|Obs|\\ u,&\mbox{otherwise}\end{array}\right.\]

_// we unfold all the messages contained (in depth) in the history, with \(\mathit{flatten}([\,])=\emptyset\)_
\[\mathit{flatten}(h+\!\!+[\langle j,s_{j},h_{j}\rangle])=\mathit{flatten}(h)\,\cup\,\mathit{flatten}(h_{j})\,\cup\,\{\langle j,s_{j},h_{j}\rangle\}\,\cup\,\mathit{extra}(\langle j,s_{j},h_{j}\rangle)\]

_// we extract extra information from a message about messages which could have been emitted by its sender (e.g. in all prefixes of the history in which the sender did not know their status)_
\[\mathit{extra}(\langle j,s_{j},h_{j}\rangle)=\{\langle j,u,h^{\prime}_{j}\rangle\mid h^{\prime}_{j}\,\triangleleft\,h_{j}\}\]2

Footnote 2: We denote by \(h_{1}\triangleleft h_{2}\) the fact that \(h_{1}\) is a strict prefix of \(h_{2}\).

_// we select messages belonging to a given child and having an unknown status_
\[\mathit{unknown}_{k}(h)=\{m\mid m\in h\wedge m=\langle k,u,h^{\prime}\rangle\mbox{ for some }h^{\prime}\}\]

_// we group messages that carry the same information relative to child \(i\)_
\[\mathit{groupSimilar}_{i}(h)=h\setminus\{\langle j,u,h^{\prime}+\!\!+[\langle i,\_,\_\rangle]\rangle\mid\mbox{for some }h^{\prime}\}\]

_// we construct the set of all children who know they are muddy in a history_
\[\mathit{muddy}(h)=\{j\mid\langle j,m,h_{j}\rangle\in h\}\]

Finally, the game can be formalized as a constrained VLSM composition:
\[\mathcal{M}uddy\mathcal{P}uzzle=\Big{(}\mathcal{C}_{1}+\mathcal{C}_{2}+\mathcal{C}_{3}\Big{)}\Big{|}_{\varphi}\]
where

// consistency
\(\varphi((i,\mathit{init}),(\langle Obs_{1}\rangle,\langle Obs_{2}\rangle,\langle Obs_{3}\rangle),m)=\textbf{consistent}(\langle Obs_{1}\rangle,\langle Obs_{2}\rangle,\langle Obs_{3}\rangle)\),

// no equivocation
\[\varphi((i,\mathit{receive}),\langle\sigma_{1},\sigma_{2},\sigma_{3}\rangle,\langle j,s_{j},h_{j}\rangle)=(s_{j}=\mathit{status}(\sigma_{j})\ \wedge\ h_{j}=\mathit{history}(\sigma_{j}))\ \vee\ (s_{j}=u\ \wedge\ h_{j}\triangleleft\ \mathit{history}(\sigma_{j})),\]
where for a state \(\sigma=\langle Obs,s,h\rangle\), we define \(\mathit{status}(\sigma)=s\) and \(\mathit{history}(\sigma)=h\).

It is relatively easy to see that the kind of reasoning described in the (counter-)Example 1 is no longer possible: after the first exchange of messages, child 1 cannot really infer anything, since all they see is a message from child 5 containing a message from themselves. We argue that the information contained in a message represents the epistemic knowledge of its sender about the status of the other children at the time the message was sent, thus making this solution much closer to the standard solution (using synchronous rounds and public announcements). First, let \(N\) be the total number of muddy children, and let us observe that if a child knows \(N\), then they also know their status (by comparing \(N\) with the number of visible muddy children). On the other hand, telling the others that they know \(N\) (without actually telling them the value) is no different than telling them that their status can be inferred.
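As a companion to the message-history machinery defined above, here is a minimal Python sketch of the status computation, representing a message as a (sender, status, history) tuple whose history is itself a tuple of messages. Following the text, `group_similar` is engineered for the three-children case; the explicit child index `i` passed to `compute` and all names are illustrative assumptions, not part of the formal definition.

```python
from typing import FrozenSet, Set, Tuple

# A message is (sender, status, history); histories are tuples of messages.
Msg = Tuple  # (int, str, Tuple["Msg", ...])

def extra(m: Msg) -> Set[Msg]:
    """Messages the sender could have emitted earlier: unknown status, strict history prefixes."""
    j, _, h = m
    return {(j, "u", h[:k]) for k in range(len(h))}

def flatten(h: Tuple[Msg, ...]) -> Set[Msg]:
    """All messages contained (in depth) in a history, plus the inferred extra messages."""
    out: Set[Msg] = set()
    for m in h:
        out |= flatten(m[2]) | {m} | extra(m)
    return out

def unknown(k: int, msgs: Set[Msg]) -> Set[Msg]:
    """Messages from child k with unknown status."""
    return {m for m in msgs if m[0] == k and m[1] == "u"}

def group_similar(i: int, msgs: Set[Msg]) -> Set[Msg]:
    """Drop unknown-status messages whose history ends with a message from child i."""
    return {m for m in msgs if not (m[1] == "u" and len(m[2]) > 0 and m[2][-1][0] == i)}

def muddy(msgs: Set[Msg]) -> Set[int]:
    """Children who know they are muddy somewhere in the flattened history."""
    return {m[0] for m in msgs if m[1] == "m"}

def compute(i: int, obs: FrozenSet[int], h: Tuple[Msg, ...]) -> str:
    """New status of child i from its observation set and accumulated history."""
    msgs = flatten(h)
    if all(len(group_similar(i, unknown(k, msgs))) >= len(obs) for k in obs):
        return "m"
    if len(muddy(msgs)) == len(obs):
        return "c"
    return "u"
```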
Let \(K_{1},\ldots,K_{n}\) be the epistemic operators assigned to each child, with the intuitive meaning of \(K_{i}\psi\) that \(i\) can infer that \(\psi\) holds, based on the commonly available information, and on the private information that \(i\) has. Let also \(q_{1},\ldots,q_{n}\) be primitive propositions, \(q_{i}\) meaning that \(i\) knows the value of \(N\). Then, we can recursively encode a message \(m\) as a formula \(E(m)\), as follows: \[E(\langle j,s_{j},h_{j}\rangle)=\left\{\begin{array}{ll}(K_{j}\bigwedge_{m \in h_{j}}E(m))\wedge K_{j}\left(\bigwedge_{m\in h_{j}}E(m)\to q_{j} \right)&\text{if }s_{j}\neq u\\ (K_{j}\bigwedge_{m\in h_{j}}E(m))\wedge\neg K_{j}\left(\bigwedge_{m\in h_{j} }E(m)\to q_{j}\right)&\text{if }s_{j}=u\end{array}\right.\] One important observation is that the formulas corresponding to all additional messages introduced by _flatten(h)_ are directly deducible from the formulas of \(h\). Discussion.Our preliminary exploration of the interactions possible within the model seems to hint that a solution is reachable from any initial state for the case of three children. Nevertheless, our current definition of _groupSimilar_ was engineered for the case of three children and we know it doesn't scale to more children as-is. We believe that for the general case it should be replaced by factoring the messages through an equivalence relation grouping messages which carry the similar amount of information as perceived by the child receiving these messages. ## 6 Conclusion and Future Work In this paper we have shown that compositions of VLSMs seem like a useful tool to model the dynamics of distributed agents, allowing independent description of their behavior, but also enabling the specification of global constraints (e.g., truthfulness, fairness, etc.). Regarding the models proposed for the Muddy Children Puzzle, for the first model we were able to show that it achieves the goal of converging towards a solution, but also showed that it might leak more information than intended through communication. We believe the second model to be promising and more faithful to our modeling goals and plan to further explore its possible generalizations to an arbitrary number of children.
2308.16481
Point-TTA: Test-Time Adaptation for Point Cloud Registration Using Multitask Meta-Auxiliary Learning
We present Point-TTA, a novel test-time adaptation framework for point cloud registration (PCR) that improves the generalization and the performance of registration models. While learning-based approaches have achieved impressive progress, generalization to unknown testing environments remains a major challenge due to the variations in 3D scans. Existing methods typically train a generic model and the same trained model is applied on each instance during testing. This could be sub-optimal since it is difficult for the same model to handle all the variations during testing. In this paper, we propose a test-time adaptation approach for PCR. Our model can adapt to unseen distributions at test-time without requiring any prior knowledge of the test data. Concretely, we design three self-supervised auxiliary tasks that are optimized jointly with the primary PCR task. Given a test instance, we adapt our model using these auxiliary tasks and the updated model is used to perform the inference. During training, our model is trained using a meta-auxiliary learning approach, such that the adapted model via auxiliary tasks improves the accuracy of the primary task. Experimental results demonstrate the effectiveness of our approach in improving generalization of point cloud registration and outperforming other state-of-the-art approaches.
Ahmed Hatem, Yiming Qian, Yang Wang
2023-08-31T06:32:11Z
http://arxiv.org/abs/2308.16481v2
# Point-TTA: Test-Time Adaptation for Point Cloud Registration ###### Abstract We present Point-TTA, a novel test-time adaptation framework for point cloud registration (PCR) that improves the generalization and the performance of registration models. While learning-based approaches have achieved impressive progress, generalization to unknown testing environments remains a major challenge due to the variations in 3D scans. Existing methods typically train a generic model and the same trained model is applied on each instance during testing. This could be sub-optimal since it is difficult for the same model to handle all the variations during testing. In this paper, we propose a test-time adaptation approach for PCR. Our model can adapt to unseen distributions at test-time without requiring any prior knowledge of the test data. Concretely, we design three self-supervised auxiliary tasks that are optimized jointly with the primary PCR task. Given a test instance, we adapt our model using these auxiliary tasks and the updated model is used to perform the inference. During training, our model is trained using a meta-auxiliary learning approach, such that the adapted model via auxiliary tasks improves the accuracy of the primary task. Experimental results demonstrate the effectiveness of our approach in improving generalization of point cloud registration and outperforming other state-of-the-art approaches. ## 1 Introduction Given a pair of overlapping 3D point clouds, the goal of point cloud registration (PCR) is to estimate the 3D transformation that aligns these point clouds. PCR plays a vital role in a variety of applications, such as autonomous driving [7], augmented reality [5], and robotics [30]. Standard PCR approaches start with extracting the pointwise features. These features are matched to establish the correspondences between points in the pair. Then an outlier rejection module is used to remove outliers since the correspondences obtained by feature matching are not completely reliable. Finally, the correspondences are used to estimate the optimal 3D rotation and translation to align the two point cloud fragments. Traditional approaches establish correspondences by matching hand-crafted features and leveraging robust iterative sampling strategies such as RANSAC [17] for model estimation. However, these approaches take a long time to converge and their accuracy drops in the presence of high outliers. To address the limitation of traditional approaches, most recent PCR methods use learning-based approaches instead of handcrafted features. These approaches first learn a model on a labeled dataset. Then the model is fixed when evaluating on unseen test data. The single set of model parameters may not be optimal for different test environments captured with different 3D scanners due to the domain shift. Our work is inspired by the success of test-time adaption (TTA) [47] in image classification, where an auxiliary task is used to update the model parameters at inference time to learn feature representation specifically for a test instance. In this paper, we introduce a TTA approach for point cloud registration that adapts to new distributions at test-time. Unlike existing 3D point cloud domain adaptation approaches [27, 54, 40, 28, 51], our method does not require any prior knowledge about the test data distributions. We adapt the model parameters in an instance-specific manner during inference and obtain a different set of network parameters for each different instance. 
This allows our model to better capture the uniqueness of each test instance and thus generalize better to unseen data. More importantly, our TTA formulation is a _generic_ framework that can be applied in a plug-and-play manner to boost standard PCR pipelines. Auxiliary learning has been shown to be effective to improve a predefined primary task and is used in multiple 2D computer vision tasks [35, 36, 47, 9]. Recently, there have been some attempts at using auxiliary tasks for improving the representation learning of 3D point clouds [1, 24, 44, 15]. However, using auxiliary tasks for test-time adaptation is still largely unexplored for 3D point cloud data. In this work, we introduce three self-supervised auxiliary tasks: point cloud reconstruction, feature learning, and correspondence classification. These auxiliary tasks do not require any extra supervision. Some recent work [9] has shown that naively training the primary task and the auxiliary task together may not be optimal, since the model may be biased toward improving the auxiliary task rather than the primary task. Following [9], we use a meta auxiliary learning framework based on the model-agnostic meta learning (MAML) [16]. Each point cloud pair acts as a task in MAML. We update the model for each task using the auxiliary loss provided by the proposed three auxiliary tasks. The updated model is then used to perform the primary task. The model parameters are learned in such a way that the updated model using the auxiliary tasks improve the performance of the primary registration task. Our key contributions are summarized as follows: * We propose a test-time adaptation approach for point cloud registration. To the best of our knowledge, this is the first work to apply test-time adaptation for 3D point cloud registration. * We design three self-supervised auxiliary tasks to effectively extract useful features from test instances and adapt the model to unseen test distribution to improve generalization. A meta auxiliary learning paradigm is used to learn the model parameters, such that adapting the model parameter via the auxiliary tasks during testing improves the performance of the primary task. * We perform extensive experiments on 3Dmatch [56] and KITTI [19] benchmarks to show the effectiveness of our approach in improving point cloud registration performance and achieving superior results. ## 2 Related Work We review two lines of related work in point cloud registration and meta-auxiliary learning. Point Cloud Registration.Most traditional PCR approaches consist of two modules: feature-based correspondence matching and outlier filtering. Feature descriptors have been proposed to effectively extract the local and global features of point clouds, which are used to match correspondences in the feature space. Traditional methods use hand-crafted features such as spatial features histogram [29, 48, 18], or geometric features histogram [6, 43]. Recently, learning-based approaches have been proposed to learn 3D feature descriptors including fully convolution methods [20, 11], keypoint detection methods [3, 52, 33], and coarse-to-fine methods [55, 41]. The correspondences obtained from feature matching often include many outliers that must be filtered out for robust point cloud registration. Many traditional approaches [17, 58, 50, 8] have been proposed for robust outlier filtering of correspondences. RANSAC [17] is the most popular method, where a set of correspondences are iteratively sampled to filter outliers. 
RANSAC variants [45, 13, 4] have been introduced to provide new sampling strategies for fast convergence. However, these methods still suffer from slow convergence and low performance in the presence of many outliers. Other methods use robust cost functions that are more effective with high outlier ratios. FGR [58] uses the Geman-McClure cost function and TEASER [50] uses the truncated least squares cost function for robust point cloud registration. In recent years, deep learning techniques have been employed for outlier filtering. The 3D outlier filtering approaches [38, 25, 26, 12, 32, 2] follow ideas similar to those in 2D image matching [53, 57], where outlier filtering is defined as an inlier classification problem. 3DRegNet [38] uses the 2D correspondence selection network [53] for 3D point clouds and adds a regression module for the rigid transformation. DGR [12] proposes a fully convolutional network to better capture global features of correspondences and predict the inlier confidence of each correspondence. DHVR [32] leverages Hough voting in the 6D transformation parameter space to identify the confidence of correspondences and predict the final transformation. PointDSC [2] uses the spatial consistency between inlier correspondences to better prune the outliers. Auxiliary and Meta Learning. In auxiliary learning, an auxiliary task (often self-supervised) is defined to improve the performance and generalization of a target primary task. This differs from multi-task learning, where the goal is to improve performance across all tasks [35]. Auxiliary learning has been proven to be effective in multiple 2D image domain problems. [47] uses image rotation prediction as a self-supervised auxiliary task to improve image classification. [9] uses image reconstruction as the auxiliary task to improve the primary task of deblurring. Auxiliary learning has also been studied for 3D point clouds. [44] proposes an auxiliary task that reconstructs point clouds whose parts have been randomly displaced. [24] adopts a contrastive auxiliary task [22] for better representations of 3D point clouds. Moreover, several studies have proposed self-supervised tasks [49, 1, 15] to learn domain-invariant and useful representations of point clouds from unlabelled point cloud data. [1] presents a self-supervised task of reconstructing deformations to learn the underlying structures of 3D objects. [15] introduces a pair of self-supervised tasks, including a scale prediction task and a 3D/2D projection reconstruction task, to facilitate global and local feature learning across different domains. MAML [16] is a widely used meta-learning algorithm, which has been successfully employed for many 2D image domain tasks [39, 46, 35, 34, 9]. MAML learns model parameters that quickly adapt to new tasks with few training samples and few gradient updates. [39, 46] use MAML for the super-resolution problem to facilitate adaptation to unseen images. [35] proposes a meta-auxiliary framework (MAXL), in which an auxiliary label generator is trained to generate optimal labels to improve the generalization of the primary task. [9] uses a self-supervised auxiliary task in a meta-learning framework for image deblurring. [34] proposes to employ meta-auxiliary learning along with test-time adaptation for the problem of future depth prediction in videos.
## 3 Approach

Given a pair of partially overlapping 3D point clouds \(X\in R^{M\times 3}\) with \(M\) points and \(Y\in R^{N\times 3}\) with \(N\) points, our goal is to find an optimal 3D transformation \(T\) between the two point clouds that accurately aligns them. Concretely, we learn a model \(F_{\theta}(X,Y)\to T\), parameterized by \(\theta\), that maps \((X,Y)\) to \(T\). In this work, we propose a framework for point cloud registration that adapts the trained model parameters to each different input at test time, so that our model improves the generalization and performance of point cloud registration. The adaptation is achieved via self-supervised auxiliary tasks.

### Auxiliary Tasks

We propose three different auxiliary tasks in our work. All these auxiliary tasks are self-supervised and do not require extra labels, so they can be used during test-time for adaptation.

**Point Cloud Reconstruction.** Inspired by the success of using image reconstruction as the auxiliary task [9, 36], we propose to use 3D point cloud reconstruction as one of our self-supervised auxiliary tasks. Given a point cloud \(P\), the features of the point cloud are extracted using the feature encoder. Then, a decoder is used to reconstruct the point cloud \(P^{\prime}\). Adapting the model parameters at test time using the reconstruction auxiliary loss enables the model to take advantage of the internal features of the test instance before performing the primary task. The reconstruction loss does not require any supervision, which makes it suitable for test-time adaptation. We use the \(L_{1}\) reconstruction loss:
\[\ell_{rec}=||P-P^{\prime}||_{1}. \tag{1}\]

**Self-Supervised Feature Learning.** Self-supervised learning (SSL) is an active area of research. The main idea of SSL is to define some proxy self-supervised tasks to learn feature representations from data without manual annotations. We can use any existing SSL task as one of our auxiliary tasks. In our work, we adapt BYOL [22] as our self-supervised task. Unlike contrastive learning, BYOL does not require negative samples, which makes it suitable for test-time adaptation. The model architecture of BYOL consists of two networks, namely the online network and the target network. Each network predicts a representation of an augmented view of the same point cloud. The idea is to train the online network to predict representations similar to the target network's predictions, so that the representations of the two augmented views are closely similar. The online network parameterized by \(\theta\) consists of a feature encoder \(f_{\theta}\), a feature projector \(g_{\theta}\) and a predictor \(q_{\theta}\). Similarly, the target network parameterized by \(\xi\) has a feature encoder \(f_{\xi}\) and a feature projector \(g_{\xi}\). The online network \(\theta\) is trained based on the regression targets provided by the target network, while the target network \(\xi\) is the exponential moving average of the online parameters \(\theta\):
\[\xi\leftarrow\tau\xi+(1-\tau)\theta, \tag{2}\]
where \(\tau\in[0,1]\) is the target decay rate.
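As a concrete illustration of the target-network update in Eq. 2, a minimal PyTorch-style sketch is given below (module and parameter names are placeholders; this is not the authors' released code):

```python
import torch

@torch.no_grad()
def update_target_network(online: torch.nn.Module, target: torch.nn.Module,
                          tau: float = 0.99) -> None:
    """Exponential moving average of the online parameters (Eq. 2): xi <- tau*xi + (1-tau)*theta."""
    for p_online, p_target in zip(online.parameters(), target.parameters()):
        p_target.mul_(tau).add_((1.0 - tau) * p_online)
```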
Given a 3D point cloud \(P\), we perform augmentation to produce two augmented versions \(P_{v}\) and \(P_{v^{\prime}}\). The point cloud \(P_{v}\) is passed to the online network to obtain the projection \(z_{\theta}=g_{\theta}(P_{v})\) and \(P_{v^{\prime}}\) is passed to the target network to obtain the projection \(z_{\xi}=g_{\xi}(P_{v^{\prime}})\). Then we minimize the mean squared error between the normalized predictions \(q_{\theta}(z_{\theta})\) and target projections \(z_{\xi}\) as follows:
\[L_{\theta,\xi}=2-\frac{2\,q_{\theta}(z_{\theta})^{\top}z_{\xi}}{\|q_{\theta}(z_{\theta})\|_{2}\,\|z_{\xi}\|_{2}}. \tag{3}\]
We define another symmetric loss \(L^{\prime}_{\theta,\xi}\) by similarly passing \(P_{v^{\prime}}\) to the online network and \(P_{v}\) to the target network. The final BYOL loss is defined as:
\[\ell_{byol}=L_{\theta,\xi}+L^{\prime}_{\theta,\xi}. \tag{4}\]

**Correspondence Classification.** We introduce an additional self-supervised auxiliary task designed specifically for PCR. Given a 3D point cloud \(P\), we construct an augmented point cloud \(P^{\prime}\) using a randomly generated 3D transformation \(T\), obtained by sampling a random rotation about each of the three axes within \([0^{\circ},360^{\circ}]\) and a random translation within \([0\,\mathrm{cm},60\,\mathrm{cm}]\). The sampled transformation \(T\) is applied to the point cloud to obtain \(P^{\prime}\). The feature encoder is used to extract the features of \(P\) and \(P^{\prime}\). Then these two sets of points are matched in the feature space using nearest neighbors to obtain the correspondences. Using the same outlier rejection network architecture as the primary task, this auxiliary task is trained to predict whether a correspondence is an inlier or an outlier. Since the transformation \(T\) of the point cloud is known, the ground-truth inlier correspondences \(C\) are available and the auxiliary loss does not require any manual supervision. Similar to [12, 2], the classification loss is defined as the binary cross entropy between the probability \(p^{i}_{(i,j)}\) that a correspondence \(C_{(i,j)}\) is an inlier and the ground-truth inliers \(C\):
\[\ell_{cc}=-\frac{1}{|M|}\Big{(}\sum_{(i,j)\in C}\log p^{i}_{(i,j)}+\sum_{(i,j)\notin C}\log p^{o}_{(i,j)}\Big{)}, \tag{5}\]
where \(p^{o}=1-p^{i}\) and \(M\) denotes the set of putative correspondences.

### Model Architecture

Our model architecture consists of a shared feature encoder and two branches for the primary and auxiliary tasks. The primary branch corresponds to the point cloud registration task. The auxiliary branch corresponds to the three self-supervised auxiliary tasks defined in Section 3.1. We denote the model parameters as \(\theta\) = {\(\theta^{shar}\), \(\theta^{pri}\), \(\theta^{aux}\)}, where \(\theta^{shar}\) corresponds to the shared feature encoder, \(\theta^{pri}\) is the primary branch and \(\theta^{aux}\) is the auxiliary branch. Note that \(\theta^{aux}\) represents the parameters of the three auxiliary tasks.

**Auxiliary Tasks.** The aim of the auxiliary tasks is to transfer rich and useful knowledge to improve the performance of the primary task. The overall auxiliary loss is the weighted sum of the losses for the three auxiliary tasks:
\[L_{aux}=\lambda_{1}\ell_{rec}+\lambda_{2}\ell_{byol}+\lambda_{3}\ell_{cc}. \tag{6}\]
Instead of fixing the values of the balancing weights \(\lambda_{i}\) (\(i=1,2,3\)), we treat them as learnable parameters and learn their values during training. This allows the learning algorithm to automatically choose the right weights that balance the relative importance of each auxiliary task. To train both primary and auxiliary tasks, we first follow the joint training approach in [47]. The loss of the joint training is simply the combination of the primary and auxiliary losses:
\[L_{pri}(\theta^{shar},\theta^{pri};X,Y,T)+L_{aux}(\theta^{shar},\theta^{aux};X,Y). \tag{7}\]
Note that since our auxiliary tasks are self-supervised, the auxiliary loss \(L_{aux}(\cdot)\) does not need the ground-truth transformation \(T\). To simplify the notation, we have assumed one training instance in Eq. 7; it is straightforward to generalize Eq. 7 to the entire training set by summing over all training instances. The model learned from Eq. 7 is then used as the initialization for the meta-auxiliary learning.
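To make the combined objective in Eq. 6 concrete, the following is a minimal PyTorch-style sketch of the three auxiliary losses with learnable balancing weights. All tensor and module names are illustrative assumptions, and exponentiating the weights to keep them positive is a design choice not specified in the text:

```python
import torch
import torch.nn.functional as F

class AuxiliaryLoss(torch.nn.Module):
    """Weighted sum of the three self-supervised auxiliary losses (Eq. 6)."""

    def __init__(self):
        super().__init__()
        # Learnable balancing weights lambda_1..lambda_3, as described in the text.
        self.log_w = torch.nn.Parameter(torch.zeros(3))

    def byol_term(self, q_online: torch.Tensor, z_target: torch.Tensor) -> torch.Tensor:
        # 2 - 2 * <q, z> / (||q|| * ||z||), i.e. Eq. 3 for one view ordering.
        q = F.normalize(q_online, dim=-1)
        z = F.normalize(z_target, dim=-1)
        return (2.0 - 2.0 * (q * z).sum(dim=-1)).mean()

    def forward(self, recon, cloud, q1, z2, q2, z1, inlier_logits, inlier_labels):
        l_rec = F.l1_loss(recon, cloud)                            # Eq. 1
        l_byol = self.byol_term(q1, z2) + self.byol_term(q2, z1)   # Eq. 4
        l_cc = F.binary_cross_entropy_with_logits(                 # Eq. 5
            inlier_logits, inlier_labels.float())
        w = torch.exp(self.log_w)                                  # keep weights positive
        return w[0] * l_rec + w[1] * l_byol + w[2] * l_cc
```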
**Primary Task.** We follow the standard learning-based network architecture for point cloud registration. First, a pair of 3D point clouds are passed to a fully convolutional network to extract the corresponding geometric pointwise features. Then the points are matched using the nearest neighbor in the feature space to obtain correspondences. These correspondences are fed to an outlier rejection network, which predicts the confidence of each correspondence. Finally, given the correspondences with their associated probability weights resulting from the outlier rejection network, the weighted Procrustes approach is used to align the paired 3D scans by estimating the transformation between the two point clouds.

Figure 1: Overview of the proposed meta-auxiliary training framework. Given a pair of input point clouds during training, we first adapt the model by performing a small number of gradient updates using the auxiliary loss calculated via three auxiliary tasks including point cloud reconstruction, BYOL and correspondence classification. Then the adapted model is used to perform the primary registration task and is evaluated using the meta-objective. Finally, we update the model using the primary loss.

We use \(L_{pri}(\theta^{shar},\theta^{pri};X,Y,T)\) to denote the loss function that measures the difference between the ground-truth transformation \(T\) and the prediction \(F_{\theta}(X,Y)\). In this paper, we use Fully Convolutional Geometric Features (FCGF) [11] to extract pointwise features of the 3D point clouds. For the outlier rejection network, we adopt the architecture of three state-of-the-art methods, including DGR [12], DHVR [32] and PointDSC [2]. However, it is important to note that our proposed meta-auxiliary framework is agnostic to these choices and can be applied to any learning-based point cloud registration method.
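For reference, a minimal sketch of the weighted Procrustes step, which maps weighted correspondences to a rigid transform via an SVD of the weighted covariance (a standard Kabsch-style solution under our assumptions, not code from the cited works):

```python
import torch

def weighted_procrustes(src: torch.Tensor, dst: torch.Tensor, w: torch.Tensor):
    """Rigid (R, t) minimizing sum_i w_i * ||R @ src_i + t - dst_i||^2.

    src, dst: (N, 3) corresponding points; w: (N,) non-negative weights.
    """
    w = w / w.sum().clamp(min=1e-8)
    mu_src = (w[:, None] * src).sum(dim=0)                # weighted centroids
    mu_dst = (w[:, None] * dst).sum(dim=0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    cov = (w[:, None] * src_c).T @ dst_c                  # 3x3 weighted covariance
    u, _, vt = torch.linalg.svd(cov)
    s = torch.eye(3, dtype=src.dtype)
    s[2, 2] = torch.sign(torch.det(vt.T @ u.T))           # guard against reflections
    r = vt.T @ s @ u.T
    t = mu_dst - r @ mu_src
    return r, t
```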
### Meta-Auxiliary Learning

Our goal is to combine self-supervised auxiliary tasks along with the point cloud registration task to quickly adapt the model parameters for each test instance without the need for any extra supervision. Although jointly training the primary and auxiliary tasks improves the generalization of our model to unknown test distributions, the parameters updated via the auxiliary loss may be biased during training toward improving the auxiliary tasks rather than the primary task. Following [9], we propose to use a meta-auxiliary learning scheme.

**Training.** We use meta-learning to train the model parameters \(\theta\) to be quickly adaptable to data from different test distributions, such that updating the model parameters at test-time improves the primary point cloud registration task. Consider a batch of paired point clouds \(X_{b},Y_{b}\) and the pre-trained model parameters \(\theta\) obtained by jointly training the primary and auxiliary tasks on the 3DMatch dataset [56] via Eq. 7. We perform adaptation for a small number of gradient updates using the auxiliary loss, in which all model parameters (\(\phi_{b}^{shar},\phi_{b}^{pri},\phi_{b}^{aux}\)) are updated:
\[\phi_{b}\leftarrow\theta-\alpha\nabla_{\theta}L_{aux}(X_{b},Y_{b},\theta), \tag{8}\]
where \(\alpha\) is the adaptation learning rate. Note that since Eq. 8 is based on the auxiliary tasks, the adaptation can be done at test-time, since it does not require the ground-truth transformation. Then, the adapted model (\(\phi_{b}^{shar},\phi_{b}^{pri}\)) is used to perform the primary task and calculate the primary loss. This enforces the adapted model to boost the primary task performance. The primary loss is used to optimize the model parameters \(\theta\) as:
\[\theta\leftarrow\theta-\beta\sum_{b=1}^{B}\nabla_{\theta}L_{pri}(X_{b},Y_{b},T_{b},\phi_{b}^{shar},\phi_{b}^{pri}), \tag{9}\]
where \(\beta\) is the meta-learning rate and \(B\) is the batch size. Note that \(L_{pri}(\cdot)\) in Eq. 9 is defined in terms of the updated model \(\phi_{b}\) for each instance, while the optimization is performed on the model parameters \(\theta\). The training process is summarized in Algorithm 1 and Figure 1.

**Testing.** During test-time, the optimized meta-learned parameters \(\theta\) are adapted to a test instance that consists of a pair of 3D point clouds using the auxiliary loss as follows:
\[\phi\leftarrow\theta-\alpha\nabla_{\theta}L_{aux}. \tag{10}\]
Then, the adapted model (\(\phi^{shar},\phi^{pri}\)) is used to perform point cloud registration.

```
Require: \(X,Y,T\): training pairs with their transformations
Require: \(\alpha,\beta\): learning rates
Ensure: \(\theta\): learned parameters
1: Initialize the network with pre-trained weights \(\theta\)
2: while not done do
3:   Sample a training batch \(\{X_{b},Y_{b},T_{b}\}_{b=1}^{B}\)
4:   for each example do
5:     Evaluate the three auxiliary tasks: \(L_{aux}=\lambda_{1}\ell_{rec}+\lambda_{2}\ell_{byol}+\lambda_{3}\ell_{cc}\)
6:     Compute adapted parameters via gradient descent: \(\phi_{b}\leftarrow\theta-\alpha\nabla_{\theta}L_{aux}(X_{b},Y_{b},\theta)\)
7:     Update the auxiliary branch: \(\theta^{aux}\leftarrow\theta^{aux}-\alpha\nabla_{\theta^{aux}}L_{aux}(X_{b},Y_{b},\theta^{aux})\)
8:   end for
9:   Evaluate the primary task using the adapted parameters and update: \(\theta\leftarrow\theta-\beta\sum_{b=1}^{B}\nabla_{\theta}L_{pri}(X_{b},Y_{b},T_{b},\phi_{b}^{shar},\phi_{b}^{pri})\)
10: end while
11: return \(\theta\)
```
**Algorithm 1** Meta-auxiliary training
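A condensed PyTorch-style sketch of one outer iteration of Algorithm 1 is given below. It assumes loss functions that accept an explicit parameter dictionary (e.g., applied via torch.func.functional_call); all names are placeholders rather than the authors' implementation:

```python
import torch

def meta_train_step(model, batch, aux_loss_fn, pri_loss_fn, alpha, optimizer, n_inner=5):
    """One outer update of Algorithm 1: adapt on L_aux per instance (Eq. 8),
    then evaluate L_pri under the adapted ("fast") weights (Eq. 9)."""
    optimizer.zero_grad()
    meta_loss = torch.zeros(())
    for x, y, t in batch:
        # Inner loop: a few gradient steps on the self-supervised auxiliary loss.
        fast = dict(model.named_parameters())
        for _ in range(n_inner):
            l_aux = aux_loss_fn(model, x, y, params=fast)
            grads = torch.autograd.grad(l_aux, list(fast.values()), create_graph=True)
            fast = {name: p - alpha * g
                    for (name, p), g in zip(fast.items(), grads)}
        # Outer objective: primary registration loss under the adapted parameters.
        meta_loss = meta_loss + pri_loss_fn(model, x, y, t, params=fast)
    meta_loss.backward()  # second-order gradients flow through the inner updates
    optimizer.step()
    return float(meta_loss)
```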
## 4 Experiments

We first evaluate our method on a 3D indoor dataset for the pairwise registration task. Then we analyze the generalization of our proposed model to unseen 3D outdoor datasets. Additionally, we integrate our method into a multi-way registration pipeline and evaluate its performance on generating final 3D reconstruction scenes. Finally, we perform extensive ablation studies to inspect each component of our approach. Additional results and ablation studies are provided in the supplementary document.

Figure 2: Overview of the proposed meta-auxiliary testing procedure. We use the auxiliary branch to fine-tune the model for each test instance using the auxiliary loss and the adapted model is used to register the input point clouds.

### Experimental Setup

Dataset. We use the 3DMatch benchmark [56] for indoor pairwise registration. It consists of point cloud pairs with corresponding ground-truth transformations from real-world indoor scenes scanned by commodity RGB-D sensors. We follow the standard splitting strategy and evaluation protocol in 3DMatch [56], where the test data contain 1623 partially overlapping 3D point cloud scans from 8 different indoor scenes. For the outdoor dataset, we use the KITTI odometry benchmark [19], which consists of 3D outdoor scenes scanned using a Velodyne laser. We follow the train/test split in [11] to create pairwise splits, since the official benchmark does not have labels for pairwise registration. We perform voxel downsampling to generate point clouds with uniform density and set the voxel size to 5cm for the indoor dataset and 30cm for the outdoor dataset. For the multi-way registration experiment, we use the simulated Augmented ICL-NUIM dataset [10], which contains augmented indoor reconstruction scenes from RGB-D videos.

Evaluation Metrics. Following [12, 2], we report Registration Recall (RR), Rotation Error (RE) and Translation Error (TE). RE and TE are defined as:
\[RE=\arccos\frac{\mathrm{Tr}(R^{T}R^{*})-1}{2},\qquad TE=\|t-t^{*}\|_{2}, \tag{11}\]
where \(R^{*}\) and \(t^{*}\) are the ground-truth rotation and translation, respectively. Registration Recall (RR) is the ratio of successful pairwise registrations, i.e., those whose rotation error and translation error are both below predefined thresholds. These thresholds are set to (\(RE=15^{\circ}\), \(TE=30\) cm) for indoor scenes and (\(RE=5^{\circ}\), \(TE=60\) cm) for outdoor scenes.

Implementation Details. We implement our framework in PyTorch and use the official implementations of DGR [12], DHVR [32], and PointDSC [2] as the backbones of our approach. We first jointly train the primary and auxiliary tasks by optimizing the loss in Eq. 7 using the ADAM optimizer with an initial learning rate of \(10^{-4}\) and an exponential decay factor of \(0.99\). For meta-training, the learning rates \(\alpha\) and \(\beta\) are set to \(2.5\times 10^{-5}\). We perform 5 gradient updates during training and testing to adapt the model parameters using the auxiliary loss in Eq. 6. All experiments are conducted on an NVIDIA TitanX GPU.
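To make the evaluation protocol concrete, a small sketch of the metrics in Eq. 11 and the recall criterion, using the indoor thresholds quoted above:

```python
import numpy as np

def rotation_error_deg(r_est: np.ndarray, r_gt: np.ndarray) -> float:
    """RE = arccos((trace(R^T R*) - 1) / 2), in degrees."""
    cos = np.clip((np.trace(r_est.T @ r_gt) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos)))

def translation_error_cm(t_est: np.ndarray, t_gt: np.ndarray) -> float:
    """TE = ||t - t*||_2, assuming translations are given in centimeters."""
    return float(np.linalg.norm(t_est - t_gt))

def registration_success(r_est, t_est, r_gt, t_gt,
                         re_thresh=15.0, te_thresh=30.0) -> bool:
    """A pair counts toward Registration Recall if both errors are below the thresholds."""
    return (rotation_error_deg(r_est, r_gt) < re_thresh
            and translation_error_cm(t_est, t_gt) < te_thresh)
```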
Our meta-auxiliary framework enables the model to better capture the internal features of each test instance, leading to performance improvement. Figure 4 shows the 3DMatch registration results per scene. As an ablation study, we remove the meta-learning paradigm and simply optimize the joint loss in Eq. 7 during training1. By doing so, we obtain better results than the backbone DGR [12], which demonstrates that our proposed auxiliary tasks can complement the primary PCR task and thus improve accuracy. Moreover, our final meta-auxiliary framework achieves the best performance. Figure 5 shows the robustness of our approach under different rotation and translation error thresholds.

Footnote 1: At test-time, we still perform Eq. 10 for TTA.

\begin{table} \begin{tabular}{l|c c c} \hline \hline & Recall \(\uparrow\) & RE (deg) \(\downarrow\) & TE (cm) \(\downarrow\) \\ \hline \hline FGR [58] & 78.56 & 2.82 & 8.36 \\ TEASER [50] & 85.77 & 2.73 & 8.66 \\ GC-RANSAC [4] & 92.05 & 2.33 & 7.11 \\ RANSAC-1M [17] & 88.42 & 3.05 & 9.42 \\ RANSAC-2M [17] & 90.88 & 2.71 & 8.31 \\ RANSAC-4M [17] & 91.44 & 2.69 & 8.38 \\ CG-SAC [42] & 87.52 & 2.42 & 7.66 \\ LEAD [37] & 67.15 & 3.39 & 12.01 \\ Ppf-foldnet [14] & 71.82 & 3.45 & 9.16 \\ 3DRegNet [38] & 77.76 & 2.74 & 8.13 \\ \hline \hline DGR [12] & 91.30 & 2.40 & 7.48 \\ **Ours + DGR** & **92.45** & **1.71** & **6.39** \\ \hline DHVR [32] & 91.40 & 2.08 & 6.61 \\ **Ours + DHVR** & **92.28** & **1.75** & **6.42** \\ \hline PointDSC [2] & 92.85 & 2.08 & 6.51 \\ PointDSC-reported & 93.28 & 2.06 & 6.55 \\ **Ours + PointDSC** & **93.47** & **1.70** & **6.21** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison with other state-of-the-art methods on the 3DMatch dataset [56]. \(\uparrow\) (or \(\downarrow\)) indicates that a higher (or lower) number means better performance.

Figure 4: Registration results for different scenes on the 3DMatch dataset in terms of RE and TE. “w/o meta” refers to a variant of our method that simply optimizes Eq. 7 during training without adopting the meta-auxiliary training paradigm.

### Ablation and Additional Results

We perform additional experiments and ablation studies to further analyze our proposed method.

**Outdoor Registration Generalization.** Although our method achieves the best performance over all the traditional and learning-based methods, the main advantage of our method is its ability to generalize to unseen test distributions. In order to evaluate the generalization of our method, we perform a cross-dataset experiment on both the 3DMatch [56] and KITTI [19] datasets, where the model trained on 3DMatch [56] is used to test on KITTI [19] and vice versa.

\begin{table} \begin{tabular}{l|c c c} \hline \hline & Recall \(\uparrow\) & RE (deg) \(\downarrow\) & TE (cm) \(\downarrow\) \\ \hline DGR & 95.24 & 0.44 & 23.25 \\ **Ours + DGR** & **97.36** & **0.34** & **21.16** \\ \hline DHVR & 95.82 & 0.39 & 22.17 \\ **Ours + DHVR** & **98.01** & **0.32** & **21.18** \\ \hline PointDSC & 97.15 & 0.36 & 21.74 \\ **Ours + PointDSC** & **98.23** & **0.33** & **20.86** \\ \hline \hline \end{tabular} \end{table} Table 2: Results of cross-dataset generalization. Here we train the model on 3DMatch [56] and evaluate on KITTI [19]. The results of training on KITTI and evaluating on 3DMatch can be found in the supplementary document.

As shown in Table 2, our method shows a significant improvement on all evaluation metrics when
training on the 3DMatch dataset [56] and evaluating on the KITTI dataset [19], demonstrating the effectiveness of the proposed framework. To further inspect the generalization of our method, we test our framework on another widely-used outdoor dataset, namely the ETH dataset [21]. In Table 3, we report the registration results of our approach when evaluating on the ETH dataset [21]. We observe that our method improves the performance of the baselines across all scenes by a good margin. In particular, the average recall of DGR [12] is significantly improved, by about 5%, which sheds light on the generalization capability of our approach.

\begin{table} \begin{tabular}{l|c c c c|c} \hline \hline & \multicolumn{2}{c}{**Gazebo**} & \multicolumn{2}{c}{**Wood**} & \multirow{2}{*}{**AVG**} \\ & Summer & Winter & Summer & Winter & \\ \hline DGR & 92.63 & 83.28 & 79.86 & 71.34 & 81.78 \\ **Ours + DGR** & **95.28** & **87.95** & **81.73** & **82.45** & **86.85** \\ \hline DHVR & 93.78 & 82.74 & 81.07 & 75.32 & 83.22 \\ **Ours + DHVR** & **95.96** & **87.13** & **82.25** & **80.79** & **86.53** \\ \hline PointDSC & 94.21 & 89.68 & 83.14 & 78.42 & 86.36 \\ **Ours + PointDSC** & **94.89** & **91.49** & **87.02** & **85.22** & **89.65** \\ \hline \hline \end{tabular} \end{table} Table 3: Registration results on the ETH dataset [21]. We report the registration recall per scene as well as the average recall across all scenes.

**Robustness to Low-Overlapping Point Clouds.** To further validate the robustness of our method, we evaluate it on a dataset with a low overlap ratio between input point clouds, namely 3DLoMatch [23]. This dataset is constructed from the 3DMatch benchmark [56] and has a low overlap ratio (10%-30%) between 3D point cloud fragments. Figure 6 compares the inlier ratio between 3DLoMatch [23] and 3DMatch [56], which shows that 3DLoMatch [23] is more challenging due to the lower inlier ratio. We use the model trained on 3DMatch [56] for evaluation and report our results in Table 4. Our approach outperforms all other methods, demonstrating its robustness to low-overlap scenarios. More importantly, this validates the robustness of our approach to the percentage of template and target overlap, where 3DMatch [56] contains overlap ratios (\(\geq\)30%) and 3DLoMatch [23] contains low overlap ratios (10%-30%). The evaluation results on both datasets show the superiority of our approach among the baselines under different overlap ratios.

\begin{table} \begin{tabular}{l|c c c} \hline \hline & Recall \(\uparrow\) & RE (deg) \(\downarrow\) & TE (cm) \(\downarrow\) \\ \hline DGR & 43.80 & 4.17 & 10.82 \\ **Ours + DGR** & **50.73** & **4.06** & **10.52** \\ \hline DHVR & 54.46 & 4.13 & 10.54 \\ **Ours + DHVR** & **57.32** & **3.83** & **10.26** \\ \hline PointDSC & 56.10 & 3.87 & 10.39 \\ **Ours + PointDSC** & **57.81** & **3.79** & **10.15** \\ \hline \hline \end{tabular} \end{table} Table 4: Robustness to low-overlapping point clouds on the 3DLoMatch dataset [23], which has a low overlap ratio between 3D point cloud fragments. We train on 3DMatch and evaluate on 3DLoMatch.

Figure 6: Comparison between the distribution of the inlier ratio of correspondences obtained by feature matching on the 3DMatch [56] and 3DLoMatch [23] benchmarks. The registration task is more challenging with a lower inlier ratio.

Figure 5: Comparison of the registration recall on the 3DMatch dataset between our approach and other state-of-the-art methods by varying the translation error and the rotation error thresholds. Our framework with PointDSC as backbone outperforms all other methods for all thresholds.

**Multiway Registration for 3D Reconstruction.** Point cloud registration is a critical step for various 3D applications. In this section, we present the effect of pairwise registration performance on obtaining more accurate and robust 3D reconstruction scenes. Following [12, 2], we integrate our method into a 3D reconstruction pipeline [10]. Given RGB-D scans, 3D fragments are generated from the scene. Next, we perform pairwise registration using our
method to align all fragments. Finally, multi-way registration [10] is used to optimize the fragment poses using pose graph optimization [31]. We use the model trained on 3DMatch to further demonstrate the generalization of our method and evaluate our approach on the Augmented ICL-NUIM dataset using the Absolute Trajectory Error (ATE). As shown in Table 5, our method achieves the lowest error compared to all other methods.

\begin{table} \begin{tabular}{l|c c c c c} \hline \hline & Living1 & Living2 & Office1 & Office2 & AVG \\ \hline FGR & 78.97 & 24.91 & 14.96 & 21.05 & 34.98 \\ RANSAC & 110.9 & 19.33 & 14.42 & 17.31 & 40.49 \\ \hline DGR & 21.06 & 21.88 & 15.76 & 11.56 & 17.57 \\ **Ours + DGR** & **18.32** & **16.12** & **12.24** & **10.44** & **14.28** \\ \hline DHVR & 22.91 & 16.37 & 12.58 & 10.90 & 15.69 \\ **Ours + DHVR** & **18.46** & **13.59** & **12.43** & **9.56** & **13.51** \\ \hline PointDSC & 20.25 & 15.58 & 13.56 & 11.30 & 15.18 \\ **Ours + PointDSC** & **15.73** & **12.07** & **12.15** & **9.78** & **12.43** \\ \hline \hline \end{tabular} \end{table} Table 5: Multiway registration results on the Augmented ICL-NUIM dataset evaluated by ATE (cm), where lower is better.

**Methodology Components.** To study the effectiveness of the proposed framework, we conduct ablation experiments on the 3DMatch dataset [56] and evaluate the effect of each component of the proposed framework. We consider DGR [12] as the backbone in our experiments and report the results after applying each component to DGR [12]. Specifically, we compare the results between our method's three major components: Auxiliary Learning, Meta Learning, and Test-time Adaptation. We first investigate the effect of our proposed auxiliary learning method by jointly training the primary and three auxiliary tasks, optimizing the loss in Eq. 7. This shows the strength of the proposed auxiliary tasks acting as a regularizer during training. Then, we study the impact of combining test-time adaptation with auxiliary learning, in which the auxiliary tasks are used to update the model parameters at test-time by optimizing the auxiliary loss in Eq. 6. Furthermore, we show the effect of the proposed meta-auxiliary learning paradigm elaborated in Algorithm 1 in learning optimal model parameters; in this variant, the model parameters are kept fixed at test-time. Finally, we report the results of our final framework integrating Auxiliary Learning, Meta Learning, and Test-time Adaptation.
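For illustration, the test-time adaptation step (Eq. 10) shared or omitted by the ablated variants above can be sketched as follows. This is a minimal sketch under stated assumptions: `aux_loss_fn` stands in for the combined auxiliary loss of Eq. 6, and the learning rate and the 5 gradient steps follow the implementation details given earlier.

```python
import copy
import torch

def test_time_adapt(model, aux_loss_fn, point_cloud_pair,
                    alpha=2.5e-5, steps=5):
    """Adapt a copy of the meta-learned model to one test instance (Eq. 10).

    Only the self-supervised auxiliary loss is descended, so no
    ground-truth transformation is required at test time.
    """
    adapted = copy.deepcopy(model)  # keep the meta-learned weights intact
    optimizer = torch.optim.SGD(adapted.parameters(), lr=alpha)
    for _ in range(steps):
        loss = aux_loss_fn(adapted, point_cloud_pair)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return adapted  # (phi_shar, phi_pri) are then used for registration
```

At meta-training time, Eq. 9 additionally backpropagates the primary loss through this inner update with respect to \(\theta\), which requires higher-order gradients; we omit that machinery here.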
As reported in Table 6, auxiliary learning improves the registration results across all evaluation metrics compared to DGR [12]. This demonstrates that the proposed auxiliary tasks can complement the registration task, leading to performance improvement. Combining auxiliary learning with TTA further improves the registration recall by 0.44%, and decreases the TE and RE by 0.52cm and 0.37 deg, respectively. As TTA allows the auxiliary tasks to transfer useful features of each test instance to the primary registration task, it enhances the registration performance. Moreover, the proposed meta-auxiliary training method greatly boosts the performance, which demonstrates the effectiveness of training the tasks with a meta-learning objective that enforces the auxiliary tasks to improve the primary-task performance. Finally, our full framework further boosts the registration performance by fine-tuning the model parameters at test-time.

**Analysis of Auxiliary Tasks.** We conduct an additional ablation study on the 3DMatch dataset [56] to investigate the importance of each auxiliary task in our approach in improving the registration performance. As shown in Table 7, the auxiliary reconstruction task significantly boosts the registration recall of DGR [12] by 0.92%. Also, the Translation Error (TE) and Rotation Error (RE) drop greatly, by 13% and 30%, respectively. These evaluation metrics are further improved when combining the auxiliary correspondence classification task with the reconstruction task. This demonstrates the impact of multiple auxiliary tasks in transferring additional features to the primary task and enhancing registration results. Finally, our full set of three auxiliary tasks achieves a higher registration recall of 92.45% and a lower Translation Error (TE) of 6.39cm. However, the Rotation Error (RE) is slightly worse when compared to the two-task result.

## 5 Conclusion

We have introduced a novel test-time adaptation framework for point cloud registration using multitask meta-auxiliary learning. Previous work usually follows a supervised learning approach to train a model on a labeled dataset and fixes the model during evaluation on unseen test data. In contrast, our framework is designed to effectively adapt the model parameters at test time for each test instance to boost the performance. We have introduced three self-supervised auxiliary tasks to improve the primary registration task. Furthermore, we have used a meta-auxiliary learning paradigm to train the primary and auxiliary tasks, so that the model adapted using the auxiliary tasks improves the performance of the primary task. Extensive experiments show the effectiveness of the proposed approach in improving the registration performance and outperforming state-of-the-art methods.

\begin{table} \begin{tabular}{l|l l l} \hline \hline & Recall \(\uparrow\) & RE (deg) \(\downarrow\) & TE (cm) \(\downarrow\) \\ \hline DGR & 91.31 & 2.40 & 7.48 \\ DGR + Aux. & 91.42 & 2.25 & 7.06 \\ DGR + TTA (w/o meta) & 91.86 & 1.88 & 6.54 \\ DGR + Meta-Aux. (w/o TTA) & 92.28 & 1.71 & 6.40 \\ **DGR + full framework** & **92.45** & **1.71** & **6.39** \\ \hline \hline \end{tabular} \end{table} Table 6: Ablation studies on framework components: Auxiliary Learning, Meta Learning, and Test-time Adaptation.
\begin{table} \begin{tabular}{l|l l l} \hline \hline & Recall \(\uparrow\) & RE (deg) \(\downarrow\) & TE (cm) \(\downarrow\) \\ \hline DGR [12] & 91.31 & 2.43 & 7.34 \\ DGR + rec & 92.24 & 1.71 & 6.42 \\ DGR + (rec, cc) & 92.38 & **1.69** & 6.40 \\ **DGR + (rec, cc, byol)** & **92.45** & 1.71 & **6.39** \\ \hline \hline \end{tabular} \end{table} Table 7: Ablation studies on the three auxiliary tasks: Point Cloud Reconstruction (rec), Correspondence Classification (cc), and Feature Learning (byol).
2304.00136
Stabilisation, scanning and handle cancellation
In this note we describe a family of arguments that link the homotopy-type of a) the diffeomorphism group of the disc $D^n$, b) the space of co-dimension one embedded spheres in a sphere and c) the homotopy-type of the space of co-dimension two trivial knots in a sphere. We also describe some natural extensions to these arguments. We begin with Cerf's `upgraded' proof of Smale's theorem, that the diffeomorphism group of the 2-sphere has the homotopy-type of the isometry group. This entails a canceling-handle construction, related to the `scanning' maps of Budney-Gabai. We further give a Bott-style variation on Cerf's construction, and a related Embedding Calculus framework for these constructions. We use these arguments to prove that the monoid of Schoenflies spheres is a group with respect to the connect-sum operation. This last result is perhaps only interesting when in dimension four, as in other dimensions it follows from the resolution of the various generalized Schoenflies problems.
Ryan Budney
2023-03-31T21:22:43Z
http://arxiv.org/abs/2304.00136v3
# Stabilisation, scanning and handle cancellation

###### Abstract

In this note we describe a family of arguments that allow us to deduce some unrelated-looking but basic results. The first is Cerf's 'upgraded' proof of Smale's theorem, that the diffeomorphism group of \(S^{2}\) has the homotopy-type of the isometry group. This entails a cancelling-handle construction, related to the 'scanning' maps \(\operatorname{Emb}(D^{n-1},S^{1}\times D^{n-1})\to\Omega^{j}\operatorname{Emb}(D^{n-1-j},S^{1}\times D^{n-1})\) of [4]. These proofs can frequently be used to describe homotopy-fibers of such maps. We further give a Bott-style variation on Cerf's construction, and a related Embedding Calculus framework for these constructions. We also use these arguments to prove that the monoid of Schönflies spheres \(\pi_{0}\operatorname{Emb}(S^{n-1},S^{n})\) is a group with respect to the connect-sum operation, for all \(n\geq 2\). This last result is perhaps only interesting when \(n=4\), as when \(n\neq 4\) it follows from the resolution of the various generalized Schönflies problems.

Keywords: embeddings, diffeomorphisms. MSC: 57M99; 57R52, 57R50, 57N50.

## 1 Introduction

In Cerf's landmark paper [8], somewhat overlooked is a novel proof of Smale's theorem, that \(\operatorname{Diff}(S^{2})\) has the homotopy-type of its linear subgroup \(O_{3}\). Cerf argued that the Smale-Hirsch map \(\operatorname{Diff}(D^{2})\to\Omega^{2}GL_{2}^{+}(\mathbb{R})\), i.e. the map obtained by sending a diffeomorphism to its pointwise derivative, is a homotopy-retract. Technically, the map is the inclusion portion of a retraction, up to a homotopy-equivalence of the domain and co-domain. In Cerf's language, he states that the homotopy-groups of \(\operatorname{Diff}(D^{2})\) inject into the homotopy-groups of \(\Omega^{2}GL_{2}(\mathbb{R})\simeq\Omega^{2}O_{2}\simeq\{*\}\). Given that a retract of a contractible space is contractible, this completes the proof of Smale's theorem. Moreover, if one follows Cerf's argument and writes down the explicit deformation-retraction of \(\operatorname{Diff}(D^{2})\), one recovers Smale's deformation-retraction. The interesting feature of Cerf's argument is that his tools deduce consequences in arbitrary dimensions, whereas Smale's proof uses the Poincare-Bendixson Theorem, which is not available in higher dimensions. It is the purpose of this paper to describe the nature of Cerf's argument, and how one can apply it to a range of interesting problems. In this paper, if \(M\) is a compact manifold, \(\operatorname{Diff}(M)\) denotes the group of all diffeomorphisms of \(M\) that restrict to the identity on the boundary. Another way to look at this paper is that it is both an addendum to [3], and a paper that highlights methods from [4] and [8] that deserve to be singled out. Both [4] and [8] are long papers with many results, so it is easy to overlook this technique. We hope a shorter-format paper devoted to one tool does the ideas the justice they deserve. In [3], an attempt was made to describe the most basic relations between the homotopy-type of diffeomorphism groups and embedding spaces for the smallest manifolds, such as spheres and discs. These techniques were known to the author at the time, but - perhaps indicative of the techniques - the only consequences the author could derive were already known by other methods. So they were removed from the paper before publication.
The core of Cerf's construction is a locally-trivial fibre bundle whose fibre and base space are spaces of embeddings, but the embeddings in the base space have higher co-dimension than the fibre. The construction is somewhat analogous to pseudo-isotopy fibrations (also called concordance embedding spaces), in that the total spaces of such fibrations are relatively highly connected, and such maps consist of restricting embeddings to part of their boundaries. With Cerf's construction the total space is contractible. The connectivity of pseudo-isotopy embedding spaces is a significant theorem of Goodwillie's [9]. Pseudo-isotopy fibrations, in contrast, keep a constant co-dimension for the embedding spaces used in the base, total space, and fibre. The Cerf construction appears as Propositions 5 and 6 (pgs 128-129 of [8]). It is a fibration where the total space is a space of embeddings of 'half-discs', or perhaps each such embedding should be thought of as a 'cancelling handle pair'. Cerf uses the argument to compare the homotopy-types of the three spaces \(\operatorname{Diff}(D^{n})\), \(\Omega\operatorname{Emb}(D^{n-1},D^{n})\) and \(\Omega^{2}\operatorname{Emb}(D^{n-2},D^{n})\) respectively. The space \(\operatorname{Emb}(D^{j},D^{n})\) denotes the space of embeddings of \(D^{j}\) in \(D^{n}\) such that the boundary is sent to the boundary in a prescribed linear manner. It is important to note that we consider spaces like \(\operatorname{Emb}(D^{j},D^{n})\) to be based spaces, where the base-point is the linear inclusion, i.e. \(D^{j}\equiv D^{j}\times\{0\}\subset D^{n}\). Similarly, \(\operatorname{Diff}(D^{n})\) denotes the diffeomorphisms of \(D^{n}\) that restrict to the identity on the boundary. The relatively little attention Cerf's construction has been given over the years is likely due to the fact that most results one can immediately deduce from the technique could be deduced earlier by other means. For example, the connection between the homotopy-type of the component of the unknot in \(\operatorname{Emb}(S^{1},S^{3})\) (or \(\operatorname{Emb}(D^{1},D^{3})\) respectively) and the homotopy-type of \(\operatorname{Diff}(S^{3})\) (or \(\operatorname{Diff}(D^{3})\)), which is immediate from Cerf's perspective, is historically derived using Hatcher's work on spaces of incompressible surfaces [13] (see the final pages). Also, Cerf's fibration is to some extent 'hidden' in the more commonly used fibration \(\operatorname{Diff}(S^{n-j}\times D^{j})\to\operatorname{Diff}(D^{n})\to \operatorname{Emb}^{/r}(D^{j-1},D^{n})\). Since \(\operatorname{Diff}(D^{n})\) is generally not contractible, this fibration can naively appear more difficult to analyse. The purpose of this short note is to help explain the utility of Cerf's argument, via a mild generalization and some examples. Cerf's argument was re-discovered in [4] when investigating the homotopy-type of the space of 'compressing discs' \(\operatorname{Emb}(D^{n-1},S^{1}\times D^{n-1})\), i.e. embedded non-separating discs in \(S^{1}\times D^{n-1}\) that are required to be a fixed linear embedding on the boundary. An interesting observation in [4] is that, although the 'stacking' operation appears to be just a monoidal structure on the space \(\operatorname{Emb}(D^{n-1},S^{1}\times D^{n-1})\), Cerf's argument shows that the space is group-like, i.e. the induced monoidal structure on \(\pi_{0}\operatorname{Emb}(D^{n-1},S^{1}\times D^{n-1})\) is that of a group, for all \(n\geq 2\).
One consequence of this is an argument that the monoid of Schönflies spheres \(\pi_{0}\operatorname{Emb}(S^{n-1},S^{n})\) is a group under the relative connect-sum operation. We should emphasize that this result is implicit in [4]; it appears here to be made explicit. It has long been known in all dimensions \(n\neq 4\) that \(\pi_{0}\operatorname{Emb}(S^{n-1},S^{n})\) is a group with the connect-sum operation, due to the positive resolution of the Schönflies Problem in those dimensions. When the generalized Schönflies problem has a positive resolution, one has an isomorphism \(\pi_{0}\operatorname{Emb}(S^{n-1},S^{n})\simeq\pi_{0}\operatorname{Diff}(D^{n-1})\). The triviality of \(\pi_{0}\operatorname{Emb}(S^{n-1},S^{n})\) when \(n=4\) is equivalent to the Schönflies Problem, as \(\operatorname{Diff}(D^{3})\) is contractible [13]. We outline this argument in Section 4. An analogous technique was proposed for studying the homotopy-type of \(\operatorname{Diff}(S^{1}\times D^{n-1})\), by considering the chain of maps \[\operatorname{Diff}(S^{1}\times D^{n-1})\to\operatorname{Emb}(D^{n-1},S^{1}\times D^{n-1})\to\Omega\operatorname{Emb}(D^{n-2},S^{1}\times D^{n-1})\to\cdots\to\Omega^{n-2}\operatorname{Emb}(D^{1},S^{1}\times D^{n-1})\] in [4], [5]. Interestingly, an infinitely-generated subgroup of \(\pi_{n-4}\operatorname{Diff}(S^{1}\times D^{n-1})\) survives to the end of the sequence \[\pi_{n-4}\Omega^{n-2}\operatorname{Emb}(D^{1},S^{1}\times D^{n-1})\equiv\pi_{2n-6}\operatorname{Emb}(D^{1},S^{1}\times D^{n-1}),\] for all \(n\geq 4\). At present little is known about Cerf's scanning maps \(\operatorname{Diff}(D^{n})\to\Omega^{j}\operatorname{Emb}(D^{n-j},D^{n})\) when \(j\geq 3\), but these results suggest such maps have the potential to be homotopically non-trivial, and could be used to deduce results even about \(\pi_{0}\operatorname{Diff}(D^{n})\) for \(n\geq 4\). The transitional map \[\operatorname{Emb}(D^{n-2},D^{n})\to\Omega\operatorname{Emb}(D^{n-3},D^{n})\] is perhaps of greatest interest, as the target space can be studied with techniques such as the Embedding Calculus. Unfortunately little is known about \(\pi_{1}\operatorname{Emb}(D^{2},D^{4})\) at present. An impetus for studying such scanning maps is that these embedding spaces are highly structured objects. For example, \(\operatorname{Diff}(D^{n})\) is homotopy-equivalent to the space \(\operatorname{EC}(n,*)\) studied in [3]. These spaces admit actions of the operad of \((n+1)\)-cubes, making \(\operatorname{EC}(n,*)\) into an \((n+1)\)-fold loop space. Similarly, the spaces \(\operatorname{EC}(n-j,D^{j})\) fiber over \(\operatorname{Emb}(D^{n-j},D^{n})\) with fiber \(\Omega^{n-j}SO_{n-j}\) - indeed, these spaces can be thought of as the space of embeddings \(D^{n-j}\to D^{n}\) equipped with trivializations of their normal bundles. The spaces \(\operatorname{EC}(n-j,D^{j})\) admit actions of the operad of little \((n-j+1)\)-cubes, making the scanning maps \(\operatorname{EC}(n,*)\to\Omega\operatorname{EC}(n-1,D^{1})\to\cdots\to\Omega^{j}\operatorname{EC}(n-j,D^{j})\) into maps of \((n+1)\)-fold loop spaces. Indeed, this sequence extends one step further to the Smale-Hirsch map \(\operatorname{Diff}(D^{n})\to\cdots\to\Omega^{n-1}\mathrm{EC}(1,D^{n-1})\to\Omega^{n}GL_{n}(\mathbb{R})\), i.e. the space of relative bundle endomorphisms of \(TD^{n}\), which is also an \((n+1)\)-fold loop space. It remains an open question whether the Smale-Hirsch map for \(\operatorname{Diff}(D^{n})\) is non-trivial for \(n\geq 4\).
Generally speaking, iterated loop spaces are highly structured objects, and finding maps between them is somewhat analogous to finding a homomorphism between rings: if the map is not zero, it is often highly non-trivial. In this paper we outline what is known about such scanning maps, and where some potentially interesting future computations sit. The author would like to thank David Gabai, Robin Koytcheff, Victor Turchin, Hyam Rubinstein and Alexander Kupers for helpful comments. In particular, this paper is largely exposition of a subset of results from a joint paper with David Gabai [4].

## 2 Cancelling handles

The space \(\operatorname{Emb}(D^{j},N)\) denotes the space of embeddings of \(D^{j}\) in \(N\) where the boundary of \(D^{j}\) is mapped to \(\partial N\) in some fixed, prescribed manner. In the case of \(\operatorname{Emb}(D^{j},D^{n})\) the embedding is required to restrict to the standard inclusion \(x\longmapsto(x,0)\) on the boundary. Cerf constructs an isomorphism [8] (Proposition 5, pg. 128), for all \(i\geq 0\) and \(n\geq 1\), \[\pi_{i}\mathrm{Diff}(D^{n})\simeq\pi_{i+1}\mathrm{Emb}(D^{n-1},D^{n})\] which we promote to a homotopy-equivalence \(\mathrm{Diff}(D^{n})\simeq\Omega\mathrm{Emb}(D^{n-1},D^{n})\). Equivalently, this can be stated as a description of the classifying space of \(\mathrm{Diff}(D^{n})\), \[B\mathrm{Diff}(D^{n})\simeq\mathrm{Emb}_{u}(D^{n-1},D^{n}).\] The subscript \(u\) indicates the component of the unknot in \(\mathrm{Emb}(D^{n-1},D^{n})\), i.e. the linear embedding. The above results were stated at least as far back as [3], but it would not be surprising if this observation had been written down earlier. The map \(\mathrm{Diff}(D^{n})\to\Omega\mathrm{Emb}(D^{n-1},D^{n})\) has a simple description, thinking of \(\mathrm{Diff}(D^{n})\) as the diffeomorphisms of \(\mathbb{R}^{n}\) with support contained in \(D^{n}\). One then considers \(D^{n}\) as a subset of \(I\times D^{n-1}\). Restriction to the fibers \(\{t\}\times D^{n-1}\) gives the \(1\)-parameter family of embeddings of \(D^{n-1}\) into \(D^{n}\). After suitably translating the embedding family to have fixed boundary conditions, this is an element of \(\Omega\mathrm{Emb}(D^{n-1},D^{n})\). The map back \(\Omega\mathrm{Emb}(D^{n-1},D^{n})\to\mathrm{Diff}(D^{n})\) is defined by an elementary isotopy-extension construction. Following Cerf, let \(\mathrm{HD}^{j}\) denote the \(j\)-dimensional _half-disc_, i.e. \[\mathrm{HD}^{j}=\{(x_{1},\cdots,x_{j})\in\mathbb{R}^{j}:\sum_{i=1}^{j}x_{i}^{2}\leq 1,x_{1}\leq 0\}.\] The boundary \(\partial\mathrm{HD}^{j}\) consists of two subspaces: (1) \(\partial D^{j}\cap\mathrm{HD}^{j}\), and (2) the subspace satisfying \(x_{1}=0\). The latter subspace we call the _flat face_, and the part on \(\partial D^{j}\) we call the _round face_. Let \(\operatorname{Emb}(\operatorname{HD}^{n},D^{n})\) be the space of embeddings of \(\operatorname{HD}^{n}\) into \(D^{n}\) that restrict to the identity map on \(\operatorname{HD}^{n}\cap\partial D^{n}\), i.e. acting as the identity on the round face. This gives us a Serre fibration \[\operatorname{Diff}(\operatorname{HD}^{n})\to\operatorname{Emb}(\operatorname{HD}^{n},D^{n})\to\operatorname{Emb}(D^{n-1},D^{n}).\] The total space is contractible (this is essentially the uniqueness-of-collar-neighbourhoods theorem 'with parameters'), as described by Cerf [8]. This tells us that the connecting map \[\Omega\operatorname{Emb}(D^{n-1},D^{n})\to\operatorname{Diff}(\operatorname{HD}^{n})\] is a homotopy-equivalence.
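To spell out the step from the fibration to the equivalence (a standard argument, recorded here for completeness): the long exact sequence of homotopy groups of the fibration reads \[\cdots\to\pi_{i+1}\operatorname{Emb}(\operatorname{HD}^{n},D^{n})\to\pi_{i+1}\operatorname{Emb}(D^{n-1},D^{n})\overset{\partial}{\to}\pi_{i}\operatorname{Diff}(\operatorname{HD}^{n})\to\pi_{i}\operatorname{Emb}(\operatorname{HD}^{n},D^{n})\to\cdots\] and since the total space is contractible, the outer groups vanish, so the connecting homomorphism \(\partial\) is an isomorphism for all \(i\geq 0\). At the space level this isomorphism is realized by the connecting map \(\Omega\operatorname{Emb}(D^{n-1},D^{n})\to\operatorname{Diff}(\operatorname{HD}^{n})\) above.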
The inclusion \(\operatorname{Diff}(\operatorname{HD}^{n})\to\operatorname{Diff}(D^{n})\) is a homotopy-equivalence via a rounding-the-corners argument. To make the connecting map concrete: the map \(\Omega\operatorname{Emb}(D^{n-1},D^{n})\to\operatorname{Diff}(\operatorname{HD}^{n})\) comes from lifting the path of embeddings \(D^{n-1}\to D^{n}\) to a path of embeddings \(\operatorname{HD}^{n}\to D^{n}\), where the initial embedding is the standard inclusion. The lift at the end of the path, since it is standard on the flat face, is a diffeomorphism of \(\operatorname{HD}^{n}\). The proof of the isotopy extension theorem given by Hirsch [14] suffices for this construction; moreover, it proves these restriction fibrations are Serre fibrations. We can justify why scanning \(\operatorname{Diff}(D^{n})\to\Omega\operatorname{Emb}(D^{n-1},D^{n})\) is the homotopy-inverse to the connecting map \(\Omega\operatorname{Emb}(D^{n-1},D^{n})\to\operatorname{Diff}(D^{n})\) via Figure 2. We have a central square whose horizontal axis is labelled \(t\), and whose vertical axis is labelled \(\alpha\). Given \(t\in[0,1]\) let \(f_{t}:\operatorname{HD}^{n}\to D^{n}\) denote the embedding whose restriction to \(\{0\}\times D^{n-1}\) is a given element of \(\Omega\operatorname{Emb}(D^{n-1},D^{n})\). Let \(\simeq\) be the equivalence relation on \([0,1]\times D^{n-1}\) generated by the equivalence classes \([0,1]\times\{p\}\) for all \(p\in\partial D^{n-1}\); thus \([0,1]\times D^{n-1}\) can be identified with \(\operatorname{HD}^{n}\), i.e. we collapse all the edges \([0,1]\times\{p\}\) for \(p\in\partial D^{n-1}\). Under the identification \([0,1]\times D^{n-1}/\simeq\;\equiv\operatorname{HD}^{n}\), the corner strata correspond to the collapsed edges, \(\{0\}\times D^{n-1}\) to the round face, and \(\{1\}\times D^{n-1}\) to the flat face. In Figure 2, \(f_{t}(\{\alpha\}\times D^{n-1})\) is denoted by a thick red curve. The upper line of our square therefore denotes \(f_{t}(\{0\}\times D^{n-1})\), our element of \(\Omega\operatorname{Emb}(D^{n-1},D^{n})\). This is homotopic to the concatenation of the other three boundary segments of the square. The rightmost segment of the square is the 'sweep-out' portion of scanning, and the leftmost segment is the sweep-out of the standard inclusion. The lower edge is constant.

Figure 1: The half-disc fibration.

To extrapolate, let \(\operatorname{Emb}^{+}(D^{j-1},N)\) denote the space of smooth embeddings of \(D^{j-1}\) in \(N\) such that the boundary is sent to the boundary in a prescribed manner, and the embedding comes equipped with a normal vector field. Then we have a restriction (Serre) fibration \[\operatorname{Emb}(D^{j},N\setminus\nu D^{j-1})\to\operatorname{Emb}(\operatorname{HD}^{j},N)\to\operatorname{Emb}^{+}(D^{j-1},N)\] where the total space is contractible, and \(\nu D^{j-1}\) indicates an open tubular neighbourhood in \(N\) corresponding to the base-point element of \(\operatorname{Emb}^{+}(D^{j-1},N)\). We keep track of the normal vector field in the base space, as otherwise the fiber would be an embedding space where the discs are not neatly embedded. One can of course argue that the above is not literally the fiber: it should be the subspace of \(\operatorname{Emb}(\operatorname{HD}^{j},N)\) that agrees with a fixed embedding on the flat boundary. That said, blowing up the flat boundary, or a tubular-neighbourhood argument together with drilling the open tubular neighbourhood, completes the identification of the fibre.
This gives us an analogous homotopy-equivalence \[\Omega\operatorname{Emb}^{+}(D^{j-1},N)\simeq\operatorname{Emb}(D^{j},N\setminus\nu D^{j-1}).\]

Figure 2: Homotopy-inverse of isotopy extension.

The space \(N\setminus\nu D^{j-1}\) is \(N\) with a \((j-1)\)-handle drilled out, and the embedding of \(D^{j}\) is a cancelling handle for the \((j-1)\)-handle; thus the \((j-1)\)-handle is parallel to the boundary. As another model for \(N\setminus\nu D^{j-1}\), we turn the handle upside-down and think of this manifold as \(N\cup H^{n-j}\), i.e. \(N\) union an \((n-j)\)-handle. Since the handle attachment is trivial, this manifold is diffeomorphic to \(N\natural(S^{n-j}\times D^{j})\). From this perspective, the embeddings of \(\operatorname{Emb}(D^{j},N\natural(S^{n-j}\times D^{j}))\) can be thought of as a space of embeddings of cocores for the \((n-j)\)-handle attachment of the boundary connect-sum \(N\natural(S^{n-j}\times D^{j})\), i.e. even including cocores that reach into \(N\). This last interpretation is perhaps the most convenient for stating the homotopy-equivalence \(\Omega\operatorname{Emb}^{+}(D^{j-1},N)\simeq\operatorname{Emb}(D^{j},N\natural S^{n-j}\times D^{j})\), as the boundary condition on the latter embedding space sends \(\partial D^{j}\) to \(\{p\}\times\partial D^{j}\subset S^{n-j}\times D^{j}\). By design, the embeddings in \(\operatorname{Emb}^{+}(D^{j-1},N)\) are isotopically trivial on the boundary \(S^{j-2}\to\partial N\).

**Theorem 2.1**: _There is a homotopy-equivalence_ \[\Omega\operatorname{Emb}^{+}(D^{j-1},N)\simeq\operatorname{Emb}(D^{j},N\natural S^{n-j}\times D^{j})\] _where \(\operatorname{Emb}^{+}(D^{j-1},N)\) is the space of smooth embeddings of \(D^{j-1}\) in \(N\) such that the pre-image of the boundary of \(N\) is the boundary of \(D^{j-1}\). The embedding of \(\partial D^{j-1}\) is required to be a fixed embedding, and isotopically trivial, i.e. bounding an embedded \(D^{j-1}\to\partial N\). The \(+\) indicates the embedding comes equipped with a normal vector field. The space \(\operatorname{Emb}(D^{j},N\natural S^{n-j}\times D^{j})\) is a space of cocores for the handle attachment \(N\natural S^{n-j}\times D^{j}=N\cup H^{n-j}\), i.e. it is the space of smooth embeddings of \(D^{j}\) in \(N\natural S^{n-j}\times D^{j}\) such that the boundary of \(D^{j}\) is sent to \(\{*\}\times\partial D^{j}\), where \(*\in S^{n-j}\) is some point disjoint from the mid-ball of the boundary connect-sum._

Alternatively, one could express the theorem in the 'reductionist' form \[\operatorname{Emb}(D^{j},N)\simeq\Omega\operatorname{Emb}^{+}(D^{j-1},N\cup H^{n-j+1})\] i.e. by thinking of the original manifold \(N\) as \(N\natural S^{n-j}\times D^{j}\cup H^{n-j+1}\), where the latter handle is a cancelling handle. In this language, the homotopy-equivalence is possible provided the attaching sphere for \(H^{n-j+1}\) is dual to the boundary sphere in the embedding space \(\operatorname{Emb}(D^{j},N)\), i.e. intersecting it transversely in a single point in \(\partial N\). A version of this result appears in [15]. A homotopy-equivalence can be expressed as a map in either direction. The map \(\Omega\operatorname{Emb}^{+}(D^{j-1},N)\to\operatorname{Emb}(D^{j},N\setminus\nu D^{j-1})\) is induced by isotopy extension, i.e. one lifts the element of \(\Omega\operatorname{Emb}^{+}(D^{j-1},N)\) to a path in \(\operatorname{Emb}(\operatorname{HD}^{j},N)\), starting at the base-point of \(\operatorname{Emb}(\operatorname{HD}^{j},N)\).
Drilling the flat face from the endpoint of this path gives the element of \(\operatorname{Emb}(D^{j},N\setminus\nu D^{j-1})\). The map back \(\operatorname{Emb}(D^{j},N\setminus\nu D^{j-1})\to\Omega\operatorname{Emb}^{+}(D^{j-1},N)\) involves thinking of \(D^{j}\) as fibered by parallel copies of \(D^{j-1}\), taking those restrictions, and composing with the inclusion \(N\setminus\nu D^{j-1}\to N\). The paper [4] gives a detailed account in the \(\operatorname{Emb}(\operatorname{HD}^{j},D^{n})\) case, and [15] gives a detailed account using the 'reductionist' perspective for \(\operatorname{Emb}(\operatorname{HD}^{j},N)\).

**Proposition 2.2**: _The co-dimension 2 scan_ \[\operatorname{Diff}(D^{n})\to\Omega^{2}\operatorname{Emb}(D^{n-2},D^{n})\] _induces a split injection on all homotopy and homology groups, for \(n\geq 2\). This map is the inclusion part of a homotopy-retract._

Proposition 2.2 is a space-level statement of Proposition 6 of [8]. Do note, if one equips the target space \(\Omega^{2}\mathrm{Emb}(D^{n-2},D^{n})\) with normal framings, this does not affect the homotopy-type of the double-loop space, i.e. \(\Omega^{2}\mathrm{Emb}(D^{n-2},D^{n})\simeq\Omega^{2}\mathrm{Emb}^{+}(D^{n-2},D^{n})\). Moreover, in the \(n=2\) case, this framed two-fold scanning map \(\mathrm{Diff}(D^{2})\to\Omega^{2}\mathrm{Emb}^{+}(D^{0},D^{2})\equiv\Omega^{2}GL_{2}(\mathbb{R})\) is the Smale-Hirsch map. Since \(\Omega^{2}GL_{2}(\mathbb{R})\) is contractible, this is Smale's theorem that \(\mathrm{Diff}(D^{2})\) is contractible. Since \(\mathrm{Diff}(S^{2})\simeq O_{3}\times\mathrm{Diff}(D^{2})\) (this is a standard linearisation argument, see [3]), this proves Smale's Theorem \(\mathrm{Diff}(S^{2})\simeq O_{3}\).

**Corollary 2.3**: _(Smale) \(\mathrm{Diff}(D^{2})\) is contractible, i.e. \(\mathrm{Diff}(S^{2})\) has the homotopy-type of its linear subgroup \(O_{3}\)._

**Proof** (of Proposition 2.2) The proof follows by forming a composite of maps involving the homotopy-equivalence \(\mathrm{Diff}(D^{n})\to\Omega\mathrm{Emb}(D^{n-1},D^{n})\) (i.e. Theorem 2.1, \(N=D^{n}\), \(j=n\)) with the induced map on loop spaces from Theorem 2.1, where \(N=D^{n}\) and \(j=n-1\), \[\mathrm{Emb}(D^{n-1},S^{1}\times D^{n-1})\to\Omega\mathrm{Emb}^{+}(D^{n-2},D^{n}).\] Given that the unit normal fibers are copies of \(S^{1}\), we can discard the normal vector fields, i.e. the forgetful map \(\Omega^{2}\mathrm{Emb}^{+}(D^{n-2},D^{n})\to\Omega^{2}\mathrm{Emb}(D^{n-2},D^{n})\) is a homotopy-equivalence. Thinking of \(S^{1}\times D^{n-1}\) as \(D^{n}\) union a \(1\)-handle gives an inclusion \(\mathrm{Emb}(D^{n-1},D^{n})\to\mathrm{Emb}(D^{n-1},S^{1}\times D^{n-1})\). Thus we have a composable triple \[\mathrm{Diff}(D^{n})\to\Omega\mathrm{Emb}(D^{n-1},D^{n})\to\Omega\mathrm{Emb}(D^{n-1},S^{1}\times D^{n-1})\to\Omega^{2}\mathrm{Emb}(D^{n-2},D^{n}).\] The middle map admits a homotopy-retraction: thinking of the universal cover of \(S^{1}\times D^{n-1}\) as a copy of \(\mathbb{R}\times D^{n-1}\), which can also be thought of as \(D^{n}\) with two points removed from its boundary, gives the map back \(\mathrm{Emb}(D^{n-1},S^{1}\times D^{n-1})\to\mathrm{Emb}(D^{n-1},D^{n})\). Since the two maps on the ends are homotopy-equivalences, this gives us the result. Cerf's proof of Smale's theorem (Corollary 2.3) is also highlighted in [18] §6.2.4. When the co-dimension of the embeddings is three or larger, sharp connectivity estimates for the scanning map exist. See for example [7] pgs.
23-25, the initial pages of Goodwillie's Ph.D thesis [9], and [11] for details. The deloopings of the spaces \(\mathrm{Diff}(D^{n})\) and \(\mathrm{Emb}(D^{j},D^{n})\) are studied in [21] and [22]. It would be interesting to see if there are analogous retraction results for the deloopings of the scanning maps \(\mathrm{Diff}(D^{n})\to\Omega^{n-j}\mathrm{Emb}^{+}(D^{j},D^{n})\). It is perhaps unlikely, but it is a basic question that deserves investigation. For some context, if one equips the embeddings with framed normal bundles appropriately, there are homotopy-equivalences \(\mathrm{Emb}^{+}(D^{j},D^{n})\simeq\mathrm{EC}(j,D^{n-j})\) making the scanning maps \(\mathrm{Diff}(D^{n})\to\Omega^{n-j}\mathrm{EC}(j,D^{n-j})\) into maps of \((n+1)\)-fold loop spaces, so one would expect such maps to either be trivial or relatively rich in information [3].

**Theorem 2.4**: _The scanning map \(\mathrm{Emb}(D^{n-1},S^{1}\times D^{n-1})\to\Omega\mathrm{Emb}(D^{n-2},S^{1}\times D^{n-1})\) is the inclusion portion of a homotopy-retraction, i.e. it induces split injections on all homotopy-groups, for all \(n\geq 2\)._

**Proof** By Theorem 2.1, scanning gives us a homotopy equivalence \(\mathrm{Emb}(D^{n-1},S^{1}\times D^{n-1})\to\Omega\mathrm{Emb}^{+}(D^{n-2},D^{n})\). The inclusion map \(\mathrm{Emb}^{+}(D^{n-2},D^{n})\to\mathrm{Emb}^{+}(D^{n-2},S^{1}\times D^{n-1})\) (thinking of \(S^{1}\times D^{n-1}\) as \(D^{n}\) union a \(1\)-handle) is the inclusion portion of a homotopy-retraction \(\mathrm{Emb}^{+}(D^{n-2},S^{1}\times D^{n-1})\to\mathrm{Emb}^{+}(D^{n-2},D^{n})\).

Notice that when \(n=2\), the above scanning map is a homotopy-equivalence by Gramain [12]. When \(n=3\) it follows from Hatcher's work [13] that the scanning map is a homotopy-equivalence; indeed, both spaces are contractible. When \(n\geq 4\) far less is known about such scanning maps [4], [5]. In [4] and [5] the mapping-class group \(\pi_{0}\mathrm{Diff}(S^{1}\times D^{3})\) was shown to be not finitely generated via the map \(\pi_{0}\mathrm{Diff}(S^{1}\times D^{3})\to\pi_{2}\mathrm{Emb}(D^{1},S^{1}\times D^{3})\). Above, we see that the intermediate map \(\pi_{0}\mathrm{Diff}(S^{1}\times D^{3})\to\pi_{1}\mathrm{Emb}(D^{2},S^{1}\times D^{3})\) has kernel isomorphic to \(\pi_{0}\mathrm{Diff}(D^{4})\); this follows from the 'handle-attachment homotopy-equivalence' \(\mathrm{Diff}(S^{1}\times D^{n-1})\to\mathrm{Diff}(D^{n})\times\mathrm{Emb}(D^{n-1},S^{1}\times D^{n-1})\) described in [4]. The study of our scanning map \(\pi_{0}\mathrm{Diff}(S^{1}\times D^{3})\to\pi_{2}\mathrm{Emb}(D^{1},S^{1}\times D^{3})\) is thus reduced to the final step \(\pi_{1}\mathrm{Emb}(D^{2},S^{1}\times D^{3})\to\pi_{2}\mathrm{Emb}(D^{1},S^{1}\times D^{3})\), i.e. the loop-space functor applied to the map \[\mathrm{Emb}(D^{2},S^{1}\times D^{3})\to\Omega\mathrm{Emb}(D^{1},S^{1}\times D^{3}).\] One might attempt to apply the reductionist version of Theorem 2.1 to construct a homotopy-equivalence \(\mathrm{Emb}(D^{2},S^{1}\times D^{3})\simeq\Omega\mathrm{Emb}^{+}(D^{1},S^{1}\times D^{3}\cup H^{3})\), but since the boundary-condition circle for the embedding space \(\mathrm{Emb}(D^{2},S^{1}\times D^{3})\) is homologically trivial, it does not have a normal \(2\)-sphere. Alternatively, the embeddings in \(\mathrm{Emb}(D^{2},S^{1}\times D^{3})\) are not the cocore of a \(2\)-handle attachment, so we cannot appeal to the primary version of Theorem 2.1, either.
That said, we do know that the map \(\mathrm{Emb}(D^{2},S^{1}\times D^{3})\to\Omega\mathrm{Emb}(D^{1},S^{1}\times D^{3})\) is homotopically non-trivial ([4], [5]), as the induced map on \(\pi_{1}\) maps onto an infinitely-generated subgroup, so further study of these scanning maps is warranted.

## 3 Bott handles and miscellaneous

The homotopy-equivalence \(\mathrm{Diff}(D^{n})\simeq\Omega\mathrm{Emb}(D^{n-1},D^{n})\) can be extrapolated to a homotopy-equivalence \(\mathrm{Diff}(I\times N)\simeq\Omega\mathrm{Emb}(N\times\{\frac{1}{2}\},I\times N)\), and scanning maps \[\mathrm{Diff}(D^{k}\times N)\to\Omega\mathrm{Emb}(D^{k-1}\times N,D^{k}\times N)\to\cdots\to\Omega^{j}\mathrm{Emb}(D^{k-j}\times N,D^{k}\times N).\] Whereas the scanning of Section 2 could be viewed as an argument where the intermediate space is a space of cancelling handles, i.e. vanilla Morse theory, the scanning above has an intermediate space of Bott-style handles, i.e. the kinds of handles that occur with Bott-style Morse functions (functions on manifolds where the critical point sets are manifolds and the Hessian is non-degenerate transverse to these critical submanifolds [1]). For Bott-style Morse functions, 'handle' attachments are disc bundles over manifolds, whereas in standard Morse theory one attaches disc bundles over points, i.e. plain discs. Specifically, an adjunction where one attaches a disc-bundle \(M\ltimes D^{k}\) over \(M\) to another manifold \(N\) along an embedding \(M\ltimes\partial D^{k}\to\partial N\) is what is called a Bott-style handle attachment [1], as these sorts of attachments occur for Bott-type Morse functions, i.e. functions \(W\to\mathbb{R}\) whose critical points are manifolds and the Hessian is non-degenerate on the normal bundle fibers. Bott-style Morse functions typically occur when functions have symmetry; for example, the trace of a matrix is a Bott-style Morse function on the orthogonal group \(O_{n}\). The critical points of this function are the square roots of the identity matrix \(I\), thus copies of Grassmann manifolds. As a concrete example, the trace functional expresses \(SO_{3}\) as the tautological line bundle over \(\mathbb{R}P^{2}\) union a 3-handle. The analogue to Theorem 2.1 in the Bott case has the form of a homotopy-equivalence \[\operatorname{Emb}(M\ltimes D^{k},N\setminus\nu(M\ltimes D^{k-1}))\simeq\Omega\operatorname{Emb}(M\ltimes D^{k-1},N).\] Given that our scanning maps are highly structured, they would appear to be a potentially useful device for exploring the homotopy-types of diffeomorphism groups like \(\operatorname{Diff}(D^{n})\), \(\operatorname{Diff}(S^{1}\times D^{n})\) and, generally, of product manifolds \(\operatorname{Diff}(N\times D^{k})\), i.e. for studying things such as spaces of pseudoisotopies. From this perspective there is perhaps a similarly overlooked element of Embedding Calculus [20] that is relevant. For example, given a manifold \(M\), let \(\mathcal{O}_{k}(M)\) be the category of open subsets of \(M\) diffeomorphic to a disjoint union of at most \(k\) open balls, with arrows given by inclusion maps. Given \(U\in\mathcal{O}_{k}(M)\), let \(F(U)\) be \(\operatorname{Emb}(U\times D^{j},M\times D^{j})\), i.e. smooth embeddings of \(U\times D^{j}\) in \(M\times D^{j}\) that restrict to the standard inclusion on \(U\times\partial D^{j}\). The \(k\)-th stage of the Taylor tower could be taken to be \(T_{k}F(U)=\operatorname{holim}_{V\in\mathcal{O}_{k}(U)}F(V)\). From this perspective, the scanning map is the first stage of the Taylor tower.
Higher stages of the Taylor tower are built from spaces of generalized string links. Spaces of string links have been the subject of some recent investigations by Koytcheff [16], Turchin and Tsopmene [24], [16], including a description of some of their low-dimensional homotopy groups [16] as well as an operad action [17]. String links appear in two essential ways in [4] and [5]. Barbell diffeomorphism families are the induced diffeomorphisms coming from the low-dimensional homotopy groups of spaces of 2-component string-links. Moreover, the map we use to detect our diffeomorphisms of \(S^{1}\times D^{n-1}\) has the form \(\operatorname{Diff}(S^{1}\times D^{n-1})\to\Omega^{n-2}\operatorname{Emb}(D^{1},S^{1}\times D^{n-1})\). If we take the lifts of an element of \(\operatorname{Emb}(D^{1},S^{1}\times D^{n-1})\) to the universal cover, we get an equivariant, infinite-component string link in \(\mathbb{R}\times D^{n-1}\). Thus string links would appear to be a relatively efficient machine for investigating embedding spaces and diffeomorphism groups. It would be very interesting to see the relative rate of convergence of the above Taylor towers, compared to the standard Embedding Calculus [10].

## 4 The Schönflies monoid

We end with the observation, implicit in [4], that the monoid \(\pi_{0}\operatorname{Emb}(S^{n-1},S^{n})\), with the connect-sum operation, is a group for all \(n\geq 2\), as it is unclear if a proof of this statement exists in the literature. For \(n\neq 4\) this group is known to be isomorphic to \(\pi_{0}\operatorname{Diff}(D^{n-1})\). In dimension \(n=4\) the Schönflies problem is equivalent to stating this group is trivial. The proof is a small variation on the proofs of Proposition 2.2 and Theorem 2.4. First, observe the homotopy-equivalence [3] \[\operatorname{Emb}(S^{n-1},S^{n})\simeq SO_{n+1}\times\operatorname{Emb}(D^{n-1},D^{n}).\] This follows from a linearisation argument. Given an embedding \(S^{n-1}\to S^{n}\) one can restrict it to a fixed hemi-sphere of \(S^{n-1}\); this gives an embedding of an \((n-1)\)-disc in \(S^{n}\). The space of embedded \((n-1)\)-discs in \(S^{n}\) has the homotopy-type of \(V_{n+1,n}\simeq SO_{n+1}\), i.e. the Stiefel manifold of orthonormal \(n\)-frames in \(\mathbb{R}^{n+1}\). That this is a group gives the above product decomposition, as one can use an element of \(SO_{n+1}\) to standardize an embedding \(S^{n-1}\to S^{n}\) on the hemi-sphere. The connect-sum monoid structure on \(\pi_{0}\mathrm{Emb}(S^{n-1},S^{n})\) corresponds to the 'stacking' operation on \(\mathrm{Emb}(D^{n-1},D^{n})\). To make this explicit, consider the homotopy-equivalent embedding space \(\mathrm{Emb}(I^{n-1},I^{n})\): simply stack cubes side-by-side, and identify with \(I^{n}\) using a linear diffeomorphism. The inclusion from Proposition 2.2 \[\mathrm{Emb}(D^{n-1},D^{n})\to\mathrm{Emb}(D^{n-1},S^{1}\times D^{n-1})\] is compatible with stacking, i.e. it induces a map of monoids on path-components. The space \(\mathrm{Emb}(D^{n-1},S^{1}\times D^{n-1})\) has the homotopy-type of \(\Omega\mathrm{Emb}^{+}(D^{n-2},D^{n})\) by Theorem 2.1. The space \(\Omega\mathrm{Emb}^{+}(D^{n-2},D^{n})\) has _two_ stacking operations: one can 'stack' using the loop-space parameter, or stack using the analogous stacking operation on the space \(\mathrm{Emb}^{+}(D^{n-2},D^{n})\). These two operations are homotopic. In introductory algebraic topology courses, one uses this type of argument to show the fundamental group of a topological group must be abelian.
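For completeness, the interchange computation behind this claim: suppose \(\cdot\) and \(\circ\) are two binary operations on a set of homotopy classes, sharing a common identity element \(e\) and satisfying the interchange law \[(a\cdot b)\circ(c\cdot d)=(a\circ c)\cdot(b\circ d).\] Setting \(b=c=e\) gives \(a\circ d=a\cdot d\), so the two operations coincide; setting \(a=d=e\) gives \(b\circ c=c\cdot b\), so the common operation is commutative.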
It is often called an Eckmann-Hilton argument. Another way to say this is that the space \(\Omega\mathrm{Emb}^{+}(D^{n-2},D^{n})\) has an action of the operad of 2-cubes, where the action restricts to either concatenation construction, depending on the position of the cubes.

**Theorem 4.1**: _The monoid structure on \(\pi_{0}\mathrm{Emb}(S^{n-1},S^{n})\) coming from the connect-sum operation is a group structure, for all \(n\geq 2\). Moreover, there is an onto homomorphism_ \[\pi_{1}\mathrm{Emb}^{+}(D^{n-2},D^{n})\to\pi_{0}\mathrm{Emb}(D^{n-1},D^{n})\simeq\pi_{0}\mathrm{Emb}(S^{n-1},S^{n}).\]

Technically \(\pi_{0}\mathrm{Emb}(S^{0},S^{1})\) is also known to be a group, as it has only a single element. The group \(\pi_{1}\mathrm{Emb}^{+}(D^{n-2},D^{n})\) is known to be non-trivial when \(n=4\) [4], although all presently-known elements map to zero in \(\pi_{0}\mathrm{Emb}(D^{n-1},D^{n})\). In principle one can use Theorem 4.1 to give an alternative proof that the image of every smooth embedding \(S^{n-1}\to S^{n}\) is the boundary of a smoothly embedded \(D^{n}\to S^{n}\), avoiding the topological-category argument of Brown [2] and Mazur [19] when \(n\geq 6\), i.e. a pure smooth-category argument. Such an argument will appear in [6]. The homomorphism \(\pi_{1}\mathrm{Emb}^{+}(D^{n-2},D^{n})\to\pi_{0}\mathrm{Emb}(D^{n-1},D^{n})\) has the following description. Take a linearly-embedded copy of \(\mathrm{HD}^{n-1}\) in \(D^{n}\), i.e. the half-disc in \(D^{n-1}\times\{0\}\subset D^{n}\). Given a loop of embeddings of \(D^{n-2}\) (with normal vector field) in \(D^{n}\), lift that path of embeddings to a path in \(\mathrm{Emb}(\mathrm{HD}^{n-1},D^{n})\) that begins at the linear embedding. At the end of this path, we have a smooth embedding \(\mathrm{HD}^{n-1}\to D^{n}\) which agrees with our standard inclusion on the boundary, including its normal derivative. Via a small isotopy, we can ensure this embedding \(\mathrm{HD}^{n-1}\to D^{n}\) agrees with the standard inclusion in a neighbourhood of the boundary. Drill the flat face of the embedded \(\mathrm{HD}^{n-1}\) from \(D^{n}\); this results in a copy of \(S^{1}\times D^{n-1}\) together with a smoothly-embedded \(D^{n-1}\to S^{1}\times D^{n-1}\) which agrees with the standard inclusion \(\{1\}\times D^{n-1}\subset S^{1}\times D^{n-1}\) on the boundary. Lift this embedding to the universal cover of \(S^{1}\times D^{n-1}\) and identify the universal cover with a subspace of \(D^{n}\) (\(D^{n}\) with two boundary points removed). This embedding \(D^{n-1}\to D^{n}\) is the value of our map \(\pi_{1}\mathrm{Emb}^{+}(D^{n-2},D^{n})\to\pi_{0}\mathrm{Emb}(D^{n-1},D^{n})\).
2309.09688
Granular dilatancy and non-local fluidity of partially molten rock
Partially molten rock is a densely packed, melt-saturated, granular medium, but it has seldom been considered in these terms. In this manuscript, we extend the continuum theory of partially molten rock to incorporate the physics of granular media. Our formulation includes dilatancy in a viscous constitutive law and introduces a non-local fluidity. We analyse the resulting poro-viscous--granular theory in terms of two modes of liquid--solid segregation that are observed in published torsion experiments: localisation of liquid into high-porosity sheets and radially inward liquid flow. We show that the newly incorporated granular physics brings the theory into agreement with experiments. We discuss these results in the context of grain-scale physics across the nominal jamming fraction at the high homologous temperatures relevant in geological systems.
Richard F. Katz, John F. Rudge, Lars N. Hansen
2023-09-18T11:50:58Z
http://arxiv.org/abs/2309.09688v2
###### Abstract

Partially molten rock is a densely packed, melt-saturated, granular medium, but it has seldom been considered in these terms. In this paper, we extend the continuum theory of partially molten rock to incorporate the physics of granular media. Our formulation includes dilatancy in a viscous constitutive law and introduces a non-local fluidity. We analyse the resulting poro-viscous-granular theory in terms of two modes of liquid-solid segregation that are observed in published torsion experiments: localisation of liquid into high-porosity sheets and radially inward liquid flow. We show that the newly incorporated granular physics brings the theory into agreement with experiments. We discuss these results in the context of grain-scale physics across the nominal jamming fraction at the high homologous temperatures relevant in geological systems.

Richard F. Katz\({}^{1}\)†, John F. Rudge\({}^{2}\) and Lars N. Hansen\({}^{3}\)

† Email address for correspondence: [email protected]

## 1 Introduction

Partially molten rock is a physical system that is central to many geological and planetary processes. It is a densely packed, melt-saturated, granular medium, but it has seldom been considered in these terms. Continuum models of partially molten rock treat the solid and liquid phases as interpenetrating fluids in a poro-viscous, zero-Reynolds-number theory (e.g., McKenzie, 1984; Fowler, 1990). In such models, effects arising from the discrete grains are neglected, except insofar as they affect the creep viscosity. For example, during Coble creep, the melt phase provides a fast pathway for mass diffusion around grains (e.g., Takei & Holtzman, 2009; Rudge, 2018). Deformation experiments on partially molten rock are typically parameterised in terms of an isotropic flow law with a weakening factor that depends on the volume fraction of melt within pores (e.g., Kohlstedt & Zimmerman, 1996; Kelemen _et al._, 1997). These physics were reviewed by Katz _et al._ (2022). However, deformation of partially molten rock inevitably includes a component of sliding along grain boundaries (e.g., Hansen _et al._, 2011; Rudge, 2021). The granular origins and importance of such sliding in partially molten rock were recognised by Paterson (1995) and elaborated by Paterson (2001). In those works, the geometric grain-compatibility problem arising from grain-boundary sliding is assumed to be entirely resolved by shape-change of the grains, which occurs by diffusion or lattice dislocations (Langdon, 2006). A third possibility is noted by Paterson (2001) but then neglected: that incompatibility is resolved by relative motion of undeforming grains in a granular flow. This mechanism is fundamental in the physics of athermal granular media (e.g., Forterre & Pouliquen, 2008); it gives rise to dilatancy and non-local granular fluidity. In this paper, we extend the continuum theory of partially molten rock to incorporate the physics of a granular medium. As a hypothesis for the essential granular physics, we adapt and include theory for dilatancy and non-local fluidity. We test this hypothesis by modelling published laboratory experiments in which partially molten rock is subjected to torsional deformation. The deformation drives liquid-solid segregation and yields robust patterns of melt localisation.
We show that the inclusion of granular physics brings model predictions into agreement with laboratory data. The laboratory experiments, detailed in King _et al._ (2010) and reviewed in §2 below, are conducted on synthetic rocks comprising solid olivine grains and liquid basaltic melt. Hot-pressed, nominally uniform, cylindrical samples of this aggregate are sheared in a torsion apparatus at high temperature and confining pressure. During shear, two modes of liquid-solid segregation occur simultaneously. The first is a pattern-forming localisation of the liquid into high-porosity sheets that form at 15-20\({}^{\circ}\) to the shear plane (Holtzman _et al._, 2003). The sheets are typically measured in their cross-section, where they appear as bands with a characteristic spacing. The second is a radially inward porous flow of liquid, accommodated by a radially outward flow of solid (Qi _et al._, 2015). A satisfactory physical understanding of these flow phenomena has been elusive, though much has been learned through theoretical analysis. Localisation of the liquid phase into sheets at 45\({}^{\circ}\) to the shear plane was predicted by Stevenson (1989) and Spiegelman (2003) to be a consequence of a porosity-weakening viscosity of the solid aggregate. Katz _et al._ (2006) and Rudge & Bercovici (2015) showed that if this viscosity is (effectively) non-Newtonian with a power-law exponent of \(\sim\)6, the angle is reduced to match the observations. However, this exponent was measured by King _et al._ (2010) to be \(\sim\)\(1.5\pm 0.3\) at 95% confidence--almost Newtonian. Furthermore, these isotropic theories cannot explain the radial melt segregation in torsion experiments. In contrast, a theory of anisotropic Coble creep with Newtonian viscosity can explain the radial segregation (Qi _et al._, 2015). This theory was derived by Takei & Holtzman (2009) from grain-scale considerations of anisotropic solid contiguity under deviatoric stress. It predicts that viscous resistance to deformation is reduced in the direction of minimum contiguity. It also predicts the emergence of porosity bands and, if the contiguity tensor aligns with the principal stress directions, that the bands grow fastest at low angles, consistent with experiments (Takei & Katz, 2013). However, in laboratory experiments that produce bands, the grain-scale contiguity is misaligned by about 15\({}^{\circ}\) (Qi _et al._, 2015; Qi & Kohlstedt, 2018), which corresponds to a theoretical prediction of high-angle porosity bands (this discrepancy is resolved by better measurement of contiguity, according to Seltzer _et al._ (2023)). Nonetheless, viscous-anisotropy theory gives rise to an effective dilatancy that we discuss in §6.1, below. Another challenge is to explain the characteristic wavelength of the high-porosity bands observed in experiments. All of the theories noted above lack mode selection; instead, they predict the rate of band growth to plateau at decreasing wavelength. Several studies have invoked processes driven by interfacial energy to regularise the growth-rate spectrum. Bercovici & Rudge (2016) incorporated capillary effects in a diffuse-interface approximation of a sharp porosity interface (Sun & Beckermann, 2004)--however, sharp interfaces emerge in experiments only long after the onset of instability. Takei & Hier-Majumder (2009) and King _et al._ (2011) hypothesised that variation of surface tension drives dissolution/precipitation reactions.
When coupled with chemical diffusion in the melt phase, these reactions damp instability growth at small wavelengths. The theory of dense granular suspensions, as reviewed by Guazzelli & Pouliquen (2018), holds promise in providing a simple and unified explanation for all of these observed patterns. A central feature is the anisotropic compressive stress between solid particles caused by shearing flow (Bagnold, 1954). The coupling between shear and compression is the consequence of microphysical interaction of suspended particles (Brady & Morris, 1997). This behaviour is demonstrated empirically in various studies, but Deboeuf _et al._ (2009) provide a particularly fascinating example and discussion. If the suspended solid phase is not rigidly confined, it can undergo a net dilation due to shear (Reynolds, 1885; Boyer _et al._, 2011). In a suspension contained within a constant volume, net dilatancy is prohibited but the solid fraction can vary internally. Besseling _et al._ (2010) show that sheared, dense suspensions are susceptible to a banding instability; this instability is modelled in terms of a suspension viscosity that increases with solid fraction. Their results run parallel to the theory for band emergence in partially molten rock (Stevenson, 1989), except that in a suspension the growth rate of bands also depends on the dilatancy. Moreover, Morris & Boulay (1999) show that for suspension flows in cylindrical geometry (i.e., pipe flow, parallel-plate or cone-and-plate torsion), radial segregation of liquid and solid phases is predicted, consistent with experiments. Again, there is a parallel with results for partially molten rock in torsion (Qi _et al._, 2015; Qi & Kohlstedt, 2018) and pipe-flow (Quintanilla-Terminel _et al._, 2019) configurations. In all of these flowing suspensions, the dilatancy stress plays a central role. Another aspect of granular physics that may be relevant here is non-local fluidity (inverse viscosity). This concept was developed in the context of emulsions (e.g., Goyon _et al._, 2008; Bocquet _et al._, 2009) and adapted to granular suspensions (Kamrin & Koval, 2012). The theory states that the flow response to stress at a point in the granular medium is sensitive to the fluidity in a neighbourhood around that point. This neighbourhood has a typical size, \(\xi\), of order \(10\times\) the grain size, which decreases with the square root of shear stress. Henann & Kamrin (2013) demonstrate that simulated shear zones, forced by a spatial discontinuity in boundary velocity, are regularised by non-local fluidity to a width that is consistent with experiments. Hence non-local fluidity appears promising in regularising the growth spectrum of shear bands in experiments on partially molten rock. The fundamentally granular nature of partially molten rock and the relevant predictions from theories of dense granular suspensions motivate the present work. Our aims are to develop a theory for partially molten rock that incorporates granular dilatancy and non-local fluidity, and to compare predictions of that theory to the results of laboratory experiments. We note that in doing so, we are applying granular physics at solid fractions above what is typically considered the jamming fraction, at which the solid phase becomes immobile. However, in crystalline materials at high homologous temperatures, grain boundaries behave as a viscous fluid that allows grains to slide past each other (Ashby, 1972).
In this context, the solid phase can still be mobilised if sufficient shear stress is applied (Heussinger & Barrat, 2009). Moreover, _in-situ_ observations of polycrystalline aggregates deforming at high solid fraction show a clear link between grain-boundary sliding and dilatancy (Walte _et al._, 2005; Kareh _et al._, 2017). Hence, we assert that although the grains of partially molten rocks are not rigid, they nonetheless undergo grain-boundary sliding that is associated with a compressive intergranular stress and may lead to dilatancy. Mechanical decreases in solid fraction, including by dilatancy, are here referred to as decompaction. The poro-viscous theory of partially molten rock relates the decompaction rate to the pressure difference between the liquid and solid phases in a viscous constitutive law (McKenzie, 1984). This approach differs from suspension theory, in which the solid phase exerts zero resistance to changes in solid fraction (Guazzelli & Morris, 2011). It also differs from theories for dry granular media, where the solid fraction is a decreasing function of the shear-strain rate (Forterre & Pouliquen, 2008), and from soil mechanics, where the solid fraction is predicted to evolve toward a critical state as a function of the total strain (Oda & Iwashita, 2020). However, it seems that these isotropic dynamics may be incompletely understood. For example, Kabla & Senden (2009) found empirical evidence that dilatancy and shear-independent compaction compete in the evolution of solid fraction. By combining poro-viscous decompaction with dilatancy stress, our theory may provide new insight in this regard. Previous authors have incorporated granular dilatancy into discussions and models of geological materials, going back at least to Mead (1925). It has been invoked in crystal-rich deforming magma (e.g., Smith, 1997; Petford _et al._, 2020), in lower-crustal shear zones (Menegon _et al._, 2015), and in gouge-filled fault zones (e.g., Marone _et al._, 1990; Segall _et al._, 2010). Dilation has been considered in competition with compaction (Paterson, 2001; Niemeijer & Spiers, 2007) and as a microphysical mechanism responsible for rate-and-state friction (Chen & Spiers, 2016). It may play a role in regulating glacial sliding (Warburton _et al._, 2023) and in a range of geomorphological processes (Jerolmack & Daniels, 2019). Dilation is associated with Riedel shear zones (e.g., Dresen, 1991; Bedford & Faulkner, 2021), which appear at the same angle as bands in partially molten rock. It might be expected that partially molten rock shares certain behaviour with other granular, geological materials. In the present work, we find that incorporation of granular dilatancy and non-local fluidity brings predictions of a poro-viscous compaction theory into quantitative agreement with experimental results. The paper is organised as follows. In §2 we review torsion experiments on partially molten rocks and highlight their key results. We present our rheological model in §3. Then, in §4, we provide the governing equations and analyse them in terms of radial segregation and band formation. This analysis is followed by quantitative comparison with experiments in §5 and a discussion in §6.

## 2 Laboratory experiments and key observations

Previously described laboratory experiments provide a motivation and context for testing the theory developed here.
We focus on experiments conducted on partially molten rock, typically synthesised from mixtures of \(\sim\)95% olivine grains and \(\sim\)5% mid-ocean ridge basalt, sometimes with a small percentage of chromite (e.g., Holtzman _et al._, 2003; King _et al._, 2010; Qi _et al._, 2015). The olivine grains are polydisperse, typically with a mean diameter of \(\sim\)10 \(\mu\)m. Samples are hydrostatically hot-pressed to remove gas-filled bubbles prior to deformation. After hot-pressing, they have a nominally uniform melt fraction, \(\phi_{0}\). The experiments are conducted in a gas-medium, triaxial-deformation apparatus (Paterson, 1990) with a confining pressure of 300 MPa and temperatures of \(\sim\)1225\({}^{\circ}\)C. The samples are jacketed to separate them from the confining gas. Under these conditions, the basalt is molten and the olivine (and chromite) grains are solid. Torsional deformation is imposed on the sample, although some experiments have also been conducted in direct shear (Holtzman _et al._, 2003; Holtzman & Kohlstedt, 2007). The distribution of porosity within the sample is not measured _in situ_. Rather, the experiment is quenched, sectioned, and imaged at high resolution to calculate the porosity field. The essential characteristics of these torsional experiments are outlined in figure 1a. Cylindrical samples with height \(H\) and outer radius \(R\) are deformed by a circular platen that turns with angular velocity \(\dot{\Omega}\hat{\mathbf{z}}\) about the axis of the cylinder. At low strains, when the sample remains nominally uniform, the imposed twist induces an azimuthal velocity field \(U(r,z)\hat{\mathbf{\varphi}}\) that is axisymmetric. The velocity component \(U\) increases linearly in the \(\hat{\mathbf{z}}\) and \(\hat{\mathbf{r}}\) directions. On cylindrical surfaces, the deformation is approximately that of simple shear; the magnitude of the shear strain (and its rate) increases from zero at the twist axis to a maximum value at the outer boundary of the sample. The outer boundary of the sample is sealed in an impermeable, nickel jacket. The radial normal stress at the jacket is maintained constant by the confining gas pressure. At the temperatures of the experiments, the viscosity of nickel is greater than that of the basaltic liquid and less than that of the granular olivine aggregate. This viscosity contrast enables the jacket to shear with negligible resistance, but discourages its intrusion into the pore space of the sample. Hence its effect on the sample falls somewhere between two limiting cases. In one limit, the jacket inhibits all radial flow at the boundary by isolating the volume of the sample. In the other limit, it transmits the full confining pressure into the pore space between olivine grains. There is empirical evidence that the reality is closer to the first of these limits, but the details have not been measured or quantified. Two critical observations have arisen from torsion experiments on partially molten rocks. The first is the emergence of high-porosity sheets separated by compacted, low-porosity lenses after a shear strain of \(\gamma\sim 1\) (Holtzman _et al._, 2003). These sheets are usually measured in cross-section, as in figure 1b, and hence referred to as bands. They have a characteristic spacing and form at 15-20\({}^{\circ}\) to the shear plane (Holtzman & Kohlstedt, 2007). This angle is similar to that of Riedel shear zones (Dresen, 1991),
but significantly lower than what would be expected if the bands were normal to the direction of maximum tension (45\({}^{\circ}\)). Furthermore, individual bands are embedded in a nominally simple-shear flow and hence with time, they are rotated to higher angles. However, despite this necessary rotation, the band-angle distribution remains roughly unchanged with increasing strain (King _et al._, 2010). The second critical observation is that with progressive twist, liquid melt segregates from the solid grains and migrates toward the center of the cylinder (Qi & Kohlstedt, 2018). This migration leads to azimuthally averaged porosity, measured over the transverse section shown in figure 1c, that decreases with radius. Experiments to increasing values of total twist (reported as shear strain \(\gamma\) at the outer radius) exhibit greater segregation and a steeper radial porosity gradient (Qi _et al._, 2015). The present study aims to explain these observations in terms of the physics of dense granular suspensions.

Figure 1: Experimental configuration and representative results. _(a)_ Schematic diagram of a deforming experimental sample and the emergent patterns of melt segregation. Experiments are conducted at high confining pressure and high temperature. After achieving a specified twist, the sample is quenched, sectioned, and polished to reveal the distribution of melt (solidified to glass) and crystalline, granular solid. _(b)_ A tangential section showing high-porosity bands (black) at low angle to the shear plane (\(\phi_{0}=0.04,\ \gamma=1.5\); King _et al._, 2010). _(c)_ A transverse section showing radially inward migration of melt (\(\phi_{0}=0.10,\ \gamma=5.0\); Qi _et al._, 2015). Cracks visible in panels (b) and (c) are a consequence of the rapid quench and decompression after deformation.

## 3 Rheological model

Our rheological model of a two-phase aggregate, comprising a contiguous matrix of solid grains and its melt-saturated, permeable pore-space, is based on the poro-viscous theory derived by McKenzie (1984) and reviewed by Katz (2022). The melt is present with volume fraction \(\phi\) (the porosity) that varies in space and time. This variation is accommodated by (de)compaction of the solid matrix, but both phases are incompressible. To incorporate dilatancy effects, we take inspiration from theories of suspensions (Brady & Morris, 1997; Fang _et al._, 2002; Guazzelli & Pouliquen, 2018) and append a term that hypothetically quantifies the normal stresses generated by grain-grain interactions during shearing flow. The constitutive law for the effective stress is then \[\boldsymbol{\sigma}^{\rm eff}=\zeta_{\phi}\mathcal{C}\boldsymbol{I}+2\eta_{\phi}\dot{\boldsymbol{\varepsilon}}-D_{\phi}\boldsymbol{\Lambda}\dot{\varepsilon}_{II}, \tag{3.1}\] where \(\zeta_{\phi}\), \(\eta_{\phi}\) and \(D_{\phi}\) are dynamic viscosities for isotropic, deviatoric and dilational deformation, respectively, \(\boldsymbol{I}\) is the identity tensor, and where \[\mathcal{C}\equiv\boldsymbol{\nabla}\cdot\boldsymbol{v}^{s},\qquad\dot{\boldsymbol{\varepsilon}}\equiv\tfrac{1}{2}\left[\boldsymbol{\nabla}\boldsymbol{v}^{s}+\left(\boldsymbol{\nabla}\boldsymbol{v}^{s}\right)^{T}-\tfrac{2}{3}\mathcal{C}\boldsymbol{I}\right],\qquad\boldsymbol{\Lambda}\equiv\left(\begin{array}{ccc}1&0&0\\ 0&\Lambda_{\perp}&0\\ 0&0&\Lambda_{\times}\end{array}\right), \tag{3.2}\] are the decompaction rate, deviatoric strain-rate tensor, and particle-stress anisotropy tensor, respectively.
We have introduced \(\boldsymbol{v}^{s}\), the solid velocity field, and \(\dot{\varepsilon}_{II}\equiv\sqrt{\dot{\boldsymbol{\varepsilon}}:\dot{\boldsymbol{\varepsilon}}/2}\) is the second invariant of the deviatoric strain-rate tensor. Deboeuf _et al._ (2009) provide theoretical context, insightful commentary, and empirical justification for the dilatancy term in (3.1). The particle-stress anisotropy tensor \(\boldsymbol{\Lambda}\) is used to model the normal stresses generated by a particle-laden flow that is locally approximated as simple shear (Guazzelli & Morris, 2011; Guazzelli & Pouliquen, 2018). It is written with reference to a coordinate system aligned with the simple shear. The \(\Lambda_{11}\) direction is taken to be the direction of flow (indicated by \(\parallel\)); the \(\Lambda_{22}\) direction is normal to the shear plane (hence we denote it \(\Lambda_{\perp}\)); the \(\Lambda_{33}\) direction is the direction of the vorticity vector (and hence denoted \(\Lambda_{\times}\)). The \(\Lambda_{11}\) entry is factored out and lumped with \(D_{\phi}\). Therefore, \(\Lambda_{\perp}\) and \(\Lambda_{\times}\) are dimensionless particle-normal-stress ratios. Previous work has shown that the values of these parameters may be constrained by comparison of model predictions with carefully designed experiments (Morris & Boulay, 1999; Fang _et al._, 2002; Guazzelli & Pouliquen, 2018). In the flow geometries considered below (Cartesian or cylindrical), the particle-stress anisotropy tensor can be straightforwardly aligned with the experimental deformation geometry; in general, it must be aligned with respect to the principal axes of the flow (Miller _et al._, 2009). The isotropic part of the effective stress is where dilatancy modifies the physics. We see this by taking \(-1/3\) of the trace of the effective stress tensor in equation (3.1), \[D_{\phi}\dot{\varepsilon}_{II}\operatorname{tr}\left(\boldsymbol{\Lambda}\right)/3=\left(1-\phi\right)\left(P^{s}-P^{\ell}\right)+\zeta_{\phi}\mathcal{C}, \tag{3.3}\] where \(P^{j}=-\operatorname{tr}\left(\boldsymbol{\sigma}^{j}\right)/3\) is the pressure of phase \(j\). This equation states that the shear-strain rate has two possible consequences for isotropic deformation. If \(\zeta_{\phi}=0\), there is no viscous resistance to compaction and shear generates a positive effective pressure. This is equivalent to suspension theory (e.g., Deboeuf _et al._, 2009). If, in contrast, there is zero effective pressure (\(\Delta P=0\)), then shear causes dilation. This has a parallel in soil mechanics, where the dilatancy angle \(\psi\) gives the kinematic relationship between shear strain and dilation (e.g., Oda & Iwashita, 2020). In the present context, we can compute the dilatancy angle as \(\tan\psi\equiv D_{\phi}\text{tr}\left(\boldsymbol{\Lambda}\right)/3\zeta_{\phi}\). The more general case, of interest here, is where both the effective pressure and the compaction viscosity are nonzero. To complete the rheological model, we require expressions for the dependency of the three viscosities on melt fraction \(\phi\). Empirical constraints and theoretical models of the shear viscosity \(\eta_{\phi}\) and compaction viscosity \(\zeta_{\phi}\) are summarised by Katz _et al._ (2022). Shear viscosity has been measured over a range of melt fractions; Kelemen _et al._ (1997) showed that it is well-described by an exponential decrease with liquid fraction \(\phi\).
Theory for Coble creep, where compaction is accommodated by diffusion of grain mass along grain boundaries and through the melt-filled pores, indicates that the compaction viscosity is larger than the shear viscosity by a factor of \(\sim\)5/3 (Takei & Holtzman, 2009; Rudge, 2018). There are no empirical measurements of the dilation viscosity \(D_{\phi}\) of partially molten rock, nor are there microstructural models. Experiments on particle suspensions by Deboeuf _et al._ (2009) show an exponential weakening of particle normal stress with liquid fraction. On this basis, and for simplicity in the absence of further information, we take \(D_{\phi}\) to be an unknown multiple of \(\eta_{\phi}\). Hence the viscosities are given by \[\eta_{\phi}=\eta_{0}\text{e}^{-\lambda(\phi-\phi_{0})},\qquad\zeta_{\phi}=5\eta_{\phi}/3,\qquad D_{\phi}=D_{0}\eta_{\phi}, \tag{3.4}\] where \(\eta_{0}\) is a reference value of shear viscosity at reference melt fraction \(\phi_{0}\), \(\lambda\approx 27\) is the porosity-weakening factor, and \(D_{0}\) is an unknown, dimensionless constant. We obtain a constraint on \(D_{0}\) by requiring positive entropy production under any combination of shear and isotropic deformation. The dissipation-rate density arising from (3.1) is \[\Psi=\zeta_{\phi}\mathcal{C}^{2}+4\eta_{\phi}\dot{\varepsilon}_{II}^{2}-D_{\phi}\dot{\varepsilon}_{II}\left(\dot{\varepsilon}_{\parallel}+\Lambda_{\perp}\dot{\varepsilon}_{\perp}+\Lambda_{\times}\dot{\varepsilon}_{\times}\right), \tag{3.5}\] where \(\dot{\varepsilon}_{\parallel},\,\dot{\varepsilon}_{\perp},\,\dot{\varepsilon}_{\times}\) are the normal components of the strain-rate tensor in a coordinate system aligned with simple shear. Assuming isotropic dilatancy \(\boldsymbol{\Lambda}=\boldsymbol{I}\), equation (3.5) becomes \(\Psi=\zeta_{\phi}\mathcal{C}^{2}+4\eta_{\phi}\dot{\varepsilon}_{II}^{2}-D_{\phi}\dot{\varepsilon}_{II}\mathcal{C}\), and into this we substitute the viscosities of equation (3.4). We find that \(\Psi\) is positive definite if \(0\leqslant D_{0}<4\sqrt{5/3}\approx 5\) and therefore limit consideration to values of \(D_{0}\) within this range. Finally, in combining our rheological model with conservation equations governing the flow, we consider the granular physics discussed by Kamrin & Koval (2012), which adopts a model for emulsions by Goyon _et al._ (2008). They show that macroscopic, irreversible shear is accommodated by grain-rearrangement events at the microscopic scale. In partially molten rock, geometric compatibility of the grain packing dictates that grains cannot rotate freely; their rotations must be compatible with those of neighbouring grains (Rudge, 2021). Hence deformation is necessarily dispersed by grain-grain interaction during rearrangement events. This non-local interaction means that the viscosity at a point in the medium is influenced by the viscosity at points within a distance \(\xi\), known as the cooperativity length. Kamrin & Koval (2012) express this interaction in terms of a non-local fluidity--the inverse of the non-local shear viscosity \(\widetilde{\eta}_{\phi}\). We rewrite their fluidity equation in terms of a non-local viscosity, \[\widetilde{\eta}_{\phi}^{-1}=\eta_{\phi}^{-1}+\xi^{2}\nabla^{2}\left(\widetilde{\eta}_{\phi}^{-1}\right). \tag{3.6}\] Evidently, if \(\xi=0\), then the non-local viscosity reduces to \(\eta_{\phi}\). For \(\xi>0\), this equation imposes a minimum scale of viscosity variation.
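To make the closures and constraints above concrete, the following is a minimal numerical sketch (Python; all parameter values, and the helper names, are ours and purely illustrative). It evaluates the closures (3.4), spot-checks the positive-definiteness bound on \(D_{0}\) from (3.5) with \(\boldsymbol{\Lambda}=\boldsymbol{I}\) (which also gives the dilatancy angle \(\tan\psi=3D_{0}/5\)), and demonstrates the smoothing action of the non-local viscosity equation (3.6) on a 1-D periodic domain.

```python
import numpy as np

# --- Viscosity closures, eqn (3.4); illustrative parameter values ---
phi0, lam, eta0, D0 = 0.07, 27.0, 1.0, 2.0

def viscosities(phi):
    """Return (eta, zeta, D) at melt fraction phi, following eqn (3.4)."""
    eta = eta0 * np.exp(-lam * (phi - phi0))
    return eta, 5.0 * eta / 3.0, D0 * eta

# Dilatancy angle from eqn (3.3) with Lambda = I: tan(psi) = D0 * 3 / (3 * 5/3) = 3*D0/5.
print("psi =", np.degrees(np.arctan(3.0 * D0 / 5.0)), "degrees")   # ~50 deg for D0 = 2

# --- Positive dissipation, eqn (3.5) with isotropic Lambda ---
# Psi = zeta*C^2 + 4*eta*e2^2 - D*e2*C is a quadratic form in (C, e2),
# positive definite iff D0 < 4*sqrt(5/3) ~ 5.16; spot-check at random states.
print("upper bound on D0:", 4.0 * np.sqrt(5.0 / 3.0))
rng = np.random.default_rng(0)
C, e2 = rng.standard_normal(10_000), rng.standard_normal(10_000)
eta, zeta, D = viscosities(0.05)
assert np.all(zeta * C**2 + 4.0 * eta * e2**2 - D * e2 * C >= 0.0)

# --- Non-local viscosity, eqn (3.6), as a 1-D periodic boundary-value problem ---
# The fluidity g = 1/eta_tilde satisfies g - xi^2 * g'' = 1/eta_phi.
n, L, xi = 256, 1.0, 0.05
dx = L / n
x = np.arange(n) * dx
phi = phi0 + 0.01 * np.sin(2.0 * np.pi * x / L)   # illustrative porosity perturbation
g_local = 1.0 / viscosities(phi)[0]

lap = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
lap[0, -1] = lap[-1, 0] = 1.0                     # periodic boundaries
A = np.eye(n) - xi**2 * lap / dx**2
eta_tilde = 1.0 / np.linalg.solve(A, g_local)     # non-local viscosity field

# eta_tilde is a smoothed version of eta_phi: a fluidity perturbation at
# wavenumber k is attenuated by 1/(1 + xi^2 k^2); xi = 0 recovers eta_phi.
print(np.ptp(eta_tilde) < np.ptp(viscosities(phi)[0]))   # True: reduced variation
```

The attenuation factor \(1/(1+\xi^{2}k^{2})\) makes explicit why (3.6) imposes a minimum scale of viscosity variation, a property exploited in the stability analysis of §4.2.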
Goyon _et al._ (2008) measure \(\xi\) as a function of \(1-\phi\) and find that it increases to about \(5\times\) the grain diameter at a solid fraction of \(85\%\). We shall see below in §4.2 that \(\xi>0\) serves to regularise the spectrum of instability growth.

## 4 Analysis

To explore the consequences of the hypothesised rheological model, we adopt the formulation of mass and momentum conservation for a partially molten rock deforming at zero Reynolds number (McKenzie, 1984). We make a Boussinesq approximation, taking density as constant for both phases, assume zero mass transfer between phases, and neglect gravitational body forces on the basis that they are much weaker than the shear tractions imposed in experiments. With these assumptions, the phase densities vanish from the equations. The coupled system of conservation equations becomes \[\mathcal{C}=\boldsymbol{\nabla}\cdot M_{\phi}\boldsymbol{\nabla}P^{\ell}, \tag{4.1a}\] \[\boldsymbol{\nabla}P^{\ell}=\boldsymbol{\nabla}\cdot 2\widetilde{\eta}_{\phi}\dot{\boldsymbol{\varepsilon}}+\boldsymbol{\nabla}\widetilde{\zeta}_{\phi}\mathcal{C}-\boldsymbol{\nabla}\cdot\widetilde{D}_{\phi}\boldsymbol{\Lambda}\dot{\varepsilon}_{II}, \tag{4.1b}\] \[\frac{\mathrm{D}_{s}\phi}{\mathrm{D}t}=(1-\phi)\mathcal{C}. \tag{4.1c}\] The first, known as the compaction equation, is obtained from Darcy's law by eliminating the liquid velocity using the two-phase continuity equation. It includes the fluid mobility \(M_{\phi}=M_{0}(\phi/\phi_{0})^{n}\), which represents the ratio of the porosity-dependent permeability and the constant liquid viscosity. The second equation is a statement of force balance in the two-phase aggregate. The third equation is mass conservation for the solid phase (porosity is transported by the solid velocity). These equations are standard (Katz, 2022), except for two modifications. The first modification is use of the non-local viscosity \(\widetilde{\eta}_{\phi}\) in equation (4.1b), which couples it to equation (3.6) governing the non-local viscosity. For consistency, we use the non-local viscosity in (3.4) to compute non-local compaction \(\widetilde{\zeta}_{\phi}\) and dilation \(\widetilde{D}_{\phi}\) viscosities. The second modification is the last term on the right-hand side of equation (4.1b), which captures the hypothesised dilatancy effects. The classical model is recovered for \(\xi,D_{0}\to 0\). In the subsections below, we investigate the consequences of these two modifications. We do so in the context of torsional deformation and boundary conditions that mimic the laboratory experiments described in section 2.

### 4.1 Radial segregation in parallel-plate torsion

Torsional flow embeds simple shear into a cylindrical geometry with the potential for hoop stress. The experiments described in section 2 demonstrate that parallel-plate torsional flow drives solid radially outward and liquid radially inward. This phenomenon is consistent with the behaviour of dense suspensions undergoing parallel-plate torsional flow (Merhi _et al._, 2005) but in contrast to the Poiseuille flow of appendix D. We consider cone-and-plate torsional flow (where the plates are _not_ parallel) in appendix C. To understand the radially outward transport of solid grains in terms of dilatancy and particle-stress anisotropy, we work in a cylindrical geometry with coordinates \((r,\varphi,z)\), as shown in figure 1a.
We consider a cylinder of partially molten rock with outer radius \(R\), azimuthal symmetry in \(\varphi\) and, instantaneously at \(t=0\), with uniform porosity \(\phi_{0}\). At this instant, the solid flow is assumed to have zero \(\hat{\mathbf{z}}\) component, a fixed azimuthal component, and an unknown radial component. This flow is described by \[\mathbf{v}^{s}=V\hat{\mathbf{r}}+\frac{\dot{\Omega}\,rz}{H}\hat{\mathbf{\varphi}},\qquad\mathcal{C}=\frac{1}{r}\frac{\partial}{\partial r}rV,\qquad\dot{\mathbf{\varepsilon}}=\left(\begin{array}{ccc}\frac{\partial V}{\partial r}-\frac{\mathcal{C}}{3}&0&0\\ 0&\frac{V}{r}-\frac{\mathcal{C}}{3}&\frac{\dot{\Omega}r}{2H}\\ 0&\frac{\dot{\Omega}r}{2H}&-\frac{\mathcal{C}}{3}\end{array}\right), \tag{4.2}\] where \(V(r)\) is the unknown radial component of the solid velocity field, \(\dot{\Omega}\) is the constant twist rate and \(H\) is the uniform gap between the parallel plates. We linearise the strain-rate intensity under the assumption that dilatancy is driven by the forced shear such that \[\dot{\varepsilon}_{II}\sim\dot{\Omega}r/2H. \tag{4.3}\] This choice eliminates a feedback whereby the anisotropic part of the dilatancy drives additional dilatancy. While it may be physically reasonable, it will reduce the predicted dilatancy at a given value of \(D_{0}\) relative to the case where the feedback is included. We use \(\dot{\varepsilon}_{II}\) in the radial component of force-balance equation (4.1b) to write \[\frac{\partial P^{\ell}}{\partial r}=3\eta_{0}\frac{\partial}{\partial r}\frac{1}{r}\frac{\partial}{\partial r}rV-\frac{D_{0}\eta_{0}\dot{\Omega}}{2H}(2\Lambda_{\times}-1). \tag{4.4}\] We then combine this with the compaction equation (4.1a) to eliminate the liquid pressure and integrate once. Rescaling \(r\) with the outer radius \(R\) and \(V\) with the characteristic scale \[[V]=\frac{D_{0}\dot{\Omega}R^{2}}{6H}, \tag{4.5}\] we obtain the dimensionless equation \[\frac{\partial}{\partial r}\frac{1}{r}\frac{\partial}{\partial r}rV-\frac{V}{\mathcal{R}^{2}}=2\Lambda_{\times}-1. \tag{4.6}\] Here we have introduced \[\mathcal{R}\equiv\frac{\sqrt{3\eta_{0}M_{0}}}{R}, \tag{4.7}\] the ratio of the compaction length to the outer radius. The compaction length is an emergent length scale over which perturbations to the solid-liquid pressure difference are relaxed by decompaction (McKenzie, 1984; Spiegelman, 1993; Katz, 2022). The normal-stress difference on the right-hand side of equation (4.6) arises from the particle-stress anisotropy tensor \(\boldsymbol{\Lambda}\), with the coordinates aligned such that the flow direction is \(\hat{\mathbf{\varphi}}\) and the vorticity direction is \(\hat{\mathbf{r}}\). The boundary condition at the centre of the cylinder is \(V(0)=0\). With this constraint, equation (4.6) admits the solution \[V(r)=\pi\mathcal{R}^{2}\left(\tfrac{1}{2}-\Lambda_{\times}\right)\left[AI_{1}(r/\mathcal{R})-L_{1}(r/\mathcal{R})\right], \tag{4.8}\] where \(I_{n}(z)\) is the modified Bessel function of the first kind, \(L_{n}(z)\) is the modified Struve function, and \(A\) is a constant to be determined by matching the boundary condition at the dimensionless outer radius \(r=1\). Two end-member cases can be considered for this outer boundary condition.

#### 4.1.1 Outer boundary condition: no normal flow

In this case, a rigid outer cylinder requires that at \(r=R\), the radial component of velocity is \(V(R)=0\).
Then the analytical solution to dimensionless equation (4.6) is \[V(r)=\pi\mathcal{R}^{2}\left(\tfrac{1}{2}-\Lambda_{\times}\right)\left[\frac{L_{1}(1/\mathcal{R})}{I_{1}(1/\mathcal{R})}I_{1}(r/\mathcal{R})-L_{1}(r/\mathcal{R})\right]. \tag{4.9}\] This result demonstrates that the sign of \(V(r)\) is determined by the size of \(\Lambda_{\times}\). In figure 2(a), we have chosen \(\Lambda_{\times}=0.45\) such that \(V(r)>0\). This choice is qualitatively consistent with experimental results (§2) where the solid is observed to move outward with progressive twist. Evidently, for the outer boundary condition \(V(1)=0\), any choice of \(\Lambda_{\times}\) satisfying \[\Lambda_{\times}<1/2 \tag{4.10}\] is also qualitatively consistent. For this range of \(\Lambda_{\times}\), the hoop stress generated by dilatancy in the flow direction is stronger than the dilatant normal stress in the radial (vorticity) direction. As noted by Takei & Katz (2013), a compressive hoop stress drives solid radially outward.

Figure 2: Parallel-plate torsion flow at \(t=0\). Nondimensional solutions of eqn. (4.6) with uniform \(\eta_{\phi}=\eta_{0}\). Top panels have outer boundary condition \(V(R)=0\); bottom panels have the zero-effective-stress outer boundary condition given in eqn. (4.12). _(a)_ Analytical solutions (4.9) with outer boundary condition \(V(R)=0\) and with \(\Lambda_{\times}=0.45\). In the limit of \(\mathcal{R}\gg 1\), the dimensionless solution is asymptotic to \(V(r)\sim(2\Lambda_{\times}-1)(r^{2}-r)/3\). In the other limit, \(\mathcal{R}\ll 1\), the matched asymptotic solution is \(V(r)\sim\mathcal{R}^{2}(2\Lambda_{\times}-1)\left[-1+\exp(-r/\mathcal{R})+\exp(-(1-r)/\mathcal{R})\right]\). _(b)_ Decompaction rate with \(\Lambda_{\times}=0.45\). _(c)_ Analytical solution (4.13) with outer boundary condition (4.12) and with \(\mathcal{R}=0.3\). _(d)_ Decompaction rate with \(\mathcal{R}=0.3\).

#### 4.1.2 Outer boundary condition: no normal effective stress

Alternatively, we can consider the case where the partially molten cylinder is surrounded by an inviscid fluid, held at a dimensional confining pressure \(P_{c}\). This pressure must be balanced by the phase-averaged traction at the boundary and therefore, \(\hat{\boldsymbol{r}}\cdot\overline{\boldsymbol{\sigma}}\cdot\hat{\boldsymbol{r}}=-P_{c}\). Assuming that the liquid pressure is continuous at \(r=R\), we obtain the boundary condition \[\hat{\boldsymbol{r}}\cdot\boldsymbol{\sigma}^{\rm eff}\cdot\hat{\boldsymbol{r}}=0\quad\text{at}\quad r=R. \tag{4.11}\] Expanding this condition using (3.1) and the approximate second invariant (4.3), then non-dimensionalising \(r\) with \(R\) and \(V\) with \([V]\), we obtain the dimensionless boundary condition \[\frac{V}{3}+\frac{\partial V}{\partial r}=\Lambda_{\times}\quad\text{at}\quad r=1. \tag{4.12}\] This condition yields a different value of \(A\) and the general solution (4.8) becomes \[V(r)=\pi\mathcal{R}^{2}\left(\tfrac{1}{2}-\Lambda_{\times}\right)\left[\frac{3L_{0}\left(\frac{1}{\mathcal{R}}\right)-2\mathcal{R}L_{1}\left(\frac{1}{\mathcal{R}}\right)+\frac{3}{\pi\mathcal{R}}\frac{\Lambda_{\times}}{1/2-\Lambda_{\times}}}{3I_{0}\left(\frac{1}{\mathcal{R}}\right)-2\mathcal{R}I_{1}\left(\frac{1}{\mathcal{R}}\right)}I_{1}\left(\frac{r}{\mathcal{R}}\right)-L_{1}\left(\frac{r}{\mathcal{R}}\right)\right]. \tag{4.13}\] This function is plotted in figure 2(c)-(d); the curves are computed with \(\mathcal{R}=0.3\) and values of \(\Lambda_{\times}\) that span \(1/2\).
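These closed-form expressions are straightforward to evaluate numerically before discussing their features. A minimal sketch for the no-normal-flow case (Python, assuming SciPy's implementations of the modified Bessel and Struve functions; the helper name `V_rigid` is ours) evaluates (4.9) and illustrates the sign argument of (4.10):

```python
import numpy as np
from scipy.special import iv, modstruve   # modified Bessel I_n and modified Struve L_n

def V_rigid(r, Rc=0.3, Lx=0.45):
    """Dimensionless radial solid velocity, eqn (4.9), satisfying V(1) = 0.
    Rc is the compaction-length ratio (script R in the text); Lx is Lambda_cross."""
    pref = np.pi * Rc**2 * (0.5 - Lx)
    A = modstruve(1, 1.0 / Rc) / iv(1, 1.0 / Rc)   # fixes the outer boundary condition
    return pref * (A * iv(1, r / Rc) - modstruve(1, r / Rc))

r = np.linspace(0.0, 1.0, 201)
v = V_rigid(r)
print(abs(v[-1]))            # ~0 to rounding error: V(1) = 0 is satisfied
print(v.min(), v.max())      # for Lambda_cross < 1/2: V >= 0, i.e. outward solid flow
```

For \(\Lambda_{\times}>1/2\) the prefactor changes sign and the solid flow reverses, consistent with the discussion of figure 2 that follows.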
The radial component of solid velocity \(V\) is generally positive, indicating outward solid flow and decompaction across all radii. This outward flow is again driven by the compressive hoop stress. Distinct from the rigid outer boundary condition, however, condition (4.12) allows the solid to move outward at the outer boundary. Hence in this case, torsion with any \(\Lambda_{\times}\) causes the solid cylinder to expand radially, imbibing liquid across the outer boundary. The pattern of flow in figure 2(c) is slightly different for \(\Lambda_{\times}=0.75\), where there is a region of \(V<0\) at inner radii. The inner part of this region is associated with compaction \(\mathcal{C}<0\), as shown by the solid curve in panel (d). The driving force is again dilatancy, but in this case with \(\Lambda_{\times}>1/2\), the radial dilatant normal stress plays a significant role. As is the case for Poiseuille flow in appendix D, faster shear at larger radii drives solid inward. But with zero radial effective stress at the outer boundary, dilatancy also drives outward solid flow, radial expansion of the cylinder, and radial imbibition of liquid.

#### 4.1.3 Outward force on pistons due to dilatancy

The results depicted in figure 2 for both outer boundary conditions are valid instantaneously at \(t=0\), when all properties are uniform with radius. At this initial instant, we compute an axial force outward on the plates, parallel to \(\hat{\boldsymbol{z}}\), that arises from dilatancy. This calculation provides a prediction to be compared with laboratory measurements. Details of the calculation are in appendix A. The main result is that the axial force is dominated by the direct effect of dilatancy in the flow-perpendicular direction, and hence scales as \[\Delta F\approx\mathcal{T}D_{0}\Lambda_{\perp}/3R, \tag{4.14}\] where \(\mathcal{T}\) is the torque that causes a twist-rate of \(\dot{\Omega}\) at \(t=0\). \(\Delta F\) is the outward force in excess of that due to the confining pressure surrounding the sample. Using typical laboratory values for \(\mathcal{T}\) and \(R\) in (4.14) (appendix A), we find that the excess force is of order \(10\%\) of the force due to the typical confining pressure. In detail, the excess force \(\Delta F\) can deviate from the simple prediction of (4.14). Figure 8 in appendix A plots this deviation for both outer-boundary-condition cases over a range of \(\mathcal{R}\) and \(\Lambda_{\times}\). When the outer boundary is closed to solid flow (\(V(R)=0\)), the excess force is close to the simple scaling above; when the outer boundary has zero effective stress (eqn. (4.12)), dilatancy in the radial direction leads to a net decompaction of the sample and a reduction in the excess axial force by approximately one half.

#### 4.1.4 Finite time and steady state

The analysis of parallel-plate torsion to this point has considered the instantaneous problem at \(t=0\), when the domain is uniform in porosity. The instantaneous flow requires that this uniform state is subsequently lost by radial segregation of solid and liquid (and, as shown empirically and below in §4.2, by a banding instability). Appendix B derives the system of equations, simplified from (4.1), that governs the finite-time evolution of the radial distribution of porosity. The non-uniform porosity \(\phi(r)\) leads to a radial dependence of mobility \(M_{\phi}\) and aggregate viscosity \(\widetilde{\eta}_{\phi}\).
A series of time-dependent numerical solutions is plotted in figure 3a, coloured according to the shear strain \(\gamma\) at the outer radius of the domain \(R\). Details of the numerical method are in appendix B; code is available in an online repository (Katz _et al._, 2023). This calculation uses \(\mathcal{R}=0.3\) and \(\Lambda_{\times}=0.4\) with boundary condition \(V(R)=0\). At the smallest finite strains, the porosity distribution has the shape of the \(t=0\) solution for \(\mathcal{C}(r)\), shown in fig. 2b. With increasing strain (time), the porosity contrast between the centre and outer radius increases. However, the evolution slows and ceases as the porosity distribution approaches a steady state. In the steady state and with \(\xi=0\), the radial component of the solid velocity is zero and radial force-balance equation (4.1b) reduces to \(\hat{\boldsymbol{r}}\cdot(\boldsymbol{\nabla}\cdot D_{\phi}\boldsymbol{\Lambda}\dot{\varepsilon}_{II})=0\) or, after expanding and rearranging, \[D_{\phi}=\Lambda_{\times}\left(2D_{\phi}+rD_{\phi}^{\prime}\frac{\partial\phi}{\partial r}\right), \tag{4.15}\] where \(D_{\phi}^{\prime}\) is the derivative of the dilation viscosity with respect to its argument, \(\phi\). In solutions to this equation, a steady state is reached when the force of the hoop stress (left-hand side) balances the force of the radial normal stresses (right-hand side). The compressive hoop force, associated with a coefficient of unity in \(\boldsymbol{\Lambda}\), is due to azimuthal dilatancy that pushes solid radially outward. The radial normal force, associated with the coefficient \(\Lambda_{\times}\) in \(\boldsymbol{\Lambda}\), has two causes: first, the gradient in radial dilatancy due to the torsional shear (\(\dot{\varepsilon}_{II}\propto r\)), and second, the gradient in radial dilatancy due to the steady-state radial gradient in porosity.

Figure 3: Parallel-plate torsion at \(t\geq 0\) and \(t\to\infty\). Coloured curves show the time-dependent, numerical solution to the system (B.3) for porosity \(\phi(r,t)/\phi_{0}\). Black curves show the analytical, steady-state solution (4.16) for \(\xi=0\). Both panels use empirically motivated values \(\lambda=27\) and \(\phi_{0}=0.07\). _(a)_ Solutions with \(\Lambda_{\times}=0.4\), \(\mathcal{R}=0.3\) and \(\xi/R=0.03\) at various outer-radius strains \(\gamma(R)\). _(b)_ Steady solutions for four values of \(\Lambda_{\times}\).

Laboratory experiments that impose torsional deformation, discussed in §2, have an azimuthally averaged porosity that decreases with radius. On the basis of the predicted compaction rate at \(t=0\), shown in figure 9, we can infer that this porosity structure is consistent with the no-normal-flow boundary condition and \(\Lambda_{\times}<1/2\). It is unclear whether these experiments approach a steady-state radial porosity profile, or would do so at larger strains. If a steady state can be achieved, then equation (4.15) requires that \(D_{\phi}^{\prime}<0\) independent of the specific form of \(D_{\phi}\); in other words, it requires that the dilatancy stress at fixed strain rate decreases as porosity increases. In equation (3.4) we specified that \(D_{\phi}\) has the form \(D_{\phi}=D_{0}\eta_{\phi}\). Given the exponential dependence of \(\eta_{\phi}\) on \(\phi\), it follows that \(D_{\phi}^{\prime}=-\lambda D_{\phi}\).
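Explicitly, substituting \(D_{\phi}^{\prime}=-\lambda D_{\phi}\) into (4.15) and dividing through by \(D_{\phi}\) gives \(1=\Lambda_{\times}\left(2-\lambda r\,\partial\phi/\partial r\right)\), so the steady porosity gradient is \[\frac{\partial\phi}{\partial r}=\frac{2\Lambda_{\times}-1}{\Lambda_{\times}\lambda r},\] which integrates to a logarithmic profile in \(r\).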
With this we can solve (4.15) to give \[\phi(r,t\to\infty)=\phi_{0}-\frac{1/2-\Lambda_{\times}}{\Lambda_{\times}\lambda/2}\left(\ln r+\frac{1}{2}\right), \tag{4.16}\] where \(r\) has been non-dimensionalised with the outer radius \(R\) and we have used global conservation of liquid mass to determine the constant of integration (see appendix B for details). This function is plotted as black curves in figure 3 for various values of \(\Lambda_{\times}\); in §5 we compare it with measurements from laboratory experiments. The logarithmic singularity in (4.16) for \(r\to 0\) is removed by non-local viscosity when cooperativity length \(\xi>0\). This is evident in figure 3a by comparison between the numerical solution at late time (red curve; \(\xi=0.1\delta\)) and the steady solution (black curve; \(\xi=0\)).

### 4.2 Simple shear between parallel plates

Here, following the analysis of Spiegelman (2003), we investigate the stability of a two-dimensional simple-shear flow with initial melt fraction \(\phi_{0}+\phi_{1}(\mathbf{x},t)\), where \(|\phi_{1}|\ll\phi_{0}\) is a perturbation. A schematic diagram is shown in figure 4(a). The coordinate system is oriented such that \(\hat{\mathbf{x}}\) is in the flow direction and \(\hat{\mathbf{y}}\) is in the direction perpendicular to the shear plane. We assume invariance in the \(\hat{\mathbf{z}}\) (vorticity) direction and take the \(x\)-\(y\) plane to be infinite; hence there is no need to impose boundary conditions. The procedure is a standard linearised stability analysis, detailed in Katz (2022, Chap. 7) and sketched in the next paragraph. Alisic _et al._ (2016) provides a three-dimensional analysis for torsion in cylindrical coordinates, but this adds mathematical complexity without additional physical insight. We use equation (4.1b) to eliminate the pressure gradient from (4.1a) and obtain an equation governing the irrotational part of the velocity field. Then we take the curl of equation (4.1b) to obtain an equation governing the solenoidal part. These are coupled to equations (3.6) and (4.1c) for viscosity and solid mass. We expand variables into a steady, background state and a time-dependent perturbation that is arbitrarily small at \(t=0\). The perturbations are assumed to be proportional to \(\exp[i\mathbf{k}(t)\cdot\mathbf{x}+s(t)]\), where \(\mathbf{k}(t)\) is a time-dependent wave vector that changes direction and magnitude with the background flow. After linearising the governing equations, we solve the leading-order balance for the base state. This is a simple-shear flow with velocity gradient \(\dot{\gamma}\) and zero compaction rate. Using the base-state solution, we solve the perturbation equations to obtain the dimensionless growth rate \(\dot{s}\) as a function of dimensionless wave-vector magnitude \(k\) and wavefront angle to the shear plane \(\theta\equiv\tan^{-1}k_{x}/k_{y}\). In the present case, we obtain \[\dot{s}=(1-\phi_{0})\frac{\lambda}{3}\frac{k^{2}}{(1+k^{2})(1+\epsilon^{2}k^{2})}\left[\sin 2\theta-\frac{D_{0}}{2}\frac{\left(\sin^{2}\theta+\Lambda_{\perp}\cos^{2}\theta\right)\sin^{2}2\theta}{1-\frac{D_{0}}{4}(1-\Lambda_{\perp})\sin 4\theta}\right]. \tag{4.17}\] In this equation, \(\dot{s}\) has been made dimensionless by scaling with the background rate of shear \(\dot{\gamma}\). Wavenumber \(k\) has been made dimensionless by scaling with the compaction length \(\delta\equiv\sqrt{3M_{0}\eta_{0}}\). We have introduced the ratio \(\epsilon\equiv\xi/\delta\) representing the dimensionless cooperativity scale.
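Equation (4.17) is straightforward to evaluate. The minimal sketch below (Python; parameter values and the helper name `s_dot` are ours, chosen for illustration) verifies the two spectral features discussed in the following paragraphs: a growth-rate maximum at dimensionless wavenumber \(k^{*}=\epsilon^{-1/2}\), and, for \(D_{0}=2\) with isotropic dilatancy, a low-angle peak near \(15^{\circ}\).

```python
import numpy as np

def s_dot(k, theta, D0=2.0, Lperp=1.0, phi0=0.05, lam=27.0, eps=0.1):
    """Dimensionless growth rate of porosity perturbations, eqn (4.17)."""
    s2 = np.sin(2.0 * theta)
    angular = s2 - 0.5 * D0 * (np.sin(theta)**2 + Lperp * np.cos(theta)**2) * s2**2 \
        / (1.0 - 0.25 * D0 * (1.0 - Lperp) * np.sin(4.0 * theta))
    return (1.0 - phi0) * (lam / 3.0) * k**2 \
        / ((1.0 + k**2) * (1.0 + (eps * k)**2)) * angular

eps = 0.1
k = np.logspace(-2, 3, 5000)
print(k[np.argmax(s_dot(k, np.radians(15.0)))])   # spectral peak near eps**-0.5 ~ 3.16

theta = np.radians(np.linspace(0.5, 44.5, 881))   # search the low-angle branch only
print(np.degrees(theta[np.argmax(s_dot(eps**-0.5, theta))]))   # ~15 deg for D0 = 2
print(np.degrees(np.arcsin(1.0 / 2.0)) / 2.0)     # analytical theta_1*, cf. eqn (4.19)
```

Note that the wavenumber and angular factors in (4.17) multiply, so the optimisations over \(k\) and \(\theta\) decouple.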
Localisation phenomena in partially molten rock can emerge at scales smaller than the compaction length. In this case, localisation occurs because positive perturbations have lower viscosity, and hence decompact under resolved tension (Stevenson, 1989; Spiegelman, 2003). We refer to these perturbations as 'bands,' making explicit reference to the high-porosity bands seen in experimental cross-sections. Below we discuss the dependence of band growth rate on wavenumber, angle and physical parameters. Figure 4(b) shows the wavenumber dependence of the growth rate for several values of \(\epsilon\) (assuming optimal orientation, \(\theta=\theta^{*}\)). The curve for \(\epsilon=0\) is the case with zero cooperativity of the viscosity field. It shows the classical result that all wavelengths smaller than the compaction length (\(k\gg 1\)) grow equally fast (Stevenson, 1989). The use of the non-local viscosity with finite \(\epsilon\) regularises the spectrum, imposing a short-wavelength cutoff at a dimensional wavelength \(\sim\xi\), the cooperativity scale of the non-local viscosity. Growth-rate curves in fig. 4(b) have a maximum at a dimensional wavenumber \[k^{*}=1/\sqrt{\xi\delta}. \tag{4.18}\]

Figure 4: Growth rate of sinusoidal perturbations under a simple-shear flow from eqn. (4.17). _(a)_ Schematic diagram showing a finite region of the infinite domain. Grayscale shows the perturbed porosity field. _(b)_ Growth rate as a function of wavenumber \(k\) for \(D_{0}=0\) at \(\theta=45^{\circ}\). Circles represent the growth rate computed at \(k^{*}=\epsilon^{-1/2}\). _(c)_ Growth rate angular factor as a function of \(\theta\) with \(\Lambda_{\perp}=1\), and values of \(D_{0}\) given in the legend. _(d)_ Growth rate angular factor as a function of \(\theta\) with \(D_{0}=2\), and values of \(\Lambda_{\perp}\) given in legend.

Figure 4(c) shows the growth rate as a function of the angle \(\theta\) between wavefronts and the shear plane. Four curves show different values of the dilatancy viscosity prefactor \(D_{0}\geq 0\) (assuming \(\boldsymbol{\Lambda}=\boldsymbol{I}\), i.e., isotropic dilatancy). The curve for \(D_{0}=0\) corresponds to the case with no dilatancy, as studied by Spiegelman (2003). This case has positive growth rates between zero and \(90^{\circ}\), a range over which bands are subject to tension, and negative rates for angles greater than \(90^{\circ}\), which are subject to compression. The maximum growth rate occurs at \(\theta^{*}=45^{\circ}\), where band wavefronts are perpendicular to the principal tension axis. For larger \(D_{0}\) in fig. 4(c), dilatancy leads to peak growth rate at low and high angles. In particular, the two maxima of growth rate occur at angles \(\theta_{1,2}^{*}\) that vary with \(D_{0}>1\), \[\theta_{1}^{*}=\arcsin(1/D_{0})/2,\qquad\theta_{2}^{*}=\pi/2-\arcsin(1/D_{0})/2. \tag{4.19}\] A growth-rate peak at \(\theta^{*}=15^{\circ}\), which is roughly that observed in experiments, corresponds to \(D_{0}=2\). Dilatancy has two competing effects that combine to produce this spectral shift with increasing \(D_{0}\). First, due to \(D_{\phi}^{\prime}<0\), the background simple-shear flow causes a dilatancy perturbation that is exactly anti-phase with the porosity perturbation. This causes perturbations to decay at a rate that is independent of band angle \(\theta\). Second, the porosity perturbations create variations in shear viscosity, which in turn create perturbations in the rate of shear strain.
These drive variations in dilatancy that are exactly in-phase with variations in porosity; they hence contribute to perturbation growth. Critically, however, the growth rate associated with this second mechanism depends on band angle \(\theta\). Shear localises when bands are at low or high angle to the shear plane and therefore this effect is proportional to \(\cos^{2}2\theta\). The combination of the two contributions of isotropic dilatancy is negative, overall, and proportional to \(\sin^{2}2\theta\). Dilatancy in partially molten rock may be anisotropic, however. Experiments on dense granular suspensions, reviewed by Guazzelli & Pouliquen (2018), have obtained inconclusive estimates of \(\Lambda_{\perp}\), with some reporting \(\Lambda_{\perp}\) increasing from unity with solid fraction, some reporting the opposite, and others reporting \(\Lambda_{\perp}\) within error of unity for all solid fractions. In figure 4(d) we compare growth-rate curves as a function of band angle for \(\Lambda_{\perp}=0.8,\,1,\,1.2\) for \(D_{0}=2\). The differences in the growth-rate peaks are subtle--likely indistinguishable on the basis of measured band-angle histograms from experiments.

## 5 Comparison with laboratory data

Published results from laboratory experiments (§2) provide an opportunity to test our theory. We begin by considering the best-established outcome of experiments, the localisation of melt into high-porosity bands. In particular, we first consider the angle that the bands make to the shear plane. In figure 5, blue points with error bars indicate the mean and standard deviation of band angles from experiments quenched at shear strains between about 1 and 4. Each panel presents the same data set. The points form a coherent array with mean angles in the range 15-20\({}^{\circ}\), except at strains greater than about 3, at which the points spread out into a range of 10-25\({}^{\circ}\). The dashed lines, also identical in each panel, indicate trajectories that band angles would follow if rotated passively in the simple-shear flow (Katz _et al._, 2006). Comparison of the array of blue points with the adjacent dashed lines clearly demonstrates that the evolution cannot be characterised as passive rotation of an initial set of bands. The maintenance of low angles over large strains has been attributed to successive generations of low-angle bands that draw melt from (and hence replace) previous generations as they undergo rotation to angles unfavourable for growth (Holtzman _et al._, 2005; Katz _et al._, 2006). This process occurs at a finite perturbation amplitude and, strictly speaking, should not be described by solutions of linearised governing equations. However, if the angular spectrum of growth rate (i.e., fig. 4c) remains approximately independent of strain, then a forward integration of \(\dot{s}\) with respect to time (strain) along the trajectories of passive rotation might approximate the evolving angular spectrum of amplitude. In this context, normalisation of the amplitude spectrum at each increment of strain might qualitatively represent melt redistribution. The contours of this finite-strain, normalised perturbation amplitude spectrum are plotted in figure 5 for three different values of \(D_{0}\) (each with \(\boldsymbol{\Lambda}=\boldsymbol{I}\)). In panel (a), for \(D_{0}=0\), we see that without dilatancy, peak predicted amplitudes are far from observations.
In panel (b), for \(D_{0}=2\), and in panel (c), for \(D_{0}=3\), we see that peak predicted amplitudes occur at angles close to measured values. The \(D_{0}=2\) case is in better alignment at lower strains where nonlinear effects might be less important, and hence we take this value of the dilatancy pre-factor to be most appropriate in the context of the present model assumptions.

Figure 5: Angle spectra of porosity-band amplitude as a function of shear strain \(\dot{\gamma}t\). The data points are the same in each panel. They record the mean angle from band-angle histograms of individual, published experiments (see legend); error bars are one standard deviation of the histogram. Solid lines are contours of the band amplitude \(\exp[s(t)]\), normalised over angles at each increment of strain. Dashed lines are passive advection trajectories (see text). Amplitude is computed by quadrature of the growth rate \(\dot{s}(t)\) from equation (4.17) with \(\Lambda_{\perp}=1\) and dimensionless \(k(t=0)=k^{*}=\epsilon^{-1/2}\). Each panel has a different magnitude of dilatancy: _(a)_ \(D_{0}=0\); _(b)_ \(D_{0}=2\); _(c)_ \(D_{0}=3\).

The linearised theory for porosity-band growth also provides a prediction of the wavelength with the largest growth rate. Again, this prediction is strictly valid only near the onset of instability when the porosity contrast remains small. Laboratory experiments that produce bands provide the opportunity to measure mean band width and spacing; summing these provides an estimate of the band wavelength. However, as with band angles, these measurements are made in a nonlinear regime when the porosity contrast is large, so a comparison with theory is not strictly valid. Nonetheless, if the growth-rate spectrum remains roughly independent of strain, then a correspondence between theory and experiment might be expected even at larger strains. In that case, we would expect the observed wavelength to be proportional to \(1/k^{*}=\sqrt{\xi\delta}\), where \(\xi\) is the cooperativity length scale and \(\delta\) is the compaction length. Figure 6 suggests that this correspondence may hold. The blue symbols represent the wavelength of bands from experiments as a function of the geometric mean of the empirically known grain diameter \(d\) and compaction length \(\delta\). The cooperativity scale \(\xi\) is understood to be proportional to grain diameter (Henann & Kamrin, 2013) and hence \(1/k^{*}\propto\sqrt{d\delta}\). Although there is considerable uncertainty on the experimental estimates (particularly \(\delta\)), a linear trend is compatible with the data. Finally, we evaluate model predictions of the radial distribution of porosity in torsion experiments quenched at different strains. The experiments are a subset of those published by Qi _et al._ (2015) and Qi & Kohlstedt (2018); we select only those in which the liquid phase is basaltic melt. We re-analyse high-resolution binary images of transverse sections, following the authors' published protocol, but averaging azimuthally over fewer, wider rings. Recalculated porosities are presented as a function of normalised sample radius in figure 7. The data are coloured by the magnitude of shear strain at the outer radius, which ranges from zero to \(\sim\)14. Three experiments with strains between 5 and 6 are averaged to produce one radial series. Error bars represent the standard deviation of the averaged and normalised porosity at \(t=0\), at which the porosity should be uniform.
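Before turning to the radial porosity comparison, it is useful to tabulate what the steady-state profile (4.16) predicts. A minimal sketch (Python) with the parameter values used for figure 7 (\(\Lambda_{\times}=0.4\), \(\phi_{0}=0.04\), \(\lambda=27\); the helper name `phi_steady` is ours):

```python
import numpy as np

# Steady-state radial porosity, eqn (4.16); r is normalised by the outer radius R.
Lx, phi0, lam = 0.4, 0.04, 27.0

def phi_steady(r):
    return phi0 - (0.5 - Lx) / (Lx * lam / 2.0) * (np.log(r) + 0.5)

print(phi_steady(1.0) / phi0)    # ~0.77: compacted outer radius
print(phi_steady(0.05) / phi0)   # ~2.2: melt-enriched centre (log singularity as r -> 0)

# The offset of 1/2 enforces global liquid-mass conservation: int_0^1 phi*r dr = phi0/2.
r = np.linspace(1e-6, 1.0, 200_001)
f = phi_steady(r) * r
print(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)) * 2.0 / phi0)   # ~1.0
```

The predicted centre-to-edge porosity contrast of roughly a factor of three sets the scale against which the experimental profiles in figure 7 can be judged.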
Figure 6: Wavelength of porosity bands in laboratory experiments (see legend) plotted against the geometric mean of the grain size \(d\) and the compaction length. The data-source publications provide mean estimates of band spacing, band width, grain size, and compaction length. Band wavelength is calculated as the sum of mean band spacing and mean band width. Errors on measurements are propagated to give the error bars. The dashed line is a fit to the data respecting uncertainties on both axes (York _et al._, 2004; Wiens, 2023).

A qualitative conclusion can be immediately drawn by examining the data. The boundary condition imposing zero effective stress at \(r=R\) is not consistent with the experiments. This is because of the pattern of decompaction shown in figure 2d, which has the most rapid decompaction at the outer boundary. In contrast, the empirical data uniformly exhibit reduced porosity due to compaction there. Hence we proceed using only the \(V(R)=0\) condition of no radial flow at the outer boundary. Model predictions are overlaid onto the laboratory data in figure 7. The black dotted line is the steady-state solution from equation (4.16). This is computed with \(\Lambda_{\times}=0.4\), a value chosen to give an approximate match with the highest-strain experiment. (Note, however, that we have no evidence that the empirical porosity distribution at \(\gamma\approx 14\) is in steady state.) The dashed curves are numerical solutions of the time-dependent model (appendix B), plotted at values of outer-radius strain indicated by the colour of the curve. The shape of the model curves is in qualitative agreement with the data trends, within error. However, model porosity evolves more rapidly as a function of strain than the porosity in experiments. There are two main difficulties in interpreting this mismatch in terms of parameter values or deficiencies of the theory. The first is that we do not know whether the porosity distributions from experiments shown in fig. 7 are approaching a steady state, as predicted by the theory. The uncertainties in the porosity and the coarse sampling in total strain make any inferences speculative. Second, supposing that the experiments are evolving toward a steady state, we do not have a reliable estimate of the porosity distribution in that state. If we had constraints on that distribution (and knowledge of \(\phi_{0}\) and \(\lambda\)), we could use equation (4.16) to infer \(\Lambda_{\times}\). Advancing speculatively, we assume that the experiments do approach a steady state. We note that smaller \(\Lambda_{\times}\) corresponds to larger \(\mathrm{d}\phi/\mathrm{d}r\) at steady state (fig. 3b). Assuming that our estimates for \(\phi_{0}\approx 0.04\) and \(\lambda\approx 27\) are sufficiently accurate, the data in figure 7 are indicative of \(\Lambda_{\times}\lesssim 0.4\). We can estimate the timescale of porosity adjustment to steady state in the numerical solutions shown in figure 7. Referring to the porosity evolution equation (4.1c), we approximate \(\mathrm{D}_{s}\phi/\mathrm{D}t\) by \(\Delta\phi/\tau\), where \(\tau\) is the timescale over which porosity changes by \(\Delta\phi\) from its initial value \(\phi_{0}\) to its steady value at some given radius. According to (4.1c), this change is driven by decompaction at a rate \((1-\phi)\mathcal{C}\); we approximate this rate as \(\tilde{\mathcal{C}}\), which we take to be the decompaction rate at \(r=0\), \(t=0\) (cf. fig. 2b).
We can estimate the timescale of porosity adjustment to steady state in the numerical solutions shown in figure 7. Referring to the porosity evolution equation (4.1), we approximate \(\mathrm{D}_{s}\phi/\mathrm{D}t\) by \(\Delta\phi/\tau\), where \(\tau\) is the timescale over which porosity changes by \(\Delta\phi\) from its initial value \(\phi_{0}\) to its steady value at some given radius. According to (4.1), this change is driven by decompaction at a rate \((1-\phi)\mathcal{C}\); we approximate this rate as \(\tilde{\mathcal{C}}\), which we take to be the decompaction rate at \(r=0\), \(t=0\) (cf. fig. 2b). This rate is obtained by calculating \(\mathcal{C}(r)\) at \(t=0\) from the analytical solution (4.9) for radial velocity, re-dimensionalising, and evaluating at \(r=0\) with \(\mathcal{R}\sim 1\). We then form the outer-radius strain at time \(\tau\) as \(\gamma(\tau)=\dot{\gamma}\tau=\dot{\gamma}\Delta\phi/\tilde{\mathcal{C}}\), where \(\dot{\gamma}=\dot{\Omega}R/H\) is the outer-radius strain rate. This yields \[\gamma(\tau)\sim\frac{18}{D_{0}\Lambda_{\times}\lambda}, \tag{5.1}\] where we have used (4.16) to evaluate \(\Delta\phi\) at \(r=1/\mathrm{e}^{2}\). For the parameters used in figure 7, this gives an outer-radius strain of \(\gamma\sim 1\) at which the simulated porosity has evolved to within a factor of 1/e of its steady value. This estimate is comparable to the numerical solution of fig. 7, but it is smaller than the empirical timescale by a factor of 10. We cannot bring these timescales into agreement by changing \(D_{0}\), because this parameter is constrained by the angle of porosity bands (fig. 5), and we have already assumed that we know \(\lambda\). Reducing \(\Lambda_{\times}\) thus appears to be an option. However, according to figure 3b, reducing \(\Lambda_{\times}\) by a factor of 2 predicts a steady-state, radial porosity gradient much larger than observed at the largest empirically attained strain \(\gamma\). Experiments to larger \(\gamma\) are needed to test this prediction. Alternatively, the discrepancy in timescales may be a consequence of nonlinear interaction of radial segregation with the emergence of high-porosity layers, which is not captured in our models.

Figure 7: Radial distribution of porosity \(\phi(r)\) normalised by the initial porosity \(\phi_{0}\) in experiments and theory. Symbols represent porosity from laboratory experiments obtained by reprocessing high-resolution scans of transverse sections (Qi _et al._, 2015; Qi & Kohlstedt, 2018); values are averages over rings of equal radial span. Error bars show the standard deviation of porosity in the undeformed (\(\gamma=0\)) experiment. Colours represent the shear strain at the outer radius \(\gamma(R)\). The black, dotted line represents the steady-state solution (4.16) with \(\Lambda_{\times}=0.4\), \(\phi_{0}=0.04\), \(\lambda=27\). Dashed curves are numerical solutions to the system (B.3) with the same parameters as for the steady curve and also \(D_{0}=2\), \(\mathcal{R}=0.3\). The numerical solution is plotted at finite values of \(\gamma(R)\), as given by their colour. Appendix B gives details of the numerical method.

## 6 Discussion

We hypothesised that granular physics (i.e., dilatancy and non-local fluidity) shapes the patterns that emerge when partially molten rock is deformed in laboratory experiments. Our model predictions, based on a rheological formulation combining theories for poro-viscous compaction and dense granular suspensions, can be made quantitatively consistent with most aspects of the empirical data. This consistency arises through four key choices. First is the choice of a rigid boundary condition at the outer radius of the cylinder. Second is the choice of a reduced dilatancy in the vorticity direction of the particle-stress anisotropy tensor (\(\Lambda_{\times}<1/2\)). Together, these enable the prediction of radially inward melt segregation and compaction at outer radii, both of which are observed in experiments. The third choice is for \(D_{0}\approx 2\), which predicts the emergence of porosity bands at 15-20\({}^{\circ}\) to the shear plane, as observed in experiments.
The fourth choice is for a finite cooperativity length \(\xi\), which regularises the growth-rate spectrum. These choices are neither physically implausible nor empirically unreasonable. Therefore we assert that laboratory experiments provide support for our hypothesis, under the conditions (i.e., the strain rate) at which they are conducted. Whether (and how) our theory extrapolates to the much slower strain rates under natural conditions depends on the physical processes that are occurring at the grain scale. The grain-scale physics is discussed below, after we consider the relationship of our theory with that of anisotropic viscosity.

### Relationship to anisotropic viscosity

A theory for anisotropic viscosity (Takei & Holtzman, 2009) is also capable of explaining band angles and radial segregation (Takei & Katz, 2013). The basis for this theory is a model of Coble creep, where melt provides a fast pathway for circum-grain mass diffusion. In small-strain experiments (Takei, 2010), deviatoric stress causes the melt to preferentially coat grain boundaries that have normal vectors in the direction of maximum tension. The theory predicts that this anisotropy in solid-phase contiguity causes an anisotropic creep response to deviatoric stress: the deviatoric-compression direction has higher viscosity than the deviatoric-tension direction. This anisotropy gives rise to an effective dilatancy that drives radial segregation and sets the band angle (Qi _et al._, 2015; Takei & Katz, 2015), as also obtained here. It may therefore make sense to think of the present, direct formulation of dilatancy as an effective description of underlying physics that is more fully described by anisotropic, Coble-creep viscosity.

However, there are reasons to doubt this view. The first is that the bands that emerge in experiments on hot, partially molten rock are similar to Riedel shear zones in cold granular media (Schmocker _et al._, 2003; Bedford & Faulkner, 2021), suggesting a common mechanism. Although both are cases of deforming granular media, Coble creep is thermally activated and does not contribute at low temperature. A second reason is that the solid-phase contiguity tensor, measured in band-producing experiments, is not suitably oriented to predict low-angle bands in viscous anisotropy theory (Takei & Katz, 2013; Qi _et al._, 2015) (however, see Seltzer _et al._, 2023). A third reason is that viscous anisotropy theory has no inherent mechanism to regularise the band growth-rate spectrum. So it may be that the granular-medium hypothesis considered here represents a distinct physical mechanism, albeit with similar implications for observable features.

Further work is required to develop experiments and analyses that can distinguish between these competing hypotheses. For example, granular physics should be tested against the hysteresis measured in oscillating stress experiments (Takei, 2010). To capture this behaviour may require extension of the present theory to include a fabric tensor (Mehrabadi _et al._, 1982) and its evolution. A more fundamental approach, however, is to develop grain-scale models that consistently integrate a set of plausible physical processes. This could clarify the conditions under which grain-boundary sliding is accommodated by dilatancy and/or Coble creep.
### Physics at the grain scale

The creeping deformation of polycrystalline aggregates is known to occur by several grain-scale mechanisms: deformation of grains by the motion of lattice dislocations, shape-change of grains by mass diffusion down gradients of chemical potential induced by deviatoric stress, and grain-boundary sliding, whereby the centre of mass of adjacent grains moves relative to one another. This latter mechanism is typical of athermal granular flow, in which an aggregate of rigid grains is required to (locally) dilate to accommodate the geometric incompatibilities of relative motion. At higher effective stress, geometric incompatibility might instead be resolved by cataclasis or by shape-change of grains through diffusion or dislocations. A subset of these mechanisms may simultaneously contribute to macroscopic deformation of the aggregate.

There is currently no unified model to predict the relative contributions of different mechanisms across a broad range of conditions. However, three basic expectations are relevant. First, low effective stress relative to the driving shear stress will favour dilatancy over shape-change of grains. Second, higher resistance to grain-grain sliding along a grain boundary (whether viscous or associated with a frictional yield stress) will favour shape change over sliding. And third, higher homologous temperatures will favour thermally activated processes of mass diffusion and dislocation motion over cataclasis.

Saturation of the dense granular medium with a mobile, incompressible liquid may also be an important factor. (For this discussion, we consider the presence of liquid as being independent of homologous temperature even though, in the case of partial melt, the two are linked.) A greater volume fraction of liquid means that less geometric incompatibility is incurred by relative motion of grains, and hence grain-boundary sliding is promoted. Furthermore, a greater liquid pressure relative to the mean compressive stress of the solid framework (i.e., a low effective stress) also promotes sliding. However, there are two other points to consider. First, in a poro-viscous context, non-zero effective stress causes (de)compaction, even in the absence of shear. Second, liquid transport over a finite distance through a porous medium occurs on a timescale and with a resistance controlled by the ratio of liquid viscosity to permeability. This transport is required to accommodate (de)compaction and is therefore a control on the evolution of the effective pressure.

These considerations become important in the context of dilatancy within a sealed domain of fixed volume. If the enclosed, saturated granular medium is undergoing shear, then throughout the volume there is a compressive solid stress arising from grain interactions. Dilation can occur locally within the volume, but only if it is balanced by compaction elsewhere. There are two relevant cases. If the strain rate is nominally uniform within the domain, then dilating and compacting regions emerge by instability on length and time scales that are set internally, as in §4.2. Alternatively, if the strain rate has an imposed gradient, then this will organise the spatial pattern of dilation and compaction, as in §4.1 and appendices C and D. The particle-stress anisotropy is of fundamental importance in this latter case.

### Implications for natural systems

In the shallow mantle near mid-ocean ridges, huge volumes of partially molten rock undergo deformation.
These regions bear some similarity to the laboratory experiments considered here, but the strain rates are orders of magnitude smaller. Because the melt-filled pore network is vast and isolated, the effective pressure \(\Delta P\) of the solid phase may be low. Does shear cause dilatancy in this natural system? We consider this by rearranging the viscous constitutive law of §3 with \(\Delta P\approx 0\), \[\frac{\mathcal{C}}{\dot{\varepsilon}_{II}}\sim\frac{D_{\phi}}{\zeta_{\phi}}, \tag{6.1}\] where in this simple relationship, \(\mathcal{C}\geq 0\) is entirely due to dilatancy. On the right-hand side is a ratio of the dilation and compaction viscosities. Our results here suggest this ratio is \(O(1)\) for experiments; it might be much smaller in the mantle. Hence the question of dilatancy in a natural system may reduce to understanding how the properties \(D_{\phi}\) and \(\zeta_{\phi}\) in the natural system differ from those in the laboratory. How they scale with temperature, grain size and porosity, and whether they have a (non-linear) dependence on strain rate, are questions to be resolved by future laboratory experiments and grain-scale physical models.

At depths shallower than the mantle, crustal magmatic systems can have larger strain rates when crystal-laden melt is injected into dikes and sills (Rivalta _et al._, 2015). These flows have lower solid fractions, at or below jamming, and hence more closely resemble granular suspensions (Smith, 1997; Petford _et al._, 2020). Dilatancy should therefore be expected, and may drive crystals away from the walls of magma-filled fractures. This would reduce the effective viscosity of the magma and promote propagation.

At shallower depths and lower temperatures in the crust, seismogenic faults may experience the effects of dilatancy. The slip across faults is often accommodated by rupture of asperities or shear of a granular medium called gouge. In both cases, a compressive stress will arise (Chen & Spiers, 2016). If dilation of the pore space is possible, either by expansion of the contained air or inflow of water, there may be a decrease in friction and an unstable acceleration of slip. In contrast, if dilation is prohibited, the compressive stress may increase friction and promote stable sliding. In either case, once sliding has ceased, viscous creep may lead to slow compaction of the gouge (or asperities) to increase contact area and harden the fault. In this case, the compaction viscosity would control the timescale for frictional state evolution (Chen _et al._, 2017; Thom _et al._, 2023).

Indeed, the viscous constitutive law relating compaction, dilatancy and the interphase pressure difference (§3) may be the most significant novelty of the present paper. This formulation bridges soil mechanics, suspension theory and theories for granular media with deformable grains. It may be broadly relevant in systems where deformation can occur on long time scales by irreversible creep. Such systems may be more common than is widely appreciated because slow granular processes have, until recently, gone largely unnoticed (e.g., Deshpande _et al._, 2021; Houssais _et al._, 2021).

## Appendix A Force on the plates in parallel-plate torsion

We consider a torsion cell with radius \(R\) and height \(H\), aligned with a cylindrical coordinate system \((r,\varphi,z)\). The bottom plate is fixed at \(z=0\) and the top plate has angular velocity \(\dot{\Omega}\hat{\mathbf{z}}\).
At the instant \(t=0\), the porosity is assumed to be uniformly \(\phi_{0}\) and the shear viscosity is uniformly \(\eta_{0}\). The normal force on the top plate is given by \[F=-\int_{0}^{R}\int_{0}^{2\pi}\hat{\mathbf{z}}\cdot\overline{\mathbf{\sigma}}\cdot\hat{\mathbf{z}}\,r\,\mathrm{d}\varphi\,\mathrm{d}r=2\pi\int_{0}^{R}(P^{\ell}-\hat{\mathbf{z}}\cdot\mathbf{\sigma}^{\mathrm{eff}}\cdot\hat{\mathbf{z}})\,r\,\mathrm{d}r, \tag{A.1}\] where the negative sign gives the compression force (with a tension-positive sign convention for stress) and we have used \(\mathbf{\sigma}^{\mathrm{eff}}=\overline{\mathbf{\sigma}}+P^{\ell}\mathbf{I}\). Using the constitutive relations of §3 this becomes \[F=2\pi\int_{0}^{R}\left[P^{\ell}+\eta_{0}\left(D_{0}\Lambda_{\perp}\dot{\varepsilon}_{II}-\mathcal{C}\right)\right]r\,\mathrm{d}r. \tag{A.2}\] For parallel-plate torsion, we assumed \[\mathbf{v}^{s}=V(r)\hat{\mathbf{r}}+\dot{\Omega}\frac{rz}{H}\hat{\mathbf{\varphi}} \tag{A.3}\] and approximated \(\dot{\varepsilon}_{II}\sim\dot{\Omega}r/2H\). These are valid for a cylinder of finite height if the radial shear stress on the plates is zero. Using the compaction and Darcy equations we can write \[P^{\ell}(r) =\frac{1}{M_{0}}\int_{0}^{r}V(r^{\prime})\mathrm{d}r^{\prime}+P^{\ell}(0),\] \[=P^{\ell}(R)-\frac{1}{M_{0}}\int_{r}^{R}V(r^{\prime})\mathrm{d}r^{\prime}. \tag{A.4}\] We recall that \(M_{0}\) is the ratio of permeability to melt viscosity at \(\phi=\phi_{0}\) and we assume that \(P^{\ell}(R)=P_{c}\), the confining pressure of the experiment. Using (A.4) and the flow (A.3) in (A.2) we obtain \[F-F_{c}=2\pi\int_{0}^{R}\left[-\frac{r}{M_{0}}\int_{r}^{R}V(r^{\prime})\mathrm{d}r^{\prime}+\eta_{0}\left(\frac{D_{0}\Lambda_{\perp}\dot{\Omega}r^{2}}{2H}-\frac{\partial}{\partial r}rV\right)\right]\mathrm{d}r, \tag{A.5}\] where \(F_{c}\equiv\pi R^{2}P_{c}\) is the force due to the confining pressure around the cylinder. Integrating the second and third terms in (A.5) and non-dimensionalising \(r\) with \(R\) and \(V\) with \([V]=D_{0}\dot{\Omega}R^{2}(2\Lambda_{\times}-1)/6H\) we obtain \[\Delta F=\frac{\mathcal{T}D_{0}\Lambda_{\perp}}{3R}\left[1+\frac{2\Lambda_{\times}-1}{\Lambda_{\perp}}\left(\frac{3\pi}{2}\mathcal{I}-V(1)\right)\right], \tag{A.6}\] where \(\mathcal{T}\equiv\pi\eta_{0}R^{4}\dot{\Omega}/H\) is the torque exerted to twist the sample and \[\mathcal{I}\equiv-\frac{2}{\pi\mathcal{R}^{2}}\int_{0}^{1}\int_{r}^{1}V(r^{\prime})\,\mathrm{d}r^{\prime}\,r\,\mathrm{d}r \tag{A.7}\] is a dimensionless integral of dimensionless quantities. We emphasise that all three terms in the square brackets of equation (A.6) are dimensionless, but the factor outside the brackets is dimensional, with units of force. The first term in (A.6) represents the direct effect of dilatancy; the second term represents the liquid pressure acting on the plate; the third term is the indirect effect of dilatancy, which causes a net decompaction of the cylinder. To evaluate the second and third terms in (A.6), it remains to specify \(V(r)\). We consider two cases with different boundary conditions at the outer edge of the cylinder.

### No radial flow at \(r=R\)

For the condition \(\hat{\mathbf{r}}\cdot\mathbf{v}^{s}(R)=0\) and uniform porosity (e.g., at \(t=0\)), we obtained the dimensionless solution given in equation (4.9). For this case we have \(V(0)=V(1)=0\) and hence the third term of (A.6) gives zero contribution.
We use the solution (4.9) for dimensionless \(V\) in (A.7) to obtain \[\mathcal{I}=\int_{0}^{1}\int_{r}^{1}\left[\frac{L_{1}(1/\mathcal{R})}{I_{1}(1/\mathcal{R})}I_{1}(r^{\prime}/\mathcal{R})-L_{1}(r^{\prime}/\mathcal{R})\right]\mathrm{d}r^{\prime}\,r\,\mathrm{d}r. \tag{A.8}\] This integral is evaluated by quadrature. Empirically reasonable values of the non-dimensional compaction length \(\mathcal{R}\) are \(O(1)\). Figure 8(a) shows the non-dimensional force multiplier (in square brackets in (A.6)) for \(\mathcal{R}\) in this neighbourhood and three values of \(\Lambda_{\times}\). Depending on whether \(\Lambda_{\times}\) is greater than or less than \(1/2\), the liquid pressure (via \(\mathcal{I}\)) contributes negatively or positively to the excess force. Recall that for this boundary-condition case, only \(\Lambda_{\times}<1/2\) gives the sign of \(V\) consistent with experiments. If \(\Lambda_{\times}\lesssim 1/2\), the non-dimensional force multiplier is \(\sim 1\), meaning that the excess force is dominated by the direct effect of dilatancy. For this boundary-condition case, we therefore approximate the normal force on the parallel plates as \[\Delta F\approx\mathcal{T}D_{0}\Lambda_{\perp}/3R. \tag{A.9}\] To estimate \(\Delta F\), we take \(D_{0}=3\) and \(\Lambda_{\perp}=1\), consistent with considerations developed in the main text; we adopt values representative of laboratory experiments, \(R=5\times 10^{-3}\) m and \(\mathcal{T}=10\) N m (King _et al._, 2010). Using these, the excess outward force exerted by the sample is about 2 kN, which is about 10% of the force \(F_{c}\) due to the experimental confining pressure (300 MPa).

Figure 8: Dimensionless multiplier of the outward force on the parallel plates in torsion, plotted against the non-dimensional compaction length \(\mathcal{R}\) for \(\Lambda_{\perp}=1\). _(a)_ The boundary-condition case of \(V(1)=0\) discussed in §A.1. _(b)_ The boundary-condition case of \(\hat{\mathbf{r}}\cdot\mathbf{\sigma}^{\mathrm{eff}}(1)\cdot\hat{\mathbf{r}}=0\) discussed in §A.2. For this latter case, \(V(1)>0\).

### No radial effective stress at \(r=R\)

For the boundary condition representing zero radial effective stress at the outer edge of the cylinder and with uniform porosity, the analytical solution for \(V(r)\) is equation (4.13). The excess axial force outward on the parallel plates is given by using this solution in equation (A.6). This case differs from that considered in §A.1 by the non-zero contribution of net decompaction, represented by the dimensionless \(V(1)>0\). It also differs in the contribution of the liquid pressure, associated with the integral \[\mathcal{I}=\int_{0}^{1}\int_{r}^{1}\left[\frac{3L_{0}\left(\frac{1}{\mathcal{R}}\right)-2\mathcal{R}L_{1}\left(\frac{1}{\mathcal{R}}\right)+\frac{3}{\pi\mathcal{R}}\frac{\Lambda_{\times}}{1/2-\Lambda_{\times}}}{3I_{0}\left(\frac{1}{\mathcal{R}}\right)-2\mathcal{R}I_{1}\left(\frac{1}{\mathcal{R}}\right)}I_{1}\left(\frac{r^{\prime}}{\mathcal{R}}\right)-L_{1}\left(\frac{r^{\prime}}{\mathcal{R}}\right)\right]\mathrm{d}r^{\prime}\,r\,\mathrm{d}r. \tag{A.10}\] The liquid pressure (and hence \(\mathcal{I}\)) now depends on \(\Lambda_{\times}\) due to its appearance in the boundary condition. The dimensionless force multiplier (in square brackets in (A.6)) is plotted in figure 8(b) for three values of \(\Lambda_{\times}\). Across the range of \(\mathcal{R}\) and \(\Lambda_{\times}\) considered, the force multiplier ranges from about 20% to about 60%.
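A minimal numerical sketch of these force estimates follows (values are those quoted in this appendix; the multiplier for the zero-stress case is our assumed representative value read from figure 8b, not a quoted number):

```python
import numpy as np

# Values quoted in this appendix (King et al., 2010 for R and torque).
T, D0, Lperp = 10.0, 3.0, 1.0   # torque (N m), dilatancy pre-factor, Lambda_perp
R, Pc = 5e-3, 300e6             # sample radius (m), confining pressure (Pa)

Fc = np.pi * R**2 * Pc                 # force due to confining pressure
dF_direct = T * D0 * Lperp / (3 * R)   # equation (A.9): direct dilatancy effect
print(f"V(1)=0 case:      dF ~ {dF_direct/1e3:.1f} kN ({dF_direct/Fc:.0%} of Fc)")

# Zero-effective-stress case: multiplier in the range ~0.2-0.6 from fig. 8(b);
# 0.5 is our assumed representative value.
mult = 0.5
dF_zero_stress = mult * dF_direct
print(f"zero-stress case: dF ~ {dF_zero_stress/1e3:.1f} kN ({dF_zero_stress/Fc:.0%} of Fc)")
```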
Dilatancy in the radial direction evidently reduces the outward normal force on the parallel plates. Using the experimental and theoretical values quoted in §A.1, the expected outward excess force is about 1 kN, which is about 5% of the force due to confining pressure.

## Appendix B Finite-time and steady models of parallel-plate torsion

By considering the system of conservation equations (4.1), we can derive a model for the evolution of the radial distribution of porosity in parallel-plate torsion. We again assume the simplified, axisymmetric torsional flow considered in §4.1, \[\mathbf{v}^{s}=V(r)\hat{\mathbf{r}}+\dot{\Omega}\frac{rz}{H}\hat{\mathbf{\varphi}},\qquad\dot{\varepsilon}_{II}\sim\dot{\Omega}r/2H. \tag{B.1}\] This flow is substituted into the compaction equation (4.1a), which is then integrated subject to boundary conditions \(V(R)=0\) and \(\partial P^{\ell}/\partial r|_{R}=0\). The radial component of the momentum conservation equation (4.1b) is simplified with (B.1) and used to eliminate the pressure gradient. We non-dimensionalise the result, along with equation (4.1c) and equation (3.6), using characteristic scales \[[r]=R,\quad[\eta_{\phi},\widetilde{\eta}_{\phi}]=\eta_{0},\quad[V]=\frac{D_{0}\dot{\Omega}R^{2}}{6H},\quad[t]=R/[V] \tag{B.2}\] to obtain \[\frac{V}{\mathcal{R}^{2}}=\left(\phi/\phi_{0}\right)^{n}\left[\widetilde{\eta}_{\phi}\left(\frac{\partial}{\partial r}\frac{1}{r}\frac{\partial}{\partial r}rV-(2\Lambda_{\times}-1)\right)+\frac{\partial\widetilde{\eta}_{\phi}}{\partial r}\left(\frac{\partial V}{\partial r}+\frac{V}{3r}-r\Lambda_{\times}\right)\right], \tag{B.3a}\] \[\frac{\partial\phi}{\partial t}+V\frac{\partial\phi}{\partial r}=\left(1-\phi\right)\frac{1}{r}\frac{\partial}{\partial r}rV, \tag{B.3b}\] \[\widetilde{\eta}_{\phi}^{-1}=\eta_{\phi}^{-1}+\frac{(\epsilon\mathcal{R})^{2}}{r}\frac{\partial}{\partial r}r\frac{\partial}{\partial r}(\widetilde{\eta}_{\phi}^{-1}), \tag{B.3c}\] where the dimensionless local viscosity is \(\eta_{\phi}=\exp[-\lambda(\phi-\phi_{0})]\) and \(\epsilon\equiv\xi/\delta\). It is important to note that the dilation-viscosity coefficient \(D_{0}\) does not appear in these equations; it appears only in the relationship between strain at the outer radius \(\gamma\) and dimensionless time \(t\), \[t=\gamma D_{0}/6. \tag{B.4}\] This relationship indicates that a given dimensionless time (and hence amount of porosity change) is reached at smaller outer-radius strain when \(D_{0}\) is larger. In other words, increasing \(D_{0}\) promotes radial melt segregation. Boundary and initial conditions imposed on the system (B.3) are \[V=0,\quad\frac{\partial\widetilde{\eta}_{\phi}}{\partial r}=0\quad\quad\text{at }r=0, \tag{B.5a}\] \[V=0,\quad\widetilde{\eta}_{\phi}=\eta_{\phi}\quad\quad\text{at }r=1, \tag{B.5b}\] \[\phi(r)=\phi_{0}\quad\quad\text{at }t=0. \tag{B.5c}\] This system (B.3) with conditions (B.5) can be solved numerically by one-dimensional finite-volume discretisation on a grid with uniform intervals in \(r\) and \(t\). Velocities are stored at nodes (the points that connect intervals); porosity and viscosity are stored at interval centres. The code, developed in the framework of the Portable, Extensible Toolkit for Scientific Computation (PETSc, Balay _et al._, 2023, 1997), is available in an online repository (Katz _et al._, 2023). Numerical solutions indicate that \(\phi(r,t)\) tends toward a steady state in which \(V(r)=0\).
This state can be determined analytically for \(\epsilon=0\), and hence when \(\widetilde{\eta}_{\phi}=\eta_{\phi}\). Then, (B.3a) reduces to \[\frac{1}{\eta_{\phi}}\frac{\partial\eta_{\phi}}{\partial r}=\frac{1-2\Lambda_{\times}}{r\Lambda_{\times}}. \tag{B.6}\] Solving and rewriting in terms of the porosity gives \[\phi(r)=\phi_{0}-\frac{1-2\Lambda_{\times}}{\lambda\Lambda_{\times}}\left(\ln r+\frac{1}{2}\right), \tag{B.7}\] where we have determined the constant of integration using global conservation of liquid mass, \(2\int_{0}^{1}r\phi\,\mathrm{d}r=\phi_{0}\).

## Appendix C Cone-and-plate torsional flow

Cone-and-plate torsional flow is another geometry that has been used in experiments on dense granular suspensions (Guazzelli & Pouliquen, 2018). Although there are currently no deformation experiments on partially molten rock in this configuration, we consider the predicted radial flow for completeness. We use the flow described by (B.1) but now take the gap \(H\) to be increasing linearly with radius, \(H=r\). With this choice, the shear-strain rate is independent of radius. As a consequence, the non-dimensional governing equation becomes \[\frac{\partial}{\partial r}\frac{1}{r}\frac{\partial}{\partial r}rV-\frac{V}{\mathcal{R}^{2}}=\frac{\Lambda_{\times}-1}{r}, \tag{C.1}\] where we have scaled the radial velocity component \(V\) with \[[V]=\frac{D_{0}\dot{\Omega}R}{6}. \tag{C.2}\] Equation (C.1) with inner boundary condition \(V(0)=0\) has the general solution \[V(r)=\mathcal{R}(\Lambda_{\times}-1)\left[K_{1}(r/\mathcal{R})-\mathcal{R}/r\right]+BI_{1}(r/\mathcal{R}), \tag{C.3}\] where \(I_{n}(z)\) is the modified Bessel function of the first kind and \(K_{n}(z)\) is the modified Bessel function of the second kind. The solution with outer boundary condition \(V(1)=0\) is \[V(r)=\mathcal{R}(\Lambda_{\times}-1)\left[\frac{\mathcal{R}-K_{1}(\frac{1}{\mathcal{R}})}{I_{1}(\frac{1}{\mathcal{R}})}I_{1}\left(\frac{r}{\mathcal{R}}\right)+K_{1}\left(\frac{r}{\mathcal{R}}\right)-\frac{\mathcal{R}}{r}\right]. \tag{C.4}\] In the large-compaction-length limit of \(\mathcal{R}\gg 1\), the radial velocity solution is asymptotic to \(V(r)=(\Lambda_{\times}-1)(r\ln r)/2\). A small-\(\mathcal{R}\) matched asymptotic solution exists but converges only for very small \(\mathcal{R}\). Figure 9(a) shows plots of solution (C.4) for three values of \(\mathcal{R}\) and for \(\mathcal{R}\to\infty\). We have chosen \(\Lambda_{\times}=0.45\) for plotting purposes. The pattern of radial flow is similar to the parallel-plate case, but with a maximum speed that is larger and shifted toward the centre of the cylinder. The associated decompaction rate (panel (b)) increases sharply near \(r=0\). This decompaction is balanced by weak compaction near \(r=1\). For solutions as in fig. 9(a) with no flow at the outer boundary, we require \(\Lambda_{\times}<1\) to achieve a dimensional \(V>0\) consistent with experiments. This is less restrictive than the constraint \(\Lambda_{\times}<1/2\) obtained from parallel-plate torsion with the same boundary conditions.
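As a numerical illustration (a sketch of our own; \(\mathcal{R}=1\) is an arbitrary illustrative value), solution (C.4) can be evaluated with standard Bessel-function routines, confirming that it vanishes at both boundaries and approaches the quoted large-\(\mathcal{R}\) asymptote:

```python
import numpy as np
from scipy.special import iv, kv   # modified Bessel functions I_n, K_n

Lx = 0.45   # Lambda_cross, as chosen for plotting in the text
Rc = 1.0    # non-dimensional compaction length, illustrative value

def V(r, Rc, Lx):
    """Dimensionless radial solid velocity, cone-and-plate torsion,
    outer boundary condition V(1) = 0 (equation C.4)."""
    a = (Rc - kv(1, 1 / Rc)) / iv(1, 1 / Rc)
    return Rc * (Lx - 1) * (a * iv(1, r / Rc) + kv(1, r / Rc) - Rc / r)

r = np.linspace(1e-4, 1.0, 500)
v = V(r, Rc, Lx)
print(f"V at r->0: {v[0]:.2e}, V(1): {v[-1]:.2e}")   # both ~0

# Large-compaction-length limit: V -> (Lx - 1) * r * ln(r) / 2.
v_big = V(r, 100.0, Lx)
asym = (Lx - 1) * r * np.log(r) / 2
print(f"max |V - asymptote| at Rc=100: {np.max(np.abs(v_big - asym)):.1e}")  # small
```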
The zero-effective-stress boundary condition \(\hat{\mathbf{r}}\cdot\mathbf{\sigma}^{\rm eff}\cdot\hat{\mathbf{r}}=0\) at the outer boundary, after non-dimensionalising with \([V]\) from eqn. (C.2), is written \[\frac{V}{3}+\frac{\partial V}{\partial r}=\Lambda_{\times}\quad\mbox{at}\quad r=1. \tag{C.5}\] The analytical solution in this case is \[V(r)=\mathcal{R}(\Lambda_{\times}-1)\left[\frac{3K_{0}(\frac{1}{\mathcal{R}})+2\mathcal{R}K_{1}(\frac{1}{\mathcal{R}})-2\mathcal{R}^{2}+3\frac{\Lambda_{\times}}{\Lambda_{\times}-1}}{3I_{0}(\frac{1}{\mathcal{R}})-2\mathcal{R}I_{1}(\frac{1}{\mathcal{R}})}I_{1}\left(\frac{r}{\mathcal{R}}\right)+K_{1}\left(\frac{r}{\mathcal{R}}\right)-\frac{\mathcal{R}}{r}\right]. \tag{C.6}\] Plots of this solution with \(\mathcal{R}=0.3\) are shown in figure 9(c). The radial patterns of flow and compaction are different from those for the zero-solid-flow boundary condition. It is important to note that for the zero-stress boundary condition, the compaction rate need not integrate to zero over the domain, because the flow (solid and liquid) at the outer boundary is non-zero.

Figure 9: Cone-and-plate torsion flow. Non-dimensional solutions of equation (C.1) with uniform \(\eta_{\phi}=\eta_{0}\). Top panels have outer boundary condition \(V(1)=0\); bottom panels have the outer boundary condition given in eqn. (C.5). _(a)_ Analytical solutions (C.4) for \(V\) with \(\Lambda_{\times}=0.45\). Asymptotic solution \(V(r)=(\Lambda_{\times}-1)(r\ln r)/2\) for \(\mathcal{R}\to\infty\). _(b)_ Decompaction rate with \(\Lambda_{\times}=0.45\). _(c)_ Analytical solution (C.6) for \(V\) with \(\mathcal{R}=0.3\). _(d)_ Decompaction rate with \(\mathcal{R}=0.3\).

The results in this appendix demonstrate that the radial profile of flow is sensitive to the geometry of deformation, as anticipated from previous work (e.g., Morris & Boulay 1999). They also show that the radial flow is sensitive to the outer boundary condition. These results are valid at \(t=0\), when the porosity, permeability, shear viscosity, and dilatancy viscosity are uniform. At later times, melt segregation causes spatial variations in porosity and hence in these coefficients.

## Appendix D Poiseuille flow through a pipe

Here we consider Poiseuille-like flow of partially molten rock along an infinite, straight pipe with circular cross-section and radius \(R\). At \(t=0\), the porosity is uniformly \(\phi_{0}\). A cylindrical coordinate system \((r,\varphi,z)\) is aligned with the axis of the pipe. The radial direction is shear-plane perpendicular and the azimuthal direction aligns with the vorticity. An axisymmetric flow is driven by a pressure gradient \(-G\) in the \(z\) direction. We assume translational invariance in \(z\). The solid velocity, compaction rate and deviatoric strain-rate tensor are then \[\mathbf{v}^{s}=V(r)\hat{\mathbf{r}}+W(r)\hat{\mathbf{z}},\quad\mathcal{C}=\frac{1}{r}\frac{\partial}{\partial r}rV,\quad\dot{\mathbf{\varepsilon}}=\left(\begin{array}{ccc}\frac{\partial V}{\partial r}-\frac{\mathcal{C}}{3}&0&\frac{1}{2}\frac{\partial W}{\partial r}\\ 0&\frac{V}{r}-\frac{\mathcal{C}}{3}&0\\ \frac{1}{2}\frac{\partial W}{\partial r}&0&-\frac{\mathcal{C}}{3}\end{array}\right). \tag{D.1}\] We seek a solution for \(V,W\). The \(z\)-component of the bulk force balance (4.1b) is \[-G=\eta_{0}\frac{1}{r}\frac{\partial}{\partial r}r\frac{\partial W}{\partial r}. \tag{D.2}\] Integrating this twice, subject to a symmetry condition at \(r=0\) and a no-slip condition at \(r=R\), gives the standard Poiseuille solution \(W(r)=G\left(R^{2}-r^{2}\right)/4\eta_{0}\). As we did for torsion, we linearise by neglecting the contribution of dilatant flow to the second invariant of the strain rate and obtain \(\dot{\varepsilon}_{II}\sim\frac{1}{2}\left|\partial W/\partial r\right|=Gr/4\eta_{0}\).
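As a small consistency check on these last two statements (a sketch using sympy; the published numerical code for this paper is the PETSc implementation cited in appendix B):

```python
import sympy as sp

r, R, G, eta0 = sp.symbols('r R G eta_0', positive=True)

# Standard Poiseuille profile quoted in the text.
W = G * (R**2 - r**2) / (4 * eta0)

# z-momentum balance (D.2): eta0 * (1/r) d/dr ( r dW/dr ) = -G.
lhs = eta0 * sp.diff(r * sp.diff(W, r), r) / r
print(sp.simplify(lhs + G))             # 0 -> balance satisfied

# Linearised second invariant: |dW/dr| / 2 = G r / (4 eta0).
eII = sp.Abs(sp.diff(W, r)) / 2
print(sp.simplify(eII - G * r / (4 * eta0)))   # 0
```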
We use this linearised strain rate in the radial component of the force-balance equation (4.1b) to write \[\frac{\partial P^{\ell}}{\partial r}=3\eta_{0}\frac{\partial}{\partial r}\frac{1}{r}\frac{\partial}{\partial r}rV-\frac{D_{0}G}{4}\left(2\Lambda_{\perp}-\Lambda_{\times}\right). \tag{D.3}\] Then we combine this with the compaction equation (4.1a) to eliminate the liquid pressure and integrate once. Rescaling \(r\) with the outer radius \(R\) and \(V\) with the characteristic scale \[[V]=\frac{GR^{2}D_{0}}{12\eta_{0}}=W(0)\frac{D_{0}}{3}, \tag{D.4}\] we obtain the dimensionless equation \[\frac{\partial}{\partial r}\frac{1}{r}\frac{\partial}{\partial r}rV-\frac{V}{\mathcal{R}^{2}}=2\Lambda_{\perp}-\Lambda_{\times}. \tag{D.5}\] Here, again, \(\mathcal{R}\equiv\sqrt{3\eta_{0}M_{0}}/R\) is the ratio of the compaction length to the outer radius. For a cylindrical domain with azimuthal symmetry that extends inward to \(r=0\), the radial velocity must vanish on the axis, \(V(0)=0\). The rigid outer wall requires that \(V(1)=0\). With these constraints, equation (D.5) admits the solution \[V(r)=\pi\mathcal{R}^{2}\left(\tfrac{\Lambda_{\times}}{2}-\Lambda_{\perp}\right)\left[\frac{L_{1}(1/\mathcal{R})}{I_{1}(1/\mathcal{R})}I_{1}(r/\mathcal{R})-L_{1}(r/\mathcal{R})\right]. \tag{D.6}\] Plots of this solution and its compaction rate (not shown) are identical in shape to those for torsion in figure 2, but are scaled with \((\Lambda_{\times}/2-\Lambda_{\perp})\) instead of \((1/2-\Lambda_{\times})\). Results from laboratory experiments by Quintanilla-Terminel _et al._ (2019) that approximate Poiseuille flow indicate that \(V<0\). In light of (D.6), this requires \(\Lambda_{\perp}>\Lambda_{\times}/2\). If empirical constraints from granular suspensions are relevant (Morris & Boulay, 1999; Fang _et al._, 2002), this condition is readily satisfied. In that case, we can summarise the physics at \(t=0\) as follows. Dilatant normal stress in the \(r\) direction increases with radius because the shear stress increases with radius. This pushes the solid radially inward with a stress in proportion to \(\Lambda_{\perp}\). The same dilatancy creates a compressive hoop stress in proportion to \(\Lambda_{\times}\) that pushes the solid radially outward. However, if the condition \(\Lambda_{\perp}>\Lambda_{\times}/2\) is met, the net stress on the solid is radially inward. The solution (D.6) predicts radially inward flow of the solid, with decompaction at outer radii and compaction at inner radii.

**Acknowledgements.** The authors thank A. Dillman, D. Hewitt, R. Juanes, K. Kamrin, D. Kohlstedt, J. Martin and M. Zimmerman for helpful discussions, two anonymous reviewers for their insightful suggestions, and J. Morris for his editorial efficiency.

**Funding.** This research received funding from the European Research Council under the Horizon 2020 research and innovation program (grant agreement 772255, to RFK).

**Declaration of interests.** The authors declare no conflict of interest.

**Data availability statement.** Code and data to reproduce all figures are available at [https://doi.org/10.5281/zenodo.10075195](https://doi.org/10.5281/zenodo.10075195) (Katz _et al._, 2023).

**Author ORCID.** R.F. Katz [https://orcid.org/0000-0001-8746-5430](https://orcid.org/0000-0001-8746-5430), J.F. Rudge [https://orcid.org/0000-0002-9399-7166](https://orcid.org/0000-0002-9399-7166), L.N. Hansen [https://orcid.org/0000-0001-6212-1842](https://orcid.org/0000-0001-6212-1842).
**Author contributions.** RFK conceived the study, developed the theory, analysed the theory with input from JFR, made comparison to experiments with input from LNH, and wrote the paper with input from JFR and LNH.
2309.08582
Gate-tunable Superconductivity in Hybrid InSb-Pb Nanowires
We present a report on hybrid InSb-Pb nanowires that combine high spin-orbit coupling with a high critical field and a large superconducting gap. Material characterization indicates a Pb layer of high crystal quality on the nanowire side facets. Hard induced superconducting gaps and gate-tunable supercurrents are observed in the hybrid nanowires. These results showcase the promising potential of this material combination for a diverse range of applications in hybrid quantum transport devices.
Yan Chen, David van Driel, Charalampos Lampadaris, Sabbir A Khan, Khalifah Alattallah, Lunjie Zeng, Eva Olsson, Tom Dvir, Peter Krogstrup, Yu Liu
2023-09-15T17:40:32Z
http://arxiv.org/abs/2309.08582v2
# Gate-tunable Superconductivity in Hybrid InSb-Pb Nanowires

###### Abstract

We present a report on hybrid InSb-Pb nanowires that combine high spin-orbit coupling with a high critical field and a large superconducting gap. Material characterization indicates a Pb layer of high crystal quality on the nanowire side facets. Hard induced superconducting gaps and gate-tunable supercurrents are observed in the hybrid nanowires. These results showcase the promising potential of this material combination for a diverse range of applications in hybrid quantum transport devices.

nanowire growth, hybrid nanowires, in-situ nanofabrication, superconducting proximity, quantum transport

Superconducting electronics is a rapidly evolving field with multiple potential applications, including parametric amplifiers for qubit readout and control [1] and dissipationless diodes [2, 3]. Among candidate materials for such electronic applications, semiconductor-superconductor hybrids stand out due to their ability to be tuned in-situ between various desirable regimes. Semiconductor-superconductor hybrids have also been proposed as a leading platform for the realization of Majorana zero modes, with potential applications in fault-tolerant quantum computation. The elemental superconductor Al has so far proven to give a highly transparent interface with semiconductors such as InAs or InSb, which has been demonstrated to be beneficial in various applications, such as superconducting diodes [4], gatemon and fluxonium qubits [5, 6], parametric amplifiers [7], and "poor man's Majorana" states [8]. However, the small spectral gap and low critical field of Al might be limiting factors in pushing such applications forward. An alternative superconductor is elemental Pb, which has a significantly larger gap and a higher critical field. Pb monolayers are known to exhibit strong Rashba spin-orbit coupling [9, 10]. Further study has revealed that magnetic Co-Si clusters can drive a Pb monolayer into a topological superconducting state on Si(111) [11]. InAs-Pb hybrid island devices have shown correlated two-electron transport, establishing the possibility of inducing superconductivity from Pb in semiconducting platforms [12]. With CdTe shells as a tunnel barrier, it is also possible to grow epitaxial Pb layers on the side facets of InSb nanowires (NWs) [13]. Another system of interest is a hybrid between Pb and PbTe NWs, which are also used for Majorana-based research [14, 15, 16]. In this work, we report the development of hybrid NWs composed of Pb and InSb. We grew InSb-Pb NWs using molecular beam epitaxy and in-situ metallic deposition. To avoid ex-situ etching while making Josephson junctions and tunnel junctions, a process that significantly reduces the quality and yield of the produced devices, we used a shadow technique to partially mask the deposition of Pb on some segments of the NWs. This approach allows us to obtain in-situ junctions. The structural characterization of the NWs shows an InSb core of single-crystalline zinc-blende phase and a Pb layer of high crystal quality. However, the Pb layer is not epitaxial on the NW side facets. The transport characterization shows that the Pb layer has a superconducting transition temperature similar to that of bulk Pb. The spectrum of the hybrid system, measured using tunnel junctions, reveals a hard superconducting gap.
Moreover, the observation of oscillations of the supercurrent in the in-situ Josephson junctions under both in-plane and out-of-plane magnetic fields confirms the extent of the proximity effect from the Pb layer into the InSb core. Stem-assisted InSb NWs were grown on pre-fabricated quarters of 2-inch undoped n-type (100) zinc-blende InAs wafers (Semiconductor Wafer Inc.) with the vapor-liquid-solid (VLS) method in the III-V growth chamber of a solid-source Varian GEN-II MBE system with a background pressure below \(1\times 10^{-10}\) Torr, similar to our previous work [17, 18, 19]. The as-grown InSb NWs were transferred from the III-V growth chamber to the dedicated chamber for metal deposition after growth. Prior to the Pb deposition, the sample holder and the sample were left for cooldown and temperature stabilization for \(\sim\) 6 hours. The substrates were tilted about 27\({}^{\circ}\) towards the Pb source. The Pb layers were nominally deposited on two side facets of the NWs by in-situ electron-beam evaporation at a substrate temperature of -144 \({}^{\circ}\)C, as measured by a thermocouple back sensor. Calibration with a dummy wafer shows that the surface temperature is actually around -20 \({}^{\circ}\)C. The average Pb deposition rate was 1.6 nm/s. Oxygen at 10 Torr partial pressure was used to oxidize the NW surface in situ for 10 s, which helped to stabilize the Pb layers. As the last step, the load lock was vented with N2 (500 Torr), and the sample was kept in this environment for 5 min, allowing the sample temperature to rise above 0 \({}^{\circ}\)C before unloading. Several batches of NWs were grown for this work, with slight differences among them. For structural characterization, a nominal 40 nm of Pb was deposited on the NWs to increase image contrast and give a better view of the Pb layers and in-situ grown junctions. The NWs for studying Pb superconductivity have 20 nm of Pb and a 2 nm AlOx capping (the 2 nm Al layer was deposited at a growth rate of 0.06 nm/s by in-situ electron-beam evaporation, while keeping the sample on the same sample stage so that the Al layers perfectly overlap the Pb layers; this avoids possible Pb dewetting during sample transfer and unloading). The Pb layer of the NWs used for the hard-gap and supercurrent studies is nominally 30 nm, and the growth time of these NWs was 60 min, while that of the others was only 40 min. A 30 kV field-emission scanning electron microscope (SEM, JEOL 7800F) was used to investigate how the NWs grew in the trenches and to locate the NWs for device fabrication. During device fabrication, the locations of the junctions were also rechecked with SEM after NW transfer. An FEI Titan 80-300 transmission electron microscope (TEM) was used at 300 kV for scanning transmission electron microscopy (STEM) imaging, electron diffraction and energy-dispersive X-ray spectroscopy (EDS). The microscope is equipped with a Schottky field-emission gun (FEG), a CEOS probe Cs corrector, as well as STEM annular dark field (ADF) detectors. The system for characterizing the superconductivity of the bulk Pb shell is a modified Dynacool cryo-cooled physical property measurement system (PPMS) with a temperature range of 1.7 K - 400 K and a perpendicular magnetic field of up to 9 T. When measuring the differential conductance, lock-in amplifiers (SR830) were employed to enable the PPMS to measure small AC signals.
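The DC transport analyses described next reduce measured _V-I_ curves to differential resistance by numerical differentiation. As an illustration of this post-processing step, here is a minimal Python sketch on a synthetic _V-I_ curve (the curve and all parameter values are ours, for illustration only, not the authors' analysis code):

```python
import numpy as np

# Synthetic V-I curve of a Josephson junction (illustrative only):
# zero voltage below the switching current, Ohmic above it.
I = np.linspace(-10e-9, 10e-9, 401)   # bias current, A
Ic, Rn = 6e-9, 1e3                    # switching current, normal resistance
V = np.where(np.abs(I) > Ic, Rn * (I - np.sign(I) * Ic), 0.0)

# DC differential resistance R_diff = dV/dI by numerical derivative.
R_diff = np.gradient(V, I)
print(f"R_diff at zero bias: {R_diff[200]:.1f} Ohm")   # ~0 in the supercurrent branch
print(f"R_diff at 10 nA:     {R_diff[-1]:.1f} Ohm")    # ~Rn in the normal branch
```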
The SIS device presented in Fig. 2 was measured in a Janis cryostat with a vector magnet and at a base temperature of 300 mK. Standard lock-in amplifier techniques were used to obtain the differential conductance. The SNS device of Fig. 3 was measured in an Oxford Triton dilution refrigerator with a base temperature of approximately 30 mK. Each contact was routed to two bondpads to perform four-point measurements. For each measurement, two of the four bondpads were used to apply a current bias while the other two were used to measure the voltage drop. The DC differential resistance was obtained using a numerical derivative of the _V-I_ curves. All devices were tuned using a global back gate.

As shown in Fig. 1, the as-grown InSb NWs are typically about 200 nm in diameter and around 3.5 \(\upmu\)m in length. The Pb layers with a nominal thickness of 40 nm were deposited on two side facets of the InSb NWs. The in-situ grown junctions are visible in Fig. 1b, where the Pb beam is blocked by other NWs during deposition. The remaining part of the Pb layers is complete and continuous. There are small features on the surface of the Pb layers along the NWs, which suggests that the substrate temperature is not low enough to reach the range best suited for Pb deposition.

Figure 1: Structure of InSb-Pb NWs. **a.** SEM image (viewing angle 36\({}^{\circ}\) from normal) of InSb-Pb NWs. **b.** Enlarged SEM image (viewing angle 36\({}^{\circ}\) from normal) of InSb-Pb NWs to show their junctions. **c.** Atomic resolution STEM ADF image of the InSb lattice in InSb-Pb NWs. **d.** Atomic resolution STEM ADF image of the Pb lattice in InSb-Pb NWs.

To understand the lattice match between InSb and Pb, we performed STEM ADF imaging on the InSb-Pb NWs at atomic resolution. In Fig. 1c, which shows a transverse view of the NW, we can see the single-crystalline zinc-blende InSb lattice without dislocations when the zone axis is along InSb [1-10]. Under the same zone axis, the lattice fringes in the Pb shell are not visible, and the InSb/Pb interface is not sharp at the atomic scale (Fig. S1). Based on the cross-sectional EDS in Fig. S2, it seems that part of the InSb moves from the InSb/Pb interface to the surface of the Pb shell, so the interface is planarized. Therefore, it may be necessary to introduce barrier layers such as CdTe to protect the InSb [13]. In comparison, both the Pb and InAs lattices are visible when the zone axis is along InAs [11-20] on the InAs stem of the NWs (Fig. S3). This shows that the Pb layer can grow epitaxially on InAs, consistent with the previous report [12].

**Fig. 2 Transport measurements of InSb-Pb hybrid NWs.** **a.** False-colored SEM micrograph of an InSb-Pb NW without junctions, where the contacts are schematically indicated by golden rectangles. **b.** Four-point measurements of zero-bias differential resistance as a function of out-of-plane magnetic field \(B_{\perp}\) and temperature \(T\) of the device presented in panel a. **c.** False-colored SEM micrograph of an InSb-Pb NW SIS device with Ti/Al contacts and a global back gate, where a magnetic field \(B_{//}\) is applied parallel to the NW axis. **d.** Tunneling spectroscopy \(G=dI/dV\) of the SIS device of panel c at \(B=0\) for varying voltage bias \(V_{bias}\) measured at T \(=300\) mK. An AC voltage \(V_{AC}=30\mu V\) is applied on top of the voltage bias. **e.** Same as panel d, for varying both \(B_{//}\) and \(V_{bias}\).
We then rotated the NW and aligned the Pb crystal orientation with the electron beam. It is found that the Pb [1-10] zone axis is tilted 5-10\({}^{\circ}\) with respect to the InSb [1-10] zone axis, based on analysis of 5 NWs, so the Pb layer is not epitaxial on the InSb side facets. The STEM imaging (Figure 1d) shows the crystalline structure of the Pb shell. Pb layers usually prefer to grow with a (111) surface [12]. In this case, the surface of the Pb layer is a few degrees off (111), which can probably reduce the lattice mismatch at the InSb/Pb interface. Meanwhile, the diffraction patterns acquired along the Pb layer suggest its single crystallinity, as shown in Fig. S4.

After growth of the InSb-Pb NWs, we used NWs without junctions to study the properties of the Pb shell. Figure 2a shows an InSb-Pb NW contacted with four normal Ti/Au contacts. Using this device, the four-probe differential resistance of the Pb shell on top of the NW was measured as a function of the temperature (\(T\)) and the magnetic field applied in the out-of-plane direction (\(B_{\perp}\)), as shown in Fig. 2b. An AC voltage excitation from a lock-in amplifier was applied to one of the outer leads. The resulting current was measured at the other outer lead by a lock-in, while the voltage drop between the two inner leads was measured by another lock-in. The differential resistance was then computed by dividing the measured voltage drop by the measured current. No DC current or voltage bias was applied to the device. We observed a region of zero resistance, which turned finite upon reaching a critical value of \(B_{\perp}\). Moreover, this critical field decreases with increasing temperature, which is characteristic of type-I superconductors [20]. The critical temperature \(T_{c}\) is \(\sim\) 7.0 K, similar to the corresponding value for bulk Pb [21]. We extrapolate a zero-temperature critical field of \(\sim\)1.3 T from the empirical, parabolic law for finite-temperature critical fields, \(B_{c}(T)=B_{c}^{0}(1-T^{2}/T_{c}^{2})\) [20]. This is much larger than the bulk Pb value of \(B_{c}^{0}=0.08\) T at 0 K [21]. Figure S5 shows similar four-point measurements performed on two other InSb-Pb NWs, with comparable values for \(T_{c}\), but having different critical fields due to different orientations with respect to the magnetic field.

After having focused on the intact Pb layer, we now turn to InSb-Pb NWs with junctions. Figure 2c shows an InSb-Pb NW with an in-situ grown junction [18], contacted by 10 nm Ti + 50 nm Al leads on the bare InSb and the Pb, respectively. We set \(V_{bg}=0.1~{}V\), so that the bare InSb acts as a tunnel barrier, and perform superconductor-insulator-superconductor (SIS) spectroscopy at \(B_{//}=0\), as shown in Fig. 2d. We observed a single pair of particle-hole symmetric peaks at \(eV_{bias}\approx\pm 1.6\,meV\), instead of two pairs at \(eV_{bias}=\pm(\Delta_{Al}\pm\Delta_{Pb})\) as expected from SIS tunneling. The relative magnitude of these peaks depends on temperature, and the peak at \(\Delta_{Al}+\Delta_{Pb}\) has been observed to be more visible than the \(\Delta_{Al}-\Delta_{Pb}\) peak in a previous study [22]. Taking the bulk value of \(\Delta_{Al}=0.175\,meV\) for the Al lead [23], we estimate \(\Delta_{Pb}\approx 1.4\,meV\), similar to the 1.36 meV reported for a Pb layer [24] but larger than the theoretical value of 1.28 meV [25] or the 1.25 meV of epitaxial Pb [12].
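The two quantitative extractions above are simple enough to sketch numerically (values are from the text, except for the single field-temperature point, which is our assumed illustration):

```python
# Gap extraction from the SIS peak position: eV_peak = Delta_Al + Delta_Pb.
eV_peak = 1.6e-3     # eV, measured peak position
Delta_Al = 0.175e-3  # eV, bulk Al gap [23]
Delta_Pb = eV_peak - Delta_Al
print(f"Delta_Pb ~ {Delta_Pb*1e3:.2f} meV")   # ~1.4 meV

# Zero-temperature critical field from the parabolic law
# B_c(T) = B_c0 * (1 - T^2 / T_c^2), inverted at one (T, B) point.
Tc = 7.0               # K, measured transition temperature
T, Bc_at_T = 5.0, 0.63 # illustrative measured point (assumed values)
Bc0 = Bc_at_T / (1 - (T / Tc)**2)
print(f"B_c0 ~ {Bc0:.2f} T")   # the text extrapolates ~1.3 T
```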
Moreover, the sub-gap conductance is two orders of magnitude lower than the above-gap conductance, which we interpret as a hard gap, similar to those reported in III/V NWs with epitaxial Al [26, 27], epitaxial Pb [12], polycrystalline Sn [28], and amorphous Ta [29]. Figure 2e shows tunneling spectroscopy for varying \(B_{//}\), where the Al turns normal at a finite field. It is found that the gap shrinks with field but cannot be closed, due to blockaded transport at low bias, which is attributed to an accidental quantum dot formed in the junction that causes Coulomb blockade.

Figure 3: **Current-biased measurements of InSb-Pb SNS devices.** **a.** False-colored SEM micrograph of an InSb-Pb NW SNS device, contacted on the Pb shell by three Ti/Al contacts. A global back gate is used to tune the InSb electron density. The directions of magnetic fields are shown for panels d and e. **b.** The voltage drop \(V_{m}\) measured between contacts L2 and L3 indicated in panel a. **c.** The DC differential resistance \(R_{diff}=dV_{m}/dI\) versus bias current \(I_{bias}\), for varying back gate voltage \(V_{BG}\). The blue line indicates the value of \(V_{BG}\) where panel b is measured. **d.** \(R_{diff}\) versus bias current \(I_{bias}\) and out-of-plane magnetic field \(B_{\perp}\), measured between contacts L1 and L2. **e.** Same as panel d, but for magnetic field \(B_{//}\) applied parallel to the NW axis.

Another metric to characterize the proximity effect in semiconductor-superconductor hybrids is the presence of supercurrents in Josephson junctions. In Fig. 3a, the InSb-Pb NW was shadowed in two segments, so two superconductor-normal conductor-superconductor (SNS) junctions were formed. Figure 3b shows the _V-I_ curve measured in a four-point configuration between the middle and top contacts depicted in Fig. 3a. Within a range of applied currents, between approximately 0 and 6 nA, no voltage dropped over the junction, indicating the presence of a supercurrent in the system. In Fig. 3c, the measured switching current (\(I_{\text{S}}\)) responds non-monotonically to \(V_{\text{BG}}\), typical for a disordered Josephson junction in such hybrids. Turning to the dependence of \(I_{\text{S}}\) on the applied magnetic field, Figure 3d shows _V-I_ curves measured with increasing \(B_{\perp}\) between the middle and bottom contacts shown in Fig. 3a. \(I_{\text{S}}\) decayed rapidly once the magnetic field was applied, giving rise to an interference pattern with \(I_{\text{S}}\) smaller than 500 pA. The separation between the lobes allows us to estimate the effective area of the junction, by assuming that they are separated by approximately one flux quantum. The lobes are spaced 100 mT apart, giving an effective junction area of 20,000 nm\({}^{2}\), slightly larger than the area of 15,000 nm\({}^{2}\) observed in the SEM image. In Fig. 3e, we show the interference pattern of \(I_{\text{S}}\) resulting from the application of an in-plane magnetic field. Here, there is a supercurrent extinction and revival around \(B_{//}\approx 250\,mT\), which corresponds roughly to one superconducting flux quantum being threaded through the NW cross-section. This behavior is predicted for NW Josephson junctions with multiple modes [30, 31].
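A quick numerical sketch of these flux-quantum estimates, using only values quoted above (the cross-section comparison is an order-of-magnitude check for a \(\sim\)200 nm diameter NW):

```python
# Effective areas from interference periods, assuming adjacent
# lobes (or extinction/revival) are separated by one flux quantum.
Phi0 = 2.07e-15   # flux quantum h/2e, Wb

A_junction = Phi0 / 0.1     # out-of-plane lobe spacing 100 mT
print(f"A_junction ~ {A_junction*1e18:.0f} nm^2")   # ~20,000 nm^2, vs 15,000 from SEM

A_cross = Phi0 / 0.25       # in-plane extinction/revival at ~250 mT
print(f"A_cross ~ {A_cross*1e18:.0f} nm^2")         # ~8,000 nm^2, order of the NW cross-section
```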
The observed gate- and field-based interference could result from mode mixing of multiple occupied modes in the NW, as observed in comparable systems [32, 33]. The maximum value of the switching current is usually proportional to the superconducting gap [34]. Therefore, mixing of multiple modes in a large-gap superconductor such as Pb could result in a supercurrent that is gate-tunable over a large range. We note that the NW in Fig. 3a is bent, which induces strain in the NW. Strain can shift the Fermi level in the band structure and thereby influence the band offset between InSb and Pb. At the same time, it can change the carrier density, which affects the strength of the proximity effect. The induced superconducting gap will therefore be modified, and the results obtained here could deviate from the intrinsic properties of devices based on strain-free NWs.

In summary, we developed hybrid InSb-Pb NWs based on a molecular-beam-epitaxy growth technique. The in-situ junctions were fabricated during Pb deposition with a shadowing approach. The NWs consist of a single-crystalline zinc-blende InSb core free of dislocations and a Pb layer of high crystal quality. However, the Pb [1-10] zone axis is tilted 5-10\({}^{\circ}\) with respect to the InSb [1-10] zone axis, so the Pb layer is not epitaxial on the InSb side facets. The intact Pb layer shows a superconducting transition at 7 K, similar to that of bulk Pb. The tunneling spectrum of an SIS device shows a superconducting gap of around 1.40 meV in the single-particle density of states of the InSb-Pb NW. The sub-gap conductance is two orders of magnitude lower than that above the gap, indicating a hard gap. The supercurrent interference measurements show modulation of the switching current for both in-plane and out-of-plane magnetic fields.

## Author Information

*YC and DvD contributed equally to this work. **Corresponding Author** #E-mail: [email protected]. **Note** The authors declare no competing financial interests.

## Acknowledgments

We thank C. B. Sorensen, A. J. Cui, J. H. Kang for technical assistance. We also thank L. Kouwenhoven for fruitful discussions. We acknowledge financial support from the Microsoft Quantum initiative, from the Danish Agency for Science and Innovation through DANSCATT, from the European Research Council under the European Union's Horizon 2020 research and innovation program (grant agreement n\({}^{\circ}\) 716655), and from the international training network 'INDEED' (grant agreement n\({}^{\circ}\) 722176). This project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No. 823717-ESTEEM3. SAK also acknowledges financial support from the Danish Agency for Higher Education and Science.
2308.16495
Quantum Suplattices
Building on the theory of quantum posets, we introduce a non-commutative version of suplattices, i.e., complete lattices whose morphisms are supremum-preserving maps, which forms a step towards a new notion of quantum topological spaces. We show that the theory of these quantum suplattices resembles the classical theory: the opposite quantum poset of a quantum suplattice is again a quantum suplattice, and quantum suplattices arise as algebras of a non-commutative version of the monad of downward-closed subsets of a poset. The existence of this monad is proved by introducing a non-commutative generalization of monotone relations between quantum posets, which form a compact closed category. Moreover, we introduce a non-commutative generalization of Galois connections and we prove that an upper Galois adjoint of a monotone map between quantum suplattices exists if and only if the map is a morphism of quantum suplattices. Finally, we prove a quantum version of the Knaster-Tarski fixpoint theorem: the quantum set of fixpoints of a monotone endomap on a quantum suplattice forms a quantum suplattice.
Gejza Jenča, Bert Lindenhovius
2023-08-31T06:57:39Z
http://arxiv.org/abs/2308.16495v1
# Quantum Suplattices

###### Abstract

Building on the theory of quantum posets, we introduce a non-commutative version of suplattices, i.e., complete lattices whose morphisms are supremum-preserving maps, which forms a step towards a new notion of quantum topological spaces. We show that the theory of these 'quantum suplattices' resembles the classical theory: the opposite quantum poset of a quantum suplattice is again a quantum suplattice, and quantum suplattices arise as algebras of a non-commutative version of the monad of downward-closed subsets of a poset. The existence of this monad is proved by introducing a non-commutative generalization of monotone relations between quantum posets, which form a compact closed category. Moreover, we introduce a non-commutative generalization of Galois connections and we prove that an upper Galois adjoint of a monotone map between quantum suplattices exists if and only if the map is a morphism of quantum suplattices. Finally, we prove a quantum version of the Knaster-Tarski fixpoint theorem: the quantum set of fixpoints of a monotone endomap on a quantum suplattice forms a quantum suplattice.

## 1 Introduction

A poset is called a _complete lattice_ if it has all suprema, or equivalently, if it has all infima. However, a function between complete lattices that preserves all suprema does not necessarily preserve all infima. Hence, one can define several categories of complete lattices with different classes of morphisms. For instance, the class consisting of maps that preserve both all suprema and all infima, or the class of maps that only preserve all suprema. If we choose this latter class of morphisms, we typically call the objects of the resulting category \(\mathbf{Sup}\) _suplattices_ instead of complete lattices. In this contribution, we introduce a noncommutative version of suplattices, which we call _quantum suplattices_. One of the reasons why we are interested in these objects is that they might lead to a notion of quantum topological spaces that allows for the quantization of topological spaces that are not locally compact Hausdorff, such as the Scott topology on a dcpo. Any topology of a topological space is in particular a complete lattice; the usual approach to quantum topological spaces is via C*-algebras, which can be regarded as noncommutative locally compact Hausdorff spaces. The approach we take is the program of _discrete quantization_ [12]. Here, _quantizing_ some mathematical structure is understood as the operation of finding a noncommutative generalization or version of the structure. This can be done by internalizing the structure in a suitable category of operator algebras. For discrete quantization, this category is called \(\mathbf{qRel}\), which is equivalent to the category of von Neumann algebras isomorphic to some (possibly infinite) \(\ell^{\infty}\)-sum of matrix algebras (such von Neumann algebras are also called _hereditarily atomic_) equipped with Weaver's quantum relations [22]. The category \(\mathbf{qRel}\) shares many properties with \(\mathbf{Rel}\). For instance, it is also dagger compact [2, 8, 10] and enriched over \(\mathbf{Sup}\). Since many mathematical structures can be described in terms of the dagger structure and the \(\mathbf{Sup}\)-enrichment of \(\mathbf{Rel}\), this makes \(\mathbf{qRel}\) a very suitable tool for quantization. Since hereditarily atomic von Neumann algebras have a discrete character, they can be regarded as noncommutative sets, which explains the name 'discrete quantization'.
Regarding discrete quantization, one could argue that it is a disadvantage that we do not work in full generality with all von Neumann algebras. However, we see this lack of generality as a feature, not as a bug: the category of all von Neumann algebras and quantum relations is not compact, whereas \(\mathbf{qRel}\) is. This is of great importance for the theory of quantum suplattices, which relies heavily on \(\mathbf{qRel}\) being compact. By definition, any matrix algebra \(\mathrm{M}_{d}(\mathbb{C})\), which is often used to represent a qudit, is an example of a hereditarily atomic von Neumann algebra. Since any tensor product of two matrix algebras is a hereditarily atomic von Neumann algebra, systems of multiple qudits can be represented by hereditarily atomic von Neumann algebras. Therefore, hereditarily atomic von Neumann algebras are sufficient for most practical applications in quantum information theory and in quantum computing. Recently, discrete quantization was applied successfully in the denotational semantics of quantum programming languages [14].

### Related work

Discrete quantization can be regarded as a special case of quantization via quantum relations, which was distilled by Weaver [22] from his work with Kuperberg on quantum metric spaces [16]. The category \(\mathbf{qRel}\), whose objects are called _quantum sets_, was introduced by Kornell [13], who showed that this category is dagger compact. Moreover, in the same reference, he quantized functions by internalizing them in \(\mathbf{qRel}\), and showed that the resulting category \(\mathbf{qSet}\) of quantum sets and quantized functions is symmetric monoidal closed, complete, cocomplete, and dual to the category of hereditarily atomic von Neumann algebras and normal \(*\)-homomorphisms. Furthermore, Kornell showed in [12] that \(\mathbf{qRel}\) is equivalent to the category of hereditarily atomic von Neumann algebras and Weaver's quantum relations, and introduced a logic with equality for \(\mathbf{qRel}\). Quantum posets were already defined by Weaver in [22]. The properties of the category of quantum posets in the framework of discrete quantum mathematics were investigated in [15] by Kornell, Mislove and the second author. The same authors proceeded in [14] by quantizing cpos by means of discrete quantization. Traditionally, cpos are a class of posets that form the essential objects for the denotational semantics of programming languages. One would expect that the denotational semantics of quantum programming languages will require quantized cpos. The current state-of-the-art quantum programming language is Proto-Quipper-M, which was introduced by Rios and Selinger [20], subsequently extended with recursive terms in [17] and then with recursive types in [18] by Mislove, Zamdzhiev and the second author. In [14], the quantized cpos were successfully used to construct sound and adequate denotational models for these extensions. Finally, we mention the quantum graphs [22, Definition 2.6(d)][9], which recently attracted some attention [3, 21, 23, 19, 6, 7]. Quantum graphs can be described in the framework of discrete quantization in a similar way as quantum posets. Just like a poset is a special kind of graph, a quantum poset is a special kind of quantum graph, and many concepts and techniques from quantum graphs carry over to quantum posets.

### Overview of the paper

We start by giving a recap of quantum sets and quantum posets. Then we introduce monotone relations between quantum sets.
These are of importance, because the ordinary down-set monad on the category \(\mathbf{Pos}\) of ordinary posets can be obtained from an adjunction between \(\mathbf{Pos}\) and the category \(\mathbf{RelPos}\) of posets and monotone relations. We proceed by introducing the quantum down-set monad \(\mathrm{qDwn}\) and explaining its connection with upper sets and the quantum power set. We then introduce a quantum generalization of Galois connections, which we use to define quantum suplattices. We show that \(\mathrm{qDwn}(\mathcal{X})\) is a quantum suplattice for any quantum poset \(\mathcal{X}\). Together with the characterization of Galois connections between quantum posets (cf. Theorem 7.2), these are the only proofs we include, just to give a flavor of how to work with quantum sets. Furthermore, we sketch why the opposite of a quantum suplattice is also a quantum suplattice. Finally, we discuss enrichment over \(\mathbf{Sup}\) and fixpoints of monotone endomaps on quantum suplattices. Let us stress that although the quantized theorems we include in the present paper are almost verbatim copies of their classical versions, the proof of a quantized theorem is usually much more complicated than its classical counterpart.

## 2 Preliminaries

### Quantum sets

The basic reference for this section is [13]. Here, the definition of \(\mathbf{qRel}\) implicitly makes use of categorical constructions, which we choose to highlight. In order to do so, we first introduce the category \(\mathbf{FdOS}\) whose objects are nonzero finite-dimensional Hilbert spaces. A morphism \(A\colon X\to Y\) in \(\mathbf{FdOS}\) is an _operator space_, that is, a subspace of the vector space \(L(X,Y)\) of linear operators \(X\to Y\). We define the composition of \(A\) with a morphism \(B\colon Y\to Z\) in \(\mathbf{FdOS}\) as the operator space \(B\cdot A:=\operatorname{span}\{ba\colon a\in A,b\in B\}\). The identity morphism on \(X\) is the operator space \(\mathbb{C}1_{X}:=\{\lambda 1_{X}\colon\lambda\in\mathbb{C}\}\). Since the space \(L(X,Y)\) is actually a finite-dimensional Hilbert space itself via the inner product \((a,b)\mapsto\operatorname{Tr}(a^{\dagger}b)\), where \(a^{\dagger}\colon Y\to X\) denotes the hermitian adjoint of \(a\in L(X,Y)\), the homset \(\mathbf{FdOS}(X,Y)\) becomes a complete modular ortholattice; the order on the homset is explicitly given by \(A\leq B\) if and only if \(A\) is a subspace of \(B\). Since composition in \(\mathbf{FdOS}\) preserves suprema in each argument, \(\mathbf{FdOS}\) is enriched over the category \(\mathbf{Sup}\) of complete lattices and suprema-preserving functions; any such category is also called a _quantaloid_. Products and coproducts in quantaloids coincide and are also called _sums_. Any quantaloid \(\mathbf{Q}\) has a free sum-completion, which can be described by the quantaloid \(\operatorname{Matr}(\mathbf{Q})\), whose objects are \(\mathbf{Set}\)-indexed families of objects \((X_{i})_{i\in I}\) of \(\mathbf{Q}\), and whose morphisms \(R\colon(X_{i})_{i\in I}\to(Y_{j})_{j\in J}\) are 'matrices' whose \((i,j)\)-component \(R(i,j)\) is a \(\mathbf{Q}\)-morphism \(X_{i}\to Y_{j}\). Composition in \(\operatorname{Matr}(\mathbf{Q})\) is defined by matrix multiplication: we define \(S\circ R\) for \(S\colon(Y_{j})_{j\in J}\to(Z_{k})_{k\in K}\) by \((S\circ R)(i,k)=\bigvee_{j\in J}S(j,k)\cdot R(i,j)\) for each \(i\in I\) and \(k\in K\), where \(\cdot\) denotes the composition of morphisms in \(\mathbf{Q}\).
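To make the composition of operator spaces concrete, here is a small worked example of our own (not taken from the source): take \(X=Y=Z=\mathbb{C}^{2}\) and write \(e_{ij}=|i\rangle\langle j|\) for the matrix units. For the one-dimensional operator spaces \(A=\mathbb{C}e_{12}\) and \(B=\mathbb{C}e_{21}\) we get

\[B\cdot A=\operatorname{span}\{e_{21}e_{12}\}=\mathbb{C}e_{22},\qquad A\cdot B=\operatorname{span}\{e_{12}e_{21}\}=\mathbb{C}e_{11},\]

so composition is noncommutative even for one-dimensional operator spaces, while \(\mathbb{C}1_{X}\cdot A=A\cdot\mathbb{C}1_{X}=A\), as required of an identity morphism. A \(\mathbf{Q}\)-matrix with one-element index sets is just a single operator space, so the \(\operatorname{Matr}(\mathbf{Q})\)-composition above reduces to this computation.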
The \((i,i^{\prime})\)-component of the identity morphism on an object \((X_{i})_{i\in I}\) is the identity morphism of \(X_{i}\) if \(i=i^{\prime}\), and \(0\) otherwise. The order on the homsets of \(\operatorname{Matr}(\mathbf{Q})\) is defined componentwise. The matrix construction as the free sum-completion of quantaloids was introduced in [11], and is a special case of a matrix construction for more general bicategories as described in [4]. We now define \(\mathbf{qRel}\) as the quantaloid \(\operatorname{Matr}(\mathbf{FdOS})\). Any object \(\mathcal{X}\) of \(\mathbf{qRel}\) is called a _quantum set_, whose _atoms_ are the Hilbert spaces in \(\mathcal{X}\). A quantum set consisting of a single atom is called _atomic_. For convenience, we denote the elements of the index set \(\operatorname{At}(\mathcal{X})\) of \(\mathcal{X}\) by the atoms of \(\mathcal{X}\) themselves, hence \(\operatorname{At}(\mathcal{X})\) can be interpreted as the set of atoms of \(\mathcal{X}\). Thus, in some sense \(\mathcal{X}\) is indexed by itself, just like ordinary sets can be regarded as indexed families indexed by themselves via the identity function. Since a quantum set \(\mathcal{X}\) is formally an indexed family, it does not have elements in the usual sense. We shall use the notation \(X\propto\mathcal{X}\) to express that \(X\) is an atom of \(\mathcal{X}\). Thus, we have \(X\propto\mathcal{X}\) if and only if \(X\in\operatorname{At}(\mathcal{X})\). Conversely, to any ordinary set \(M\) consisting of finite-dimensional Hilbert spaces we can associate a unique quantum set \(\mathcal{Q}M\) whose set of atoms \(\operatorname{At}(\mathcal{Q}M)\) consists of all Hilbert spaces \(H\) in \(M\) such that \(\dim(H)\neq 0\). Any morphism in \(\mathbf{qRel}\) is called a _binary relation_. We emphasize that binary relations between quantum sets are not binary relations in the usual sense, i.e., subsets of the product of the domain and the codomain of the relation. However, binary relations between quantum sets turn out to be generalizations of binary relations between ordinary sets. Given our convention that the indices in the index set \(\operatorname{At}(\mathcal{X})\) of \(\mathcal{X}\) are chosen to be the atoms of \(\mathcal{X}\) itself, any binary relation \(R\colon\mathcal{X}\to\mathcal{Y}\) between quantum sets is an assignment that to each atom \(X\propto\mathcal{X}\) and each atom \(Y\propto\mathcal{Y}\) assigns a subspace \(R(X,Y)\) of \(L(X,Y)\). Given another binary relation \(S\colon\mathcal{Y}\to\mathcal{Z}\), the \((X,Z)\)-component of the composition \(S\circ R\) is given by \((S\circ R)(X,Z)=\bigvee_{Y\propto\mathcal{Y}}S(Y,Z)\cdot R(X,Y)\). The identity morphism on a quantum set \(\mathcal{X}\) is denoted by \(I_{\mathcal{X}}\) and is given by \(I_{\mathcal{X}}(X,X^{\prime})=\mathbb{C}1_{X}\) if \(X=X^{\prime}\) and \(I_{\mathcal{X}}(X,X^{\prime})=0\) otherwise. The quantaloid structure of \(\mathbf{qRel}\) can be described explicitly as follows. For binary relations \(R,S\colon\mathcal{X}\to\mathcal{Y}\) we have \(R\leq S\) if and only if \(R(X,Y)\leq S(X,Y)\) for each \(X\propto\mathcal{X}\) and each \(Y\propto\mathcal{Y}\). Equipped with this order, any homset of \(\mathbf{qRel}\) becomes a complete lattice.
The supremum \(\bigvee_{i\in I}R_{i}\) of a collection \((R_{i}\!:i\in I)\) of relations \(\mathcal{X}\to\mathcal{Y}\) is given by \(\big{(}\bigvee_{i\in I}R_{i}\big{)}\,(X,Y)=\bigvee_{i\in I}R_{i}(X,Y)\) for each \(X\!\propto\!\mathcal{X}\) and each \(Y\!\propto\!\mathcal{Y}\), where the supremum in the right-hand side is taken in the complete lattice of subspaces of \(L(X,Y)\). We can generalize the following set-theoretic notions to the quantum setting: 1. A quantum set \(\mathcal{X}\) is _empty_ if \(\operatorname{At}(\mathcal{X})=\emptyset\); 2. A quantum set \(\mathcal{X}\) is a _subset_ of a quantum set \(\mathcal{Y}\) if \(\operatorname{At}(\mathcal{X})\subseteq\operatorname{At}(\mathcal{Y})\), in which case we write \(\mathcal{X}\subseteq\mathcal{Y}\). 3. The _cartesian product_\(\mathcal{X}\times\mathcal{Y}\) of two quantum sets \(\mathcal{X}\) and \(\mathcal{Y}\) is defined by \(\operatorname{At}(\mathcal{X}\times\mathcal{Y})=\{X\otimes Y\!:X\!\propto\! \mathcal{X},Y\!\propto\!\mathcal{Y}\}\), where \(X\otimes Y\) denotes the usual tensor product of the Hilbert spaces \(X\) and \(Y\). We denote the cartesian product of quantum sets by \(\times\), because it is the noncommutative generalization of the usual product. However, it is not a categorical product in any of the categories that we will introduce below. It is not uncommon to use the notation \(\times\) for a non-categorical product: for instance, it is also used to denote the monoidal product in the category \(\mathbf{Rel}\) of sets and binary relations. To each ordinary set \(S\) we can assign a quantum set '\(S\) whose atoms are one-dimensional Hilbert spaces that are in a one-to-one correspondence with elements of \(S\). This correspondence can be made precise as follows. For each \(s\in S\), we define \(\mathbb{C}_{s}:=\ell^{2}(\{s\})\) with the convention that \(\mathbb{C}_{s}\neq\mathbb{C}_{t}\) if \(s\neq t\). Then \(\operatorname{At}(\text{'}S)=\{\mathbb{C}_{s}\!:s\in S\}\). Note that '\((S\times T)\) is isomorphic to \((\text{'}S)\times(\text{'}T)\) as a quantum set. It is well known that the category \(\mathbf{FdHilb}\) of finite-dimensional Hilbert spaces and linear maps is a dagger compact category, where the dagger of a linear map \(a\) is given by taking its Hermitian adjoint \(a^{\dagger}\). Also \(\mathbf{qRel}\) is a dagger compact category: for a relation \(R\!:\mathcal{X}\to\mathcal{Y}\), we define \(R^{\dagger}\!:\mathcal{Y}\to\mathcal{X}\) by \(R^{\dagger}(Y,X)=\{a^{\dagger}\!:a\in R(X,Y)\}\) for each \(X\!\propto\!\mathcal{X}\) and each \(Y\!\propto\!\mathcal{Y}\). The cartesian product \(\times\) of quantum sets extends to a monoidal product that is defined on morphisms \(R\!:\mathcal{X}\to\mathcal{Y}\) and \(S\!:\mathcal{W}\to\mathcal{Z}\) by \((R\times S)(X\otimes W,Y\otimes Z)=R(X,Y)\otimes S(W,Z)\) for each \(X\otimes W\!\propto\!\mathcal{X}\times\mathcal{W}\) and each \(Y\otimes Z\!\propto\!\mathcal{Y}\times\mathcal{Z}\). The monoidal unit \(\mathbf{1}\) is given by the quantum set consisting of a single one-dimensional atom, typically denoted by \(\mathbb{C}\). Let \(H\) and \(K\) be Hilbert spaces. For each linear operator \(v\in L(H,K)\), write \(v^{*}\in L(K^{*},H^{*})\) for the Banach space adjoint of \(v\), defined by \(v^{*}(\varphi)=\varphi\circ v\). For each subspace \(V\leq L(H,K)\), write \(V^{*}=\{v^{*}\!:v\in V\}\leq L(K^{*},H^{*})\). 
The _dual_ of a quantum set \(\mathcal{X}\) is the quantum set \(\mathcal{X}^{*}\) determined by \(\operatorname{At}(\mathcal{X}^{*})=\{X^{*}\colon X\propto\mathcal{X}\}\). The _dual_ of a binary relation \(R\) from \(\mathcal{X}\) to \(\mathcal{Y}\) is the binary relation \(R^{*}\) from \(\mathcal{Y}^{*}\) to \(\mathcal{X}^{*}\) defined by \(R^{*}(Y^{*},X^{*})=R(X,Y)^{*}\). In \(\mathbf{qRel}\), the associator \(A\), the unitors \(L\) and \(R\), the symmetry \(S\), the unit \(H\) and the counit \(E\) of the compact structure can be expressed in terms of the associator \(\alpha\), the unitors \(\lambda\) and \(\rho\), the symmetry \(\sigma\), and the unit \(\eta\) and counit \(\varepsilon\) of the compact structure of \(\mathbf{FdHilb}\). For instance, the nonzero components of \(S_{\mathcal{X}\mathcal{Y}}\colon\mathcal{X}\times\mathcal{Y}\to\mathcal{Y}\times\mathcal{X}\) are given by \(S_{\mathcal{X}\mathcal{Y}}(X\otimes Y,Y\otimes X)=\mathbb{C}\sigma_{X,Y}\), and the nonzero components of \(E_{\mathcal{X}}\colon\mathcal{X}\times\mathcal{X}^{*}\to\mathbf{1}\) are given by \(E_{\mathcal{X}}(X\otimes X^{*},\mathbb{C})=\mathbb{C}\varepsilon_{X}\). The assignment \(S\mapsto{}^{\shortmid}S\) extends to a fully faithful functor \({}^{\shortmid}(-)\colon\mathbf{Rel}\to\mathbf{qRel}\), which is defined on ordinary binary relations \(r\colon S\to T\) for each \(s\in S\) and each \(t\in T\) by \(({}^{\shortmid}r)(\mathbb{C}_{s},\mathbb{C}_{t})=L(\mathbb{C}_{s},\mathbb{C}_{t})\) if \((s,t)\in r\), and \(({}^{\shortmid}r)(\mathbb{C}_{s},\mathbb{C}_{t})=0\) otherwise. Since \(L(\mathbb{C}_{s},\mathbb{C}_{t})\) is one-dimensional, it only has two subspaces, whence \({}^{\shortmid}(-)\) is indeed fully faithful. Moreover, it preserves the dagger structure, and the inclusion order on homsets of \(\mathbf{Rel}\). It is easy to verify that a function \(f\colon X\to Y\) between ordinary sets is precisely a binary relation such that \(f^{\dagger}\circ f\geq 1_{X}\) and \(f\circ f^{\dagger}\leq 1_{Y}\), where \(f^{\dagger}\) is the opposite relation of \(f\). Hence, a relation \(F\colon\mathcal{X}\to\mathcal{Y}\) between quantum sets is called a _function_ if it satisfies \(F^{\dagger}\circ F\geq I_{\mathcal{X}}\) and \(F\circ F^{\dagger}\leq I_{\mathcal{Y}}\). Examples of functions are the associator, unitors, and symmetry of \(\mathbf{qRel}\). Another example of a function is provided by subsets \(\mathcal{Y}\) of quantum sets \(\mathcal{X}\), for which there is an _inclusion function_ \(J_{\mathcal{Y}}^{\mathcal{X}}\) defined for each \(Y\propto\mathcal{Y}\) and each \(X\propto\mathcal{X}\) by \(J_{\mathcal{Y}}^{\mathcal{X}}(Y,X)=\mathbb{C}1_{Y}\) if \(Y=X\) and \(J_{\mathcal{Y}}^{\mathcal{X}}(Y,X)=0\) otherwise. If it is clear that \(\mathcal{X}\) is the ambient quantum set, we often write \(J_{\mathcal{Y}}\) instead of \(J_{\mathcal{Y}}^{\mathcal{X}}\). Given a binary relation \(R\colon\mathcal{X}\to\mathcal{Y}\) and subsets \(\mathcal{Z}\subseteq\mathcal{X}\) and \(\mathcal{W}\subseteq\mathcal{Y}\), we define the _restriction_ \(R|_{\mathcal{Z}}\) of \(R\) to \(\mathcal{Z}\) as the relation \(R\circ J_{\mathcal{Z}}^{\mathcal{X}}\). The _corestriction_ \(R|^{\mathcal{W}}\) of \(R\) to \(\mathcal{W}\) is defined as the relation \((J_{\mathcal{W}}^{\mathcal{Y}})^{\dagger}\circ R\). We have \((R|_{\mathcal{Z}})|^{\mathcal{W}}=(R|^{\mathcal{W}})|_{\mathcal{Z}}\), which we denote by \(R|_{\mathcal{Z}}^{\mathcal{W}}\).
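As a sanity check of the definition of functions (our own routine computation), let \(f\colon\{1,2\}\to\{*\}\) be the unique surjection and \(F={}^{\shortmid}f\). Then \(F(\mathbb{C}_{i},\mathbb{C}_{*})=L(\mathbb{C}_{i},\mathbb{C}_{*})\) for \(i=1,2\), and

\[(F^{\dagger}\circ F)(\mathbb{C}_{i},\mathbb{C}_{j})=L(\mathbb{C}_{*},\mathbb{C}_{j})\cdot L(\mathbb{C}_{i},\mathbb{C}_{*})=L(\mathbb{C}_{i},\mathbb{C}_{j}),\qquad(F\circ F^{\dagger})(\mathbb{C}_{*},\mathbb{C}_{*})=\mathbb{C}1_{\mathbb{C}_{*}},\]

so \(F^{\dagger}\circ F\geq I_{{}^{\shortmid}\{1,2\}}\) and \(F\circ F^{\dagger}\leq I_{{}^{\shortmid}\{*\}}\), i.e., \(F\) is a function. In contrast, \(F^{\dagger}\) is not a function: the required inequality \(F^{\dagger}\circ F\leq I_{{}^{\shortmid}\{1,2\}}\) fails on the nonzero off-diagonal components, mirroring the fact that the opposite relation of a non-injective surjection is not a function.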
The wide subcategory of \(\mathbf{qRel}\) of functions is denoted by \(\mathbf{qSet}\), which is complete, cocomplete and symmetric monoidal closed with respect to the monoidal product \(\times\). The monoidal unit, associator, unitors and symmetry are the same as for \(\mathbf{qRel}\). A function \(F\colon\mathcal{X}\to\mathcal{Y}\) is called _injective_ if \(F^{\dagger}\circ F=I_{\mathcal{X}}\) and _surjective_ if \(F\circ F^{\dagger}=I_{\mathcal{Y}}\). Any inclusion function is an injective map. The injective and surjective functions are precisely the respective monomorphisms and epimorphisms of \(\mathbf{qSet}\). Functions that are both injective and surjective are called _bijective_, and are precisely the isomorphisms of \(\mathbf{qSet}\). The _range_ of a function \(F\colon\mathcal{X}\to\mathcal{Y}\) is the quantum set \(\operatorname{ran}F\) specified by \(\operatorname{At}(\operatorname{ran}F)=\{Y\propto\mathcal{Y}\colon F(X,Y)\neq 0\text{ for some }X\propto\mathcal{X}\}\). We have \(F=J_{\operatorname{ran}F}\circ\bar{F}\) for some unique surjective function \(\bar{F}\colon\mathcal{X}\to\operatorname{ran}F\), which is defined by \(\bar{F}(X,Y)=F(X,Y)\) for each \(X\propto\mathcal{X}\) and each \(Y\propto\operatorname{ran}F\). It follows that \(F\) is surjective if and only if \(\operatorname{ran}F=\mathcal{Y}\). The functor \({}^{\shortmid}(-)\colon\mathbf{Rel}\to\mathbf{qRel}\) restricts and corestricts to a fully faithful functor \({}^{\shortmid}(-)\colon\mathbf{Set}\to\mathbf{qSet}\). Furthermore, if we denote the category of von Neumann algebras and normal \(*\)-homomorphisms by \(\mathbf{WStar}\), then there is a fully faithful functor \(\ell^{\infty}\colon\mathbf{qSet}\to\mathbf{WStar}^{\operatorname{op}}\) that on objects is defined by \(\mathcal{X}\mapsto\bigoplus_{X\propto\mathcal{X}}L(X)\). The essential image of this functor is the category of _hereditarily atomic_ von Neumann algebras, i.e., von Neumann algebras that are isomorphic to some (possibly infinite) \(\ell^{\infty}\)-sum of matrix algebras. Also \(\mathbf{qRel}\) can be shown to be equivalent to a category of operator algebras [11], namely the category of hereditarily atomic von Neumann algebras and Weaver's quantum relations [21].

### Quantum posets

The basic reference for this section is [14]. Let \(\mathcal{X}\) be a quantum set. Then we call a binary relation \(R\colon\mathcal{X}\to\mathcal{X}\) _reflexive_ if \(I_{\mathcal{X}}\leq R\), _transitive_ if \(R\circ R\leq R\), and _antisymmetric_ if \(R\wedge R^{\dagger}\leq I_{\mathcal{X}}\). A pair \((\mathcal{X},\clubsuit)\) consisting of a quantum set \(\mathcal{X}\) and a reflexive, transitive and antisymmetric relation \(\clubsuit\) on \(\mathcal{X}\) is called a _quantum poset_. The relation \(\clubsuit\) is called an _order_. In order to improve the readability of expressions and calculations, we sometimes write parentheses around \(\clubsuit\), so we write \((\clubsuit)\).

_Example 2.1_.: Let \(\mathcal{X}\) be a quantum set. Then \(I_{\mathcal{X}}\) is a quantum order on \(\mathcal{X}\), which we call the _trivial_ order.

_Example 2.2_.: Let \(\mathcal{H}\) be the quantum set consisting of a single two-dimensional atom \(H\). A 'non-classical' order on \(\mathcal{H}\) is given by the relation \(\clubsuit\) on \(\mathcal{H}\) specified by \(\clubsuit(H,H)=\mathbb{C}\begin{pmatrix}1&0\\ 0&1\end{pmatrix}+\mathbb{C}\begin{pmatrix}0&1\\ 0&0\end{pmatrix}\). Since \(\mathcal{H}\) has only one atom \(H\), the order \(\clubsuit\) is determined by \(\clubsuit(H,H)\).
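One may verify directly (a routine check we spell out for concreteness) that \(\clubsuit\) is indeed an order. Writing \(V:=\clubsuit(H,H)\) and \(e_{12}=\begin{pmatrix}0&1\\ 0&0\end{pmatrix}\), we have

\[1_{H}\in V,\qquad V\cdot V=\operatorname{span}\{1_{H},e_{12},e_{12}^{2}\}=V\quad(\text{as }e_{12}^{2}=0),\qquad V\cap V^{\dagger}=\mathbb{C}1_{H},\]

which witnesses reflexivity \(I_{\mathcal{H}}\leq\clubsuit\), transitivity \(\clubsuit\circ\clubsuit\leq\clubsuit\), and antisymmetry \(\clubsuit\wedge\clubsuit^{\dagger}\leq I_{\mathcal{H}}\).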
Thus, \((\mathcal{H},\clubsuit)\) is a quantum poset. Let \((\mathcal{X},\clubsuit)\) be a quantum poset. The relation \(\clubsuit^{\dagger}\) is also an order, and is called the _opposite_ order on \(\mathcal{X}\). We write \((\mathcal{X},\clubsuit)^{\mathrm{op}}=(\mathcal{X},\clubsuit^{\dagger})\), which is called the _opposite_ quantum poset of \((\mathcal{X},\clubsuit)\). Often, we just write \(\mathcal{X}\) instead of \((\mathcal{X},\clubsuit)\) and \(\mathcal{X}^{\mathrm{op}}\) instead of \((\mathcal{X},\clubsuit^{\dagger})\).

_Example 2.3_.: Let \((\mathcal{H},\clubsuit)\) be the quantum poset of the previous example. Then \(\clubsuit^{\dagger}\) is specified by \(\clubsuit^{\dagger}(H,H)=\mathbb{C}\begin{pmatrix}1&0\\ 0&1\end{pmatrix}+\mathbb{C}\begin{pmatrix}0&0\\ 1&0\end{pmatrix}.\)

Let \((\mathcal{X},\clubsuit_{\mathcal{X}})\) and \((\mathcal{Y},\clubsuit_{\mathcal{Y}})\) be quantum posets. Then we say that a function \(F\colon\mathcal{X}\to\mathcal{Y}\) is _monotone_ if \(F\circ(\clubsuit_{\mathcal{X}})\leq(\clubsuit_{\mathcal{Y}})\circ F\). Under the composition of functions between quantum sets, quantum posets and monotone functions form a category, which we call \(\mathbf{qPos}\), and which is complete, cocomplete and monoidal closed under the monoidal product defined by \((\mathcal{X},\clubsuit_{\mathcal{X}})\times(\mathcal{Y},\clubsuit_{\mathcal{Y}})=(\mathcal{X}\times\mathcal{Y},\clubsuit_{\mathcal{X}}\times\clubsuit_{\mathcal{Y}})\). The monoidal unit is given by \((\mathbf{1},I_{\mathbf{1}})\). The components of the associator, unitors and symmetry are the components of the respective associator, unitors and symmetry of the underlying quantum sets. We denote the evaluation morphism of \(\mathbf{qPos}\) by \(\mathrm{Eval}_{\sqsubseteq}\), and the internal hom by \([\cdot,\cdot]_{\sqsubseteq}\). We call the isomorphisms of \(\mathbf{qPos}\) _order isomorphisms_. Let \((\mathcal{X},\clubsuit_{\mathcal{X}})\) and \((\mathcal{Y},\clubsuit_{\mathcal{Y}})\) be quantum posets. Then a monotone map \(F\colon\mathcal{X}\to\mathcal{Y}\) is called an _order embedding_ if \((\clubsuit_{\mathcal{X}})=F^{\dagger}\circ(\clubsuit_{\mathcal{Y}})\circ F\). The surjective order embeddings are precisely the order isomorphisms. A subposet of a quantum poset \((\mathcal{Y},\clubsuit_{\mathcal{Y}})\) consists of a subset \(\mathcal{X}\) of \(\mathcal{Y}\) equipped with the order \(\clubsuit_{\mathcal{X}}:=J_{\mathcal{X}}^{\dagger}\circ\clubsuit_{\mathcal{Y}}\circ J_{\mathcal{X}}\), to which we refer as the _induced_ order on \(\mathcal{X}\). It follows that \(J_{\mathcal{X}}\colon(\mathcal{X},\clubsuit_{\mathcal{X}})\to(\mathcal{Y},\clubsuit_{\mathcal{Y}})\) is an order embedding. Given a monotone map \(F\colon(\mathcal{X},\clubsuit_{\mathcal{X}})\to(\mathcal{Y},\clubsuit_{\mathcal{Y}})\) between quantum posets, if we equip \(\operatorname{ran}F\subseteq\mathcal{Y}\) with the relative order, then the unique surjective function \(\bar{F}\colon\mathcal{X}\to\operatorname{ran}F\) such that \(F=J_{\operatorname{ran}F}\circ\bar{F}\) is monotone. Hence, every monotone map can be written as the composition of a monotone surjective map and an order embedding. If \((S,\sqsubseteq)\) is an ordinary poset, then \((\,{}^{\shortmid}S,\,{}^{\shortmid}{\sqsubseteq})\) is a quantum poset.

## 3 Monotone relations

Classically, the down-sets of a poset \(X\) form a complete lattice \(\mathrm{Dwn}(X)\), and the assignment \(X\mapsto\mathrm{Dwn}(X)\) underlies a monad on \(\mathbf{Pos}\). A way to see that \(\mathrm{Dwn}\) is the underlying endofunctor of a monad is the following.
Just like we can embed \(\mathbf{Set}\) into \(\mathbf{Rel}\), we have an embedding \((-)_{\circ}\colon\mathbf{Pos}\to\mathbf{RelPos}\) that is the identity on objects, and that sends any monotone function \(f\colon(X,\sqsubseteq)\to(Y,\sqsubseteq)\) to the monotone relation \(f_{\circ}=\{(x,y)\colon y\sqsubseteq f(x)\}\). Moreover, just like the covariant power set functor extends to a functor \(\mathbf{Rel}\to\mathbf{Set}\) that is the right adjoint of the embedding \(\mathbf{Set}\to\mathbf{Rel}\), the assignment \(X\mapsto\mathrm{Dwn}(X)\) extends to a functor \(\mathbf{RelPos}\to\mathbf{Pos}\) that is the right adjoint of \((-)_{\circ}\). The monad induced by this adjunction is precisely the down-set monad on \(\mathbf{Pos}\). Hence, in order to define quantum suplattices, we will have to find the quantum generalization of the down-set monad, and in order to find this quantum down-set monad, we have to find a quantum generalization of monotone relations.

**Definition 3.1**.: _Let \((\mathcal{X},\clubsuit_{\mathcal{X}})\) and \((\mathcal{Y},\clubsuit_{\mathcal{Y}})\) be quantum posets. A binary relation \(V\colon\mathcal{X}\to\mathcal{Y}\) is called a monotone relation \((\mathcal{X},\clubsuit_{\mathcal{X}})\to(\mathcal{Y},\clubsuit_{\mathcal{Y}})\) if it satisfies one of the following two equivalent conditions (hence both):_

1. \((\clubsuit_{\mathcal{Y}})\circ V\leq V\) _and_ \(V\circ(\clubsuit_{\mathcal{X}})\leq V\)_;_
2. \((\clubsuit_{\mathcal{Y}})\circ V=V=V\circ(\clubsuit_{\mathcal{X}})\)_._

The equivalence of both conditions follows from the reflexivity of orders.

**Lemma 3.2**.: _Let \(V\colon(\mathcal{X},\clubsuit_{\mathcal{X}})\to(\mathcal{Y},\clubsuit_{\mathcal{Y}})\) and \(W\colon(\mathcal{Y},\clubsuit_{\mathcal{Y}})\to(\mathcal{Z},\clubsuit_{\mathcal{Z}})\) be monotone relations between quantum posets. Then \(W\circ V\colon(\mathcal{X},\clubsuit_{\mathcal{X}})\to(\mathcal{Z},\clubsuit_{\mathcal{Z}})\) is a monotone relation._

**Lemma 3.3**.: _Let \((\mathcal{X},\clubsuit_{\mathcal{X}})\) be a quantum poset. Then \(\clubsuit_{\mathcal{X}}\colon(\mathcal{X},\clubsuit_{\mathcal{X}})\to(\mathcal{X},\clubsuit_{\mathcal{X}})\) is a monotone relation such that for each quantum poset \((\mathcal{Y},\clubsuit_{\mathcal{Y}})\) and all monotone relations \(V\colon(\mathcal{X},\clubsuit_{\mathcal{X}})\to(\mathcal{Y},\clubsuit_{\mathcal{Y}})\) and \(W\colon(\mathcal{Y},\clubsuit_{\mathcal{Y}})\to(\mathcal{X},\clubsuit_{\mathcal{X}})\) we have \(V\circ(\clubsuit_{\mathcal{X}})=V\) and \((\clubsuit_{\mathcal{X}})\circ W=W\)._

It follows from the previous two lemmas that the following definition is sound:

**Definition 3.4**.: _We denote the category of quantum posets and monotone relations by \(\mathbf{qRelPos}\).
The identity monotone relation on a quantum poset \((\mathcal{X},\clubsuit_{\mathcal{X}})\) is \(\clubsuit_{\mathcal{X}}\), which we often denote by \(I_{(\mathcal{X},\clubsuit)}\)._

**Lemma 3.5**.: _There is a fully faithful functor \({}^{\shortmid}(-)\colon\mathbf{RelPos}\to\mathbf{qRelPos}\) that sends any poset \((S,\sqsubseteq)\) to \((\,{}^{\shortmid}S,\,{}^{\shortmid}{\sqsubseteq})\) and any monotone relation \(v\colon(S,\sqsubseteq)\to(T,\sqsubseteq)\) to \({}^{\shortmid}v\)._

**Lemma 3.6**.: _There is a faithful functor \((-)_{\circ}\colon\mathbf{qPos}\to\mathbf{qRelPos}\) which is the identity on objects, and which acts on monotone maps \(F\colon(\mathcal{X},\clubsuit_{\mathcal{X}})\to(\mathcal{Y},\clubsuit_{\mathcal{Y}})\) by \(F_{\circ}:=(\clubsuit_{\mathcal{Y}})\circ F\)._

The functor in the previous lemma is an extension of the functor \((-)_{\circ}\colon\mathbf{Pos}\to\mathbf{RelPos}\) mentioned above, in the sense that the composite \(\mathbf{Pos}\xrightarrow{{}^{\shortmid}(-)}\mathbf{qPos}\xrightarrow{(-)_{\circ}}\mathbf{qRelPos}\) equals \(\mathbf{Pos}\xrightarrow{(-)_{\circ}}\mathbf{RelPos}\xrightarrow{{}^{\shortmid}(-)}\mathbf{qRelPos}\).

**Definition 3.7**.: _Let \((\mathcal{X},\clubsuit_{\mathcal{X}})\) be a quantum poset. Then we define \((\mathcal{X},\clubsuit_{\mathcal{X}})^{*}\) to be the quantum poset \((\mathcal{X}^{*},\clubsuit_{\mathcal{X}}^{*})\). Sometimes, we write \(\mathcal{X}^{*}\) instead of \((\mathcal{X},\clubsuit_{\mathcal{X}})^{*}\)._

Since the operation of taking daggers in dagger compact categories commutes with the operation of taking duals, we obtain the following lemma:

**Lemma 3.8**.: _Let \((\mathcal{X},\clubsuit)\) be a quantum poset. Then \((\mathcal{X}^{*})^{\mathrm{op}}=(\mathcal{X}^{\mathrm{op}})^{*}\)._

**Theorem 3.9**.: _The category \(\mathbf{qRelPos}\) is compact closed: for each quantum poset \((\mathcal{X},\clubsuit)\), the unit \(H_{(\mathcal{X},\clubsuit)}\colon(\mathbf{1},I_{\mathbf{1}})\to(\mathcal{X},\clubsuit)^{*}\times(\mathcal{X},\clubsuit)\) and the counit \(E_{(\mathcal{X},\clubsuit)}\colon(\mathcal{X},\clubsuit)\times(\mathcal{X},\clubsuit)^{*}\to(\mathbf{1},I_{\mathbf{1}})\) are given by \((\clubsuit^{*}\times\clubsuit)\circ H_{\mathcal{X}}\) and \(E_{\mathcal{X}}\circ(\clubsuit\times\clubsuit^{*})\), respectively, where \(H_{\mathcal{X}}\) and \(E_{\mathcal{X}}\) denote the usual unit and counit of \(\mathbf{qRel}\). The associator, unitors and symmetry are obtained by applying the functor \((-)_{\circ}\) to the usual associator, unitors and symmetry of \(\mathbf{qPos}\)._

## 4 The quantum down-set monad

In this section, we introduce the quantum down-set monad. Its construction shares similarities with the construction of the quantum power set in [15]. That construction also yields a quantum poset, but the construction of its order seems somewhat ad hoc; the framework of monotone relations is more appropriate for the construction of an ordered object. We will see that the quantum down-set monad obtained by means of monotone relations is ordered in a natural way. When we apply the monad to a trivially ordered quantum set, we obtain the quantum power set of this quantum set.
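For orientation, recall the classical picture (an illustration of ours): for the two-element chain \(X=\{0<1\}\),

\[\mathrm{Dwn}(X)=\{\emptyset,\{0\},\{0,1\}\},\qquad\eta_{X}(x)=\mathop{\downarrow}x,\qquad\mu_{X}(\mathcal{A})=\bigcup\mathcal{A}\quad\text{for }\mathcal{A}\in\mathrm{Dwn}(\mathrm{Dwn}(X)),\]

i.e., the unit of the down-set monad sends a point to its principal down-set and the multiplication takes unions. The unit \(\boldsymbol{\downarrow}\{\cdot\}\) and multiplication \(\bigsqcup\) introduced below are the quantum counterparts of these two maps.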
**Definition 4.1**.: _We define the quantum poset \(\mathrm{qDwn}(\mathcal{X},\clubsuit)\) of down-sets of a quantum poset \((\mathcal{X},\clubsuit)\) to be the internal hom in \(\mathbf{qPos}\) from \((\mathcal{X},\clubsuit)^{*}\) to \((\mathbf{2},\clubsuit_{\mathbf{2}})\), i.e., \(\mathrm{qDwn}(\mathcal{X},\clubsuit):=[(\mathcal{X},\clubsuit)^{*},(\mathbf{2},\clubsuit_{\mathbf{2}})]_{\sqsubseteq}\). We will denote its underlying quantum set by \(\mathcal{D}(\mathcal{X},\clubsuit)\). The order on \(\mathcal{D}(\mathcal{X},\clubsuit)\) is denoted by \(\boldsymbol{\supseteq}_{(\mathcal{X},\clubsuit)}\), so \(\mathrm{qDwn}(\mathcal{X},\clubsuit)=(\mathcal{D}(\mathcal{X},\clubsuit),\boldsymbol{\supseteq}_{(\mathcal{X},\clubsuit)})\)._

Note that the order \(\boldsymbol{\supseteq}\) on \(\mathrm{qDwn}(\mathcal{X})\) is written with a boldface symbol, to distinguish it from the corresponding order \(\supseteq\) between ordinary sets. We will prove that the assignment \((\mathcal{X},\clubsuit)\mapsto\mathrm{qDwn}(\mathcal{X},\clubsuit)\) extends to a monad on \(\mathbf{qPos}\) by showing that the functor \((-)_{\circ}\colon\mathbf{qPos}\to\mathbf{qRelPos}\) has a right adjoint; the monad is then induced by this adjunction. The right adjoint also sends objects \((\mathcal{X},\clubsuit)\) to \(\mathrm{qDwn}(\mathcal{X},\clubsuit)\). Nevertheless, it is useful to make a distinction in notation between the monad and the right adjoint, hence we will denote the right adjoint by \(\mathrm{qDwn}^{\prime}\). The first step in showing the existence of \(\mathrm{qDwn}\) is the following lemma, for which we note that we have embeddings \(1\to 2\) which map \(*\in 1\) to either \(0\in 2\) or \(1\in 2\). We denote the respective maps by \(0\) and \(1\) as well. As a consequence, we have functions \({}^{\shortmid}0,\,{}^{\shortmid}1\colon\mathbf{1}\to\mathbf{2}\).

**Lemma 4.2**.: _Let \((\mathcal{X},\clubsuit_{\mathcal{X}})\) be a quantum poset. Then the bijection \(\mathbf{qSet}(\mathcal{X},\mathbf{2})\to\mathbf{qRel}(\mathcal{X},\mathbf{1})\), \(F\mapsto{}^{\shortmid}1^{\dagger}\circ F\) in [13, Theorem B.8] restricts and corestricts to a bijection_

\[\mathbf{qPos}((\mathcal{X},\clubsuit_{\mathcal{X}}),(\mathbf{2},\clubsuit_{\mathbf{2}}))\to\mathbf{qRelPos}((\mathcal{X},\clubsuit_{\mathcal{X}}),(\mathbf{1},I_{\mathbf{1}})).\]

The counit of the adjunction that yields the ordinary down-set monad is the inverse membership relation \(\ni\). Lemma 4.2 assures the existence of the quantum equivalent of this counit, which we will denote by the boldface symbol \(\boldsymbol{\ni}\).

**Lemma 4.3**.: _For any quantum poset \((\mathcal{X},\clubsuit)\) there is a unique monotone relation \(\boldsymbol{\ni}_{(\mathcal{X},\clubsuit)}\colon\mathrm{qDwn}(\mathcal{X},\clubsuit)\to(\mathcal{X},\clubsuit)\) such that \({}^{\shortmid}1^{\dagger}\circ\mathrm{Eval}_{\sqsubseteq}=E_{(\mathcal{X},\clubsuit)}\circ(\boldsymbol{\ni}_{(\mathcal{X},\clubsuit)}\times I_{(\mathcal{X},\clubsuit)^{*}})\)._

**Theorem 4.4**.: _There is a functor \(\mathrm{qDwn}^{\prime}\colon\mathbf{qRelPos}\to\mathbf{qPos}\) whose action on objects is given by \((\mathcal{X},\clubsuit)\mapsto\mathrm{qDwn}(\mathcal{X},\clubsuit)\), and which is right adjoint to the functor \((-)_{\circ}\colon\mathbf{qPos}\to\mathbf{qRelPos}\). The \((\mathcal{X},\clubsuit)\)-component of the counit \(\boldsymbol{\ni}\) of this adjunction is the monotone relation constructed in Lemma 4.3. The unit of the adjunction is denoted by \(\boldsymbol{\downarrow}\{\cdot\}\).
Its \((\mathcal{X},\clubsuit)\)-component is an order embedding that is the unique monotone function \((\mathcal{X},\clubsuit)\to\mathrm{qDwn}(\mathcal{X},\clubsuit)\) such that \(\boldsymbol{\ni}_{(\mathcal{X},\clubsuit)}\circ\boldsymbol{\downarrow}\{\cdot\}_{(\mathcal{X},\clubsuit)}=I_{(\mathcal{X},\clubsuit)}\)._

**Definition 4.5**.: _We define the quantum down-set monad \(\mathrm{qDwn}\) to be the monad induced by the adjunction \((-)_{\circ}\dashv\mathrm{qDwn}^{\prime}\), so \(\mathrm{qDwn}=\mathrm{qDwn}^{\prime}\circ(-)_{\circ}\). We denote its multiplication by \(\bigsqcup\) and its unit by \(\boldsymbol{\downarrow}\{\cdot\}\)._

Note that the multiplication \(\bigsqcup\) is a boldfaced version of the usual union \(\bigcup\) of ordinary sets.

## 5 Opposite quantum posets and upper sets

Let \(X\) be an ordinary poset. Then the complementation operator provides a bijection between the set \(D(X)\) of down-sets of \(X\) and the set \(U(X)\) of upper sets of \(X\). Both \(D(X)\) and \(U(X)\) can be extended to endofunctors on \(\mathbf{Pos}\), for which we have to order the former by inclusion and the latter by containment. Then, writing \(\mathrm{Dwn}(X)=(D(X),\subseteq)\) and \(\mathrm{Up}(X)=(U(X),\supseteq)\), the bijection extends to an order isomorphism \(\mathrm{Dwn}(X)\to\mathrm{Up}(X)\). In the quantum world, we can obtain a similar order isomorphism by showing that we can also construct a different right adjoint of \((-)_{\circ}\) in terms of upper sets, which, by the uniqueness of right adjoints up to natural isomorphism, must be naturally isomorphic to \(\mathrm{qDwn}^{\prime}\). This natural isomorphism is precisely the operation of taking complements. Before we construct this right adjoint in terms of upper sets, we first have to extend the operation of taking opposite quantum posets to an endofunctor on \(\mathbf{qPos}\).

**Lemma 5.1**.: _There is an endofunctor \((-)^{\mathrm{op}}\colon\mathbf{qPos}\to\mathbf{qPos}\), defined on objects by \((\mathcal{X},\clubsuit_{\mathcal{X}})\mapsto(\mathcal{X},\clubsuit_{\mathcal{X}}^{\dagger})\), and which maps any monotone map \(F\colon(\mathcal{X},\clubsuit_{\mathcal{X}})\to(\mathcal{Y},\clubsuit_{\mathcal{Y}})\) to the monotone map \(F\colon(\mathcal{X},\clubsuit_{\mathcal{X}}^{\dagger})\to(\mathcal{Y},\clubsuit_{\mathcal{Y}}^{\dagger})\). This functor \((-)^{\mathrm{op}}\) is involutive, i.e., \((-)^{\mathrm{op}}\circ(-)^{\mathrm{op}}=1_{\mathbf{qPos}}\), hence it is an isomorphism of categories._

**Proposition 5.2**.: _Let \((\mathcal{X},\clubsuit_{\mathcal{X}})\) and \((\mathcal{Y},\clubsuit_{\mathcal{Y}})\) be quantum posets. Then \([\mathcal{X}^{\mathrm{op}},\mathcal{Y}^{\mathrm{op}}]_{\sqsubseteq}=[\mathcal{X},\mathcal{Y}]_{\sqsubseteq}^{\mathrm{op}}\)._

**Definition 5.3**.: _Let \((\mathcal{X},\clubsuit)\) be a quantum poset. Then we define the quantum poset of upper sets of \((\mathcal{X},\clubsuit)\) as the quantum poset \(\mathrm{qUp}(\mathcal{X},\clubsuit):=[(\mathcal{X},\clubsuit)^{*},(\mathbf{2},\clubsuit_{\mathbf{2}})^{\mathrm{op}}]_{\sqsubseteq}\). We denote its underlying quantum set by \(\mathcal{U}(\mathcal{X},\clubsuit)\)._

The previous proposition yields \(\mathrm{qUp}(\mathcal{X},\clubsuit)=[\mathcal{X}^{*},\mathbf{2}^{\mathrm{op}}]_{\sqsubseteq}=[(\mathcal{X}^{\mathrm{op}})^{*},\mathbf{2}]_{\sqsubseteq}^{\mathrm{op}}=\mathrm{qDwn}(\mathcal{X}^{\mathrm{op}})^{\mathrm{op}}\), whence \(\mathcal{U}(\mathcal{X},\clubsuit)=\mathcal{D}(\mathcal{X}^{\mathrm{op}})\). The other right adjoint is obtained by constructing a different counit, namely the inverse non-membership relation.
This is done by taking \(\mathbf{2}^{\mathrm{op}}\) instead of \(\mathbf{2}\) and \({}^{\shortmid}0\) instead of \({}^{\shortmid}1\) in the constructions of the previous section.

## 6 The quantum power set

For any quantum set \(\mathcal{X}\), the _quantum power set_ of \(\mathcal{X}\) is obtained by applying the down-set construction to the trivial order: \(\mathrm{qPow}(\mathcal{X}):=\mathrm{qDwn}(\mathcal{X},I_{\mathcal{X}})\). We denote its underlying quantum set by \(\mathcal{P}(\mathcal{X})\). Via \(\mathrm{qDwn}^{\prime}\), the assignment \(\mathcal{X}\mapsto\mathrm{qPow}(\mathcal{X})\) acts on binary relations between quantum sets, and in particular on functions.

## 7 Galois connections

**Definition 7.1**.: _Let \((\mathcal{X},\clubsuit_{\mathcal{X}})\) and \((\mathcal{Y},\clubsuit_{\mathcal{Y}})\) be quantum posets and let \(F\colon\mathcal{X}\to\mathcal{Y}\) and \(G\colon\mathcal{Y}\to\mathcal{X}\) be functions. If \((\clubsuit_{\mathcal{Y}})\circ F=G^{\dagger}\circ(\clubsuit_{\mathcal{X}})\), we say that \((F,G)\) forms a Galois connection, or that \(F\) is the lower adjoint of \(G\), or that \(G\) is the upper adjoint of \(F\)._

This definition is a generalization of the usual definition of a Galois connection between ordinary posets: the \({}^{\shortmid}(-)\) functor maps ordinary Galois connections to Galois connections in the sense of the definition above. Similarly as in the classical case, the lower adjoint in a Galois connection determines the upper adjoint and _vice versa_.
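A familiar classical instance, to fix intuition (our example): the ceiling function and the inclusion of the integers form a Galois connection between \((\mathbb{R},\leq)\) and \((\mathbb{Z},\leq)\), since

\[\lceil x\rceil\leq n\iff x\leq n\qquad\text{for all }x\in\mathbb{R},\ n\in\mathbb{Z},\]

so \(\lceil\cdot\rceil\) is the lower adjoint of the inclusion \(\mathbb{Z}\hookrightarrow\mathbb{R}\). Applying the functor \({}^{\shortmid}(-)\) turns this pair into a Galois connection between quantum posets in the sense of Definition 7.1.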
The next theorem provides alternative characterizations of Galois connections, which may look more familiar from the classical case:

**Theorem 7.2**.: _Let \((\mathcal{X},\clubsuit_{\mathcal{X}})\) and \((\mathcal{Y},\clubsuit_{\mathcal{Y}})\) be quantum posets and let \(F\colon\mathcal{X}\to\mathcal{Y}\) and \(G\colon\mathcal{Y}\to\mathcal{X}\) be monotone functions. Then the following statements are equivalent:_

1. \(F\) _is the lower adjoint of_ \(G\colon\mathcal{Y}\to\mathcal{X}\)_._
2. \(F\circ K\sqsubseteq_{\mathcal{Y}}M\iff K\sqsubseteq_{\mathcal{X}}G\circ M\) _for any quantum set_ \(\mathcal{Z}\) _and functions_ \(K\colon\mathcal{Z}\to\mathcal{X}\) _and_ \(M\colon\mathcal{Z}\to\mathcal{Y}\)_._
3. _We have_ \(I_{\mathcal{X}}\sqsubseteq_{\mathcal{X}}G\circ F\) _and_ \(F\circ G\sqsubseteq_{\mathcal{Y}}I_{\mathcal{Y}}\)_._

Proof.: We start by showing that (1) implies (2). So assume that \(F\) is the lower adjoint of \(G\), i.e., \((\clubsuit_{\mathcal{Y}})\circ F=G^{\dagger}\circ(\clubsuit_{\mathcal{X}})\). Let \(\mathcal{Z}\) be a quantum set and let \(K\colon\mathcal{Z}\to\mathcal{X}\) and \(M\colon\mathcal{Z}\to\mathcal{Y}\) be functions. Assume \(F\circ K\sqsubseteq_{\mathcal{Y}}M\). By definition of \(\sqsubseteq_{\mathcal{Y}}\), we have \(M\leq(\clubsuit_{\mathcal{Y}})\circ F\circ K\), so \(M\leq G^{\dagger}\circ\clubsuit_{\mathcal{X}}\circ K\), hence \(G\circ M\leq G\circ G^{\dagger}\circ\clubsuit_{\mathcal{X}}\circ K\leq(\clubsuit_{\mathcal{X}})\circ K\) since \(G\) is a function. Hence \(K\sqsubseteq_{\mathcal{X}}G\circ M\). Conversely, if \(K\sqsubseteq_{\mathcal{X}}G\circ M\), we have \(G\circ M\leq(\clubsuit_{\mathcal{X}})\circ K\) by definition of \(\sqsubseteq_{\mathcal{X}}\), hence \(M\leq G^{\dagger}\circ G\circ M\leq G^{\dagger}\circ\clubsuit_{\mathcal{X}}\circ K=(\clubsuit_{\mathcal{Y}})\circ F\circ K\) since \(G\) is a function, so \(F\circ K\sqsubseteq_{\mathcal{Y}}M\). We show that (2) implies (3). Since \(F\sqsubseteq_{\mathcal{Y}}F\), condition (2) yields \(I_{\mathcal{X}}\sqsubseteq_{\mathcal{X}}G\circ F\) if we choose \(\mathcal{Z}=\mathcal{X}\), \(K=I_{\mathcal{X}}\) and \(M=F\). Since \(G\sqsubseteq_{\mathcal{X}}G\), we obtain \(F\circ G\sqsubseteq_{\mathcal{Y}}I_{\mathcal{Y}}\) by choosing \(\mathcal{Z}=\mathcal{Y}\), \(K=G\) and \(M=I_{\mathcal{Y}}\). We proceed by showing that (3) implies (1). Since \(I_{\mathcal{X}}\sqsubseteq_{\mathcal{X}}G\circ F\), we have \(G\circ F\leq(\clubsuit_{\mathcal{X}})\). Then \(F\leq G^{\dagger}\circ G\circ F\leq G^{\dagger}\circ(\clubsuit_{\mathcal{X}})\). By Lemma 5.1, \(G\colon\mathcal{Y}^{\mathrm{op}}\to\mathcal{X}^{\mathrm{op}}\) is also monotone, so \(G\circ(\clubsuit_{\mathcal{Y}}^{\dagger})\leq(\clubsuit_{\mathcal{X}}^{\dagger})\circ G\), which implies \((\clubsuit_{\mathcal{Y}})\circ G^{\dagger}\leq G^{\dagger}\circ(\clubsuit_{\mathcal{X}})\) after taking daggers. Hence, \((\clubsuit_{\mathcal{Y}})\circ F\leq(\clubsuit_{\mathcal{Y}})\circ G^{\dagger}\circ\clubsuit_{\mathcal{X}}\leq G^{\dagger}\circ(\clubsuit_{\mathcal{X}}\circ\clubsuit_{\mathcal{X}})=G^{\dagger}\circ(\clubsuit_{\mathcal{X}})\). Since \(F\circ G\sqsubseteq_{\mathcal{Y}}I_{\mathcal{Y}}\), we have \(I_{\mathcal{Y}}\leq(\clubsuit_{\mathcal{Y}})\circ F\circ G\). Hence, \(G^{\dagger}\leq(\clubsuit_{\mathcal{Y}})\circ F\circ G\circ G^{\dagger}\leq(\clubsuit_{\mathcal{Y}})\circ F\).
Since \(F\) is monotone, we have \(F\circ(\clubsuit_{\mathcal{X}})\leq(\clubsuit_{\mathcal{Y}})\circ F\), whence \(G^{\dagger}\circ(\clubsuit_{\mathcal{X}})\leq(\clubsuit_{\mathcal{Y}})\circ F\circ(\clubsuit_{\mathcal{X}})\leq(\clubsuit_{\mathcal{Y}}\circ\clubsuit_{\mathcal{Y}})\circ F=(\clubsuit_{\mathcal{Y}})\circ F\). We conclude that \((\clubsuit_{\mathcal{Y}})\circ F=G^{\dagger}\circ(\clubsuit_{\mathcal{X}})\), so \(F\) is the lower adjoint of \(G\).

The next example is the quantum version of the statement that the direct image and the preimage of a function form a Galois connection:

_Example 7.3_.: \(\mathrm{qPow}(F)\colon\mathrm{qPow}(\mathcal{X})\to\mathrm{qPow}(\mathcal{Y})\) is the lower Galois adjoint of \(\mathrm{qPow}(F^{\dagger})\) for any function \(F\colon\mathcal{X}\to\mathcal{Y}\).

Also the notion of closure operators can be generalized to the quantum setting:

**Definition 7.4**.: _Let \((\mathcal{X},\clubsuit)\) be a quantum poset. Then we call a monotone function \(C\colon\mathcal{X}\to\mathcal{X}\) a closure operator on \(\mathcal{X}\) if \(I_{\mathcal{X}}\sqsubseteq C\) and \(C\circ C=C\)._

Just as in the classical case, there is a relation between Galois connections and closure operators:

**Theorem 7.5**.: _Let \((\mathcal{X},\clubsuit_{\mathcal{X}})\) and \((\mathcal{Y},\clubsuit_{\mathcal{Y}})\) be quantum posets and let \(F\colon\mathcal{X}\to\mathcal{Y}\) be a monotone function that is the lower adjoint of a monotone function \(G\colon\mathcal{Y}\to\mathcal{X}\). Then \(C:=G\circ F\) is a closure operator on \(\mathcal{X}\). Conversely, let \(C\) be a closure operator on a quantum poset \((\mathcal{X},\clubsuit_{\mathcal{X}})\). Let \(\mathcal{Y}=\operatorname{ran}C\), and let \(\clubsuit_{\mathcal{Y}}\) be the induced order on \(\mathcal{Y}\), i.e., \(\clubsuit_{\mathcal{Y}}=\clubsuit_{\mathcal{X}}\big{|}_{\mathcal{Y}}^{\mathcal{Y}}\). Then the unique surjective monotone function \(\bar{C}\colon\mathcal{X}\to\mathcal{Y}\) such that \(C=J_{\mathcal{Y}}\circ\bar{C}\) is the lower adjoint of the order embedding \(J_{\mathcal{Y}}\colon\mathcal{Y}\to\mathcal{X}\). In particular, we have \(\bar{C}\circ J_{\mathcal{Y}}=I_{\mathcal{Y}}\)._

The quantum version of the operation \(A\mapsto\mathop{\downarrow}A\) on a power set \(\mathrm{Pow}(X)\) is a closure operator:

_Example 7.6_.: Let \(\clubsuit\) be an order on a quantum set \(\mathcal{X}\). Then \(\mathrm{qPow}(\clubsuit^{\dagger})\) is a closure operator on \(\mathrm{qPow}(\mathcal{X})\). Its range equipped with the relative order equals \(\mathrm{qDwn}(\mathcal{X},\clubsuit)\).

## 8 Quantum suplattices

Classically, a poset \(X\) is a complete lattice if its canonical embedding \(X\to\mathrm{Dwn}(X)\), \(x\mapsto\mathop{\downarrow}x\), into its poset of down-sets has a lower Galois adjoint. We will use this fact in order to define quantum suplattices.

**Definition 8.1**.: _Let \((\mathcal{X},\clubsuit_{\mathcal{X}})\) be a quantum poset, and let \(\boldsymbol{\downarrow}\{\cdot\}_{(\mathcal{X},\clubsuit_{\mathcal{X}})}\colon(\mathcal{X},\clubsuit_{\mathcal{X}})\to\mathrm{qDwn}(\mathcal{X},\clubsuit_{\mathcal{X}})\) be the order embedding of \(\mathcal{X}\) into the quantum poset of down-sets of \(\mathcal{X}\).
Then we say that \((\mathcal{X},\clubsuit_{\mathcal{X}})\) is a quantum suplattice if \(\boldsymbol{\downarrow}\{\cdot\}_{(\mathcal{X},\clubsuit_{\mathcal{X}})}\) has a lower adjoint \(\bigvee_{(\mathcal{X},\clubsuit_{\mathcal{X}})}\colon\mathrm{qDwn}(\mathcal{X},\clubsuit_{\mathcal{X}})\to(\mathcal{X},\clubsuit_{\mathcal{X}})\). Moreover, if \((\mathcal{Y},\clubsuit_{\mathcal{Y}})\) is another quantum suplattice, then we say that a function \(K\colon\mathcal{X}\to\mathcal{Y}\) is a homomorphism of quantum suplattices if \(K\circ\bigvee_{(\mathcal{X},\clubsuit_{\mathcal{X}})}=\bigvee_{(\mathcal{Y},\clubsuit_{\mathcal{Y}})}\circ\mathrm{qDwn}(K)\). We denote the category of quantum suplattices and homomorphisms of quantum suplattices by \(\mathbf{qSup}\)._

We will show that quantum posets of down-sets form the primary examples of quantum suplattices. In the classical case, this is completely obvious; one just needs to observe that a union of down-sets is a down-set. However, in the quantum case, the proof is nontrivial. We need one crucial lemma.

**Lemma 8.2**.: _Let \((\mathcal{X},\clubsuit)\) be a quantum poset. Then the inverse inclusion order \(\boldsymbol{\supseteq}\) on \(\mathcal{D}(\mathcal{X})\) is the largest binary relation \(T\) on \(\mathcal{D}(\mathcal{X})\) such that \(\boldsymbol{\ni}_{(\mathcal{X},\clubsuit)}\circ T\leq\boldsymbol{\ni}_{(\mathcal{X},\clubsuit)}\)._

**Theorem 8.3**.: _Let \((\mathcal{X},\clubsuit)\) be a quantum poset. Then \(\mathrm{qDwn}(\mathcal{X},\clubsuit)\) is a quantum suplattice.
More specifically, the lower Galois adjoint \(\bigvee_{\mathrm{qDwn}(\mathcal{X},\clubsuit)}\) of the canonical embedding \(\boldsymbol{\downarrow}\{\cdot\}_{\mathrm{qDwn}(\mathcal{X},\clubsuit)}\colon\mathrm{qDwn}(\mathcal{X},\clubsuit)\to\mathrm{qDwn}^{2}(\mathcal{X},\clubsuit)\) is given by the \((\mathcal{X},\clubsuit)\)-component \(\bigsqcup_{(\mathcal{X},\clubsuit)}\) of the multiplication \(\bigsqcup\colon\mathrm{qDwn}^{2}\to\mathrm{qDwn}\) of the \(\mathrm{qDwn}\)-monad on \(\mathbf{qPos}\)._

Proof.: For simplicity, we write \(\boldsymbol{\downarrow}\{\cdot\}\) instead of \(\boldsymbol{\downarrow}\{\cdot\}_{\mathrm{qDwn}(\mathcal{X},\clubsuit)}\) and \(\bigsqcup\) instead of \(\bigsqcup_{(\mathcal{X},\clubsuit)}\). Consider the following two diagrams (two commutative diagrams, not reproduced here). The triangle in the left diagram commutes by Theorem 4.4. The square commutes by naturality of \(\boldsymbol{\ni}\) and since \(\mathrm{qDwn}^{\prime}\) equals \(\mathrm{qDwn}\) on objects. The right diagram commutes since \(\bigsqcup=\mathrm{qDwn}^{\prime}(\boldsymbol{\ni}_{(\mathcal{X},\clubsuit)})\), so it is the outside of the left diagram. By the universal property of \(\boldsymbol{\ni}\) as the counit of the adjunction \((-)_{\circ}\dashv\mathrm{qDwn}^{\prime}\), the unique monotone function \(K\colon\mathrm{qDwn}(\mathcal{X})\to\mathrm{qDwn}(\mathcal{X})\) such that \(\boldsymbol{\ni}_{(\mathcal{X},\clubsuit)}\circ K=\boldsymbol{\ni}_{(\mathcal{X},\clubsuit)}\) is the identity function on \(\mathcal{D}(\mathcal{X})\). Hence, we conclude that \(\bigsqcup\circ\boldsymbol{\downarrow}\{\cdot\}=I_{\mathcal{D}(\mathcal{X})}\). Note that the right-hand side is not the same as the identity monotone relation \(I_{\mathrm{qDwn}(\mathcal{X})}\), which is \(\boldsymbol{\supseteq}\), even though \(\mathcal{D}(\mathcal{X})\) is the underlying quantum set of \(\mathrm{qDwn}(\mathcal{X})\). As a consequence, \(\bigsqcup\) is an epimorphism in \(\mathbf{qSet}\), hence it is surjective, so \(\bigsqcup\circ\bigsqcup^{\dagger}=I_{\mathcal{D}(\mathcal{X})}\). Then, using the naturality of \(\boldsymbol{\ni}\), we obtain \(\boldsymbol{\ni}_{(\mathcal{X},\clubsuit)}\circ\boldsymbol{\ni}_{\mathrm{qDwn}(\mathcal{X})}\circ\bigsqcup^{\dagger}\leq\boldsymbol{\ni}_{(\mathcal{X},\clubsuit)}\), which, by Lemma 8.2, implies \(\boldsymbol{\ni}_{\mathrm{qDwn}(\mathcal{X})}\circ\bigsqcup^{\dagger}\leq(\boldsymbol{\supseteq})=I_{\mathrm{qDwn}(\mathcal{X})}\). Since also \(I_{\mathrm{qDwn}(\mathcal{X})}=\boldsymbol{\ni}_{\mathrm{qDwn}(\mathcal{X})}\circ\boldsymbol{\downarrow}\{\cdot\}_{\mathrm{qDwn}(\mathcal{X})}\) (cf. Theorem 4.4), we obtain \(\boldsymbol{\ni}_{\mathrm{qDwn}(\mathcal{X})}\circ\bigsqcup^{\dagger}\leq\boldsymbol{\ni}_{\mathrm{qDwn}(\mathcal{X})}\circ\boldsymbol{\downarrow}\{\cdot\}\).
Then, since \(\boldsymbol{\downarrow}\{\cdot\}\) is a function, we find \(\boldsymbol{\ni}_{\mathrm{qDwn}(\mathcal{X})}\circ\bigsqcup^{\dagger}\circ(\boldsymbol{\downarrow}\{\cdot\})^{\dagger}\leq\boldsymbol{\ni}_{\mathrm{qDwn}(\mathcal{X})}\circ\boldsymbol{\downarrow}\{\cdot\}\circ(\boldsymbol{\downarrow}\{\cdot\})^{\dagger}\leq\boldsymbol{\ni}_{\mathrm{qDwn}(\mathcal{X})}\). Again applying Lemma 8.2 yields \(\bigsqcup^{\dagger}\circ(\boldsymbol{\downarrow}\{\cdot\})^{\dagger}\leq(\boldsymbol{\supseteq})\), whence, writing \(\boldsymbol{\subseteq}\) for \((\boldsymbol{\supseteq})^{\dagger}\), \((\boldsymbol{\subseteq})\circ\boldsymbol{\downarrow}\{\cdot\}\circ\bigsqcup\leq(\boldsymbol{\subseteq})\circ(\boldsymbol{\supseteq})^{\dagger}=(\boldsymbol{\subseteq})\circ(\boldsymbol{\subseteq})\leq(\boldsymbol{\subseteq})=(\boldsymbol{\subseteq})\circ I_{\mathcal{D}^{2}(\mathcal{X})}\), which expresses that \(I_{\mathcal{D}^{2}(\mathcal{X})}\sqsubseteq\boldsymbol{\downarrow}\{\cdot\}\circ\bigsqcup\). It now follows from Theorem 7.2 that \(\bigsqcup\) is the lower Galois adjoint \(\bigvee_{\mathrm{qDwn}(\mathcal{X})}\) of \(\boldsymbol{\downarrow}\{\cdot\}\), hence \(\mathrm{qDwn}(\mathcal{X})\) is indeed a quantum suplattice.

We can now formulate the quantum versions of two classical theorems on suplattices:

**Theorem 8.4**.: _Let \((\mathcal{X},\clubsuit_{\mathcal{X}})\) and \((\mathcal{Y},\clubsuit_{\mathcal{Y}})\) be quantum suplattices and let \(F\colon\mathcal{X}\to\mathcal{Y}\) be monotone. Then \(F\) is a homomorphism of quantum suplattices if and only if it has an upper Galois adjoint._

**Theorem 8.5**.: _The Eilenberg-Moore category of \(\mathrm{qDwn}\) is equivalent to \(\mathbf{qSup}\)._

## 9 Quantum inflattices

We can define an inflattice to be the opposite of a suplattice. In the quantum world, we follow the same path to define quantum inflattices:

**Definition 9.1**.: _A quantum poset \((\mathcal{X},\clubsuit)\) is called a quantum inflattice if \((\mathcal{X},\clubsuit^{\dagger})\) is a quantum suplattice._

Classically, a poset is a suplattice if and only if it is an inflattice. The same is true in the quantum case. In order to prove this, a first step is showing a different characterization of quantum suplattices, which is analogous to the observation that an ordinary poset \(X\) is a suplattice if and only if the embedding \(X\to\mathrm{Pow}(X)\), \(x\mapsto\mathop{\downarrow}x\) has a lower Galois adjoint. The analogous embedding \(D_{(\mathcal{X},\clubsuit)}\) of a quantum poset \((\mathcal{X},\clubsuit)\) into \(\mathrm{qPow}(\mathcal{X})\) is given by \(J\circ\boldsymbol{\downarrow}\{\cdot\}_{\mathcal{X}}\), where \(J\colon\mathrm{qDwn}(\mathcal{X},\clubsuit)\to\mathrm{qPow}(\mathcal{X})\) is the order embedding from Example 7.6.

**Lemma 9.2**.: _A quantum poset \((\mathcal{X},\clubsuit)\) is a quantum suplattice if and only if \(D_{(\mathcal{X},\clubsuit)}\colon(\mathcal{X},\clubsuit)\to\mathrm{qPow}(\mathcal{X})\) has a lower Galois adjoint \(F_{(\mathcal{X},\clubsuit)}\)._

Classically, for any ordinary set \(X\) and any subset \(\mathcal{A}\subseteq\mathrm{Pow}(X)\), the intersection \(\bigcap\mathcal{A}\) of all subsets in \(\mathcal{A}\) is given by \(X\setminus\bigcup_{A\in\mathcal{A}}(X\setminus A)\), so by using the union operator and the complement operator. Since for any quantum set \(\mathcal{X}\) its quantum power set \(\mathrm{qPow}(\mathcal{X})=\mathrm{qDwn}(\mathcal{X},I_{\mathcal{X}})\) is a quantum suplattice (cf.
Theorem 8.3), and since, similar to the complementation operator on ordinary power sets, we have an order isomorphism \(C_{\mathcal{X}}\colon\mathrm{qPow}(\mathcal{X})\to\mathrm{qPow}(\mathcal{X})^{\mathrm{op}}\), we can prove:

**Lemma 9.3**.: _The quantum power set \(\mathrm{qPow}(\mathcal{X})\) of any quantum set \(\mathcal{X}\) is a quantum inflattice._

Then, given a quantum suplattice \((\mathcal{X},\clubsuit)\) with embedding \(D_{(\mathcal{X},\clubsuit)}\colon(\mathcal{X},\clubsuit)\to\mathrm{qPow}(\mathcal{X})\) that is the upper adjoint of \(F\), and denoting the lower adjoint of \(D_{\mathrm{qPow}(\mathcal{X})^{\mathrm{op}}}\colon\mathrm{qPow}(\mathcal{X})^{\mathrm{op}}\to\mathrm{qPow}(\mathcal{P}(\mathcal{X}))\) by \(N\), we can show that \(F\circ N\circ\mathrm{qPow}(D_{(\mathcal{X},\clubsuit)})\) is the lower Galois adjoint of \(D_{(\mathcal{X},\clubsuit^{\dagger})}\colon(\mathcal{X},\clubsuit^{\dagger})\to\mathrm{qPow}(\mathcal{X})\), proving:

**Theorem 9.4**.: _Any quantum suplattice \((\mathcal{X},\clubsuit)\) is a quantum inflattice._

## 10 Enrichment

It was shown in [15] that the pointwise order of functions between quantum sets induces a \(\mathbf{Pos}\)-enrichment of \(\mathbf{qPos}\). Similarly, in [14] it was shown that the category \(\mathbf{qCPO}\) of quantum cpos is enriched over the category \(\mathbf{CPO}\) of cpos, i.e., posets for which any monotonically increasing sequence has a supremum. We can enrich \(\mathbf{qSup}\) over \(\mathbf{Sup}\) in a similar way. We first have to quantize the supremum of collections of functions into a quantum suplattice.

**Definition 10.1**.: _Let \(\mathcal{X}\) be a quantum set and \((\mathcal{Y},\clubsuit)\) a quantum poset. Let \(\mathfrak{K}\) be a subset of \(\mathbf{qSet}(\mathcal{X},\mathcal{Y})\). Then a function \(F\colon\mathcal{X}\to\mathcal{Y}\) is called the limit of \(\mathfrak{K}\), denoted by \(F=\lim\mathfrak{K}\), if \((\clubsuit)\circ F=\bigwedge_{K\in\mathfrak{K}}(\clubsuit)\circ K\)._

A quantum cpo is a quantum poset \((\mathcal{Y},\clubsuit)\) for which the limit of any countable chain in \(\mathbf{qSet}(\mathcal{X},\mathcal{Y})\) exists for any quantum set \(\mathcal{X}\), so the above definition is a generalization of the concept of limits in [15]. Let \(\mathcal{X}\) be a quantum set and let \((\mathcal{Y},\clubsuit)\) be a quantum poset. Let \(\mathfrak{K}\subseteq\mathbf{qSet}(\mathcal{X},\mathcal{Y})\). If \(\lim\mathfrak{K}\) exists, then it is the supremum of \(\mathfrak{K}\) in \(\mathbf{qSet}(\mathcal{X},\mathcal{Y})\) ordered by \(\sqsubseteq\). The converse does not always hold. If \(\mathcal{Y}\) is a quantum suplattice, then \(\lim\mathfrak{K}\) always exists. We can prove this by first showing the case \(\mathcal{Y}=\mathbf{2}\), then the case \(\mathcal{Y}=\mathrm{qPow}(\mathcal{X})\), which equals \([\mathcal{X}^{*},\mathbf{2}]_{\sqsubseteq}\), and finally the general case using Lemma 9.2. Moreover, one can show that \(\lim\mathfrak{K}\) is a homomorphism of quantum suplattices if \(\mathcal{X}\) is also a quantum suplattice and all functions in \(\mathfrak{K}\) are homomorphisms of quantum suplattices. Finally, one can show that composition with homomorphisms of quantum suplattices preserves the operation of taking limits, yielding:

**Theorem 10.2**.: \(\mathbf{qSup}\) _is enriched over_ \(\mathbf{Sup}\)_._

## 11 Fixpoints

Let \(\mathcal{X}\) be a quantum set and \(\mathcal{Y}\) a quantum suplattice.
Since \(\mathbf{qSet}(\mathcal{X},\mathcal{Y})\) is a complete lattice, it follows from the Knaster-Tarski fixpoint theorem that:

**Proposition 11.1**.: _Let \(F\colon\mathcal{Y}\to\mathcal{Y}\) be a monotone map on a quantum suplattice \(\mathcal{Y}\). Then for each quantum set \(\mathcal{X}\), the set of all functions \(K\colon\mathcal{X}\to\mathcal{Y}\) such that \(F\circ K=K\) is a complete lattice._

Categorically, a generalized fixpoint of an endomorphism \(f\colon Y\to Y\) in a given category \(\mathbf{C}\) can be defined as a monomorphism \(m\colon X\to Y\) for some object \(X\) such that \(f\circ m=m\). This leads to:

**Definition 11.2**.: _Let \((\mathcal{Y},\clubsuit)\) be a quantum poset, and let \(F\colon\mathcal{Y}\to\mathcal{Y}\) be a monotone function. If the largest subset \(\mathcal{X}\) of \(\mathcal{Y}\) such that \(F\circ J_{\mathcal{X}}=J_{\mathcal{X}}\) exists, we call it the quantum set of fixpoints of \(F\), denoted by \(\operatorname{Fix}(F)\). Similarly, if the largest subset \(\mathcal{X}\) of \(\mathcal{Y}\) such that \(F\circ J_{\mathcal{X}}\sqsubseteq J_{\mathcal{X}}\) exists, we call it the quantum set of prefixpoints of \(F\), denoted by \(\operatorname{Pre}(F)\). Finally, if the largest subset \(\mathcal{X}\) of \(\mathcal{Y}\) such that \(F\circ J_{\mathcal{X}}\sqsupseteq J_{\mathcal{X}}\) exists, we call it the quantum set of postfixpoints of \(F\), denoted by \(\operatorname{Post}(F)\)._

Writing \(\mathcal{Q}\{X\}\) for the quantum subset of \(\mathcal{Y}\) consisting of the atom \(X\mathbin{\ll}\mathcal{Y}\), we can show that the largest subset \(\operatorname{Post}(F)\) of postfixpoints of a monotone endofunction \(F\) on a quantum poset \(\mathcal{Y}\) exists and is determined by \(\operatorname{At}(\mathcal{X})=\{X\mathbin{\ll}\mathcal{Y}\colon J_{\mathcal{Q}\{X\}}\sqsubseteq F\circ J_{\mathcal{Q}\{X\}}\}\). Similarly, one can show that \(\operatorname{Pre}(F)\) and \(\operatorname{Fix}(F)\) exist.

Classically, the postfixpoints \(P\) of a monotone map \(f\colon Y\to Y\) on a suplattice form a suplattice. This can be seen as follows. \(P=\{y\in Y\colon y\leq f(y)\}\). Let \(S\subseteq P\), and let \(x=\sup S\). Then by monotonicity of \(f\), we have \(y\leq f(y)\leq f(x)\) for each \(y\in S\), hence \(x=\bigvee S\leq f(x)\), showing that \(P\) is closed under suprema, and so is a suplattice. In a similar way, we can prove:

**Proposition 11.3**.: _Given a monotone endofunction \(F\colon\mathcal{Y}\to\mathcal{Y}\) on a quantum suplattice \(\mathcal{Y}\), the quantum set \(\operatorname{Post}(F)\) of postfixpoints of \(F\) is a quantum suplattice with respect to the relative order._

Applying Theorem 9.4, it is easy to show that \(\mathrm{Pre}(F)\) also is a quantum suplattice with respect to the relative order. Finally, we can show that \(F\) restricts and corestricts to a monotone function \(F|_{\mathrm{Post}(F)}^{\mathrm{Post}(F)}\) and that \(\mathrm{Fix}(F)=\mathrm{Pre}\left(F|_{\mathrm{Post}(F)}^{\mathrm{Post}(F)}\right)\), from which we conclude:

**Theorem 11.4** (Quantum Knaster-Tarski Theorem).: _Let \(F:\mathcal{Y}\to\mathcal{Y}\) be a monotone endomap on a quantum suplattice \(\mathcal{Y}\).
Then the quantum set \(\mathrm{Fix}(F)\) of fixpoints of \(F\) is a quantum suplattice with respect to the relative order._

## 12 The relation between ordinary suplattices and quantum suplattices

Quantum posets are noncommutative generalizations of ordinary posets, since we have a fully faithful functor \(\,{}^{\shortmid}(-)\colon\mathbf{Pos}\to\mathbf{qPos}\). This functor restricts and corestricts to a functor \(\mathbf{CPO}\to\mathbf{qCPO}\), hence quantum cpos are also genuine noncommutative generalizations of ordinary cpos. Remarkably, quantum suplattices are not noncommutative generalizations of ordinary suplattices, but only noncommutative versions of suplattices: the functor \(\,{}^{\shortmid}(-)\colon\mathbf{Pos}\to\mathbf{qPos}\) does not restrict and corestrict to a functor \(\,{}^{\shortmid}(-)\colon\mathbf{Sup}\to\mathbf{qSup}\). Indeed, if \(B\) denotes the four-element Boolean algebra, which is clearly an ordinary suplattice, then \(\,{}^{\shortmid}B\) is not a quantum suplattice. In order to see this, we first recall Section 10, in which it was stated that the limit of any collection of functions with the same domain into a quantum suplattice always exists. We will give a collection of functions into \(\,{}^{\shortmid}B\) that does not have a limit.

Write \(B=\{0,a,b,1\}\), where \(a^{\perp}=b\), and denote the order on \(B\) by \(\sqsubseteq\), so \(0\sqsubseteq a,b\sqsubseteq 1\). Let \((\mathcal{X},\blacktriangleleft)=\,{}^{\shortmid}(B,\sqsubseteq)\). Let \(\mathcal{H}\) denote the atomic quantum set whose single atom \(H\) is two-dimensional. One can show that any function \(K\colon\mathcal{H}\to\mathcal{X}\) is of the form \(K(H,\mathbb{C}_{x})=L(H,\mathbb{C}_{x})r_{x}\) for each \(x\in B\), where \(r_{0},r_{a},r_{b},r_{1}\) are mutually orthogonal projections on \(H\) whose sum equals \(1_{H}\). Then \((\blacktriangleleft\circ K)(H,\mathbb{C}_{x})=L(H,\mathbb{C}_{x})\sum_{y\sqsubseteq x}r_{y}\). So \((\blacktriangleleft\circ K)(H,\mathbb{C}_{x})\) is of the form \(L(H,\mathbb{C}_{x})s_{x}\) for some projection \(s_{x}\) of \(H\), and clearly \(s_{0}\), \(s_{a}\), \(s_{b}\) and \(s_{1}\) mutually commute, because \(r_{0}\), \(r_{a}\), \(r_{b}\), \(r_{1}\) are mutually orthogonal.

Let \(p\) and \(q\) be two distinct nontrivial noncommuting projections on \(H\). For instance, in the standard basis of \(H\), let \(p=\begin{pmatrix}1&0\\ 0&0\end{pmatrix}\) and \(q=\frac{1}{2}\begin{pmatrix}1&1\\ 1&1\end{pmatrix}\). Then \(p\) and \(q\) are also not orthogonal, whence \(pq\neq 0\). However, since \(p\) and \(q\) are both atomic projections, we have \(p\wedge q=0\). Let \(F,G\colon\mathcal{H}\to\mathcal{X}\) be functions defined as follows. The nonzero components of \(F\) are given by \(F(H,\mathbb{C}_{0})=L(H,\mathbb{C}_{0})p\) and \(F(H,\mathbb{C}_{a})=L(H,\mathbb{C}_{a})p^{\perp}\), whereas the nonzero components of \(G\) are given by \(G(H,\mathbb{C}_{0})=L(H,\mathbb{C}_{0})q\) and \(G(H,\mathbb{C}_{b})=L(H,\mathbb{C}_{b})q^{\perp}\).
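Before computing the down-closures of \(F\) and \(G\), it may help to check the claimed properties of \(p\) and \(q\) concretely. The following minimal sketch (our own illustration, not part of the original argument) uses NumPy to verify that \(p\) and \(q\) do not commute, are not orthogonal, and have meet \(p\wedge q=0\):

```python
import numpy as np

# The two rank-one projections from the text, in the standard basis of H.
p = np.array([[1.0, 0.0],
              [0.0, 0.0]])
q = 0.5 * np.array([[1.0, 1.0],
                    [1.0, 1.0]])

# p and q do not commute, and pq != 0 (so they are not orthogonal).
assert not np.allclose(p @ q, q @ p)
assert not np.allclose(p @ q, np.zeros((2, 2)))

# The meet of p and q projects onto range(p) intersect range(q).
# For two distinct rank-one (atomic) projections this intersection is {0}:
dim_sum = np.linalg.matrix_rank(np.hstack([p, q]))  # dim(range p + range q)
dim_meet = np.linalg.matrix_rank(p) + np.linalg.matrix_rank(q) - dim_sum
assert dim_meet == 0  # hence the meet of p and q is the zero projection
```

It is exactly this incompatibility of \(p\) and \(q\) that blocks the existence of the limit below.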
Then
\[(\blacktriangleleft\circ F)(H,\mathbb{C}_{x})=\begin{cases}L(H,\mathbb{C}_{x})p,&x=0,b;\\ L(H,\mathbb{C}_{x}),&x=a,1,\end{cases}\]
\[(\blacktriangleleft\circ G)(H,\mathbb{C}_{x})=\begin{cases}L(H,\mathbb{C}_{x})q,&x=0,a;\\ L(H,\mathbb{C}_{x}),&x=b,1,\end{cases}\]
hence
\[\big((\blacktriangleleft\circ F)\wedge(\blacktriangleleft\circ G)\big)(H,\mathbb{C}_{x})=\begin{cases}0,&x=0;\\ L(H,\mathbb{C}_{a})q,&x=a;\\ L(H,\mathbb{C}_{b})p,&x=b;\\ L(H,\mathbb{C}_{1}),&x=1.\end{cases}\]
Assume that there is a function \(K\colon\mathcal{H}\to\mathcal{X}\) such that \(\blacktriangleleft\circ K=(\blacktriangleleft\circ F)\wedge(\blacktriangleleft\circ G)\). Since \((\blacktriangleleft\circ K)(H,\mathbb{C}_{a})=L(H,\mathbb{C}_{a})q\) and \((\blacktriangleleft\circ K)(H,\mathbb{C}_{b})=L(H,\mathbb{C}_{b})p\), it follows that \(p\) and \(q\) should commute, which contradicts our assumptions. We conclude that there is a \(\mathfrak{K}\subseteq\mathbf{qSet}(\mathcal{H},\mathcal{X})\) such that \(\lim\mathfrak{K}\) does not exist, namely \(\mathfrak{K}=\{F,G\}\). Hence, \(\mathcal{X}\) cannot be a quantum suplattice, for which all limits should exist.

The main reason why ordinary suplattices are not quantum suplattices lies in the fact that the \('(-)\) functors do not commute with \(\mathcal{P}\) and \(\mathcal{D}\). Let \(S\) be a set. Then \('P(S)\) does not equal \(\mathcal{P}(\text{'}S)\), but can only be identified with the subset of one-dimensional atoms of \(\mathcal{P}(\text{'}S)\). That this subset is proper follows for instance from the proof of [13, Proposition 9.3], which asserts that \((\mathbf{1}\uplus\mathbf{1})\ast(\mathbf{1}\uplus\mathbf{1})\), which can be identified with \(\mathcal{P}(\text{'}2)\), has uncountably many atoms. The same is true for \('D(S)\) if \(S\) is a poset: \('D(S)\) can only be identified with the subset of one-dimensional atoms of \(\mathcal{D}(\text{'}S)\). We plan to investigate whether the quantum power set monad \(\mathcal{P}\) is the right Kan extension of \('(-)\circ P\colon\mathbf{Set}\to\mathbf{qSet}\) along \('(-)\colon\mathbf{Set}\to\mathbf{qSet}\), where \(P\) denotes the ordinary power set monad. If this is true, we also expect that the quantum down-set monad \(\mathsf{qDown}\) is the right Kan extension of \('(-)\circ\mathrm{Dwn}\colon\mathbf{Pos}\to\mathbf{qPos}\) along \('(-)\colon\mathbf{Pos}\to\mathbf{qPos}\).

It follows from the \(\mathbf{Sup}\)-enrichment of \(\mathbf{qSup}\) that the one-dimensional atoms of a quantum suplattice form an ordinary suplattice. We expect that any ordinary suplattice is the subposet of one-dimensional atoms of some quantum suplattice. By extension, if quantum suplattices indeed form the right notion that is needed to generalize topologies to the noncommutative setting, then it follows that an ordinary topology on an ordinary set is not a quantum topology; it might only be the classical part of a quantum topology. Moreover, it might be that there are several different quantum topologies on an ordinary set that have the same ordinary topology as classical part. If this is indeed the case, the question is how to interpret this. Perhaps this would be a quantum feature, in the same spirit as the existence of two nonisomorphic graphs that are quantum isomorphic [3].

## 13 Future work

The classical Knaster-Tarski Theorem implies the Cantor-Schröder-Bernstein Theorem. We conjecture that a quantum version of Cantor-Schröder-Bernstein can be derived from the quantum Knaster-Tarski Theorem.
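For orientation, we recall how the classical implication works: given injections \(f\colon X\to Y\) and \(g\colon Y\to X\), the map \(S\mapsto X\setminus g[Y\setminus f[S]]\) is monotone on \(\mathrm{Pow}(X)\), so by Knaster-Tarski it has a fixpoint \(S\), and combining \(f\) on \(S\) with \(g^{-1}\) on \(X\setminus S\) yields a bijection. A minimal sketch on hypothetical finite sets, for intuition only:

```python
# Classical Cantor-Schröder-Bernstein via a Knaster-Tarski fixpoint.
X = {0, 1, 2}
Y = {'a', 'b', 'c'}
f = {0: 'a', 1: 'b', 2: 'c'}   # an injection X -> Y
g = {'a': 1, 'b': 2, 'c': 0}   # an injection Y -> X

def h(S):
    # h(S) = X \ g[Y \ f[S]] is monotone in S.
    return X - {g[y] for y in Y - {f[x] for x in S}}

S = set()                      # Kleene iteration up to the least fixpoint
while h(S) != S:
    S = h(S)

g_inv = {v: k for k, v in g.items()}
bijection = {x: (f[x] if x in S else g_inv[x]) for x in X}
assert sorted(bijection.values()) == sorted(Y)
```

The quantum conjecture below asks whether the analogous construction survives the passage to quantum sets.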
**Conjecture 13.1**.: _Let \(F\colon\,\mathcal{X}\to\mathcal{Y}\) and \(G\colon\,\mathcal{Y}\to\mathcal{X}\) be injective functions between quantum sets. Then there exists a bijection between \(\mathcal{X}\) and \(\mathcal{Y}\)._

The biggest potential obstacle is that an atom of the quantum power set of a quantum set \(\mathcal{X}\) does not directly correspond to a subset of \(\mathcal{X}\), as it does in the classical case. A quantum Cantor-Schröder-Bernstein Theorem can be reformulated in terms of operator algebras:

**Conjecture 13.2**.: _Given surjective normal unital \(*\)-homomorphisms \(\varphi:M\to N\) and \(\psi:N\to M\) between hereditarily atomic von Neumann algebras, there must exist a \(*\)-isomorphism between \(M\) and \(N\)._

Furthermore, based on the fact that any quantum suplattice is a quantum inflattice, we expect:

**Conjecture 13.3**.: \(\mathbf{qSup}\) _can be equipped with a monoidal product that makes it \(*\)-autonomous._

## Acknowledgements

We thank Andre Kornell for his advice and for inspiring us to further develop the theory of quantum sets. Furthermore, we thank Isar Stubbe for helping us to understand the connection between quantum sets and quantaloids better, and we thank the reviewers for their comments. This research is supported by grants VEGA 2/0142/20 and 1/0036/23 and by the Slovak Research and Development Agency under the contracts APVV-18-0052 and APVV-20-0069.
2304.00122
Trajectory Control for Differential Drive Mobile Manipulators
Mobile manipulator systems are comprised of a mobile platform with one or more manipulators and are of great interest in a number of applications such as indoor warehousing, mining, construction, and forestry. We present an approach for computing actuator commands for such systems so that they can follow desired end-effector and platform trajectories without violating the nonholonomic constraints of the system in an indoor warehouse environment. We work with the Fetch robot, which consists of a 7-DOF manipulator with a differential drive mobile base, to validate our method. The major contributions of our project are: deriving the dynamics of the system, trajectory planning for the manipulator and the mobile base, a state machine for the pick and place task, and the inverse kinematics of the manipulator. Our results indicate that we are able to successfully implement trajectory control on the mobile base and the manipulator of the Fetch robot.
Harish Karunakaran, Gopeshh Raaj Subbaraj
2023-03-31T20:47:32Z
http://arxiv.org/abs/2304.00122v1
# Trajectory Control for Differential Drive Mobile Manipulators

###### Abstract

Mobile manipulator systems are comprised of a mobile platform with one or more manipulators and are of great interest in a number of applications such as indoor warehousing, mining, construction, and forestry. We present an approach for computing actuator commands for such systems so that they can follow desired end-effector and platform trajectories without violating the nonholonomic constraints of the system in an indoor warehouse environment. We work with the Fetch robot, which consists of a 7-DOF manipulator with a differential drive mobile base, to validate our method. The major contributions of our project are: deriving the dynamics of the system, trajectory planning for the manipulator and the mobile base, a state machine for the pick and place task, and the inverse kinematics of the manipulator. Our results indicate that we are able to successfully implement trajectory control on the mobile base and the manipulator of the Fetch robot.

Trajectory Control, Mobile Manipulators, Differential Drive

## I Introduction

A mobile manipulator is defined as a robotic system composed of a mobile platform equipped with non-deformable wheels and a manipulator mounted on the platform. This combined system of a manipulator and mobile base is able to perform manipulation tasks in a much larger workspace than a fixed-base manipulator such as the Kuka manipulator. Conventional robot manipulators with a fixed base have a limited reachable workspace; the addition of a wheeled mobile platform to a robot manipulator thus provides added mobility to the robotic system. Typically, a mobile manipulator consists of a multi-link manipulator and a mobile platform, which increases the effective workspace of the manipulator's end-effector when compared to fixed-base systems. In recent years, mobile manipulators have become a subject of interest because of their mobility and dexterity, and many service robots employ a design similar to the one we use in our project.

The integration of a manipulator and a mobile platform, however, gives rise to difficulties such as decomposing a given task into fine motions to be carried out by the manipulator and gross motions to be achieved by the mobile platform, and establishing the dynamic model of the mobile manipulator system in a systematic way. Since, in most cases, the mobile platform is subject to nonholonomic constraints while the robot manipulator is a holonomic mechanical system, controlling the manipulator and the mobile platform so as to avoid possible tip-over is also important [10]. A nonholonomic system is a system whose state depends on the path taken in order to achieve it. Also, unlike conventional industrial manipulators mounted on stationary bases, a mobile manipulator's motions interact dynamically with its base, which degrades system performance, resulting in problems like excessive end-effector errors and poor system stability. These systems operate in highly unstructured environments, thus limiting the sensing techniques available for their control. The above characteristics of mobile manipulators present challenging control problems.

A differential drive robot is a mobile robot whose movement is based on two separately driven wheels placed on either side of the robot body. It can thus change its direction by varying the relative rate of rotation of its wheels and hence does not require an additional steering motion.
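The kinematics of such a drive are simple to state: the body's forward and angular velocities are fixed linear combinations of the two wheel speeds. A minimal sketch (with assumed example values for wheel radius and track width, not the Fetch's actual dimensions):

```python
def diff_drive_velocity(omega_l, omega_r, r=0.0625, L=0.375):
    """Map left/right wheel angular speeds (rad/s) to a body twist.

    Returns (v, omega): forward speed along the robot's x-axis and yaw
    rate about z.  r (wheel radius, m) and L (track width, m) are
    hypothetical values chosen for illustration.
    """
    v = r * (omega_r + omega_l) / 2.0    # forward velocity
    omega = r * (omega_r - omega_l) / L  # angular (yaw) velocity
    return v, omega

# Equal wheel speeds give pure translation; opposite speeds turn in
# place.  No combination of wheel speeds produces sideways motion.
print(diff_drive_velocity(4.0, 4.0))    # (0.25, 0.0)
print(diff_drive_velocity(-4.0, 4.0))   # (0.0, 1.33...)
```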
A differential drive robot cannot move along the direction of the wheel axis, which is a singularity of the drive. Such robots are also very sensitive to slight changes in the velocity of each wheel: even small changes in the relative velocities between the wheels affect the robot's trajectory. This makes the control of differential drive systems difficult to implement in practice. Our project focuses on trajectory following and control for mobile manipulators and tries to address some of the problems discussed above. We develop a control method for manipulators mounted on mobile robots which applies to large manipulator motions when system characteristics are highly variable and when the accuracy and the availability of sensory information are limited.

The Fetch mobile manipulator platform, a differentially driven platform equipped with a 7-DOF manipulator, is used in a simulated environment. The Fetch robot is a combination of two subsystems. The torso of this robot, called the Fetch, comprises a camera and other vision sensors along with the 7-DOF manipulator, while the base, called the Freight, enables the navigation of the entire robot. The torso is mounted on the Freight. Fetch Robotics' system uses a relatively large and capable mobile manipulator to pick items off of warehouse shelves, while the Freight acts as an autonomous cargo delivery cart. Fetch can pick items continuously, while a succession of Freights can switch in and out to move different selections of goods to different parts of the distribution centre. The Fetch robot is shown in Figure 1.

The Denavit-Hartenberg parameters for the 7-DOF manipulator are derived and incorporated as part of the URDF (Unified Robot Description Format) of the Fetch robot. Lagrange's method is used to obtain the reduced equations of motion from the base of the robot to the end-effector, which gives us the dynamics of the system. Next, we perform trajectory planning for the arm of the Fetch robot to plan from one joint position to another. The Trac_ik [11] package, available as an open-source implementation, is used to find the inverse kinematics of the arm. Then, for moving the mobile base of the Fetch robot, we used a planning algorithm to find a path from a start position in the simulated environment to a goal position. In order to make the mobile base and the trajectory planning work in conjunction, we developed a state machine for a pick and place task. The performance of our method has been tested on this task in a simulated environment in Gazebo and is discussed in the later sections.

The applications of our system extend into the warehouse domain, where trajectory planning and control is an important aspect of picking and placing objects in a warehouse or an industrial setting. This mobile manipulator control method would provide greater autonomy to service robots and extend their use. Our method tries to solve the problem of control in mobile manipulators described above using the Fetch robot as an example. The rest of this paper is organized as follows: related work is discussed in Section II, followed by the methodology in Section III; the results and discussion are presented in Section IV, followed by the conclusion and future work in Section V.

## II Related Work

In the past, issues related to mobile manipulators such as dynamic and static stability, force development and application control in the presence of base compliance, and dynamic coupling have been studied [2].
Alicija et al. [4] worked on the problem of trajectory tracking of non-holonomic mobile manipulators. They presented two versions of the tracking control algorithm, one for mobile manipulators with a non-holonomic platform only and the other for a doubly non-holonomic mobile manipulator. They then compared the control process of these in a simulation environment and made some suggestions about the choice of drives for the robotic arm. They presented a dynamic control algorithm for holonomic systems in general which can be used for a mobile platform with any robot mounted atop it. This was one of the earlier approaches to tackle the problem of trajectory tracking, as computing power was limited at the time.

R. Solea et al. [5] focus on the motion planning problem of mobile manipulator systems, proposing a methodology for generating trajectories for both the mobile platform and the manipulator that takes a system from an initial configuration to a pre-specified final one without violating the non-holonomic constraint. In order to ensure the smooth movement of the considered system, they used nonlinear sliding mode control to solve the motion planning problem. They also present a controller design for the trajectory-tracking problem using sliding mode control for a mobile platform equipped with a manipulator.

Real-time trajectory planning using model predictive control with constraints was proposed by Ide, S. et al. [6] for mobile manipulators. Their method uses Quadratic Programming (QP) for optimizing control inputs. The control inputs and outputs are limited corresponding to the required motion and the hardware specifications of the mobile manipulator. The required motion is changed frequently according to the situation and the hardware limitations, concretely the torque, the angular velocity, the mobile base velocity and the acceleration, which are subject to the hardware design. Their method was implemented on a real mobile manipulator model, and real-time trajectory modification was demonstrated on a real mobile robot. One drawback of their approach is that they treat all manipulators generically and do not consider system-specific parameters; in this project, by contrast, we use system parameters that are specific to the Fetch robot.

Dong, W. et al. [7] studied the tracking control problem of mobile manipulators considering the interaction between the mobile platform and the manipulator. They proposed a global tracking controller based on the dynamics of the defined tracking error and the extended Barbalat's lemma. Their proposed controller ensures that the full state of the system asymptotically tracks the given desired trajectory globally in the presence of the system coupling.

Korayem, M.H. et al. [8] proposed an open-loop optimal control method as an approach for trajectory optimization of a flexible mobile manipulator for a given two-end-point task in point-to-point motion. Dynamic equations were derived using a combined Euler-Lagrange formulation, which is similar to the approach we follow in our project. To solve the optimal control problem, they adopt an indirect method via establishing the Hamiltonian function and deriving the optimality condition from Pontryagin's minimum principle. The equations obtained provided a two-point boundary value problem which they then solved by numerical techniques.
Korayem, M.H., et al. [9] presented a methodology for collision-free trajectory planning of wheeled mobile manipulators in obstructed environments by means of potential functions. In our case we do not consider obstructed environments, which makes our approach simpler than theirs. In their method, all mobile manipulator parts and environmental obstacles were modeled as ellipsoids. For collision avoidance, the ellipsoid equations were expressed in a reference coordinate system and the corresponding dimensionless potential functions were defined. Then, the trajectory planning of a spatial mobile robot in a cluttered environment was performed, employing optimal control theory. One of the problems with their approach is that, since they use control theory for trajectory planning, the planning times are higher than with standard methods.

Fig. 1: Fetch Robot

Inverse kinematics is an extensive field of research where numerous sophisticated algorithms have been presented during the last decades. However, no universal solution has yet been found. Most prominent approaches either attempt to directly derive a geometric analytical solution from the kinematic model or aim to solve the problem iteratively by optimization. Some of the most widely known and popular methods are based on computing the Jacobian using the transpose, pseudoinverse or damped least squares method [12]. The solution is then found by following the gradient, but such methods often get stuck in suboptimal extrema, depending on the geometric complexity of the kinematic model. Constructive approaches successfully overcome this issue by performing heuristic restarts from random initial configurations; [11] recently proposed such a method, TRAC-IK, which is publicly available under the ROS framework [13]. With it, significant performance increases regarding the success rate could be obtained over the classical Orocos KDL solver [15]. Their Trac_ik method improves upon several failure points of KDL's implementation of joint-limited pseudoinverse Jacobian inverse kinematics. They outperform the KDL IK implementation by adding local-minima detection and mitigation, by reformulating the inverse kinematics problem as a Sequential Quadratic Programming problem, and by utilizing Cartesian tolerances to speed up the IK search. They report a success rate of around 99.5% on the Fetch robot, and TRAC-IK is therefore the choice for our IK solver.

## III Methodology

The overall work can be divided into the following modules:

* Writing the Dynamics of the system
* Trajectory Planning of the mobile base and the manipulator
* Inverse Kinematics of the manipulator
* Finite State Machine for the pick and place task
* Trajectory Control using PID

### _System Dynamics_

The first step in writing the system dynamics involved deriving the Denavit-Hartenberg (DH) parameters for the manipulator arm. DH parameters are the four parameters associated with a particular convention for attaching reference frames to the links of a spatial kinematic chain or robot manipulator. This convention is a standard method for selecting frames of reference in robotics applications and is very useful for carrying out kinematics operations on a manipulator. The four parameters of the DH convention are used for the transformation from one frame to another.
These four parameters are:

* d : offset along the previous z axis to the common normal
* \(\theta\) : angle about the previous z axis, from the old x axis to the new x axis
* r : length of the common normal
* \(\alpha\) : angle about the common normal

The four parameters that associate two consecutive frames are shown in Figure 2. In this project we calculated the DH parameters for the Fetch robot, as the frames defined in the URDF file of the Fetch robot were not consistent with the DH convention. Figure 3 shows the frames for our system defined as per the DH convention. The next step involves finding the transformation from the base frame to the end-effector from the DH parameters, which gives us the transformation matrix. The last column of the transformation matrix gives us the forward kinematics of the system. Once we obtain the transformation matrix, we can differentiate it with respect to each of the joint angles to obtain the Jacobian matrix, which relates the end-effector velocity to the joint velocities. We can then compute the pseudoinverse of the Jacobian matrix to obtain the joint velocities if we know the end-effector velocities. The equation relating the joint velocities and the end-effector velocities is equation 1.

Fig. 3: Robot Frames - DH convention

Fig. 2: The four parameters according to the DH convention are shown in red, \(\theta_{i},d_{i},a_{i},\alpha_{i}\)
Therefore we require a fifth order polynomial as given in the equation below, \[q(t)=a_{0}+a_{1}t+a_{2}t^{2}+a_{3}t^{3}+a_{4}t^{4}+a_{5}t^{5} \tag{4}\] where, \(\tau\) is the time of execution of the trajectory and \(a_{0},a_{1},a_{2},a_{3},a_{4},a_{5}\) are the coefficients of the quintic polynomial. The problem here is to find a trajectory that connects an initial to a final configuration while satisfying other specified constraints at the endpoints (e.g., velocity and/or acceleration constraints). Without loss of generality, we will plan the trajectory for a single joint, since the trajectories for the remaining joints will be created independently and in exactly the same way. Thus, we will concern ourselves with the problem of determining q(t), where q(t) is a scalar joint variable. The equations for the initial and final position, velocities and accelerations are given in equations 5, 6, 7, 8, 9 and 10. The velocities and the accelerations are obtained by differentiating the joint positions once and twice respectively with respect to time. \[q_{0}=a_{0}+a_{1}t_{0}+a_{2}t_{0}^{2}+a_{3}t_{0}^{3}+a_{4}t_{0}^{4}+a_{5}t_{0} ^{5} \tag{5}\] \[v_{0}=a_{1}+2a_{2}t_{0}+3a_{3}t_{0}^{2}+4a_{4}t_{0}^{3}+5a_{5}t_{0}^{4} \tag{6}\] \[\alpha_{0}=2a_{2}+6a_{3}t_{0}+12a_{4}t_{0}^{2}+20a_{5}t_{0}^{3} \tag{7}\] \[q_{f}=a_{0}+a_{1}t_{f}+a_{2}t_{f}^{2}+a_{3}t_{f}^{3}+a_{4}t_{f}^{4}+a_{5}t_{f} ^{5} \tag{8}\] \[v_{f}=a_{1}+2a_{2}t_{f}+3a_{3}t_{f}^{2}+4a_{4}t_{f}^{3}+5a_{5}t_{f}^{4} \tag{9}\] \[\alpha_{f}=2a_{2}+6a_{3}t_{f}+12a_{4}t_{f}^{2}+20a_{5}t_{f}^{3} \tag{10}\] This can be represented in matrix form as follows and therefore if we know the time of planning for each joint, we are able to find the the coefficients of the polynomial which can then be substituted in the above equations and the position, velocity and acceleration for each joint can be figured out. \[\begin{bmatrix}q_{0}\\ v_{0}\\ \alpha_{0}\\ q_{f}\\ v_{f}\\ \alpha_{f}\end{bmatrix}=\begin{bmatrix}1&t_{0}&t_{0}^{2}&t_{0}^{3}&t_{0}^{4}&t _{0}^{5}\\ 0&1&t_{0}&3t_{0}^{2}&4t_{0}^{3}&5t_{0}^{4}\\ 0&0&1&6t_{0}&12t_{0}^{2}&20t_{0}^{3}\\ 1&t_{f}&t_{f}^{2}&t_{f}^{3}&t_{f}^{4}&t_{f}^{5}\\ 0&1&t_{f}&3t_{f}^{2}&4t_{f}^{3}&5t_{f}^{4}\\ 0&0&1&6t_{f}&12t_{f}^{2}&20t_{f}^{3}\end{bmatrix} \tag{11}\] Figure 5 shows the high level control used for the case of trajectory planning and control of the manipulator. We get the high level trajectory plan from the quintic polynomial trajectory for each joint and then a PID controller for each joint make the joint follow the trajectory generated by the quintic polynomial. The PID control law is given by equation 12. This torque is given as feedback to the control law as defined in equation 3. Fig. 4: Joint Diagram of the Fetch Robot \[\tau=K_{p}\tilde{q}+K_{v}\tilde{\tilde{q}}+K_{i}\int_{0}^{t}\tilde{q}(\sigma)d\sigma \tag{12}\] where, \(K_{p},K_{v}\) and \(K_{i}\) are the position, velocity and integral gains respectively and \(\tilde{q}\) is the error term. ### _Trajectory Planning of Mobile Base_ The trajectory planning for the mobile base is performed with the A* algorithm. A* is a search algorithm that is widely used in path finding and graph traversal. Peter Hart, Nils Nilsson and Bertram Raphael first described the algorithm in 1968. As A* traverses the graph, it follows a path of the lowest known cost, keeping a sorted priority queue of alternate path segment along the way. 
If, at any point, a segment of the path being traversed has a higher cost than another encountered path segment, it abandons the higher-cost path segment and traverses the lower-cost path segment instead. This process continues until the goal is reached [16]. It utilizes a heuristic to focus the search towards the most promising areas of the search space. A* aims to find an optimal path which may not be feasible given time constraints and the dimensionality of the problem. The priority queue is based on f = g + h, where g is the backward cost and h is the forward cost to goal or called the heuristic. For the A* algorithm to give an optimal solution in the case of the tree search, the heuristic should be admissible (i.e. it should never overestimate the true cost to the goal). It is the user who has to provide the heuristic which is admissible. The path returned by the A* planner is then time-parametrized before being passed onto the controllers which takes the path returned from the planner and the actual path followed by the robot. The error term for the controller in this case is the euclidean distance between the time-parametrized path from the A* planner and the actual path followed by the mobile base. ### _Inverse Kinematics_ We make use of the Trac_ik package available in the ROS framework to perform Inverse Kinematics of the manipulator arm. Trac_ik is an enhancement of the KDL's (Kinematics and Dynamics Library) pseudoinverse Jacobian solver which improves the performance drastically. Some of the issues with the KDL slower which are addressed, the KDL Inverse Kinematics takes as input a maximum number of iterations to try which mainly depends on the time to compute the Jacobian. Thus, there is no need to give the user the criteria to choose the number of iterations which can be decided by our solver where the user is only prompted to provide the maximum time allowed for solving. This is functionally equivalent to running KDL with a number of iterations, but the maximum number of iterations is dynamically determined based on the user-desired solve time. The Inverse Jacobian IK implementations can get stuck in local minima during the iteration process. This is valid in cases where joints have physical limits on their range. In the KDL implementation, there is nothing to detect local minima or mitigate this scenario. In this solver, the local minima are detected and mitigated by changing the next seed. Consequently, the performance of the KDL IK solver can be significantly improved by simply using random seeds for q to improve the performance of the iterative algorithm when local minima are detected. But this method does not handle the constraints of joint limits very well. One way to avoid these failures is to solve IK using methods that better handle constraints like joint limits. IK as a nonlinear optimization problem can be solved locally using sequential quadratic programming (SQP). SQP is an iterative algorithm for nonlinear optimization, which can be applied to problems with objective functions and constraints that are twice continuously differentiable. SQP minimizes the overall amount of joint movement and only considers Cartesian error as a constraint. 
The equation for the SQP problem in our case is given in equation 13, \[\begin{split}\operatorname*{arg\,min}_{q\in R^{n}}& (q_{seed}-q)^{T}\;(q_{seed}-q)\\ s.t.& f_{i}(q)\leq b_{i},i=1,...,m\end{split} \tag{13}\] where \(q_{seed}\) is the n-dimensional seed value of the joints, and the inequality constraints \(f_{i}(q)\) are the joint limits, the Euclidean distance error, and the angular distance error. SQP method uses dual quaternion error as a combined measure of distance error and angular error in Cartesian space. Generating the dual quaternion error takes non-trivial computational time compared to simpler metrics such as the sum of squares metric. The SQP-SS (Sum of Squares)used the sum of squares metric when the sum of the squares of the error are considered. The Sum of Squares equation is given in equation 14, \[\phi_{SS}=p_{err}*p_{err}^{T} \tag{14}\] Therefore, for a single IK solver request Trac_ik spawns two solvers, one running SQP-SS and the other running KDL. Once either finishes with a solution, both threads are immediately stopped, and the resulting solution is returned. This is the same approach that we are following in our project and the results for the IK solver are discussed in a later section. Fig. 5: Block Diagram of trajectory planner ### _State Machine_ A finite state machine (FSM) is a mathematical model of computation. It is an abstract machine that can be in exactly one of a finite number of states at any given time. The FSM can change from one state to another in response to some external inputs. the change from one state to another is called a transition. A FSM is defined by a list of its states, its initial state, and the conditions for each transition. We have defined a state machine for the pick and place task using the Fetch robot and it can be defined in Figure 6. First, we check the robot status which gives us the information of the location of the base and the location of the end-effector. Various states call different parts of our system, thus making the execution of the overall task possible. The transition for every state is provided by a feedback mechanism as defined in the ROS framework. ### _Implementation Platform_ The platform that we use to test our system is the Fetch robot. The Fetch kinematics are defined by the robot (unified robot description format) model which specifies the attributes of the joints, links, and frames of the robot. A link element in the URDF describes a rigid body with inertia, visual features, and coordinate frames. A joint element in the URDF defines the kinematics, dynamics, safety limits, and type. We entered these information based on the DH parameters and the system dynamics derived earlier. Then we define the direction of the coordinate frames for the Fetch robot. The coordinate frames for all links in the Fetch and Freight are defined with positive z-axis up, positive x-axis forward, and positive y-axis to the robot-left when Fetch is in the home pose. Joint position and velocity limits are enforced for all the joints in the Fetch robot. This acts the constraints during trajectory planning and finding inverse kinematic solutions. We make use of various sensors available on the Fetch robot to perform the trajectory control. The sensors which are useful in our project are described below. The Fetch robot has the following sensors which is of use in our system, a SICK TIM571 scanning range finder, a 6-axis inertial measurement unit (IMU), Primesense short-range RGBD sensor and a Gripper sensor. 
The laser has a range of 25m, 220\({}^{\circ}\) field of view, 15Hz update rate and angular resolution of 1/3\({}^{\circ}\). The laser publishes both distance and RSSI (Received Signal Strength Indication) to the base_scan topic, The gyroscope within the IMU is capable of measuring +/-2000 degrees per second, while the accelerometers are capable of measuring +/-2g. In addition to the position and effort feedback of the gripper joint, the gripper incorporates a 6-axis inertial measurement unit (IMU). We performed the entire task in ROS/Gazebo with the Fetch robot in a simulation environment. The results obtained after performing our experiments are explained in Section IV. ## IV Results In this project, the first task of deriving the system dynamics began with formulating the DH parameters for the manipulator of fetch robot as discussed in the methodology. The DH parameters derived for each of the consecutive links are shown in figure 7. In figure 7 'D' stand for dummy frame. The Derived DH parameters were validated by performing forward kinematics in MATLAB. The task of creating the simulation world with Fetch robot and the environment was done in gazebo. The world environment was formed by placing models of the shelf and object in gazebo world. We created a world with the shelf and the object and also spawned the fetch robot to this world by importing it's URDF file in the simulation environment. Figure 3 & 4 Fig. 6: State Machine for the pick and place task Fig. 7: DH Parameters shows screen shot of the gazebo world with the Fetch robot, Shelf and the object. After this step, when the base of the robot moves we do inverse kinematics with the method explained in the previous section. We used the open source package Trac_ik solver in the ROS framework. Trac_ik performed very well compared to other classical solvers such as Orocos KDL solver etc in our case. Trac_ik took an average time of 0.44ms to find an Inverse Kinematic solution when run for 100 trials against the Orocos KDL solver which took 0.72 ms for the same 100 trials. The success rate of the Trac_ik solver was also higher when compared to the Orocos KDL solver. Next, we did trajectory planning for both the manipulator in the fetch robot as well as the mobile base. Trajectory planning for the manipulator was done in joint space considering the joint limit constraints.This trajectory planning was done in joint space so that there is no need to perform inverse kinematics operation at each instant of planning which can be costly. This saves considerable amount of time during planning. Moreover, inverse kinematics can give multiple solutions and obtaining the correct set of joint angles will be time consuming. Planning in joint space rather than Cartesian space saved us time. The position, velocity and acceleration profiles obtained for one of the joints is shown in figures 10 and 11. Similar profiles were obtained for all the joints. We performed quintic polynomial planning because when we don't consider the acceleration of the joint, it leads to a lot of jerkiness in the system. The next part was the trajectory planning for the mobile base of the Fetch robot. The trajectory planning for the mobile base was done using A* path planning algorithm as discussed in methodology. In order to visualize the working of the A* algorithm we tested our algorithm first on a PR2 robot in a simulation environment known as OpenRAVE. The A* algorithm was written as plugin in the global planner library available in ROS. 
The path was generated by the A* planning algorithm for four connected space as well as eight connected space. We used the manhattan and euclidean heuristic for both of these scenarios and tested the performance in OpenRAVE. In case of eight connected we observed euclidean heuristic to be superior. Number of nodes explored in case of Manhattan heuristic was 3462. Number of nodes explored in case of Euclidean heuristic was 4555. Cost in case of Manhattan was 126.70 and the cost in case of Euclidean was 104.83 for the eight connected space. Therefore, we used eight connected Euclidean heuristic and performed the planning. The path generated by A* planning algorithm for an eight connected space with euclidean distance heuristic is shown in figure 9. Now, in order to move the base of the fetch robot, we published the parameterized path found by the A* algorithm on a topic to which the base subscribed. Before doing this, Fig. 11: Velocity and Acceleration profile for each joint after Trajectory Planning Fig. 8: Simulation World Fig. 10: Position for each joint after Trajectory planning Fig. 9: Visualization of the path found by A* for an 8-connected Euclidean heuristic is shown in black we found the transformation between the base frame and the odometry frame. This was found using the tf package available in ROS where our tf class had a subscriber which waits for the transform between the odometry and the base frame and records it. Then the geometry_msgs of type Twist was published on the cmd_vel topic to move the robot base. This method was used to move the robot base to the desired location according to the The PID controller is implemented for both the trajectory planning of the manipulator and the mobile base. After getting the trajectory for each joint from the planner, we use a PID controller to ensure that each joint follows the trajectory generated by the planners. In order to make the system stable we tune the PID controller by modifying the controller gains. The trajectory which is generated for the mobile base is time parameterized and given to the PID controller as input. The PID controller minimizes the euclidean distance between the time-parameterized path from the A* planner and the actual path followed by the mobile base. The joint trajectories, Velocity profile and joint efforts of Shoulder Pan joint, Shoulder Lift joint, Upperarm Roll joint, Elbow Flex joint and Forearm Roll joint are shown in figures 12, 13 and 14 respectively. After the planning step, we implemented a state machine in C++ for the pick and place task on the fetch robot. Different states are defined in the state machine and it changes from one state to another when it gets an transition request in the form of a ROS message after the previous state has been completed. The State Machine was used to automate the entire process of the pick and place task. All the tasks which was mentioned above is included in a ROS node named as fetch_traj_node. When wrote this finite state machine as a ROS node using the rosp client library and when we run this node, all the tasks such as trajectory planning of manipulator, trajectory planning of mobile base, inverse kinematics etc. are executed sequentially according to the flowchart explained earlier. Final task involved controlling the gripping action i.e like controlling the amount by which the gripper closes to grab an object on the shelf. We used the gripper_controller or gripper_action interfaces to send gripper commands using the control_msgs topic. 
The gripper command takes in position and effort as parameters. Generally, the gripper is commanded to a fully closed or fully opened position, so effort is used to limit the maximum effort. As the gripper never fully reaches the closed position, the grasp strength will be determined by the maximum effort. So, in order to pick up objects we need to control the effort generated by the gripper in the fetch robot. The tf tree and rqt graph showing the transformation from Fig. 14: Joint efforts for each joint Fig. 12: Joint Trajectories of each joint Fig. 13: Velocity Profile of each joint Fig. 15: tf Tree one frame to another as well as the list of active nodes and their subscribers and publisher topic are shown in figures 15 and 16 respectively. ## V Conclusion and Future Work In this project, we have implemented Trajectory Control for the Fetch robot which is a differential drive mobile manipulator robot in a simulated environment. We have shown results and implementation of the dynamics of the fetch robot, Trajectory planning for the manipulator arm using a quintic spline and mobile base trajectory planning has been implemented with the A* planning algorithm and the results have been shown in Section IV. The state machine for the task of indoor warehouse object manipulator has been carried out and results of it's implementation has also been discussed. Also, the Trac_ik package has been used for performing the inverse kinematic calculations for the manipulator of the robot. Overall, the project goals stated in Section I have been implemented and the corresponding methodology shown in Section III and the results of our experiments carried out discussed in Section IV. In the future, we would develop a global planner which would just get the location of an object in an environment and then autonomously determine the amount at which the base should be moved and then the end-effector. Right now, we don't have this feature as we plan for the mobile base and the manipulator separately. Also, we would like to make our trajectory control package open source since there is no robust package that achieves this efficiently. But, more work is needed to make it generic for all robots. We would implement our system on an actual hardware platform in an indoor warehouse environment to test it's performance. We have preformed the picking of the object but we would want to incorporate computer vision techniques to make the state machine more powerful. One more extension of our work would be to make all the different aspects modular so that it can be ported to any platform easily. ## VI Acknowledgements We would like to thank Professor Eugene Eberbach of the department of Robotics Engineering at Worcester Polytechnic Institute for his patience, guidance, advice and support over the term of the project, and Junius Santoso for providing us useful inputs facilitating the successful completion of our project.
2309.05918
Stochastic LLMs do not Understand Language: Towards Symbolic, Explainable and Ontologically Based LLMs
In our opinion the exuberance surrounding the relative success of data-driven large language models (LLMs) is slightly misguided, and for several reasons: (i) LLMs cannot be relied upon for factual information, since for LLMs all ingested text (factual or non-factual) was created equal; (ii) due to their subsymbolic nature, whatever 'knowledge' these models acquire about language will always be buried in billions of microfeatures (weights), none of which is meaningful on its own; and (iii) LLMs will often fail to make the correct inferences in several linguistic contexts (e.g., nominal compounds, copredication, quantifier scope ambiguities, intensional contexts). Since we believe the relative success of data-driven large language models (LLMs) is not a reflection on the symbolic vs. subsymbolic debate but a reflection on applying the successful strategy of a bottom-up reverse engineering of language at scale, we suggest in this paper applying the effective bottom-up strategy in a symbolic setting, resulting in symbolic, explainable, and ontologically grounded language models.
Walid S. Saba
2023-09-12T02:14:05Z
http://arxiv.org/abs/2309.05918v3
# Stochastic LLMs _do not_ Understand Language: Towards Symbolic, Explainable and Ontologically Based LLMs

###### Abstract

In our opinion the exuberance surrounding the relative success of data-driven large language models (LLMs) is slightly misguided, and for several reasons: (i) LLMs cannot be relied upon for factual information, since for LLMs all ingested text (factual or non-factual) was created equal; (ii) due to their subsymbolic nature, whatever 'knowledge' these models acquire about language will always be buried in billions of microfeatures (weights), none of which is meaningful on its own; and (iii) LLMs will often fail to make the correct inferences in several linguistic contexts (e.g., nominal compounds, copredication, quantifier scope ambiguities, intensional contexts). Since we believe the relative success of data-driven large language models (LLMs) is not a reflection on the symbolic vs. subsymbolic debate but a reflection on applying the successful strategy of a bottom-up reverse engineering of language at scale, we suggest in this paper applying the effective bottom-up strategy in a symbolic setting, resulting in symbolic, explainable, and ontologically grounded language models.

Keywords: Bottom-up reverse engineering of language, Symbolic large language models, Language Agnostic Ontology.

## 1 Introduction

The recent successes of so-called large language models (LLMs) have taken the world of artificial intelligence (AI) and natural language processing (NLP) by storm. Indeed, with the release of GPT-4 it has become apparent that large language models (LLMs), which are essentially a massive experiment in a bottom-up reverse engineering of language, have crossed some threshold of scale at which point there was an obvious qualitative improvement in their capabilities\({}^{1}\). In our opinion, however, the spectacular exuberance towards these advances is slightly misguided. For one thing, these large 'language models' are not exactly models of language but are statistical models of regularities found in language. In fact, and due to their subsymbolic nature, whatever 'knowledge' these models acquire about how language works will always be buried in billions of microfeatures (weights), none of which is meaningful on its own. This is also the reason why _explainability_ can never be attained in such models since explainability
2306.17541
Rigorous Function Calculi in Ariadne
Almost all problems in applied mathematics, including the analysis of dynamical systems, deal with spaces of real-valued functions on Euclidean domains in their formulation and solution. In this paper, we describe the tool Ariadne, which provides a rigorous calculus for working with Euclidean functions. We first introduce the Ariadne framework, which is based on a clean separation of objects as providing exact, effective, validated and approximate information. We then discuss the function calculus as implemented in Ariadne, including polynomial function models, which are the fundamental class for concrete computations. We then consider the solution of some core problems of functional analysis, namely the solution of algebraic equations and differential equations, and briefly discuss their use for the analysis of hybrid systems. We will give examples of C++ and Python code for performing the various calculations. Finally, we will discuss progress on extensions, including improvements to the function calculus and extensions to more complicated classes of system.
Pieter Collins, Luca Geretti, Sanja Zivanovic Gonzalez, Davide Bresolin, Tiziano Villa
2023-06-30T10:53:27Z
http://arxiv.org/abs/2306.17541v1
# Rigorous Function Calculi in Ariadne

###### Abstract

Almost all problems in applied mathematics, including the analysis of dynamical systems, deal with spaces of real-valued functions on Euclidean domains in their formulation and solution. In this paper, we describe the tool Ariadne, which provides a rigorous calculus for working with Euclidean functions. We first introduce the Ariadne framework, which is based on a clean separation of objects as providing exact, effective, validated and approximate information. We then discuss the function calculus as implemented in Ariadne, including polynomial function models, which are the fundamental class for concrete computations. We then consider the solution of some core problems of functional analysis, namely the solution of algebraic equations and differential equations, and briefly discuss their use for the analysis of hybrid systems. We will give examples of C++ and Python code for performing the various calculations. Finally, we will discuss progress on extensions, including improvements to the function calculus and extensions to more complicated classes of system.

**Keywords:** Rigorous numerics; Computable analysis; Function calculus; Mathematical software.

###### Contents

* 1 Introduction
* 2 Overview of Rigorous Numerical Methods
  * 2.1 The inclusion property
  * 2.2 Interval arithmetic
  * 2.3 Automatic differentiation
  * 2.4 Taylor polynomial models
  * 2.5 Algebraic equations
  * 2.6 Differential equations
  * 2.7 Constraint propagation
  * 2.8 Constrained optimisation
* 3 Computable Analysis
* 4 Conceptual foundations
  * 4.1 Types
  * 4.2 Information
  * 4.3 Properties
* 5 Core Functionality: Logic, Numbers, Algebra
  * 5.1 Logic
  * 5.2 Numbers
  * 5.3 Linear algebra
  * 5.4 Abstract algebra
* 6 Function Calculus
  * 6.1 Generic functions
  * 6.2 Function patches and models
  * 6.3 Polynomial models
  * 6.4 Affine models
* 7 Solving Algebraic and Differential Equations
  * 7.1 Algebraic equations
  * 7.2 Differential equations
  * 7.3 Hybrid systems
* 8 Examples
  * 8.1 Fixed points
  * 8.2 Differential equation
* 9 Extensions
  * 9.1 Alternative bases
  * 9.2 \(C^{1}\) function calculus
  * 9.3 Multivalued functions
  * 9.4 Measurable functions
  * 9.5 Weierstrass approximation
* 10 Conclusions

## 1 Introduction

Many problems in applied mathematics are formulated in terms of functions. Examples include trajectories and flow tubes of ordinary differential equations, crossing times for hybrid systems, feedback for control systems, and state spaces and solutions of partial differential equations. To be able to solve such problems reliably using computational software, we need to be able to work with functions in a _natural_, _rigorous_, and _efficient_ way! Rigour is especially important in mathematical proofs, verification of safety-critical systems, and long chains of reasoning.

The Ariadne software package (Benvenuti et al., 2008) provides analysis and verification tools for dynamic systems with continuous state spaces. This includes nonlinear hybrid systems in which continuous evolution is interspersed with discrete events triggered by conditions on the continuous state. This is a very general and complex problem: it requires functionality for many different operations, including the solution of algebraic and differential equations and of constrained feasibility problems, and it requires general-purpose software.
The computational kernel of Ariadne was subsequently developed, based on rigorous numerical methods for working with real numbers, functions and sets in Euclidean space, using techniques such as interval arithmetic, automatic differentiation and polynomial function models as a foundation. The purpose of this paper is to describe the low-level operations on sets and functions that have been used to build up the high-level algorithms for the analysis of hybrid systems in Ariadne.

The computational kernel of Ariadne is written in C++, with a Python interface for easy scripting. It provides complete support for the operations of rigorous numerics described in Section 2. The main functionality is based around standard abstract _interfaces_ which can be implemented in various ways. Interfaces for various _solvers_ are also defined, with each solver being required to implement a closely-related set of operations. In order to make the package as self-contained as possible, and because existing software libraries did not meet the functionality requirements, it was decided to implement all necessary operations within the package itself, including the core operations of interval arithmetic, linear algebra, automatic differentiation, and polynomial arithmetic.

Recent extensions of Ariadne's function calculus have taken place within the E.U. Horizon Europe project "Computing with Infinite Data". In this project, a number of packages with capabilities for function calculus are being developed:

* Ariadne (Collins, Geretti, Villa et al.) for verification of hybrid systems (C++ & Python). [https://www.ariadne-cps.org/](https://www.ariadne-cps.org/)
* iRRAM (Müller, Brauße) for real number arithmetic (C++). [http://irram.uni-trier.de/](http://irram.uni-trier.de/)
* AERN (Konecny et al.) for effective real computation (Haskell). [http://michalkonecny.github.io/aerm/site/](http://michalkonecny.github.io/aerm/site/)
* ERC (Ziegler, Park et al.; Brauße et al. (2018)): a language for exact real computation.

Other tools for rigorous numerics with a function calculus include:

* COSY Infinity (Makino and Berz, 2006), the seminal package in which Taylor models were developed (Makino and Berz, 2003).
* CAPD Library (Kapela et al., 2021), a comprehensive library for analysis of dynamical systems.
* Flow* (Chen et al., 2013), a tool for hybrid system evolution which also has Taylor models with interval coefficients.
* JuliaReach (Bogomolov et al., 2019), a tool for set-based reachability analysis.
* DynIBEX (Alexandre dit Sandretto and Chapoutot, 2016), which has a custom class for trajectories.
* CORA (Althoff et al., 2018), which has Taylor models, but whose operations use floating-point approximations.

Other tools have rigorous numerical functionality for differential equations, but lack a full function calculus:

* AWA (Lohner, 1994)
* VNode (Nedialkov, 2006)

Finally, we mention some important foundational computational approaches:

* Ellipsoidal calculus (Kurzhanski and Varaiya, 2002)
* Set-valued viability (Cardaliaguet et al., 1999)
* Taylor-ellipsoid models (Houska et al., 2013)

Since the main aim of this paper is to introduce the framework and capabilities of Ariadne, we do not give a full comparison with these other tools here. For a comparison of the performance of Ariadne against other tools on benchmark problems, see Geretti et al. (2020, 2021, 2022).

The paper is organised as follows.
In Section 2, we describe relevant existing rigorous numerical methods, and in Section 3 we give an overview of the foundational theory of computable analysis. In Section 4 we give the conceptual foundations of Ariadne, and in Section 5 the core logical, numeric and algebraic operations. In Section 6 we describe the function calculus as implemented in Ariadne, and in Section 7 we describe how the function calculus is used to solve differential and algebraic equations. Some examples of the use of Ariadne via the Python interface are given in Section 8, extensions are discussed in Section 9, and concluding remarks are given in Section 10.

## 2 Overview of Rigorous Numerical Methods

The real numbers \(\mathbb{R}\) and the set of continuous functions \(\mathbb{R}^{n}\to\mathbb{R}^{m}\) have continuum cardinality, so there is no possible way of describing all elements exactly using a finite amount of data. The traditional approaches to handling uncountable sets computationally are either to restrict to a countable subset and use symbolic manipulation, or to work with approximations in a finite subset. For the purpose of computer-assisted mathematical proofs, or verification of system models, neither approach is feasible: in the former, the computations take far too long, and in the latter, we lose rigour. Instead, we construct set-based approximations which can be expressed with a finite amount of data.

### The inclusion property

The main idea of rigorous numerics is to represent an element \(x\) of an uncountable set \(X\) by a set \(\hat{x}\) containing \(x\). The set \(\hat{x}\) is taken from a countable set \(\widehat{X}\) of subsets of \(X\), and has a concrete description given by a finite amount of data. An implementation of an operation \(\operatorname{op}:X_{1}\times\cdots\times X_{n}\to Y\) is a function \(\widehat{\operatorname{op}}:\widehat{X}_{1}\times\cdots\times\widehat{X}_{n}\to\widehat{Y}\) such that the following _inclusion property_ holds:
\[\hat{x}_{i}\ni x_{i}\text{ for }i=1,\ldots,n\ \implies\ \widehat{\operatorname{op}}(\hat{x}_{1},\ldots,\hat{x}_{n})\ni\operatorname{op}(x_{1},\ldots,x_{n}).\]
Throughout this paper, we will use \(\hat{x}\) to denote a set containing an exact value \(x\), and \(\check{x}\) to denote an approximation to \(x\). We frequently take \(\hat{x}\) to be compact. The main advantage is that compactness is preserved under forward images. A secondary advantage is that compact sets include many important singleton constants, which can thus be handled exactly.

The two main ways of defining validated sets of objects are to use _metric_ or _order_ information. If \((X,d)\) is a metric space, let \(\Xi\) be a countable dense subset of \(X\). Then a pair \((\xi,\epsilon)\) for \(\xi\in\Xi\) and \(\epsilon\in\mathbb{Q}^{+}\) defines a subset \(\{x\in X\mid d(\xi,x)\leq\epsilon\}\), the _ball_ \(B(\xi,\epsilon)\). Note that we usually take closed balls rather than open balls as is more common in topology, since these are (typically) compact for locally-compact spaces. If \((X,\leq)\) is a partially-ordered space, then a pair \((\underline{\xi},\overline{\xi})\) defines a subset \(\{x\in X\mid\underline{\xi}\leq x\leq\overline{\xi}\}\), the _interval_ \([\underline{\xi}:\overline{\xi}]\). Again, closed intervals are typically compact, supporting forward images.

### Interval arithmetic

The most well-known example of a rigorous numerical calculus is _interval arithmetic_ (Moore, 1966); see also Jaulin et al. (2001); Moore et al. (2009).
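As a minimal, self-contained illustration of the inclusion property (a plain-Python sketch with exact rational endpoints, not Ariadne's interval classes), the following code implements interval addition and multiplication and checks that the computed interval contains the exact result; with floating-point endpoints one would additionally round the endpoints outwards:

```
from fractions import Fraction

def iadd(x, y):
    # [xl:xu] + [yl:yu] = [xl+yl : xu+yu]
    return (x[0] + y[0], x[1] + y[1])

def imul(x, y):
    # The product is bounded by the extreme products of the endpoints.
    ps = [a * b for a in x for b in y]
    return (min(ps), max(ps))

x, y = Fraction(1, 3), Fraction(2, 7)
xhat = (Fraction(0), Fraction(1, 2))     # an interval containing x
yhat = (Fraction(1, 4), Fraction(1, 2))  # an interval containing y

zhat = imul(iadd(xhat, yhat), xhat)      # interval version of (x+y)*x
z = (x + y) * x                          # exact operation
assert zhat[0] <= z <= zhat[1]           # the inclusion property holds
```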
We can use interval arithmetic to rigorously evaluate a function \(f:\mathbb{R}^{n}\to\mathbb{R}\) at a point \(x\in\mathbb{R}^{n}\) or over a box of values \(\hat{x}\subset\mathbb{R}^{n}\). A real number \(x\in\mathbb{R}\) is represented by an interval \([\underline{x}:\overline{x}]\ni x\). Typically, the endpoints \(\underline{x},\overline{x}\) are taken to lie in a set \(\mathbb{F}\) of floating-point numbers of a given precision; we denote by \(\mathbb{I}_{\mathbb{F}}\) the set of all such intervals. Alternatively, we take intervals defined by a centre and radius, so \(x\in\check{x}\pm e_{x}\). An _interval version_ \(\lfloor\star\rceil\) of a (binary) operator \(\star\) must satisfy the inclusion property
\[x\in[\underline{x}:\overline{x}]\ \wedge\ y\in[\underline{y}:\overline{y}]\ \implies\ x\star y\in[\underline{x}:\overline{x}]\;\lfloor\star\rceil\;[\underline{y}:\overline{y}].\]

### Algebraic equations

To solve the algebraic equation \(f(y)=0\), a standard technique is to use the _interval Newton_ operator (Moore, 1966):
\[N(f,\hat{y},\tilde{y})=\tilde{y}-[\mathrm{D}f(\hat{y})]^{-1}f(\tilde{y}),\]
where \(\hat{y}\) is a box and \(\tilde{y}\in\hat{y}\). Any solution to \(f(y)=0\) in \(\hat{y}\) also lies in \(N(f,\hat{y},\tilde{y})\), and further, if \(N(f,\hat{y},\tilde{y})\subset\hat{y}\), then the equation \(f(y)=0\) has a unique solution in \(\hat{y}\). The solution is found by iteratively applying the operator until inclusion is satisfied and the domain is sufficiently small.

### Differential equations

There are many methods in the literature for solving differential equations (Lohner, 1987; Berz and Makino, 1998; Nedialkov et al., 1999; Zgliczynski, 2002). However, most use some variation on the following procedure to compute the flow \(\phi\) of \(\dot{x}=f(x)\) starting in an initial set \(X_{0}\) at time \(0\) for a time step \(h\). The flow is computable as a fixed-point of the Picard operator
\[\phi(x,t)=x+\int_{0}^{t}f(\phi(x,\tau))\,d\tau.\]
First a _bound_ \(B\) is computed such that \(\phi(X_{0},[0,h])\subset B\). A sufficient condition for \(B\) to be such a bound is that \(X_{0}+[0,h]f(B)\subset B\). Either this operator is applied exactly, or the Taylor coefficients of \(\phi\) are found using automatic differentiation, both at the point \((x_{0},0)\) and over the set \((B,[0,h])\). Finally, the results are combined to give a set \(X_{1}\) containing \(\phi(X_{0},h)\). As an additional step, some packages have a _reconditioning_ step to control the errors.

### Constraint propagation

Constraint propagation (Kearfott, 1996; Jaulin et al., 2001) is a collection of methods for determining whether sets of the form \(\{x\in D\mid g(x)\in C\}\) are empty, where \(C\) and \(D\) are coordinate-aligned boxes and \(D\) is bounded. The result is given as a collection of boxes \(X=\bigcup_{i=1}^{k}X_{i}\) such that \(D\cap g^{-1}(C)\subset X\).
The main technique is to use _contractors_ to reduce the size of a box \(X_{i}\) containing points of \(g^{-1}(C)\), starting with \(X=D\). Given a symbolic representation of \(g\) as a directed acyclic graph whose nodes are operators and leaves are constants and variables, the _revising hull consistency_ algorithm (Benhamou et al., 1999) uses interval arithmetic to propagate constraints; other contractors use monotonicity properties of \(g\). If no reduction in the solution set \(X\) can be made, then it is split into two pieces, each of which is considered separately.

### Constrained optimisation

Constrained optimisation aims to solve the problem of minimising a function \(f\) over a bounded domain \(D\), subject to constraints of the form \(g(x)\in C\). A particularly powerful approach (Hansen and Walster, 2004) is to introduce Lagrange multipliers for the constraints and solve a related optimisation problem. In order to validate a candidate solution, methods based on applying the interval Newton test to the Karush-Kuhn-Tucker conditions can be used.

## 3 Computable Analysis

Computable analysis (Weihrauch, 2000) provides a theoretical foundation for rigorous numerical methods. Essentially, an operation is _computable_ if the result can be computed to arbitrary accuracy by rigorous numerical methods. Objects, such as numbers, functions and sets, are described by infinite _streams_ of data, in such a way that useful information is provided by a finite part of the stream. Representations of mathematical types are strongly related to topologies on the represented sets, and well-behaved representations of a topological space are known as _admissible quotient representations_. Functions and operators on represented sets are described by Turing machines (finite programs) acting on data streams. Such computations need not terminate, and may use an infinite amount of memory (though memory use after a finite time must be finite).

One of the main properties of computable analysis is that comparison of two real numbers is undecidable: if \(x,y\in\mathbb{R}\) and \(x\neq y\) then we can indeed prove either \(x<y\) or \(x>y\) in finite time, but if \(x=y\), then we will never know this. Even for the case of symbolic numbers defined using arithmetic, exponential and trigonometric functions, it is unknown whether equality is decidable. To ensure all decision computations are finite, we allow such comparisons to return a value "indeterminate", which contains no information.

Functions are _defined_ by evaluation. Thus a real function \(f:\mathbb{R}\to\mathbb{R}\) is an object which, given (a description of) a real number \(x\), yields (a description of) the real number \(f(x)\).
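To make the idea of computation on data streams concrete, the following plain-Python sketch (not Ariadne's implementation) represents the effective real \(e\) by a procedure returning rational bounds of any requested accuracy; a real function is then evaluated by applying an interval extension to such bounds:

```
from fractions import Fraction

def e_bounds(n):
    # Rational bounds on e of width at most 2^-n, from e = sum 1/k!;
    # the tail sum_{j>k} 1/j! is bounded above by 2/(k+1)!.
    s, term, k = Fraction(1), Fraction(1), 0
    while 2 * term / (k + 1) > Fraction(1, 2**n):
        k += 1
        term /= k          # term = 1/k!
        s += term          # s = sum_{j<=k} 1/j!
    return (s, s + 2 * term / (k + 1))

lo, hi = e_bounds(30)
assert hi - lo <= Fraction(1, 2**30)   # arbitrary-accuracy refinement
```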
Real functions can be represented by _interval extensions_: given a class \(\mathbb{I}_{\mathbb{F}}\) of real number bounds, an interval extension of \(f:\mathbb{R}\to\mathbb{R}\) is a function \(\hat{f}:\mathbb{I}_{\mathbb{F}}\to\mathbb{I}_{\mathbb{F}}\) such that
\[\hat{f}(\hat{x})\supset\{f(x)\mid x\in\hat{x}\}.\]
To define a function, the interval extension must allow arbitrarily accurate evaluation:
\[\bigcap_{n}\hat{x}_{n}=\{x\}\implies\bigcap_{n}\hat{f}(\hat{x}_{n})=\{f(x)\}.\]

The most important properties of computable analysis are:

**Cartesian closed category**: Product \(\mathcal{X}\times\mathcal{Y}\) and function \(\mathcal{X}\to\mathcal{Y}\) types exist, with computable projection functions \(\pi_{i}:\mathcal{X}_{1}\times\mathcal{X}_{2}\to\mathcal{X}_{i}\), evaluation \(\varepsilon:(\mathcal{X}\to\mathcal{Y})\times\mathcal{X}\to\mathcal{Y}\), and a computable isomorphism between \((\mathcal{X}\times\mathcal{Y})\to\mathcal{Z}\) and \(\mathcal{X}\to(\mathcal{Y}\to\mathcal{Z})\).

**Quotients of countably-based spaces**: A topological space \(X\) has an admissible quotient representation if, and only if, it is the quotient of a countably-based space. Any (effectively) separable complete metric space has an admissible quotient representation given by Cauchy sequences \((x_{n})\) which are fast-converging: \(d(x_{m},x_{n})\leq 2^{-\min(m,n)}\).

**Canonical types**: There are unique canonical types for \(\mathbb{B}\) (Booleans), \(\mathbb{S}\) (Sierpinskians), \(\mathbb{K}\) (Kleeneans), \(\mathbb{N}\), \(\mathbb{Z}\), \(\mathbb{Q}\) and \(\mathbb{R}\). The _Kleenean_ type \(\mathbb{K}\) has values \(\{\top,\bot,\uparrow\}\), where \(\uparrow\) ("indeterminate") indicates that a proposition is undecidable. It has Kleene semantics, with implication \(\uparrow\to\uparrow=\uparrow\). Note that \(\uparrow\) is not considered an "error", since any nontrivial predicate on continuous data has undecidable instances. \(\mathbb{S}\) is the subtype with values \(\{\top,\uparrow\}\). The real type \(\mathbb{R}\) is the unique type of the set of real numbers supporting arithmetic, comparisons, and limits of strongly-convergent Cauchy sequences. Comparisons yield values in \(\mathbb{K}\), with \((x<y)=(x\leq y)=\uparrow\) if \(x=y\).

**Derived types**: The Cartesian-closed category property allows the easy construction of types derived from other types. For example, open subsets of a space \(X\) are described by the type \(\mathcal{X}\to\mathbb{S}\), since a set \(U\) is open if, and only if, its indicator function \(\chi_{U}:X\to\mathbb{S}\) is continuous. Similarly, compact sets are described by \((\mathcal{X}\to\mathbb{S})\to\mathbb{S}\) using the predicate \(C(U)\iff C\subset U\).

For a more complete introduction to computable analysis, see (Weihrauch, 2000; Collins, 2020b).

## 4 Conceptual foundations

In Ariadne, we identify three orthogonal concepts in the abstract description of objects, namely _type_, _information_, and a distinction between _generic_ objects and _concrete_ objects defined by _properties_ affecting the available precision.

### Types

A _type_ corresponds to a mathematical type of computable analysis. For example, the type Real corresponds to the mathematical real numbers.
The type defines the permitted operations; for example, we can add two real numbers, \(+:\mathbb{R}\times\mathbb{R}\to\mathbb{R}\), compare real numbers, \(\lesssim:\mathbb{R}\times\mathbb{R}\to\mathbb{K}\), where \(\mathbb{K}\) is the Kleenean type representing the result of quasidecidable propositions, and convert from a rational, \(\mathbb{Q}\hookrightarrow\mathbb{R}\). A related type is the type UpperReal, denoted \(\mathbb{R}_{>}\). While a real number can be bounded arbitrarily accurately, an upper real only contains enough information to compute upper bounds. Hence the predicate \(<:\mathbb{R}_{>}\times\mathbb{Q}\to\mathbb{S}\), where \(\mathbb{S}\) is the _Sierpinskian_ type, is computable, since we can verify \(x<q\) for \(x\in\mathbb{R}_{>}\) and \(q\in\mathbb{Q}\), but we cannot verify \(x>q\). Upper reals can be added, \(+:\mathbb{R}_{>}\times\mathbb{R}_{>}\to\mathbb{R}_{>}\), and positive upper reals multiplied, \(\times:\mathbb{R}_{>}^{+}\times\mathbb{R}_{>}^{+}\to\mathbb{R}_{>}^{+}\). An important usage of the positive upper reals is as the radius of open balls in metric spaces.

### Information

In Ariadne, every object indicates the kind of _information_ that object provides about the value of a quantity.

With _exact_ information, objects are specified with enough information to identify them exactly; further, equality on such objects is decidable. For example, \(2/3\) is an exact rational number, and \(0.625=0.101_{2}\) is an exact dyadic number.

With _symbolic_ information, objects are specified by symbolic formulae with enough information to identify them exactly, but equality is not necessarily decidable. For example, \(2/3\), \(\pi\) and \(\sin(1)\) are all symbolic reals, and \(\exp(1)\) is a symbolic representation of \(e\).

With _effective_ information, objects are specified by an algorithm computing them to arbitrary precision. The distinction between symbolic and effective objects is that for an object to be effective, an algorithm computing it must be given. For example, an algorithm computing \(\exp(x)\) for rational \(x\) is \(\exp(x)=\sum_{n=0}^{N-1}x^{n}/n!\pm 2|x|^{N}/N!\) whenever \(N\geq 2|x|\). However, other implementations of the exponential function are possible, yielding different approximations to \(\exp(x)\).

With _validated_ information, objects are specified with sufficient information to compute all operations to some fixed accuracy. For example, a floating-point interval \([\underline{x}:\overline{x}]\) is a concrete data type for the real numbers in the validated paradigm, representing all real numbers \(x\) with \(\underline{x}\leq x\leq\overline{x}\). The interval \([2.625:2.75]\) is a validated representation of \(e\). The floating-point numbers provide a concrete data type for the upper reals, with \(\overline{x}\) representing all numbers \(x\leq\overline{x}\).

With the _approximate_ paradigm, a single value is given for a quantity, with no guarantees on the exact value of the quantity. For example, 2.75 is a floating-point approximation to \(e\), with relatively poor accuracy.

It should be noted that a conversion is _safe_ if, and only if, the corresponding mathematical types can be converted and the information is weaker. Hence an exact element of \(\mathbb{R}\) can be safely converted to a validated element of \(\mathbb{R}_{>}\), but an exact element of \(\mathbb{R}_{>}\) should not be converted to a validated element of \(\mathbb{R}\), i.e.
an interval (though the conversion could in principle return an interval \([-\infty:\overline{x}]\)). It is sometimes useful to convert an approximate object to an exact object (for example, in preconditioning an iterative validated algorithm); in this case the conversion requires an explicit cast.

The kind of _information_ about an object was previously called the _(computational) paradigm_ used. In Ariadne, the type definition Paradigm<X> describes the paradigm of class X, either ExactTag, EffectiveTag, ValidatedTag or ApproximateTag. Where a class has a template parameter P denoting the information tag, a synonym is provided. For example, EffectiveNumber is a synonym for Number<EffectiveTag>.

### Properties

We also distinguish between _generic_ and _concrete_ representations, so that e.g. FloatMPBounds is a concrete representation of a ValidatedReal. Generic classes such as Real or ValidatedReal can support arbitrary objects of a type with given information. However, generic objects require virtual dispatch of every operation, which is inefficient. For this reason, Ariadne supports concrete objects for actual computations. Every concrete object has an underlying type and information (typically Validated or Approximate), and is further specified by _properties_ giving computational parameters. Any floating-point number has a _precision_ property, with e.g. the FloatMP::precision() method returning a MultiplePrecision (or MP) value. A polynomial function approximation may have a maximum _degree_ property. Any concrete object can be constructed by giving a value of its GenericType and a tuple of its PropertyType.

## 5 Core Functionality: Logic, Numbers, Algebra

In this section, we give an overview of the implementation of the core logical, numerical and algebraic data types of Ariadne.

### Logic

Ariadne's core logical type is the Kleenean type \(\mathbb{K}\), which represents the result of a quasidecidable predicate, notably comparison of two real numbers, which has signature

operator<(Real, Real) -> Kleenean;

The Kleenean type supports negation, conjunction and disjunction, represented by the usual C++ operators !, && and ||. Although a Kleenean represents a value which is either true, false, or indeterminate, to obtain a value in a finite computation, one needs to specify an Effort parameter, obtaining a ValidatedKleenean object using

Kleenean::check(Effort) -> ValidatedKleenean;

A ValidatedKleenean can have one of the three values TRUE, FALSE or INDETERMINATE. To be used in a C++ conditional, one needs to convert to a value of the builtin type bool, which can be done using

definitely(ValidatedKleenean k) -> bool;
possibly(ValidatedKleenean k) -> bool;

where definitely returns false if k has value INDETERMINATE, whereas possibly returns true. Comparing Validated numbers directly gives Validated logical types:

ValidatedKleenean operator<(ValidatedReal, ValidatedReal);
ValidatedKleenean operator<(FloatMPBounds, FloatMPBounds);

Comparing Approximate numbers gives an ApproximateKleenean, with values LIKELY and UNLIKELY, which can be converted to a bool using the probably function. Note that in the case of approximate information, the values TRUE and FALSE are never used, since if \(\tilde{x}\approx x\) and \(\tilde{y}\approx y\), but no error bounds are provided, we cannot deduce \(x>y\) even if \(\tilde{x}>\tilde{y}\); in this case the result LIKELY is used.
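The three-valued semantics can be sketched in a few lines of plain Python (the names below mimic, but are not, Ariadne's classes); comparison of overlapping bounds returns the indeterminate value, on which definitely is false and possibly is true:

```
# Values of a validated Kleenean; None plays the role of INDETERMINATE.
TRUE, FALSE, INDETERMINATE = True, False, None

def definitely(k): return k is True       # INDETERMINATE -> False
def possibly(k):   return k is not False  # INDETERMINATE -> True

def less(x_bounds, y_bounds):
    # Compare two numbers known only through (lower, upper) bounds.
    if x_bounds[1] < y_bounds[0]: return TRUE
    if y_bounds[1] <= x_bounds[0]: return FALSE
    return INDETERMINATE

k = less((0.0, 1.0), (0.5, 1.5))          # overlapping bounds
assert possibly(k) and not definitely(k)
```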
Any logical object can be forced into a boolean context using the decide predicate, which is implementation-defined on an INDETERMINATE value.

### Numbers

In Ariadne, we provide algebraic Integer, Dyadic and Rational classes, based on the GMP library (GMP). Classes Int and Int32 are wrappers for C++ builtin integral types, and a Decimal class is provided for user input. Real numbers are represented by the generic Real number class, with bounds on a number by the ValidatedReal class, and approximations by the ApproximateReal class. Each of these classes supports the standard arithmetical operations +, -, * and /, with named versions pos, neg (for unary + and -), and add, sub, mul and div. The real numbers also support unary operations nul (returning 0), hlf and reciprocal rec, a ternary fused-multiply-add fma, an integer power operation pow, and the square-root sqrt. Elementary transcendental operations exp, log, sin, cos, tan, asin, acos and atan are provided, the lattice operations max and min, and the absolute value abs.

Concrete representations of numbers for use in computations are defined using the fundamental floating-point types FloatDP (based on C++ double) and FloatMP (based on the mpfr_t type of the MPFR library (MPFR)). These classes provide rounded operations, and are used to build the validated Ball<X,E> and Bounds<X> classes, and an approximate Approximation<X> class.

_Remark 2_ (_Implementation_). In Ariadne, we use Interval exclusively to refer to a _geometric_ interval, rather than a _numerical_ value as is traditional in interval arithmetic. A geometric interval with upper endpoints of type UB is represented by the class Interval<UB>. Thus an over-approximation to an interval \([a:b]\) has type Interval<UpperBound<X>>, with endpoints \(\underline{a}\) and \(\overline{b}\) of type LowerBound<X> and UpperBound<X>, respectively, whereas Bounds<X> is used for an interval of possible values \([\underline{x}:\overline{x}]\) for a single number \(x\). Note that the data \(\langle\underline{a},\overline{b}\rangle\) for an over-approximation to an interval \([a:b]\) is the same as that of an over-approximation \(\langle\underline{x},\overline{x}\rangle\) to a single point \(x\), but even though both classes have the same underlying data, the semantics is subtly different.

Arithmetical operations preserving the inclusion property can be implemented using _rounded_ arithmetic. Modern microprocessors implement rounded versions of standard arithmetical operations \(\star\in\{+,-,\times,\div\}\) on \(\mathbb{F}\) satisfying:
\[x\star_{d}y\leq x\star y\leq x\star_{u}y;\qquad|x\star_{n}y-x\star y|\ \leq\ |z-x\star y|\ \ \forall z\in\mathbb{F},\]
where \(\star_{d}\), \(\star_{u}\) and \(\star_{n}\) are, respectively, downwards, upwards, and to-nearest rounded versions of \(\star\). Error bounds on operations can easily be computed:
\[e_{\star}(x,y):=|(x\star y)-(x\star_{n}y)|\ \leq\ \big((x\star_{u}y)-_{u}(x\star_{d}y)\big)\div 2\leq\tfrac{1}{2}\operatorname{ulp}(x\star y).\]
Here \(\mathrm{ulp}(z)\) is the number of _units in the last place_ of \(z\), defined as
\[\mathrm{ulp}(z):=2^{\lfloor\log_{2}z\rfloor-d}\]
where \(d\) is the number of binary digits in the mantissa of \(z\). Interval arithmetic can easily be implemented in terms of rounded arithmetic, e.g.
\[[\underline{x}{:}\overline{x}]-[\underline{y}{:}\overline{y}]=[\underline{x}-_{d}\overline{y}:\overline{x}-_{u}\underline{y}].\]
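Since plain Python does not expose the processor's rounding modes, a safe (if slightly pessimistic) emulation of the rounded interval subtraction above is to widen each to-nearest result by one unit in the last place using math.nextafter (available from Python 3.9); this is a sketch of the idea, not how Ariadne implements it:

```
import math

def down(z): return math.nextafter(z, -math.inf)
def up(z):   return math.nextafter(z, math.inf)

def isub(x, y):
    # [xl:xu] - [yl:yu] = [xl -d yu : xu -u yl], with each to-nearest
    # result widened by one ulp to over-approximate directed rounding.
    return (down(x[0] - y[1]), up(x[1] - y[0]))

print(isub((0.1, 0.2), (0.3, 0.4)))  # encloses the exact range [-0.3:-0.1]
```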
While C++ is a powerful and expressive language, it was not designed with numerical rigour in mind, and contains features which may result in unsafe computations, notably relating to the built-in types. We now briefly describe two of these features, namely automatic conversions and floating-point literals.

_Remark 3_ (Conversions). The builtin types, including bool, char, unsigned int, int, long int and double, may be automatically converted to each other. This is particularly dangerous since floating-point types may be converted to integral types, or integral types may be converted to others with a smaller number of bits or a different sign. For this reason, Ariadne does not support mixed operations involving Ariadne classes and builtin numbers, or conversion from builtin numbers to Ariadne classes, unless these operations are safe. The main restriction is that builtin floating-point numbers can only be used where Ariadne would accept a real number carrying Approximate information. The use of template methods can be controlled using C++ concepts, as in the following constructor of Integer:

template<class N> requires Integral<N> Integer(N const& n);

_Remark 4_ (Literals). The C++11 standard allows for user-defined integer, floating-point and string literals. This allows for disambiguation of the meaning of a floating-point number in text. The suffix _x denotes an _exact_ floating-point literal. Hence r=0.25_x explicitly states that the literal 0.25 denotes the number 1/4, represented as a floating-point number. Similarly, r=0.3_x means that \(r\) is a floating-point number with value given by the _compiler's approximation of the string "0.3" as a floating-point number_. The value of this number is
\[0.29999999999999998897769753748434595763683319091796875.\]
The next highest representable double-precision floating-point number is
\[0.30000000000000000444089209850062616169452667236328125.\]
A basic principle is that if a number is closer to a "reasonable" interpretation of the string value than the nearest single-precision number, that interpretation is taken to be exact.

A _rational literal_ has subscript _q. A floating-point number is converted to a rational using continued fractions. If any iterate of the exact continued-fraction expansion \(a_{i+1}=1/(a_{i}-\lfloor a_{i}\rfloor)\) has value less than \(\epsilon\), this is assumed to be a rounding error, and the value is set to 0. A _decimal literal_ has subscript _d. The nearest decimal number with at most \(s\) significant digits is computed. If this has smaller error than machine epsilon \(\epsilon\), the decimal approximation is accepted.

### Linear algebra

The Vector<X> class represents vectors over a (numeric) field, or a module over an algebra. The value type is given by Vector<X>::ScalarType and the type of the underlying field by Vector<X>::NumericType. It must be possible to add (or subtract) any two elements of a vector.
It is sometimes necessary to consider the characteristics of a zero-length vector, hence as well as indexing we also require a method

X Vector<X>::zero_element()

The operations supported by a vector \(\mathbb{V}\) over some scalar type \(\mathbb{X}\) are
\[\mathbb{V}+\mathbb{V}\rightarrow\mathbb{V};\quad\mathbb{X}\cdot\mathbb{V}\rightarrow\mathbb{V}.\]
Further, in Ariadne, all vectors are expressed in a canonical basis, so support subscripting
\[\mathbb{V}[\mathbb{N}]\rightarrow\mathbb{X}.\]
Ariadne provides three basic vector norms: supremum (the default), Euclidean, and one-norm, given by
\[\|v\|_{\infty}=\sup\{|v_{i}|\mid i=1,\ldots,n\};\quad\|v\|_{2}=\sqrt{\sum_{i=1}^{n}v_{i}^{2}};\quad\|v\|_{1}=\sum_{i=1}^{n}|v_{i}|.\]
The Matrix<X> class provides dense matrices, and the SparseMatrix<X> class sparse matrices.

### Abstract algebra

An _algebra_ \(\mathbb{A}\) over a field \(\mathbb{X}\) (which we henceforth take to be the real numbers \(\mathbb{R}\)) is a type supporting addition, multiplication and scalar multiplication,
\[\mathbb{A}+\mathbb{A}\rightarrow\mathbb{A};\qquad\mathbb{A}\times\mathbb{A}\rightarrow\mathbb{A};\qquad\mathbb{X}\cdot\mathbb{A}\rightarrow\mathbb{A},\]
such that \(\mathbb{A}\) is a vector space over \(\mathbb{X}\), and multiplication satisfies \((c\cdot a)\times b=c\cdot(a\times b)=a\times(c\cdot b)\) and \(a\times(b+c)=a\times b+a\times c\). An algebra is _associative_ if multiplication is associative, and _commutative_ if multiplication is commutative. An algebra is _unital_ if it has a multiplicative identity \(1\); in this case we also have a scalar addition operation \(\mathbb{X}+\mathbb{A}\rightarrow\mathbb{A}\) satisfying \(c+a:=c\cdot 1+a\). Note that most algebras should be thought of as representing _scalar_ quantities with some additional structure, rather than as _vectors_ over \(\mathbb{X}\).

In Ariadne, the property of being an algebra is defined by the C++ concepts AnAlgebra<A> and AnAlgebraOver<A,X>, where the former specifies a NumericType describing the underlying field \(\mathbb{X}\). Examples of algebras in Ariadne include polynomials over one or many variables, denoted by the class Polynomial<I,X> where I is the _index_ type and X the _coefficient_ type, symbolic RealExpressions, continuous Function<P,Real(ARG)> types, and Differential<X> objects for automatic differentiation. The numeric type \(\mathbb{X}\) may be a ValidatedNumber, or a concrete number type, such as for the ValidatedTaylorModel<\(\cdots\)> classes used as a core concrete function type, as given in Section 6.

An _elementary algebra_ supports elementary operations (except perhaps abs), an _analytic algebra_ supports general analytic functions, and a _continuous algebra_ supports continuous operations. Continuous algebras include the algebra of continuous functions \(X\rightarrow\mathbb{R}\), since given any \(h:\mathbb{R}^{n}\rightarrow\mathbb{R}\), we can define \(h(f_{1},\ldots,f_{n}):x\mapsto h(f_{1}(x),\ldots,f_{n}(x))\). (Note that in the current implementation, the Function classes only support elementary operations.) Similarly, RealExpression can be defined as a pointwise algebra, but again, only elementary operations are currently supported. Given an algebra A over X, the syntax apply(f,a) applies f to a.

Elements of a _Banach algebra_ have a _norm_ \(\|a\|\), expressed as a.norm() or norm(a).
Further, it is often useful to be able to give a number \(c\) such that \(\|a-c\|\) is minimised; this number is given by the unit_coefficient() method, denoted \(\langle\cdot\rangle:\mathbb{A}\rightarrow\mathbb{X}\). In order to support validated operations, we use the unit ball \(\hat{e}=\{a\in\mathbb{A}\mid\|a\|\leq 1\}\). For elements of a unital Banach algebra, it is possible to compute arbitrary analytic functions using the power-series expansion. The exponential \(\hat{r}=\exp(x)\) can be expressed as
\[c=\langle x\rangle;\quad y=x-c;\quad k=\max\{0,\lceil\log_{2}\|y\|\rceil\};\quad z=(1/2^{k})\cdot y;\]
\[\hat{s}=\sum_{n=0}^{N-1}z^{n}/n!+(\|z\|^{N}/N!)\hat{e};\qquad\hat{r}=\exp(c)\cdot\hat{s}^{(2^{k})}.\]
Note that the power \(\hat{s}^{2^{k}}\) can be computed by squaring \(k\) times. The TaylorModel classes described in Section 6.3 form a Banach algebra, and analytic operations on them are implemented using generic Banach algebra algorithms.

A _differential_ algebra over \(n\) variables supports derivative operations \(\partial_{j}\) for \(j=1,\ldots,n\), which are an abstraction of partial derivatives. These are linear operators which commute and satisfy the Leibniz rule. In a unital algebra, we have \(\partial_{j}c=0\) for a constant \(c\). An element \(v_{i}\) is a _variable_ if \(\partial_{j}v_{i}=\delta_{i,j}\). Elements of a differential algebra can be used to implement automatic differentiation of functions. Ariadne provides a Differential<X> class which models a quantity and its derivatives with respect to independent variables. Versions optimized for first and second order derivatives only, and for univariate functions, are also provided. For efficiency reasons, rather than store the derivatives, we store the coefficients of the corresponding polynomial, and extract the derivatives using
\[\partial^{|\alpha|}x^{\alpha}/\partial x_{1}^{\alpha_{1}}\cdots\partial x_{n}^{\alpha_{n}}=\prod_{i=1}^{n}\alpha_{i}!\,.\]

Differential algebras are _graded_, meaning that any element can be written in the form \(a=\sum_{i=0}^{\infty}a_{i}\) where each \(a_{i}\) has total degree \(i\). Hence constants \(c\cdot 1\) lie in \(\mathbb{A}_{0}\), and \(a\in\mathbb{A}_{k}\) if \(\partial_{i}a\in\mathbb{A}_{k-1}\) for all (any) \(i\). Multiplication in a graded algebra satisfies
\[\Big(\sum_{i=0}^{\infty}a_{i}\Big)\times\Big(\sum_{i=0}^{\infty}b_{i}\Big)=\sum_{i=0}^{\infty}\Big(\sum_{j=0}^{i}a_{j}b_{i-j}\Big).\]
Since elements of grade \(i\) do not depend on elements of grade \(j\) for \(j>i\), graded algebras are analytic. This property is used in generic code to apply the elementary operators to the Differential classes.

## 6 Function Calculus

The main generic function types are EffectiveFunction for exact scalar functions described symbolically, and ValidatedFunctionModel for interval functions on a box domain.

### Generic functions

The main generic function classes in Ariadne are provided by the template Function<P,RES(ARGS...)>, where P is the information type, RES is the result type, and ARGS... the argument types. Currently, EffectiveTag, ValidatedTag and ApproximateTag functions are supported, and the argument and result types can be Real or Vector<Real>, or synonymously RealScalar and RealVector. A function with a Scalar argument is Univariate, and one with a Vector argument is Multivariate. Hence ValidatedScalarMultivariateFunction is a synonym for Function<ValidatedTag,RealScalar(RealVector)>.
Functions may be constructed using the C++ "named constructors"

Function<P,RES(ARG)>::constant(...);
Function<P,RES(ARG)>::coordinate(...);

or using named variables, e.g.

x = RealVariable("x"); y = RealVariable("y");
f = EffectiveScalarMultivariateFunction([x,y], exp(x)*cos(y))

Functions form an elementary algebra, so f1*f2 denotes the function \(x\mapsto f_{1}(x)\times f_{2}(x)\), and sin(f) denotes the function \(x\mapsto\sin(f(x))\). Functions may be composed, with compose(f,g) denoting \(f\circ g\). Other operations include derivative(f,k) for \(\partial f/\partial x_{k}\), where \(k\) may be omitted if \(f\) is univariate, join(f1,f2) denoting \(x\mapsto(f_{1}(x),f_{2}(x))\), and combine(f1,f2) for \((x_{1},x_{2})\mapsto(f_{1}(x_{1}),f_{2}(x_{2}))\).

Ariadne provides extended evaluation operations, so the class ApproximateFunction<Real(Real)> can be evaluated not only on ApproximateNumber, but also on FloatDPApproximation and FloatMPApproximation, and indeed over any pointwise algebra over ApproximateNumber. ValidatedFunction<Real(Real)> also supports evaluation on ValidatedNumber, FloatBounds<PR> and FloatBall<PR,PRE>. Derivatives can be computed by evaluating over Differential objects as defined in Section 5.4, and polynomial approximations by evaluating over TaylorModel objects (described in Section 6.3). Functions are assumed to be total, but may throw a DomainException if they are passed an invalid argument, such as one causing a divide-by-zero.

### Function patches and models

Most function approximations are only valid over bounded domains. The Ariadne FunctionPatch classes define partial functions over coordinate-aligned boxes \(D=\prod_{i=1}^{n}[a_{i}:b_{i}]\). The domain method returns the domain of the function. Concrete calculations are performed by FunctionModel objects, which also specify the concrete numerical precision PR used for evaluation and the error-bound precision PRE. Unlike Function objects, FunctionModels are _mutable_, and support in-place modification, e.g. using operator*=.

A _uniform function model_ is defined by a tuple \(\langle D,g,e\rangle\) where \(D\) is a _domain_, \(g:D\to\mathbb{R}\) is a _validated approximation_ and \(e\) is an _error bound_. The tuple represents any function \(f:D\to\mathbb{R}\) such that
\[\mathrm{dom}(f)\supset D\text{ and }\sup_{x\in D}|f(x)-g(x)|\leq e.\]
Typically the function \(g\) is a finite linear combination of basis functions \(\phi_{\alpha}\), so
\[g(x)=\sum_{\alpha\in A}c_{\alpha}\phi_{\alpha}(x).\]
When these basis functions are products of univariate basis functions \(\phi_{k}\), we have
\[g(x)=\sum_{\alpha\in\mathbb{N}^{n}}c_{\alpha}\prod_{i=1}^{n}\phi_{\alpha_{i}}(x_{i})\]
where \(\alpha\in\mathbb{N}^{n}\) is a _multi-index_. In particular, for the monomial basis \(\phi_{k}=x^{k}\) we have \(x^{\alpha}:=\prod_{i=1}^{n}x_{i}^{\alpha_{i}}\). The coefficients \(c_{\alpha}\) are typically floating-point numbers of a fixed precision, either FloatDP or FloatMP.

A _unit_ function model is defined over the unit box \(D=[-1:+1]^{n}\). The main advantage of using a unit domain is that important univariate basis functions, including the monomials \(x^{i}\) and the Chebyshev functions, have maximum absolute value equal to \(1\) over \([-1:+1]\), making it easy to determine which coefficients have a large impact on the function behaviour. Function models over other box domains are defined in terms of unit function models via a _scaling_ function.
This is an affine bijection \(s:[-1:+1]^{n}\to\prod_{i=1}^{n}[a_{i}:b_{i}]\) defined componentwise by \([s(z)]_{i}=s_{i}(z_{i})\). If \(c_{i}=(a_{i}+b_{i})/2\) and \(r_{i}=(b_{i}-a_{i})/2\), then \(s_{i}(z_{i})=r_{i}z_{i}+c_{i}\) and \(s_{i}^{-1}(x_{i})=(x_{i}-c_{i})/r_{i}\). A scaled function model is defined by a tuple \(\hat{f}=\langle s,g,e\rangle\) where \(s\) is a scaling function with codomain \(D\) and \(g\) is defined on the unit domain. The scaled function model \(\hat{f}\) represents a function \(f\) if \(\mathrm{dom}(f)\supset D\) and
\[\|g\circ s^{-1}-f\|_{\infty,D}:=\sup_{x\in D}|g(s^{-1}(x))-f(x)|\leq e,\]
or equivalently if
\[\|g-f\circ s\|_{\infty,[-1:+1]^{n}}\leq e.\]
A validated function model \(\hat{f}\) represents a set of functions \(f\), and has operations preserving the inclusion property. Hence for a binary operation \(\star\), we must have
\[f_{1}\in\hat{f}_{1}\ \wedge\ f_{2}\in\hat{f}_{2}\ \implies\ f_{1}\star f_{2}\in\hat{f}_{1}\,\lfloor\star\rceil\,\hat{f}_{2}.\]

### Polynomial models

The main class of validated function model in Ariadne are the TaylorFunctionModels, based on the Taylor models of Makino and Berz (2003). Since the only existing package implementing Taylor models when Ariadne was under development, COSY Infinity (Makino and Berz, 2006), was not open-source, an implementation within Ariadne itself was developed. Unlike the original Taylor models of Makino and Berz (2003), where an arbitrary interval \(I\) is used as the remainder term, here we only give an error bound, corresponding to an interval remainder \(I=[-e:+e]\).

In Ariadne, a _scaled polynomial model_ or _scaled Taylor model_ is a scaled polynomial approximation on a box, defined by a tuple \(\langle s,p,e\rangle\) where

1. each \(s_{i}:[-1:+1]\to[a_{i}:b_{i}]\) is a scaling function \(z_{i}\mapsto r_{i}z_{i}+c_{i}\);
2. \(p:[-1:+1]^{n}\to\mathbb{R}\) is a polynomial \(z\mapsto\sum_{\alpha}c_{\alpha}z^{\alpha}\) with each \(c_{\alpha}\in\mathbb{F}\);
3. \(e\in\mathbb{F}_{e}^{+}\) is an error bound.

Figure 1: A scaled function model.

Here \(\mathbb{F}\) and \(\mathbb{F}_{e}\) are concrete numerical types, such as FloatDP or FloatMP. The _domain_ of the model is \(D=\prod_{i=1}^{n}[a_{i}:b_{i}]\). A scaled polynomial model \(\langle s,p,e\rangle\) represents \(f:\mathbb{R}^{n}\to\mathbb{R}\) if
\[\sup_{x\in D}\bigl{|}f(x)-p\circ s^{-1}(x)\bigr{|}\leq e.\]
In particular, we can only represent functions over a bounded domain.

In Ariadne, unit polynomial models \(p\pm e\) are defined by the template class TaylorModel<ValidatedTag,F,FE>, where F is the type used for the coefficients of the polynomial and FE the type used for the error bound. The current implementation uses a sparse representation of the polynomial, with separate lists for the multi-indices \(\alpha\in\mathbb{N}^{n}\) and the coefficients \(c\in\mathbb{F}\). These are ordered by _reverse lexicographic order_, which makes inserting a constant term fast, and allows efficient evaluation using Horner's rule. Alternative representations include a _dense_ format, with only coefficients being stored, which may be useful for low-dimensional polynomials, and a representation using sparse storage for the multi-indices themselves, which may be useful for very high-dimensional polynomials. Different methods are provided to evaluate and compose polynomial models, including direct evaluation and evaluation based on Horner's rule.
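The following sketch shows a drastically simplified univariate analogue of such a polynomial model (sparse coefficients plus a uniform error bound on the unit domain); it ignores floating-point rounding, which a real implementation such as Ariadne's must handle, and anticipates the range bound and coefficient sweeping described below:

```
class PolyModel:
    """A unit-domain polynomial model p(x) +/- e with sparse coefficients.

    Rounding is ignored here; a real implementation must accumulate
    roundoff into the error bound as described in the text.
    """
    def __init__(self, coeffs, error):
        self.coeffs = dict(coeffs)   # {degree: coefficient}
        self.error = error

    def range_bound(self):
        # range(p +/- e) is contained in c0 +/- (sum |c_alpha| + e).
        c0 = self.coeffs.get(0, 0.0)
        r = sum(abs(c) for k, c in self.coeffs.items() if k != 0) + self.error
        return (c0 - r, c0 + r)

    def sweep(self, threshold):
        # Absorb small terms into the error; valid since |x**k| <= 1
        # on the unit domain [-1:+1].
        for k, c in list(self.coeffs.items()):
            if k != 0 and abs(c) < threshold:
                self.error += abs(c)
                del self.coeffs[k]

pm = PolyModel({0: 0.5, 1: 0.25, 3: 1e-9}, 1e-6)
pm.sweep(1e-8)            # the degree-3 term is swept into the error
print(pm.range_bound())   # a guaranteed enclosure of the range of p
```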
For narrow interval vectors, a standard Horner evaluation is sufficient, which for univariate polynomials is
\[p(x)=c_{0}+x(c_{1}+x(c_{2}+\cdots+x(c_{n-1}+c_{n}x)\cdots)).\]
Evaluation can be performed using function-call syntax via operator(), or by the free function evaluate.

Since the problem of finding tight bounds for the range of a polynomial is known to be NP-complete, heuristics are used to balance the trade-off between speed and accuracy of the result. The current implementation of the range uses a simple over-approximation
\[\operatorname{range}\bigl{(}p\pm e\bigr{)}\subset c_{0}\pm\bigl{(}\sum_{\alpha\neq 0}|c_{\alpha}|\,+_{u}\,e\bigr{)},\]
with the sum computed using upward rounding. Similarly, the norm function computes an over-approximation to the uniform norm by
\[\lVert p\pm e\rVert_{\infty,[-1:+1]^{n}}\leq\sum_{\alpha}|c_{\alpha}|\,+_{u}\,e.\]
The refines function tests whether one polynomial model refines another. Note that \(p_{1}\pm e_{1}\) is a refinement of \(p_{2}\pm e_{2}\) if \(\lVert p_{1}-p_{2}\rVert+e_{1}\leq e_{2}\). This can be checked using the over-approximation to the norm function: a sufficient condition for refinement is
\[\sum_{\alpha}|c_{1,\alpha}-c_{2,\alpha}|\,+_{u}\,e_{1}\leq e_{2}.\]

Various Sweeper classes allow the accuracy-versus-efficiency tradeoff of the polynomial to be controlled. To avoid growth of the number of coefficients, we can _sweep_ small coefficients into the uniform error. Removing the term \(c_{\alpha}x^{\alpha}\) introduces an error of size \(|c_{\alpha}|\), so we can show
\[\bigl{(}\sum_{\alpha}c_{\alpha}x^{\alpha}+\sum_{\beta}c_{\beta}x^{\beta}\bigr{)}\pm e\prec\sum_{\alpha}c_{\alpha}x^{\alpha}\pm\bigl{(}\sum_{\beta}|c_{\beta}|\,+_{u}\,e\bigr{)}.\]
Here, we make critical use of the fact that the domain of \(p\) is \([-1:+1]^{n}\). More sophisticated sweeping schemes are possible; finding good sweeping schemes is a crucial ingredient in the _efficiency_ of polynomial model arithmetic.

Ordinary arithmetical operations can be performed on polynomial models. Care must be taken to ensure roundoff errors in floating-point arithmetic are handled correctly. For example, addition is performed using
\[(p_{1}\pm e_{1})+(p_{2}\pm e_{2})=\sum_{\alpha}(c_{1,\alpha}+_{n}c_{2,\alpha})x^{\alpha}\pm\Bigl{(}\sum_{\alpha}e_{u}(c_{1,\alpha},+,c_{2,\alpha})\,+_{u}\,(e_{1}+_{u}e_{2})\Bigr{)},\]
where \(e_{u}(x,+,y)\) is an upper bound of \(|(x+y)-(x+_{n}y)|\), as defined in Section 5.2. Currently the scheme \(e_{u}(x,+,y):=\bigl{(}(x+_{u}y)-_{u}(x+_{d}y)\bigr{)}\div_{u}2\) is used, but a faster, though less accurate, alternative would be to use \(\frac{1}{2}\operatorname{ulp}(x+_{n}y)\).

To define multiplication, we first need to consider the error bound. Since in the uniform norm we have
\[\lVert f_{1}\times f_{2}-p_{1}\times p_{2}\rVert\leq\lVert p_{1}\rVert\cdot\lVert f_{2}-p_{2}\rVert+\lVert f_{1}-p_{1}\rVert\cdot\lVert p_{2}\rVert+\lVert f_{1}-p_{1}\rVert\cdot\lVert f_{2}-p_{2}\rVert,\]
the algorithm for computing \((p_{1}\pm e_{1})\times(p_{2}\pm e_{2})\) reduces to computing the product \(p_{1}\times p_{2}\).
For this the current algorithm uses term-by-term monomial multiplication,
\[\bigl{(}\sum_{\alpha}c_{\alpha}x^{\alpha}\bigr{)}\times c_{\beta}x^{\beta}=\sum_{\alpha}(c_{\alpha}\times_{n}c_{\beta})x^{\alpha+\beta}\pm\sum_{\alpha}e_{u}(c_{\alpha},\times,c_{\beta}),\]
and adds the resulting polynomials using the addition algorithm.

Composition in Ariadne is provided by the compose function. Separate implementations are available for precomposing with a polynomial model or a general function. If \(\hat{g}_{i}=\langle r,p_{i},e_{i}\rangle\) are scaled polynomial models and \(\hat{f}=\langle s,q,d\rangle\) is a scaled polynomial model with \(\operatorname{dom}(\hat{f})\supset\operatorname{range}(\hat{g})\), then the composition \(\hat{f}\circ\hat{g}\) can be computed by applying the polynomial expression \(q\pm d\) to each \(s_{i}^{-1}\circ(p_{i}\pm e_{i})\). We obtain
\[\hat{f}\circ\hat{g}=q\big{(}s_{1}^{-1}\circ(p_{1}\pm e_{1}),\ldots,s_{n}^{-1}\circ(p_{n}\pm e_{n})\big{)}\pm d.\]
The resulting computation uses only addition and multiplication, so can be implemented using basic function model arithmetic. The polynomial \(q\) is evaluated using Horner's rule to improve the efficiency of the subsequent evaluations.

If \(f:\mathbb{R}\to\mathbb{R}\) is an analytic function, and \(\hat{g}\) is a polynomial model with \(\operatorname{range}(\hat{g})\subset c\pm r\), then by Taylor's theorem with remainder,
\[f\circ\hat{g}=\sum_{i=0}^{n}\frac{f^{(i)}(c)}{i!}(\hat{g}-c)^{i}\pm\frac{\bigl{|}f^{(n)}([c-r:c+r])-f^{(n)}(c)\bigr{|}}{n!}\,r^{n}.\]
If \(f\) is an elementary function, then \(f\circ\hat{g}\) is computed by applying the component functions in order. Alternatively, a polynomial model approximation \(\hat{f}\) can be constructed and applied to \(\hat{g}\). It is unclear which approach yields the best results in practice.

The antidifferentiate method computes an indefinite integral with respect to a variable:
\[\Bigl{(}\int p\circ s^{-1}\,dx_{j}\Bigr{)}\circ s=\tfrac{b_{j}-a_{j}}{2}\bigl{(}\sum_{\alpha}\tfrac{1}{\alpha_{j}+1}\,c_{\alpha}\,x^{\alpha+e_{j}}\pm e\bigr{)}.\]
If \(e=0\), then the derivative function allows a polynomial model to be differentiated term-by-term:
\[\bigl{(}d(p\circ s^{-1})/dx_{j}\bigr{)}\circ s=\tfrac{2}{b_{j}-a_{j}}\sum_{\alpha}\alpha_{j}c_{\alpha}x^{\alpha-e_{j}}.\]
If \(e>0\), then there are non-differentiable functions captured by the polynomial model. However, the derivative of the _midpoint_ model \(\langle s,p,0\rangle\) can often be used in calculations, notably to solve algebraic equations.

The split functions split the domain \(D\) into subdomains \(D_{1},\ldots,D_{r}\) and replace the polynomial model \(p\circ s^{-1}\pm e\) with the corresponding _restrictions_ \(p_{j}\circ s_{j}^{-1}\pm e_{j}\) where \(s_{j}:[-1:+1]^{n}\to D_{j}\). The restriction from domain \(D\) to \(D_{j}\) is performed by precomposing \(p\) with the rescaling function \(s^{-1}\circ s_{j}\), so \(p_{j}=p\circ(s^{-1}\circ s_{j})\).

Ariadne provides two main approaches to constructing a polynomial model for a function \(f\). If \(f\) is defined in terms of elementary functions, we can compute the composition \(\hat{f}=f\circ\operatorname{id}\) to obtain a polynomial model for \(f\). Taylor polynomial models can also be computed from the Taylor series with remainder term. In one dimension over the interval \(c\pm r\) we have (using exact arithmetic)
\[\hat{f}(x)=\sum_{i=0}^{n}\frac{f^{(i)}(c)\,r^{i}}{i!}\,x^{i}\ \pm\ \frac{\bigl{|}f^{(n)}([c-r:c+r])-f^{(n)}(c)\bigr{|}}{n!}\,r^{n}\]
for \(x\in[-1:+1]\). When implemented using rounded arithmetic, the values \(f^{(k)}(c)\) are computed as intervals, and the accumulated roundoff errors are swept into the uniform error term.

### Affine models

An AffineModel over \([-1:+1]^{n}\) is a function of the form
\[\hat{f}(x)=b+\sum_{i=1}^{n}a_{i}x_{i}\pm e.\]
The product of two affine models cannot be evaluated exactly, even in exact arithmetic, since quadratic terms need to be discarded. We obtain
\[\Bigl{(}b_{1}+\sum_{i=1}^{n}a_{1,i}x_{i}\pm e_{1}\Bigr{)}\times\Bigl{(}b_{2}+\sum_{i=1}^{n}a_{2,i}x_{i}\pm e_{2}\Bigr{)}=b_{1}b_{2}+\sum_{i=1}^{n}(b_{1}a_{2,i}+b_{2}a_{1,i})x_{i}\pm\Bigl{(}\sum|a_{1,i}|\cdot\sum|a_{2,i}|+\bigl{(}|b_{1}|+\sum|a_{1,i}|\bigr{)}e_{2}+\bigl{(}|b_{2}|+\sum|a_{2,i}|\bigr{)}e_{1}+e_{1}e_{2}\Bigr{)}.\]
Affine models are used in Ariadne in conjunction with specialised methods for solving linear differential equations and linear constrained optimisation problems; for more general operations polynomial models are preferred.

## 7 Solving Algebraic and Differential Equations

Having provided the basic function calculus and the "direct" operations of evaluation and composition, we turn to the more complicated problems of solving equations. In Ariadne, these operations are implemented by _solver_ classes. The advantage of using solver classes is that multiple solvers can be provided for the same kind of problem, enabling the solving policy to be specified at run-time. Further, solvers can encapsulate details of their accuracy parameters, which means that the calling syntax need only specify the problem variables, simplifying use and improving encapsulation. Additionally, (sensible) default parameters can be provided so that users do not need to know the details of the internal workings of the class.

### Algebraic equations

A Solver is an evaluator for computing the solution of a simple algebraic equation \(f(y)=0\) or a parameterised algebraic equation \(f(x,y)=0\) with solution \(y=h(x)\). Ariadne implements an IntervalNewtonSolver based on the interval Newton operator (Moore, 1966), and a KrawczykSolver based on the related Krawczyk operator, which is more reliable but slower. The method

SolverInterface::solve(ValidatedVectorMultivariateFunction f, ExactBoxType D) -> Vector<ValidatedNumber>;

attempts to find a solution of the equation \(f(x)=0\) in the box \(D\), throwing a SolverException if no solution is found. The method delegates to an abstract method

SolverInterface::step(ValidatedVectorMultivariateFunction f, Vector<ValidatedNumber> X) -> Vector<ValidatedNumber>;

which performs a single step of the algorithm. A single step of the solver computes an operator \(S(f,\hat{y})\) with the property that any solution to \(f(y)=0\) in \(\hat{y}\) also lies in \(S(f,\hat{y})\), and further, if \(S(f,\hat{y})\subset\hat{y}\), then the equation \(f(y)=0\) has a unique solution in \(\hat{y}\). Note that if \(S(f,\hat{y})\cap\hat{y}=\emptyset\), then there are no solutions in \(\hat{y}\). The step may fail, in which case no information about the solutions can be deduced. Tight bounds on the solution are found by the iteration
\[\hat{y}_{n+1}=S(f,\hat{y}_{n})\cap\hat{y}_{n}.\]
A single step of the interval Newton solver is given by
\[\mathrm{IN}(f,\hat{y},\check{y})=\check{y}-[\mathrm{D}f(\hat{y})]^{-1}f(\check{y}),\]
where \(\check{y}\) is an arbitrary point in \(\hat{y}\), typically chosen to be the midpoint.
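For a scalar equation, a (rounding-free) sketch of this contraction in plain Python is straightforward; here df_range is assumed to return an enclosure of \(f^{\prime}\) over the box, and the iteration tightens an initial bracket around \(\sqrt{2}\):

```
def newton_step(f, df_range, box):
    # One interval Newton step: any zero of f in box also lies in the
    # returned box, which is intersected with the original box.
    yl, yu = box
    yc = (yl + yu) / 2
    dl, du = df_range(box)           # enclosure of f' over the box
    if dl <= 0.0 <= du:
        raise ArithmeticError("derivative enclosure contains zero")
    fy = f(yc)
    q = [fy / dl, fy / du]           # interval quotient f(yc) / [dl:du]
    lo, hi = yc - max(q), yc - min(q)
    return (max(lo, yl), min(hi, yu))

f = lambda y: y * y - 2.0
df_range = lambda b: (2.0 * b[0], 2.0 * b[1])   # f'(y) = 2y is monotone
box = (1.0, 2.0)
for _ in range(6):
    box = newton_step(f, df_range, box)
print(box)   # tight bounds around sqrt(2)
```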
If \(\mathrm{D}f(\hat{y})\) contains a singular matrix, then the interval Newton method fails and provides no information. A single step of the Krawczyk solver is given by
\[\mathrm{Kr}(f,\hat{y},\check{y},\check{J})=\check{y}-\check{J}^{-1}f(\check{y})+(I-\check{J}^{-1}\mathrm{D}f(\hat{y}))(\hat{y}-\check{y}),\]
where \(\check{J}^{-1}\) is an arbitrary matrix, typically an approximation to the inverse Jacobian \(\mathrm{D}f(\check{y})^{-1}\).

To solve the parameterised equation \(f(x,h(x))=0\) we use a parameterised interval Newton operator
\[\mathrm{IN}(f,\hat{h},\check{h})=x\mapsto\check{h}(x)-[\mathrm{D}_{2}f(x,\hat{h}(x))]^{-1}f(x,\check{h}(x)),\]
similar to Berz and Hoefkens (2001). If \(f:\mathbb{R}^{n}\times\mathbb{R}\to\mathbb{R}\), then
\[\operatorname{IN}(f,\hat{h},\check{h})(x)=\check{h}(x)-f(x,\check{h}(x))/f_{,y}(x,\hat{h}(x)).\]

_Remark 5_. Solutions of linear algebraic (matrix) equations are also possible, but are defined by functions such as lu_solve rather than solver classes. Solver classes for matrix equations will be provided in a future release.

### Differential equations

An _integrator_ is an evaluator for computing the flow of a differential equation. Ariadne currently provides three integrators: an AffineIntegrator for computing the flow of an affine system \(\dot{x}=Ax+b\), a PicardIntegrator, which uses Picard's iteration, and a TaylorIntegrator, which computes the flow using a Taylor series expansion. In practice, the Taylor integrator outperforms the Picard integrator, since it generates sharper error bounds after fewer iterations.

The flow \(\phi(x,t,a)\) of the parameterised differential equation \(\dot{y}=f(y,a)\) satisfies
\[\dot{\phi}(x,t,a)=f(\phi(x,t,a),a);\quad\phi(x,0,a)=x.\]
If there are no parameters, we drop the variable \(a\), obtaining \(\dot{\phi}(x,t)=f(\phi(x,t))\). In this case, a time step for the flow over the domain \(T=[0,h]\) can be computed by

IntegratorInterface::flow_step(ValidatedVectorMultivariateFunction f, BoxDomainType X0, IntervalDomainType T) -> ValidatedVectorMultivariateFunctionPatch;

A general approach involves first finding a _bound_ for the flow, which is performed by a Bounder class. A bound for the flow starting in a set \(D\) with time step \(h\) is any (convex) set \(B\) such that
\[\forall t\in[0,h],\ \forall x\in D,\ \phi(x,t)\in B.\]
It can be shown that a convex set \(B\supset D\) is a bound if \(B\supset D+hf(B)\), and further, if \(B\) is a bound, then so is
\[B^{\prime}:=D+[0,h]\operatorname{conv}(f(B))\subset B,\]
so bounds can be iteratively refined. To find an initial bound, note that for any neighbourhood \(B\) of \(D\), there exists \(h>0\) such that \(B\) is a bound for the time step \(h\). As a heuristic to find a bound, we take \(B=\widehat{D}+[0,2h]\operatorname{conv}(f(\widehat{D}))\), where \(\widehat{D}\) is given by \(c+2(D-c)\) where \(c\) is the midpoint of \(D\). If a bound \(B\) is known, then flow_step can be called with an extra parameter BoxDomainType B.

A simple method to compute the solution is to use Picard's iteration
\[\phi_{n+1}(x,t,a)=x+\int_{0}^{t}f(\phi_{n}(x,\tau,a),a)\,d\tau,\]
which is a contraction for \(t\in[0,h]\) if \(h<1/L\) where \(L\) is the Lipschitz constant of \(f\) with respect to the state \(y\). Since the only operations used are composition and antidifferentiation, which are computable on function models, we can apply Picard's iteration directly.
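The following sketch (using sympy rather than Ariadne's function models, and tracking no error bounds) shows the underlying symbolic iteration for the scalar equation \(\dot{y}=y\), whose iterates converge to the Taylor series of \(x\,e^{t}\):

```
import sympy as sp

x, t, tau = sp.symbols("x t tau")
f = lambda y: y                  # right-hand side of y' = y, y(0) = x

phi = x                          # phi_0(x, t) = x
for _ in range(5):
    phi = x + sp.integrate(f(phi).subs(t, tau), (tau, 0, t))

print(sp.expand(phi))            # x + x*t + x*t**2/2 + ... : Taylor series
```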
If \(\hat{\phi}_{n+1}\) refines \(\hat{\phi}_{n}\), then \(\hat{\phi}_{n+1}\) is a function model for the flow, as is any further iteration. Further, if \(\hat{\phi}_{0}(x,t,a)\) is the constant box \(B\), then \(\hat{\phi}_{1}\) refines \(\hat{\phi}_{0}\), automatically yielding a solution. An alternative method is to compute the Taylor series of the flow and estimate the error bounds. For \(y=\phi(x,t)\) satisfying \(\dot{y}=f(y)\), we find for any function \(g(y)\) that \[\tfrac{d}{dt}g(\phi(x,t))=\nabla g(\phi(x,t))\cdot f(\phi(x,t)).\] This allows us to compute time derivatives of the flow \(\phi(x,t)\): starting from \(g_{0}(y)=y\) and taking \(g_{k}(\phi(x,t))=\tfrac{d}{dt}g_{k-1}(\phi(x,t))\), we obtain \(g_{k}(\phi(x,t))=\tfrac{d^{k}}{dt^{k}}\phi(x,t)\). Time derivatives of the spatial derivatives \(\partial^{|\alpha|}\phi(x,t)/\partial x^{\alpha}\) can be computed similarly. The \(\mathtt{TaylorIntegrator}\) uses the \(\mathtt{Differential}\) template class to compute all partial derivatives \(\partial^{|\alpha|+k}\phi(x,t)/\partial x^{\alpha}\partial t^{k}\) together. Let \(x_{c}\) be the midpoint of \(D\) and \(B\) be a bound for \(\phi(D,[t_{0},t_{1}])\). Then \[\hat{\phi}(x,t)=\sum_{|\alpha|\leq m\wedge k\leq n}c_{\alpha,k}(x-x_{c})^{\alpha}(t-t_{0})^{k}\pm\sum_{|\alpha|\leq m\wedge k\leq n}|C_{\alpha,k}-c_{\alpha,k}|\,|x-x_{c}|^{\alpha}\,|t-t_{0}|^{k}\] where \[c_{\alpha,k}=\left.\frac{\partial^{|\alpha|+k}\phi}{\partial x^{\alpha}\partial t^{k}}\right|_{x_{c},t_{0}}\quad\text{and}\quad C_{\alpha,k}=\left.\frac{\partial^{|\alpha|+k}\phi}{\partial x^{\alpha}\partial t^{k}}\right|_{B\times[t_{0},t_{1}]}.\] In practice, this Taylor method yields better results than the Picard method.

### Hybrid systems

A _hybrid system_ is a dynamic system where the state \(x\) evolves continuously via \(\dot{x}=f(x)\) until some _guard condition_ \(g(x)\geq 0\) is satisfied, at which time the state instantaneously jumps to another state via a _reset_ \(x^{\prime}=r(x)\). Assuming a set \(X_{0}\) of possible starting states, the dependence of the current state on the initial state \(x\) and the current time \(t\) is described by a function \(\psi(x,t)\). During continuous evolution, the solutions \(\psi\) are found by solving the differential equation \(\dot{\psi}(x,t)=f(\psi(x,t))\), and during a discrete jump the solution is updated by composition \(\psi^{\prime}(x,t)=[r\circ\psi](x,t)\). To determine the crossing times \(\gamma\), we need to solve the parametrised algebraic equation \(g(\psi(x,t))=0\) for \(t=\gamma(x)\). This may be done using the interval Newton iteration \[\hat{\gamma}_{n+1}(x)=\hat{\gamma}_{n}(x)-\frac{g(\psi(x,\hat{\gamma}_{n}(x)))}{(\nabla g\cdot f)(\psi(x,\hat{\gamma}_{n}(x)))},\] where each \(\hat{\gamma}_{n}\) is a validated function model containing the crossing-time function \(\gamma\). We therefore see that the function calculus we have described provides exactly the building blocks needed to rigorously compute the evolution of a hybrid system. For more details, see (Collins et al., 2012) and the website (Ariadne).

## 8 Examples

We now give examples of the use of Ariadne's function calculus for solving algebraic and differential equations using the Python interface. We shall use as a running example the _FitzHugh-Nagumo_ system FitzHugh (1961); Nagumo et al. (1962), which is defined by the equations \[\dot{v}=v-v^{3}/3-w+R\,I_{\mathrm{ext}},\qquad\dot{w}=(v+a-bw)/\tau,\] where \(\tau\) is the _time-scale_ of the slow variable \(w\), and \(I_{\mathrm{ext}}\) is an external forcing current.
Unless otherwise mentioned, we take parameter values \[a=0.7,\quad b=2.0,\quad\tau=12.5,\quad R=0.1,\quad I_{\mathrm{ext}}=3.5.\]

### Fixed points

Consider the problem of finding the fixed-points of the FitzHugh-Nagumo system, which are given by solving the equations \(\dot{v}=\dot{w}=0\). To solve this in Ariadne, we first define the FitzHugh-Nagumo system itself:
```
a=Decimal(0.7); b=Decimal(2.0); tau=Decimal(12.5); R=Decimal(0.1); Iext=Decimal(3.5)
v=RealVariable("v"); w=RealVariable("w")
f=Function([v,w],[v-(v*v*v)/3-w+R*Iext,(v+a-b*w)/tau])
```
We need to define a bounded domain in which to search for the fixed-points:
```
domain=BoxDomainType([{-2:3},{-1:2}])
```
We then construct the solver class; here we use the interval Newton solver:
```
tolerance=1e-12; max_steps=32
solver=IntervalNewtonSolver(tolerance,max_steps)
```
Finally, we use the solve_all method to compute all the fixed-points.
```
fixed_points=solver.solve_all(f,domain)
print("fixed_points:",fixed_points)
```
The result is
```
fixed_points: [ [-1.2247448713915[9:8],-0.26237243569579[5:4]],
  [-0.0000000000000000[-13:110],0.3500000000000000[-8:6]],
  [1.224744871391[48:70],0.962372435695[776:813]] ]
```
Given a fixed-point, we can consider the functional dependence on the parameters. Suppose we vary \(I_{\rm ext}\) over the range \([3.0:4.0]\), and look for the fixed-points near \((1.22,0.96)\). We make \(I_{\rm ext}\) a variable and redefine \(f\).
```
Iext=RealVariable("Iext")
f=Function([Iext,v,w],[v-(v*v*v)/3-w+R*Iext,(v+a-b*w)/tau])
```
We then define the domain of \(I_{\rm ext}\) and the range of \((v,w)\) to consider:
```
Idom=BoxDomainType([[dy_(3.0),dy_(4.0)]])
vw_rng=BoxDomainType([[dy_(1.0),dy_(1.5)],[dy_(0.75),dy_(1.25)]])
```
Finally, we use the implicit method to compute the parametrised fixed-point \((v(I_{\rm ext}),w(I_{\rm ext}))\):
```
parametrised_fixed_point=solver.implicit(f,Idom,vw_rng)
print("parametrised_fixed_point:",parametrised_fixed_point)
```
The result is
```
parametrised_fixed_point: VectorScaledFunctionPatch(
  dom=[3.0:4.0],
  rng=[{1.1721951:1.2720745},{0.93564590:0.98603711}],
  [ { -0.00007341*x0^6+0.001752*x0^5-0.01789*x0^4+0.1014*x0^3
      -0.3482*x0^2+0.7955*x0+0.2577+/-0.00000578},
    { -0.00002506*x0^6+0.0006315*x0^5-0.006803*x0^4+0.04071*x0^3
      -0.1479*x0^2+0.3610*x0+0.5003+/-0.00000301} ] )
```
Here, the result is displayed as a non-centred polynomial, with \(x_{0}\) being the variable \(I_{\rm ext}\). Note that if we were to use a wider initial range such as \([0.75:1.75]\times[0.5:1.5]\) then the algorithm fails, throwing an UnknownSolutionException.

### Differential equation

To solve the FitzHugh-Nagumo system, we use an Integrator class. We first define the initial point and the desired time step.
```
vw0=BoxDomainType([ [0,0], [0,0] ])
h=Dyadic(1.0)
tolerance=1e-3
integrator = TaylorPicardIntegrator(tolerance)
flow_step=integrator.flow_step(f,vw0,h)
print("flow_step:",flow_step)
```
Note that currently the starting point must be represented as a singleton box. The result is:
```
flow_step: VectorScaledFunctionPatch(
  dom=[{0.0:0.0},{0.0:0.0},{0.0:0.25}],
  rng=[{-0.00036560251:0.097590134},{-0.000080380828:0.014675381}],
  [ { 0.1585*x2^2+0.3493*x2+ 0.000000000000000000003469+/-0.000366},
    { 0.009520*x2^2+0.05600*x2+ 0.0000000000000000004337+/-0.0000804} ] )
```
Note that the time range only goes up to 0.25.
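As an informal sanity check on this enclosure (not part of the Ariadne session above), one can compare a conventional, non-validated integration against the midpoint polynomial printed by flow_step; this sketch assumes scipy is available and copies the polynomial coefficients from the output above.
```
# Non-validated cross-check of the flow_step result from (v, w) = (0, 0).
from scipy.integrate import solve_ivp

a, b, tau, R, Iext = 0.7, 2.0, 12.5, 0.1, 3.5

def fhn(t, y):
    v, w = y
    return [v - v**3/3 - w + R*Iext, (v + a - b*w)/tau]

sol = solve_ivp(fhn, (0.0, 0.25), [0.0, 0.0], rtol=1e-10, atol=1e-12)

t = 0.25   # evaluate the midpoint polynomials printed above, errors dropped
v_poly = 0.1585*t**2 + 0.3493*t
w_poly = 0.009520*t**2 + 0.05600*t
print(sol.y[0][-1], v_poly)   # both approximately 0.097, inside rng above
print(sol.y[1][-1], w_poly)   # both approximately 0.0146
```
Such a check of course carries no guarantee; the point of the validated flow_step is precisely that its interval bounds are rigorous.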
If we consider the flow of all points starting in the box \([0:1]\times[0:1]\) using
```
vw0=BoxDomainType([ [0,1], [0,1] ])
flow_step=integrator.flow_step(f,vw0,h)
print("flow_step:",flow_step)
```
we find
```
flow_step: VectorScaledFunctionPatch(
  dom=[{0.0:1.0},{0.0:1.0},{0.0:0.09375}],
  rng=[{-0.10411338:1.1214595},{-0.0041834586:1.0056835}],
  [ { 0.5000*x0^2*x1*x2^2-0.4200*x1*x2^2-0.9667*x0^2*x2^2
      +0.7167*x0*x2^2+0.1385*x2^2-0.00000000000000003469*x0^2*x1*x2
      -1.000*x1*x2-0.3451*x0^3*x2+0.01758*x0^2*x2+0.9953*x0*x2
      +0.3494*x2+0.000000000000000000003469*x1+0.00000000000000000000001735*x0^3
      +0.
```
Here, an even smaller step-size of 0.09375 was used. To use the TaylorSeriesIntegrator, a maximal order for the power series must also be given:
```
order=8
integrator = TaylorSeriesIntegrator(tolerance,order)
flow_step = integrator.flow_step(f,vw0,h)
print("flow_step:",flow_step)
```
```
flow_step: VectorScaledFunctionPatch(
  dom=[{0.0:1.0},{0.0:1.0},{0.0:0.09375}],
  rng=[{-0.10484287:1.1221622},{-0.0039900682:1.0055644}],
  [ {... +1.0000*x0 +0.0000005001+/-0.00138},
    {... +1.000*x1 -4.337e-19*x0 +2.103e-17+/-0.0000155} ] )
```
To compute more than a single time-step, an Evolver class must be used.

## 9 Extensions

We now briefly mention extensions to the basic function calculus of Ariadne whose implementation is in progress.

### Alternative bases

Work is in progress on using other bases, notably the Chebyshev basis. The underlying representation is the same, \[p(x)=\sum_{\alpha}c_{\alpha}\prod_{i=1}^{n}\phi_{\alpha_{i}}(x_{i}),\] but different algorithms are needed for multiplication and evaluation. The Chebyshev basis polynomials \(T_{k}\) have products \(T_{j}\cdot T_{k}=\frac{1}{2}(T_{|j-k|}+T_{j+k})\), so multiplication is harder than for the monomial basis. However, they satisfy \(\sup_{x\in[-1,+1]}|T_{k}(x)|=1\) and are orthogonal, so tighter bounds on the range can be produced than using the monomial basis. Other classes of approximating function are possible, such as Bernstein bases, Fourier series, neural networks, and convolutions.

### \(C^{1}\) function calculus

A \(C^{1}\) function calculus includes uniform errors on both zeroth and first derivatives. We represent a univariate \(C^{1}\) function over a unit domain, \(f:[-1,+1]\to\mathbb{R}\), by a polynomial \(p\), and provide error bounds \[\sup_{x\in[-1,+1]}|f(x)-p(x)|\leq\delta_{0},\quad\sup_{x\in[-1,+1]}|f^{\prime}(x)-p^{\prime}(x)|\leq\delta_{1},\quad|f(0)-p(0)|\leq\bar{\delta}_{0}.\] In higher dimensions, we provide uniform bounds for each partial derivative. Note that it is useful to provide both a uniform error and a punctual error at the midpoint of the domain. The uniform error \(\delta_{0}\) is bounded by \(\delta_{0}\leq\bar{\delta}_{0}+\delta_{1}\), but can usually be bounded more tightly by direct estimates. The error bounds for addition are simply added; for the derivative error of a multiplication we use \[\|(f\cdot g)^{\prime}-(p\cdot q)^{\prime}\|\leq\|f^{\prime}-p^{\prime}\|\cdot\|q\|+\|p^{\prime}\|\cdot\|g-q\|+\|f^{\prime}-p^{\prime}\|\cdot\|g-q\|+\|f-p\|\cdot\|q^{\prime}\|+\|p\|\cdot\|g^{\prime}-q^{\prime}\|+\|f-p\|\cdot\|g^{\prime}-q^{\prime}\|,\] where \(\|\cdot\|\) is the uniform norm over the domain \([-1,+1]\).

### Multivalued functions

Set-valued functions play an important role in defining nondeterministic dynamical systems. Support for these functions is under development, with compact-valued functions being the most important.
As well as generic classes for set-valued functions, concrete realisations are also provided, such as multivalued functions of the form \[F(x):=\{f(x,p)\mid p\in D\ \wedge\ g(x,p)\in C\}.\] Here, \(p\) parametrises the set \(F(x)\), with \(D\) providing bounds for \(p\) and \(g(x,p)\in C\) a constraint.

### Measurable functions

Since measurable functions are only defined up to equivalence almost everywhere, they cannot be computably evaluated. Instead, a general representation is given in terms of preimages of open sets, with \(f^{-1}(V)=U_{\infty}\) where \(U_{\infty}\) is a _lower-measurable set_ (Collins, 2020): \[U_{\infty}=\bigcap_{i=0}^{\infty}U_{i}\text{ where each }U_{i}\text{ is open, and }\mu(U_{i}\setminus U_{j})\leq 2^{-j}\text{ if }i<j.\] For measurable functions taking values in a separable complete metric space, this definition is equivalent to taking completions of continuous or piecewise-constant functions under the Fan metric \[d(f_{1},f_{2})=\sup\bigl\{\varepsilon\in\mathbb{Q}^{+}\mid\mu\bigl(\{x\mid d(f_{1}(x),f_{2}(x))>\varepsilon\}\bigr)>\varepsilon\bigr\}.\] Similarly, spaces of \(p\)-integrable functions are defined as completions under the \(p\)-norms.

### Weierstrass approximation

By the Weierstrass approximation theorem, any continuous function \(f:D\to\mathbb{R}\) over a bounded domain \(D\subset\mathbb{R}^{n}\) can be uniformly approximated by polynomials. For a univariate function on the unit domain, \(f:[-1,+1]\to\mathbb{R}\), we can approximate by a unit polynomial model \(\hat{f}=p\pm e\). For a rigorous computation, we need to use only information provided by interval evaluation \(\lfloor f\rceil\).

## 10 Conclusions

Ariadne is a software tool implementing a rigorous general-purpose calculus for working with functions over real variables. The calculus includes support for interval arithmetic, linear algebra, automatic differentiation, function models with evaluation and composition, and the solution of algebraic and differential equations. Due to the modular structure, additional function types can be introduced, and different implementations of the algorithms provided. We have shown examples of the rigorous solution of algebraic and differential equations, and seen how the functionality is sufficient to rigorously compute the evolution of nonlinear hybrid dynamic systems. Work in the immediate future will focus on further improvements to the efficiency and accuracy of the tool. Partially implemented future extensions include function models in Chebyshev bases, a calculus for differentiable functions, and Weierstrass approximation of general continuous functions. Work is also in progress on more advanced classes of systems, notably on the evolution of nondeterministic hybrid systems described by differential inclusions (Zivanovic and Collins, 2010), and on model-checking linear temporal logic. Theoretical work is in progress on the evolution of stiff continuous dynamics and the analysis of stochastic systems.

## Acknowledgements

This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 731143.
2301.13780
Commuting Cohesions
Shulman's spatial type theory internalizes the modalities of Lawvere's axiomatic cohesion in a homotopy type theory, enabling many of the constructions from Schreiber's modal approach to differential cohomology to be carried out synthetically. In spatial type theory, every type carries a spatial cohesion among its points and every function is continuous with respect to this. But in mathematical practice, objects may be spatial in more than one way at the same time; for example, a simplicial space has both topological and simplicial structures. In this paper, we put forward a type theory with "commuting focuses" which allows for types to carry multiple kinds of spatial structure. The theory is a relatively painless extension of spatial type theory, and enables us to give a synthetic account of simplicial, differential, equivariant, and other cohesions carried by the same types. We demonstrate the theory by showing that the homotopy type of any differential stack may be computed from a discrete simplicial set derived from the \v{C}ech nerve of any good cover. We also give other examples of commuting cohesions, such as differential equivariant types and supergeometric types, laying the groundwork for a synthetic account of Schreiber and Sati's proper orbifold cohomology.
David Jaz Myers, Mitchell Riley
2023-01-31T17:30:45Z
http://arxiv.org/abs/2301.13780v1
# Commuting Cohesions ###### Abstract Shulman's spatial type theory internalizes the modalities of Lawvere's axiomatic cohesion in a homotopy type theory, enabling many of the constructions from Schreiber's modal approach to differential cohomology to be carried out synthetically. In spatial type theory, every type carries a spatial cohesion among its points and every function is continuous with respect to this. But in mathematical practice, objects may be spatial in more than one way at the same time; a simplicial space has both topological and simplicial structures. Moreover, many of the constructions of Schreiber's differential cohomology and Schreiber and Sati's account of proper equivariant orbifold cohomology require the interplay of multiple sorts of spatiality -- differential, equivariant, and simplicial. In this paper, we put forward a type theory with "commuting focuses" which allows for types to carry multiple kinds of spatial structure. The theory is a relatively painless extension of spatial type theory, and enables us to give a synthetic account of simplicial, differential, equivariant, and other cohesions carried by the same types. We demonstrate the theory by showing that the homotopy type of any differential stack may be computed from a discrete simplicial set derived from the Cech nerve of any good cover. We also give other examples of multiple cohesions, such as differential equivariant types and supergeometric types, laying the groundwork for a synthetic account of Schreiber and Sati's proper orbifold cohomology. ###### Contents * 1 Introduction * 2 A Type Theory with Commuting Focuses * 3 Specializing a Focus * 3.1 Detecting Continuity * 3.2 Detecting Connectivity * 4 Examples of Focuses * 4.1 Real Cohesions * 4.2 Simplicial Cohesion * 4.2.1 The Cech Complex * 4.3 Global Equivariant Cohesion * 4.4 Topological Toposes * 5 Multiple Focuses * 5.1 Commuting Cohesions * 6 Examples with Multiple Focuses * 6.1 Simplicial Real Cohesion * 6.2 Equivariant Differential Cohesion * 6.3 Supergeometric Cohesion ## 1 Introduction Homotopy type theory is a novel foundation of mathematics which centers the notion of identification of mathematical objects. In homotopy type theory, every mathematical object is of a certain _type_ of mathematical object; and, if \(x\) and \(y\) are both objects of type \(X\), then we know by virtue of the definition of the type \(X\) what it means to identify \(x\) with \(y\) as elements of the type \(X\). For example, if \(x\) and \(y\) were real vector spaces (so that \(X\) was the type of real vector spaces), then to identify \(x\) with \(y\) would be to give a \(\mathbb{R}\)-linear isomorphism between them. If \(x\) and \(y\) were smooth manifolds, then to identify them would be to give a diffeomorphism between them. If \(x\) and \(y\) were mere numbers, then to identify them would be simply to prove them equal. And so on, for any type of mathematical object. Homotopy theory, in the abstract, is the study of the identifications of mathematical objects. Homotopy type theory is well suited for synthetic homotopy theory (e.g. [12, 24, 13, 18] and many others), but to apply these theorems in algebraic topology -- where objects are identified by giving continuous deformations of one into the other -- requires a modification to the theory. 
To emphasize the difference here, compare the higher inductive circle \(S^{1}\), which is the type freely generated by a point with a self-identification, with the topological circle \(\mathbb{S}^{1}\) defined as the set of points in the real plane with unit distance from the origin: \[\mathbb{S}^{1}\equiv\{(x,y):\mathbb{R}^{2}\mid x^{2}+y^{2}=1\}.\] The base point of the higher inductive circle \(S^{1}\) has many non-trivial self-identifications, whereas two points of the topological circle may be identified (in a unique way) just when they are equal. The two types are closely related however: the higher inductive circle \(S^{1}\) is the _homotopy type_ of the topological circle \(\mathbb{S}^{1}\) obtained by identifying the points of the latter by continuous deformation. Ordinary homotopy type theory does not have the language to express this relationship, and therefore cannot apply the synthetic theorems concerning the higher inductive circle to topological questions about the topological circle. What is needed is a way to distinguish between types which carry topological structure and discrete types with only homotopical structure. In his _Cantor's 'Lauter Einsen' and Cohesive Toposes_ [27], Lawvere points out that this distinction between natively cohesive and discrete sets is already present in the writings of Cantor as the distinction between the _Menge_ of mathematical practice and the abstract _Kardinalzahlen_ which arise by abstracting away from the relationships among the points of a space. In the paper, and his subsequent _Axiomatic Cohesion_ [28], Lawvere formalizes this opposition between cohesion and discreteness as an adjoint triple relating the topos of _Mengen_ to the topos of _Kardinalzahlen_: \[\mathrm{discrete}\;\dashv\;\mathrm{points}\;\dashv\;\mathrm{codiscrete},\] where the middle functor sends a _Menge_ to its underlying _Kardinalzahl_ of points, and the outer functors equip a _Kardinalzahl_ with the discrete and the codiscrete cohesion respectively. In a cohesive \(\infty\)-topos in the sense of Schreiber's _Differential Cohomology in a Cohesive \(\infty\)-Topos_ (DCCT), the discrete inclusion admits a further left adjoint \(\Pi_{\infty}\) which takes the _shape_ (in the sense of Lurie [31, §7.1.6]) of a stack. All in all, a cohesive \(\infty\)-logos has three adjoint endofunctors \[\int\dashv\flat\dashv\sharp\] where \(\int\) takes the _shape_ or homotopy type of a higher space considered as a discrete space, \(\flat\) takes its underlying homotopy type of discrete points, and \(\sharp\) takes the underlying homotopy type of points but retopologized codiscretely. In _Brouwer's Fixed Point Theorem in Real-Cohesive Homotopy Type Theory_ [47] (henceforth _Real Cohesion_), Shulman brings this distinction between cohesive _Mengen_ and discrete _Kardinalzahlen_ to homotopy type theory via his _spatial type theory_. Spatial type theory internalizes the \(\flat\) and \(\sharp\) modalities from Schreiber's DCCT which relate discrete (but homotopically interesting) types like \(S^{1}\) and spatial types like \(\mathbb{S}^{1}\). Spatial type theory also improves upon a previous axiomatization of these modalities in HoTT due to Schreiber and Shulman [45], by replacing axioms with judgemental rules.
_Cohesive homotopy type theory_ is spatial type theory with an additional axiom that implies the local contractibility of the sorts of spaces in question; from this axiom the further left adjoint \(\int\) to \(\flat\) may be defined. Homotopy type theory may be interpreted into any \(\infty\)-topos [26, 46], so that a type in homotopy type theory becomes a sheaf of homotopy types externally. In particular, if we interpret the topological circle \(\mathbb{S}^{1}\) defined as a subset of \(\mathbb{R}^{2}\) into the \(\infty\)-topos of sheaves on the site of continuous manifolds, it becomes the sheaf (of sets) represented by the external continuous manifold \(\mathbb{S}^{1}\), while the higher inductive circle \(S^{1}\) gets interpreted as the constant sheaf at the homotopy type of the circle. By the Yoneda lemma, then, any function definable on \(\mathbb{S}^{1}\) is necessarily continuous. Since functions \(f:X\to Y\) in HoTT are defined simply by specifying an element \(f(x):Y\) in the context of a free variable \(x:X\), variation in a free variable confers a liminal sort of continuity: such an expression could be interpreted in a spatial \(\infty\)-topos, in which case it necessarily defines a continuous function. Shulman's spatial type theory works by introducing the notion of a _crisp free variable_ to get around this liminal continuity. An expression in spatial type theory depends on its crisp free variables _discontinuously_. The modalities \(\flat\) and \(\sharp\) of spatial type theory represent crisp variables universally on the left and right respectively. In this way, \(\flat X\) is the discrete retopologization of the spatial type \(X\), while \(\sharp X\) is its codiscrete retopologization -- a map out of \(\flat X\) is a discontinuous map out of \(X\), while a map into \(\sharp X\) is a discontinuous map into \(X\). Spatial type theory is intended to be interpreted into _local_ geometric morphisms \(\gamma:\mathcal{E}\to\mathcal{S}\) of \(\infty\)-toposes, those for which \(\gamma_{*}\) has a fully faithful right adjoint \(\gamma^{!}\) which gives a geometric morphism \(f:\mathcal{S}\to\mathcal{E}\) (with \(f_{*}:=\gamma^{!}\)) adjoint to \(\gamma\) which acts as the _focal point_ of \(\mathcal{E}\) as a space over \(\mathcal{S}\). The adjoint modalities \(\flat\) and \(\sharp\) are interpreted as the adjoint idempotent (co)monad pair \(\gamma^{*}\gamma_{*}\) and \(\gamma^{!}\gamma_{*}\) respectively. A crisp free variable is then one which varies over an object of the focal point \(\mathcal{S}\): a free variable is crisp when it is _in focus_. There is not only one way for mathematical objects to be spatial. Spaces may cohere with smooth, analytic, algebraic, condensed, and simplicial or cubical combinatorial structures -- and more. Each of these cases would give rise to a particular spatial type theory as the internal language of an appropriate local \(\infty\)-topos. But there are many cases arising in practice where we need not just one axis of spatiality, but many at once. For example, it is a classical theorem that the homotopy type of a manifold may be computed as the realization of a (topologically discrete) simplicial set associated to the Cech nerve of a good open cover of the manifold. This theorem relates a simplicial set to a continuous space, via an intermediary simplicial space which is both continuous and simplicial at the same time -- the Cech nerve of the cover.
But in spatial homotopy type theory there is only one notion of crisp variable, and therefore just one sort of spatiality. For simplicial types, the discrete reflection is the 0-skeleton \(\operatorname{sk}_{0}\), while the codiscrete reflection is the 0-coskeleton. For simplicial spaces, we then have both the (topologically) discrete \(\flat\) and codiscrete \(\sharp\), as well as the simplicially 0-skeletal \(\operatorname{sk}_{0}\) and 0-coskeletal \(\operatorname{csk}_{0}\). Interestingly, the Cech nerve itself arises from these modalities: the Cech nerve of a map \(f:X\to Y\) between 0-skeletal types (that is, continuous or differential stacks with no simplicial structure) is its \(\operatorname{csk}_{0}\)-image, as we will see later in Proposition 4.2.13. A simplicial space has both a shape \(\int\!X\) and a realization (or colimit) \(\operatorname{re}X\); the first is a topologically discrete simplicial type, while the latter is a 0-skeletal but spatial type. With all these modalities, we can prove the theorem about good covers described above as Theorem 6.1.5. Another use case for multiple axes of spatiality is Sati and Schreiber's _Proper orbifold cohomology_ [43], where orbifolds are understood as having both differential structure (as differential stacks) and global equivariant structure (concerning their singularities). In order to get the correct generalized cohomology of orbifolds without relying on ad-hoc constructions based on a global quotient presentation of the orbifold, Sati and Schreiber work with the \(\infty\)-topos of global equivariant differential stacks, which is local both over the global equivariant topos and the topos of differential stacks. Here the differential modalities \(\int\), \(\flat\) and \(\sharp\) are augmented with the modalities of equivariant cohesion [41]: \(\vee\), \(\cup\), and \(\mathcal{V}\), which take the strict quotient, the underlying space as an invariant type, and the Yoneda embedding of the underlying space of a global equivariant type respectively. Again the modalities play a central role in the theory, with the ordinary Borel cohomology of a global quotient orbifold \(X\mathbin{/\!\!/}G\) being the ordinary cohomology of \(\int\!\vee(X\mathbin{/\!\!/}G)\), while the proper equivariant Bredon cohomology of \(X\mathbin{/\!\!/}G\) is the cohomology of \(\mathcal{V}(X\mathbin{/\!\!/}G)\), twisted by the map to \(\mathcal{V}\mathbf{B}G\) classifying the quotient map \(\mathcal{V}X\to\mathcal{V}(X\mathbin{/\!\!/}G)\). In these cases, modalities that lie in the same position in their adjoint chain commute with each other, so, for example, \(\flat\) commutes with \(\mathsf{sk}_{0}\) and \(\sharp\) commutes with \(\mathsf{csk}_{0}\). However, there are cases where these modalities are nested, with one spatiality being a refinement of another. This occurs for example in supergeometry as formulated by Schreiber in [44] with the modalities of _solid cohesion_. The supergeometric focus is given by the even comodality \(\rightrightarrows\) (which takes the even part of a superspace) and the rheonomic modality \(\mathrm{Rh}\), which is given by localizing at the _odd line_ \(\mathbb{R}^{0|1}\). In this paper, we put forward a modification of spatial type theory to allow for multiple axes of spatiality.
Our theory works by allowing for a meet semi-lattice of _focuses_ \(\bigtriangledown,\bigtriangleup,\ldots\), each with a separate notion of \(\bigtriangledown\)-crisp variable and a pair of adjoint (co)modalities \(\flat_{\bigtriangledown}\) and \(\sharp_{\bigtriangledown}\). Like spatial type theory, our custom type theory gets us to the coalface of synthetic homotopy theory very efficiently while staying simple enough to be used in an informal style. The presence of multiple notions of crispness forces a more complex context structure than spatial type theory's separation of the context into a crisp zone and a cohesive zone. Similar to many other modal type theories [30, 22, 21, 9, 38], we annotate each variable with modal information, here, the focuses for which that variable is crisp. The typing rules for the modalities of each focus then work essentially independently. The exception is \(\flat\)-elimination, which is upgraded to allow the crispness of the term being eliminated to be maintained in the variable bound by the induction (a 'crisp' induction principle). Ours is far from the only extension of type theory with multiple modalities, but as we discuss in more detail later, no existing theory has the combination of features that we are looking for: dependent types (ruling out [30]) that may depend on modal variables (ruling out [9]), multiple commuting comodalities (ruling out [47, 11, 38]) each with a right-adjoint modality (ruling out [33]) and no further left-adjoints (ruling out [22, 21] and [16, §14]). In addition to allowing us to formalize the theorem about Cech nerves of open covers as Theorem 6.1.5, our type theory will be able to handle the equivariant differential cohesion used by Sati and Schreiber in their _Proper orbifold cohomology_ [43], as well as the nested focuses of Schreiber's supergeometric _solid_ cohesion [44]. This extends the work of Cherubini [17] and the first author [34, 35, 36] in giving synthetic accounts of the constructions of Schreiber [44] and Sati-Schreiber [43]. Positing an additional focus does not disturb arguments made using existing focuses, so we also expect our theory to be helpful when dipping into simplicial arguments in the course of other reasoning by adding a simplicial focus and making use of the new modalities. The problem of defining simplicial types in ordinary Book HoTT remains open, and there are now a number of different approaches to constructing simplicial types which each use some extension to the underlying type theory. In this paper, we will axiomatize the 1-simplex \(\Delta[1]\) as a linear order with distinct top and bottom and use the cohesive modalities to define the Cech nerve of a map and the realization or colimit of a simplicial type. We believe our approach here would pair nicely with other approaches to simplicial types for the purposes of synthetic \((\infty,1)\)-category theory such as [42, 14, 49, 48], where the \(\mathsf{sk}_{0}\) modality would take the core of a Rezk type. Outline of the present paper. After presenting our type theory in §2, we will look at ways to specialize the spatiality of a focus in §3. In particular, we will observe that in many cases there is a small class of test spaces \(G_{i}\) so that codiscreteness (that is, being \(\sharp\)-modal) is detected by uniquely lifting against the \(\flat\)-counits \(\flat G_{i}\to G_{i}\); such \(G_{i}\) will be said to _detect continuity_. Externally, the \(G_{i}\) could be any family which generates the logos under colimits.
In practice, the \(G_{i}\) will be test spaces which minimally carry the appropriate spatiality: in the simplicial case, the simplices \(\Delta[n]\); in the real-cohesive case, the Euclidean spaces \(\mathbb{R}^{n}\); for condensed sets, the profinite sets, etc. In §3, we will also meet a family of axioms which hold for spatialities that are _locally contractible_. For example, continuous manifolds which are built from Euclidean spaces by colimits are locally contractible, while condensed sets which are built from profinite sets by colimits need not be locally contractible. In general, a space is locally contractible when it has a constant _shape_ in the sense of Lurie [31, §7.1.6]. We may define a space \(C\) to be contractible when any map \(C\to S\) to a discrete space \(S\) is constant. If the converse holds -- a space \(S\) is discrete (\(\flat\)-modal) if every map \(C\to S\) is constant -- then we say that _\(C\) detects the connectivity_ of spaces. For example, \(\mathbb{R}\) detects the connectivity of continuous \(\infty\)-groupoids, and \(\Delta[1]\) detects the connectivity of simplicial \(\infty\)-groupoids. If there is a space (or family of spaces) which detects connectivity, then the corresponding local geometric morphism \(p\) is furthermore strongly locally contractible, in that \(p^{*}\) has a left adjoint \(p_{!}\) which takes the (constant value of the) shape of a space. In the case that \(p\) is both local and strongly locally contractible, we say that \(p\) is _cohesive_, following Lawvere [28], Schreiber [44], and Shulman [47]. Nullifying at the family of spaces which detect connectivity gives a modality \(\int\) which is left adjoint to \(\flat\); it may be thought of as taking the homotopy type of a space. In §4 we will give example axioms for specializing single focuses. We will review Shulman's axioms for _real cohesion_, where the Euclidean spaces \(\mathbb{R}^{n}\) detect continuity and connectivity. We will then see simplicial cohesion in some detail, where the simplices \(\Delta[n]\) detect continuity and connectivity. We give our types simplicial structure by axiomatizing the 1-simplex \(\Delta[1]\) as a total order with distinct top and bottom elements, following Joyal's characterization of simplicial sets as the classifying topos for such orders [50]. We use the \(\mathsf{csk}_{0}\) modality to construct Cech nerves of maps. Then we will describe the global equivariant cohesion first observed by Rezk [41] and used by Sati and Schreiber in [43]. Finally, we will briefly describe axioms for topological focuses such as Johnstone's topological topos of sequential spaces [25] and the condensed/pyknotic topos of Clausen-Scholze [19] and Barwick-Haine [10]. After surveying some of the different sorts of spatiality which types might carry, we turn our attention to multiple focuses in §5. In Definition 5.1.3, we define what it means for two cohesions to be _orthogonal_: when the family which detects the connectivity of one is discrete with respect to the other, and vice versa. We then prove a few lemmas concerning orthogonal cohesions, in particular concerning when it is possible to commute the various modalities past each other. Finally, we give examples of multiple focuses in §6. We begin with simplicial real cohesion, which has both a simplicial focus and a real-cohesive focus which are orthogonal.
We prove, in Theorem 6.1.5, that the shape of any 0-skeletal type \(M\) may be computed as the realization of a topologically discrete simplicial type constructed from the Cech nerve of any _good_ cover \(U\) of \(M\) -- one for which finite intersections of the \(U_{i}\) are contractible in the sense of being \(\int\)-connected. Next, we combine equivariant cohesion with differential cohesion to give the series of modalities used in Sati and Schreiber's _Proper orbifold cohomology_[43]. Happily, no extra axioms are needed to show that the two cohesions are orthogonal; we prove this in Lemma 6.2.1. Finally, we describe the supergeometric or "solid" cohesion of Schreiber's _Differential Cohomology in a Cohesive \(\infty\)-topos_. This extends real cohesion with the _odd line_\(\mathbb{R}^{0|1}\), where the "discrete" comodality of the supergeometric focus takes the even part of a supergeometric space, and the "codiscrete" modality takes a _rheonomic_ reflection of the space, one whose super structure is uniquely determined by its even structure. Unlike our other examples where the focuses involved are orthogonal, here the differential focus is included in the supergeometric focus: any discrete space is also purely even, as is any codiscrete space. **Acknowledgements.** We would like to thank Urs Schreiber for his careful reading and extensive comments during the drafting process of the paper. And we would like to thank Hisham Sati for his feedback and words of encouragement. The authors are grateful for the support of Tamkeen under the NYU Abu Dhabi Research Institute grant CG008. ## 2 A Type Theory with Commuting Focuses The fundamental duality in higher topos theory is between the \(\infty\)-topos -- a general sort of space -- and the \(\infty\)-logos -- the category of sheaves of homotopy types on such a space [7]. This duality is perfect: a map of \(\infty\)-toposes \(\mathcal{E}\to\mathcal{F}\) is defined to be a lex accessible functor \(\operatorname{Sh}_{\infty}(\mathcal{F})\to\operatorname{Sh}_{\infty}( \mathcal{E})\) between their corresponding \(\infty\)-logoses in the opposite direction. This duality between toposes and logoses gives a nice perspective on the distinction between the _petite_ toposes, which are used as generalized spaces in practice, and the _gros_ toposes -- or rather, their dual logoses -- which are used as categories _of_ spaces, rather than as spaces in their own right. Quite opposite to their names, the petite toposes are "big" spaces, while the gros toposes are "small" spaces; it is their dual logoses which are correctly described by the adjectives "petite" and "gros". Since the logos is the category of sheaves on the topos, or equivalently the category of etale maps into the topos, the "larger" the topos the more constraining the etale condition becomes. For that reason, the gros toposes have qualitatively "smaller" categories of sheaves. On the other hand, the more general the etale spaces may be, the "smaller" the base topos must be. In general, the "biggest" logoses, the logoses _of_ spaces, must correspond to the "smallest" toposes: those toposes which are infinitesimal patches around a focal point. This point of view is emphasized in Chapter 4 of DCCT [44]. We may therefore, as a first pass, identify logoses of spaces as dual to those toposes \(\mathcal{E}\) which are _local_ over a focal point \(\mathcal{F}\). 
A geometric morphism \(p:\mathcal{E}\to\mathcal{F}\) is local when it admits a left adjoint right inverse \(f:\mathcal{F}\to\mathcal{E}\) in the \((\infty,2)\)-category of toposes, which we call the _focal point_ of \(p\). If \(\mathcal{E}\) is a topological space (that is, if its corresponding logos \(\operatorname{Sh}_{\infty}(\mathcal{E})\) is the category of sheaves \(\operatorname{Sh}_{\infty}(X)\) on a sober topological space \(X\)), then the terminal geometric morphism \(\gamma:\mathcal{E}\to\mathcal{S}\) is local just when \(X\) has a focal point: a point \(f\in X\) whose only open neighborhood is the whole of \(X\). In particular, the prime spectrum of a ring \(A\) is local if and only if \(A\) is a local ring; in this case, the focal point is the unique maximal ideal. On the logos side, this means that the direct image \(p_{*}\) admits a fully faithful right adjoint \(p^{!}\) (which is \(f_{*}\)). All together, this gives an adjoint triple between the corresponding logoses: \[p^{*}\;\dashv\;p_{*}\;\dashv\;p^{!}\,:\quad\operatorname{Sh}_{\infty}(\mathcal{E})\;\rightleftarrows\;\operatorname{Sh}_{\infty}(\mathcal{F}),\] with \(p^{*}\) and \(p^{!}\) including the sheaves on the focal point discretely and codiscretely respectively. (In the global equivariant case, for example, discrete means being a constant presheaf on the global orbit category, and codiscrete means being a presheaf representable by an ordinary \(\infty\)-groupoid.) Shulman [46] has shown that every \(\infty\)-logos may be presented by a model of homotopy type theory, allowing reasoning conducted in homotopy type theory to be interpreted in any \(\infty\)-logos. In this sense, homotopy type theory is to \(\infty\)-logoses as set theory is to the 1-logoses of Grothendieck, Lawvere, and Tierney. In _Brouwer's Fixed Point Theorem in Real-Cohesive Homotopy Type Theory_ [47], Shulman also put forward a _spatial type theory_ which may (conjecturally) be interpreted into any local geometric morphism. Spatial type theory is characterized by including an adjoint pair \(\flat\dashv\sharp\) of a lex comodality \(\flat\) and a lex modality \(\sharp\). These are to be interpreted as \(p^{*}p_{*}\) and \(p^{!}p_{*}\) respectively. In spatial type theory, any type has a spatial structure.
The existence of this spatial structure is witnessed by the two opposite ways that we can get rid of it: either we can remove all the spatial relationships between points, using the "discrete" \(\flat\) comodality, or we can trivialize the spatial relations using the "codiscrete" \(\sharp\) modality. We emphasize that this spatial structure is distinct from the _homotopical_ structure that all types have by virtue of the identifications between their elements. For example, the topological circle \[\mathbb{S}^{1}:=\{(x,y):\mathbb{R}^{2}\mid x^{2}+y^{2}=1\}\] has a spatial structure as a subset of the Euclidean plane (as a sheaf on the site of continuous manifolds, for example), but is a homotopy 0-type (or "set") without any non-trivial identifications between its points; in particular \(\Omega\mathbb{S}^{1}=*\). The homotopy type \(S^{1}\) of the circle, however, is spatially discrete but has many non-trivial identifications of its points: in particular \(\Omega S^{1}=\mathbb{Z}\). There is not, however, only one way to be spatial in mathematics. For example, a simplicial topological space has both a simplicial structure and a topological structure. This can be witnessed at the level of toposes as well. If \(p:\mathcal{E}\to\mathcal{F}\) admits a focal point \(f:\mathcal{F}\to\mathcal{E}\), then \(f^{\Delta^{\text{op}}}:\mathcal{F}^{\Delta^{\text{op}}}\to\mathcal{E}^{\Delta^{\text{op}}}\) is also a focal point of \(p^{\Delta^{\text{op}}}:\mathcal{E}^{\Delta^{\text{op}}}\to\mathcal{F}^{\Delta^{\text{op}}}\), where the logos \(\operatorname{Sh}_{\infty}(\mathcal{E}^{\Delta^{\text{op}}}):=(\operatorname{Sh}_{\infty}(\mathcal{E}))^{\Delta^{\text{op}}}\) is the category of simplicial objects in the logos \(\operatorname{Sh}_{\infty}(\mathcal{E})\). But there is another local geometric morphism \(\gamma:\mathcal{E}^{\Delta^{\text{op}}}\to\mathcal{E}\), where \(\gamma_{*}\) sends a simplicial sheaf \(X_{\bullet}\) to \(X_{0}\) and \(\gamma^{!}\) is given by the 0-_coskeleton_ \((\operatorname{csk}_{0}S)_{n}:=S^{n+1}\). These two different axes of spatiality on the objects of \(\operatorname{Sh}_{\infty}(\mathcal{E})^{\Delta^{\text{op}}}\) commute, in that the corresponding square of adjoint triples commutes. In particular, we have that \(p^{*\Delta^{\text{op}}}p_{*}^{\Delta^{\text{op}}}\) and \(\gamma^{*}\gamma_{*}\) commute as endofunctors of \(\operatorname{Sh}_{\infty}(\mathcal{E})^{\Delta^{\text{op}}}\). The former discretely retopologizes a simplicial space, while the latter includes the space of 0-simplices as a 0-skeletal simplicial space. Each focus gives an axis along which the objects of the top logos \(\operatorname{Sh}_{\infty}(\mathcal{E})^{\Delta^{\text{op}}}\) may carry spatial structure. When working in, say, simplicial differential spaces, we would like to have access to both the \(\int\dashv\flat\dashv\sharp\) of real cohesion and the \(\operatorname{re}\dashv\operatorname{sk}_{0}\dashv\operatorname{csk}_{0}\) of simplicial cohesion. Shulman's spatial type theory offers no way to do this: the \(\flat\) and \(\operatorname{sk}_{0}\) comonads have incompatible claims on the notion of 'crisp' variable. The solution is to allow a separate notion of crispness for each focus we are interested in. In this section, we will describe the rules for a type theory with commuting focuses, generalizing ordinary spatial type theory in the case of a single non-trivial focus. We will then describe axioms which make these into commuting _cohesions_, in the sense of cohesive type theory.
To this end, we will fix a commutative idempotent monoid \(\mathsf{Focus}\) of focuses; we will write the product of the focuses \(\clubsuit\) and \(\spadesuit\) as \(\clubsuit\spadesuit\), so that commutativity and idempotence say that \(\clubsuit\spadesuit=\spadesuit\clubsuit\) and \(\clubsuit\clubsuit=\clubsuit\). We order the focuses by setting \(\clubsuit\leq\spadesuit\) just when \(\clubsuit\spadesuit=\clubsuit\). With respect to this ordering, the product becomes the meet; we may therefore also think of \(\mathsf{Focus}\) as a meet semi-lattice. We will write the identity element of \(\mathsf{Focus}\) as \(\top\), and note that it is the top focus with respect to the order. For most of our purposes in this paper, our commutative idempotent monoid \(\mathsf{Focus}\) of focuses will be freely generated by a finite set of basic focuses. Explicitly, we may take \(\mathsf{Focus}=\mathcal{P}_{f}(\mathsf{BasicFocuses})^{\mathsf{op}}\) to be the set of finite subsets of the set of basic focuses with union as the product, and therefore the opposite of the ordering of subsets by inclusion. All variables in the context will be annotated with the focus that they are in: \[x:_{\clubsuit}X\vdash t:T\] In general, we will abbreviate the context entry \(x:_{\top}X\) as \(x:X\). In the case that \(\mathsf{Focus}=\{\clubsuit\leq\top\}\) is freely generated by one basic focus, we recover the split context used in Shulman's spatial type theory, where our context \(x:_{\clubsuit}X,\,y:_{\top}Y\ \mathsf{ctx}\) corresponds to Shulman's context \(x::X\mid y:Y\ \mathsf{ctx}\). To describe the typing rules, we will need a couple of auxiliary operations on contexts. The first operation \(\clubsuit\Gamma\) adds a specific focus \(\clubsuit\) to the annotation on every variable in a context. So: \[\clubsuit(\cdot):\equiv(\cdot)\qquad\qquad\clubsuit(\Gamma,\,x:_{\spadesuit}X):\equiv\clubsuit\Gamma,\,x:_{\clubsuit\spadesuit}X\] The second operation \(\clubsuit\setminus\Gamma\) is the _clearing_ operation, which deletes from \(\Gamma\) every variable which is not \(\clubsuit\)-crisp, that is, every variable whose annotation is not below \(\clubsuit\).

**Remark 2.0.2**.: Rather than annotating variables, it may be tempting to try a floating context separator \(|_{\bigtriangledown}\) for each focus, so that the variables to the left of \(|_{\bigtriangledown}\) are precisely the \(\bigtriangledown\)-crisp ones. Such contexts are not sufficiently general; specifically, the \(\flat\)-elimination rule will let us produce a context containing a variable which is crisp for one focus but not for a second, followed by a variable which is crisp for the second but not the first, and no placement of separators in a linear context can record this.
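To fix intuitions, here is a small worked example of the context operations above (the context and focuses are hypothetical): over two basic focuses \(\bigtriangledown\) and \(\bigtriangleup\), if \(\Gamma=(x:_{\bigtriangledown}X,\;y:_{\top}Y)\), then \[\bigtriangleup\Gamma\;\equiv\;(x:_{\bigtriangleup\bigtriangledown}X,\;y:_{\bigtriangleup}Y)\qquad\text{and}\qquad\bigtriangledown\setminus\Gamma\;\equiv\;(x:_{\bigtriangledown}X),\] since \(\bigtriangleup\top=\bigtriangleup\), while \(y\) is deleted by the clearing because \(\top\not\leq\bigtriangledown\).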
Rules for \(\flat\). In prose, the rules for \(\flat_{\bigtriangledown}\) read as follows:

* \(\flat\)-form: If \(A\) is a type in the cleared context \(\bigtriangledown\setminus\Gamma\), then \(\flat_{\bigtriangledown}A\) is a type in \(\Gamma\).
* \(\flat\)-intro: If \(M\) is a \(\bigtriangledown\)-crisp element of \(A\), then \(M^{\flat_{\bigtriangledown}}\) is an element of \(\flat_{\bigtriangledown}A\).
* \(\flat\)-elim: Suppose that \(C\) is a type depending on \(x:\flat_{\bigtriangledown}A\), that \(M\) is a \(\bigtriangleup\)-crisp element of \(\flat_{\bigtriangledown}A\), and that, assuming a fresh \(\bigtriangleup\bigtriangledown\)-crisp variable \(u\) of type \(A\), we have defined an element of \(C[u^{\flat_{\bigtriangledown}}/x]\). We then obtain an element of \(C[M/x]\). We write this element as \((\mathsf{let}\;u^{\flat_{\bigtriangledown}}:=M\,\mathrm{in}\,N):C[M/x]\), where \(N:C[u^{\flat_{\bigtriangledown}}/x]\) is the element we defined assuming that \(M\) was of the form \(u^{\flat_{\bigtriangledown}}\). The equation \(\bigtriangledown\setminus(\bigtriangleup\setminus\Gamma)\equiv(\bigtriangleup\bigtriangledown)\setminus\Gamma\) is necessary here to know that the type \(\flat_{\bigtriangledown}A\) is well-formed in the context \((\bigtriangleup\bigtriangledown)\setminus\Gamma\).
* \(\flat\)-beta: If \(M\) actually is of the form \(K^{\flat_{\bigtriangledown}}\) for suitably crisp \(K\), then we simply substitute \(K\) in for \(u\). The term \(K\) must be \(\bigtriangleup\bigtriangledown\)-crisp for both the \(\flat\)-intro and \(\flat\)-elim to have been applied, and so its substitution for the \(\bigtriangleup\bigtriangledown\)-crisp variable \(u\) is well-formed.

**Remark 2.0.3**.: These rules are stronger than the ones used by Shulman for spatial type theory, even in the case of a single focus. We have built in a \(\bigtriangleup\)-_crisp induction principle_ for \(\flat_{\bigtriangledown}\), for any two focuses \(\bigtriangledown\) and \(\bigtriangleup\): if the term we are inducting on is already \(\bigtriangleup\)-crisp, then we may maintain that crispness in the new assumption \(u\). If we have a single non-trivial focus \(\bigtriangledown\), as is the case in Shulman's type theory, then taking \(\bigtriangleup=\bigtriangledown\) in the above expression yields the 'crisp \(\flat\) induction' principle of [47, Lemma 5.1]. This induction principle is proven there by taking a detour through \(\sharp\), but here we choose to build it into the rule directly. Our elimination rule is in fact also admissible from the less general one that requires the freshly bound variable to only be \(\bigtriangledown\)-crisp, but we choose the more general rule for convenience.

Rules for \(\sharp\). The rules for \(\sharp\) are a little simpler, and in the case of a single focus specialize exactly to the rules of spatial type theory. \[\sharp\text{-form}\;\frac{\bigtriangledown\Gamma\vdash A\;\mathrm{type}}{\Gamma\vdash\sharp_{\bigtriangledown}A\;\mathrm{type}}\qquad\sharp\text{-intro}\;\frac{\bigtriangledown\Gamma\vdash M:A}{\Gamma\vdash M^{\sharp_{\bigtriangledown}}:\sharp_{\bigtriangledown}A}\qquad\sharp\text{-elim}\;\frac{\bigtriangledown\setminus\Gamma\vdash N:\sharp_{\bigtriangledown}A}{\Gamma\vdash N_{\sharp_{\bigtriangledown}}:A}\] \[\sharp\text{-beta}\;\frac{\bigtriangledown\setminus\Gamma\vdash M:A}{\Gamma\vdash(M^{\sharp_{\bigtriangledown}})_{\sharp_{\bigtriangledown}}\equiv M:A}\qquad\sharp\text{-eta}\;\frac{\Gamma\vdash N:\sharp_{\bigtriangledown}A}{\Gamma\vdash N\equiv(N_{\sharp_{\bigtriangledown}})^{\sharp_{\bigtriangledown}}:\sharp_{\bigtriangledown}A}\] In prose, these rules read as follows:

* \(\sharp\)-form: When forming the type \(\sharp_{\bigtriangledown}A\), all variables may be used in \(A\) as though they are \(\bigtriangledown\)-crisp.
* \(\sharp\)-intro: When forming a term \(M^{\sharp_{\bigtriangledown}}:\sharp_{\bigtriangledown}A\), all variables may be used in \(M\) as though they are \(\bigtriangledown\)-crisp.
* \(\sharp\)-elim: If \(N\) is a \(\bigtriangledown\)-crisp element of \(\sharp_{\bigtriangledown}A\), we may extract an element \(N_{\sharp_{\bigtriangledown}}:A\).
* \(\sharp\)-beta: If \(M\) is a \(\bigtriangledown\)-crisp element of \(A\), then \((M^{\sharp_{\bigtriangledown}})_{\sharp_{\bigtriangledown}}\equiv M\).
* \(\sharp\)-eta: Any term \(N:\sharp_{\bigtriangledown}A\) is definitionally equal to \((N_{\sharp_{\bigtriangledown}})^{\sharp_{\bigtriangledown}}\).

As in ordinary spatial type theory, the term \(N_{\sharp_{\bigtriangledown}}\) may not be well-typed on its own, because it may use non-crisp variables of the context \(\Gamma\). It is however well-typed underneath the outer \((-)^{\sharp_{\bigtriangledown}}\), since the introduction rule allows us to use any variable as though it is \(\bigtriangledown\)-crisp.

**Remark 2.0.4**.: Perhaps surprisingly, the shape of the \(\sharp\)-form and \(\sharp\)-intro rules is what builds the left-exactness of \(\flat\) into the theory. This is the case even in ordinary spatial type theory, and is not a feature that only appears in this multi-focus setting. The trick is that the promotion operation \(\bigtriangledown\Gamma\) distributes over the context extensions in \(\Gamma\) rather than being a 'stuck' context former applied to \(\Gamma\) as a whole. Specifically, when using \(\sharp\) to derive crisp \(\mathsf{Id}\)-induction, one applies \(\sharp\) to a type \[x::A,y::A,p::(x=y)\mid\cdot\vdash C\;\mathrm{type},\] yielding a type \[\cdot\mid x:A,y:A,p:(x=y)\vdash\sharp C\;\mathrm{type}.\] Internalized, the former context represents the type \((x:\flat A)\times(y:\flat A)\times\flat(x_{\flat}=y_{\flat})\), but \(\sharp\)-form treats it as identical to \(\flat((x:A)\times(y:A)\times(x=y))\) when applying adjointness.

**Remark 2.0.5**.: In most cases of interest, our commutative idempotent monoid of focuses is freely generated by a finite set of basic focuses. In this situation, it suffices to provide the \(\flat\) and \(\sharp\) only for the basic focuses, as the remainder can be derived. The top focus \(\top\) (which semantically corresponds to the entire topos we are working in) has both \(\flat_{\top}A\) and \(\sharp_{\top}A\) canonically equivalent to \(A\). And given focuses \(\blacktriangledown\) and \(\blacktriangle\), it is quickly proven that \(\flat_{\blacktriangledown\blacktriangle}\) is equivalent to \(\flat_{\blacktriangledown}\flat_{\blacktriangle}\), and similarly for the \(\sharp\)s.

Related Type Theories. Besides the original spatial type theory, there are several other dependent modal type theories that come close to our needs. The 'adjoint type theory' perspective [40, 29, 30] was the guiding principle that led to the original spatial type theory of [47]. Indeed, when instantiated with an appropriate mode theory, the framework of [30] reproduces a simply typed version of the theory presented here. The specific mode theory to be used is a cartesian monoid with a system of commuting, product-preserving endomorphisms. A dependently typed variant of adjoint type theory is not yet forthcoming, but we expect that our dependent type theory would be an instance of it. A separate line of work on modal type theories is Multimodal Type Theory [22, 23]. In MTT, every mode morphism \(\mu\) is reified in the type theory as a _positive_ type former, and each modality \(\mathsf{mod}_{\mu}\) must have a left-adjoint-like context operator written \(\blacksquare_{\mu}\). If we do not assume the existence of \(\int\), then we are only able to describe \(\sharp\) in this way.
Later work [21] describes a multimodal type theory where each mode morphism becomes a (more convenient) _negative_ type former. The semantic requirements are even stronger: the functor corresponding to the modality must be a dependent right adjoint [11], whose left adjoint is itself a parametric right adjoint. This is too strict even to capture \(\sharp\) without additional assumptions. In [16, §14], an alternative 'cohesive type theory' is presented, using a combination of the above two styles of modal operator. Rather than working with the endofunctors on the topos of interest, the cohesive setting is kept as an adjoint quadruple \(\Pi_{0}\dashv\operatorname{Disc}\dashv\Gamma\dashv\operatorname{CoDisc}\). A positive type former is used for \(\operatorname{Disc}\) and negative type formers for \(\Gamma\) and \(\operatorname{CoDisc}\), due to the requirements on having one or two left-adjoints. It is likely that this could be extended to commuting cohesions, but the interactions of the various context \(\blacksquare_{-}\) operations for the left-adjoints may be difficult to describe. The type theory with context structure most formally similar to ours is ParamDTT [38, 37], where variables in the context annotated with a modality indicate a variable under that modality directly, not its left adjoint. It is from this work that we take the left-division notation \(-\setminus\Gamma\) for the clearing operation on contexts, which itself has appeared in other guises, for example [39, 2, 3]. ParamDTT uses a fixed 'mode theory' with three modalities \(\{\mathbb{I},\mathsf{id},\sharp\}\) equipped with a particular composition law, but it is clear that the rules for contexts and basic type formers would work equally well for other sets of modalities. A version of the cohesive \(\flat\) can be derived from the 'modal \(\Sigma\)-type', fixing the second component to be the unit type. There does not appear to be a way to derive the ordinary (negative) rules for \(\sharp\) in ParamDTT.

## 3 Specializing a Focus

A focus gives a specific axis along which a type may be spatial. In simplicial cohesion, we have a simplicial focus \(\mathsf{sk}_{0}\dashv\operatorname{csk}_{0}\) and in differential cohesion a differential focus \(\flat\dashv\sharp\). But what makes the simplicial focus _simplicial_ and the differential focus _differential_? In this section, we will investigate two axiom schemes which can determine the peculiarities of a given focus. In the next section, we will see these axioms in use. First, we note that with a single focus, type theory with commuting focuses is the same theory as Shulman's _spatial type theory_ in [47].

**Theorem 3.0.1**.: _Any of the lemmas and theorems proven in §§3, 4, 5, and 6 of Real Cohesion [47] concerning \(\flat\) and \(\sharp\) and using no axioms are true also of \(\flat_{\blacktriangledown}\) and \(\sharp_{\blacktriangledown}\) for any fixed focus \(\blacktriangledown\). Theorems which do involve the use of axioms are also valid, so long as the crispness used in those axioms is interpreted as \(\blacktriangledown\)-crispness._

Proof.: The rules for \(\flat_{\blacktriangledown}\) and \(\sharp_{\blacktriangledown}\) specialize to Shulman's rules, and therefore his proofs carry through directly. Specifically, \(\flat_{\blacktriangledown}\) is a coreflector and \(\sharp_{\blacktriangledown}\) is a monadic modality, both are lex, and \(\flat_{\blacktriangledown}\) is (\(\blacktriangledown\)-crisply) left-adjoint to \(\sharp_{\blacktriangledown}\).
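As a small illustration of how the rules of §2 compose, the standard (co)unit maps relating \(\flat_{\bigtriangledown}\) and \(\sharp_{\bigtriangledown}\) can be written in our notation as follows (a sketch; the names \(\varepsilon\) and \(\eta\) are ours): \[\varepsilon_{A}:\flat_{\bigtriangledown}A\to A,\qquad\varepsilon_{A}(x):\equiv(\mathsf{let}\;u^{\flat_{\bigtriangledown}}:=x\,\mathrm{in}\,u),\] \[\eta_{A}:A\to\sharp_{\bigtriangledown}A,\qquad\eta_{A}(a):\equiv a^{\sharp_{\bigtriangledown}}.\] The counit is an instance of \(\flat\)-elimination with the crispness parameter \(\bigtriangleup\) taken to be \(\top\), so that any scrutinee qualifies; the unit is an instance of \(\sharp\)-introduction, under which any variable may be used as though it were \(\bigtriangledown\)-crisp.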
Since adding a focus only expands the rules of the type theory and does not restrict the application of any of the rules for any of the other focuses, any of the theorems proven in this section for a single focus will apply when working with multiple focuses as well. For the rest of this section, we will work within a single focus \(\blacktriangledown\), and for that reason we will drop the annotations by \(\blacktriangledown\) in our expressions. For example, we will write \(\flat_{\blacktriangledown}\) as simply \(\flat\), and we will write \(x:_{\blacktriangledown}X\) as \(x::X\), following Shulman. ### Detecting Continuity In this section, we will look at an axiom which ties the liminal sort of "continuity" implied by the crisp variables of the type theory to the concrete continuity of a particular type \(G\). Our axiom will take the form of a lifting property characterizing those crisp maps which are \(\sharp\)-modal. As we will show in the upcoming Theorem 3.1.2, a crisp map is \(\sharp\)-modal if and only if it lifts crisply (in a sense made precise in Definition 3.1.1) against all of the \(\flat\)-counits. **Definition 3.1.1**.: Let \(c::A\to B\) and \(f::X\to Y\) be crisp maps. We say that \(c\)_lifts crisply against \(f\)_ if for any crisp square from \(c\) to \(f\), there is a unique crisp diagonal filler \(B\to X\). More formally, we write \(c\perp_{\flat}f\) for the proposition that the square \[\begin{array}{ccc}\flat(B\to X)&\longrightarrow&\flat(A\to X)\\ \downarrow&&\downarrow\\ \flat(B\to Y)&\longrightarrow&\flat(A\to Y)\end{array}\] (with horizontal maps given by precomposition with \(c\) and vertical maps by postcomposition with \(f\)) is a pullback. **Theorem 3.1.2**.: _A crisp map \(f::X\to Y\) is \(\sharp\)-modal if and only if for all crisp \(A\), \((\varepsilon:\flat A\to A)\perp_{\flat}f\)._ Proof.: If \(f\) is \(\sharp\)-modal, then since \(\sharp\) is lex, it lifts on the right against all \(\sharp\)-equivalences. For any crisp \(A\), the \(\flat\)-counit \(\varepsilon:\flat A\to A\) is a \(\sharp\)-equivalence by [47, Theorem 6.22]. Therefore, the corresponding square of mapping types is a pullback, and since \(\flat\) preserves crisp pullbacks ([47, Theorem 6.10]), we see that \(\varepsilon\perp_{\flat}f\). On the other hand, suppose that \(f\) lifts crisply on the right against all \(\flat\)-counits. To show that \(f\) is \(\sharp\)-modal, it will suffice to show that its \(\sharp\)-naturality square is a pullback. Let \(X\to\sharp X\times_{\sharp Y}Y\) be the gap map of the \(\sharp\)-naturality square of \(f\), seeking to show that this map is an equivalence. It suffices to split the gap map over the naturality square, by the universal property of the pullback. So, consider the crisp square \[\begin{array}{ccc}\flat(\sharp X\times_{\sharp Y}Y)&\stackrel{F}{\longrightarrow}&X\\ \downarrow{\scriptstyle\varepsilon}&&\downarrow{\scriptstyle f}\\ \sharp X\times_{\sharp Y}Y&\stackrel{\mathsf{snd}}{\longrightarrow}&Y\end{array}\] where \(F(t):=(\mathsf{let}\;p^{\flat}:=t\;\mathsf{in}\;(\mathsf{fst}\,p)_{\sharp})\) is a version of the first projection. To check that the square commutes, it suffices by \(\flat\)-induction to give, for crisp elements \(u::\sharp X\), \(y::Y\), and \(p::(\sharp f(u)=y^{\sharp})\), a term of type \(f(u_{\sharp})=y\). But we have crisply that \(\sharp f(u)\equiv\sharp f(u_{\sharp}{}^{\sharp})=(f(u_{\sharp}))^{\sharp}\) by the definition of \(\sharp f\), and composing this path with \(p\) we know \(p^{\prime}::(f(u_{\sharp}))^{\sharp}=y^{\sharp}\). By the lexness of \(\sharp\), we therefore also have \(p^{\prime\prime}::\sharp(f(u_{\sharp})=y)\), so that the square commutes by \({p^{\prime\prime}}_{\sharp}\). By hypothesis, there is a unique crisp map \(k:\sharp X\times_{\sharp Y}Y\to X\) filling this square. The bottom triangle says precisely that \(k\) lives over the second projection. We will turn the top triangle into a proof that \(k\) lives over the first projection. 
Let \((u,y,p):\sharp X\times_{\sharp Y}Y\), seeking to show that \(k(u,y,p)^{\sharp}=u\). This latter type of paths is codiscrete (because \(\sharp X\) is codiscrete), and so when mapping into it we may assume (by \(\sharp\)-induction) that \(u\) is of the form \(x^{\sharp}\), reducing our goal to \(k(x^{\sharp},y,p)^{\sharp}=x^{\sharp}\). By the lexness of \(\sharp\), it suffices to give an element of \(\sharp(k(x^{\sharp},y,p)=x)\), and for this it suffices to give an element of \(k(x^{\sharp},y,p)=x\) under the hypotheses that \(x\), \(y\), and \(p\) are crisp. In this case, \((x^{\sharp},y,p)^{\flat}:\flat(\sharp X\times_{\sharp Y}Y)\), and so we have that \(k(x^{\sharp},y,p)=F((x^{\sharp},y,p)^{\flat})\) by the upper triangle. But by definition, \(F((x^{\sharp},y,p)^{\flat})\equiv x^{\sharp}{}_{\sharp}\equiv x\), so that we have succeeded in giving the desired identification. We have shown that \(k\) lives over the naturality square; now we need to show that it splits the gap map \(X\to\sharp X\times_{\sharp Y}Y\). To that end, consider the crisp lifting square of \(\varepsilon:\flat X\to X\) against \(f\), factored through the gap map. Showing that the resulting diagram commutes follows easily by \(\flat\)-induction. We then have two crisp fillers of the outer square: \(\operatorname{id}_{X}:X\to X\) and \(k\circ\operatorname{gap}:X\to X\). By the uniqueness of crisp fillers, we conclude that these must be identical. Knowing that the crisp \(\sharp\)-modal maps may be characterized by lifting crisply against \(\flat\)-counits suggests that we could axiomatize the particular qualities of \(\sharp\) by restricting the class of \(\flat\)-counits which it suffices to lift against. To that end, we make the following definition. **Definition 3.1.3**.: Let \(G::I\to\mathbf{Type}\) be a crisp type family indexed by a \(\flat\)-modal and inhabited type \(I\). We say that \(G\)_detects continuity_ when, for every crisp map \(f::X\to Y\), \[\{f\text{ is }\sharp\text{-modal}\}\longleftrightarrow\left\{(\varepsilon:\flat G_{i}\to G_{i})\perp_{\flat}f\text{ for all }i::I\right\}\] **Remark 3.1.4**.: Thinking externally, it is straightforward to see that any family \(G_{i}\) which generates the local topos \(\mathcal{E}\) in question under colimits will detect continuity for the focus given by the terminal map of toposes. This is because \(\flat\), as a left adjoint, commutes with all colimits; therefore the problem of lifting against the \(\flat\)-counit of any object of \(\mathcal{E}\) can be reduced to that of lifting against the \(\flat\)-counits of the generators \(G_{i}\). **Lemma 3.1.5**.: A crisp type \(X\) is \(\sharp\)-modal if and only if \(\flat(A\to X)\to\flat(\flat A\to X)\) is an equivalence for all crisp types \(A\); and if \(G\) detects continuity, then it suffices to check this for each \(G_{i}\). Proof.: When \(f:X\to 1\), the square defining crisp lifting is a pullback if and only if its top map is an equivalence. If a family detects continuity, then it is a separating family for crisp maps in the following precise sense. **Theorem 3.1.6**.: _Suppose that \(G::I\to\mathbf{Type}\) detects continuity. Let \(f::X\to Y\) be a crisp map for which \(\flat f:\flat X\to\flat Y\) is an equivalence and for all \(i::I\), the induced map \(\flat(G_{i}\to X)\to\flat(G_{i}\to Y)\) given by post-composing with \(f\) is an equivalence. Then \(f\) is an equivalence._ Proof.: First, note that \(f\) is a \(\sharp\)-equivalence since it is by hypothesis a \(\flat\)-equivalence. 
It therefore suffices to show that \(f\) is \(\sharp\)-modal, which by the assumption that \(G\) detects continuity means showing that \(f\) lifts crisply against all \(\flat\)-counits \(\flat G_{i}\to G_{i}\) for \(i::I\). Consider the crisp lifting square of \(\varepsilon:\flat G_{i}\to G_{i}\) against \(f\); this is the square we need to show is a pullback. Its left vertical map \(\flat(G_{i}\to X)\to\flat(G_{i}\to Y)\) is an equivalence by hypothesis, and any square two of whose parallel sides are equivalences is a pullback, so it will suffice to show that the right vertical map \(\flat(f\circ-):\flat(X^{\flat G_{i}})\to\flat(Y^{\flat G_{i}})\) is an equivalence. By the adjunction \(\flat\dashv\sharp\) ([47, Corollary 6.26]), this map is equivalent to the map \(\flat(G_{i}\to\sharp X)\to\flat(G_{i}\to\sharp Y)\) given by post-composition with \(\sharp f\), which is an equivalence because \(\sharp f\) is. ### Detecting Connectivity A focus is said to be _cohesive_ if \(\flat\) has a further left adjoint \(\int\) which is itself a modality: \[\flat(\textstyle\int A\to X)\simeq\flat(A\to\flat X).\] This adjunction only determines \(\int\) for crisp types. It is better to define \(\int\) by nullifying a family of objects; then \(\int\) is determined for all types (of any size). To this end, we make the following definition. **Definition 3.2.1**.: Let \(G::I\to\mathbf{Type}\) be a crisp type family indexed by a \(\flat\)-modal type. We say that \(G\)_detects connectivity_ when, for any crisp type \(X\), \[\{X\text{ is }\flat\text{-modal}\}\longleftrightarrow\{X\text{ is }G_{i}\text{-null for all }i::I\}\] If \(G\) detects connectivity, then \(\int\) is defined to be nullification at the family \(G_{i}\). **Remark 3.2.2**.: In _Real Cohesion_[47], the assertion that a given family \(G\) detects connectivity is known as Axiom C0. In [44, Definition 5.2.48], a single object with this property is said to 'exhibit the cohesion'. If there is a family \(G\) which detects connectivity, then we say that the focus is _cohesive_. This is justified by the following theorem, which we may import directly from _Real Cohesion_[47]. **Theorem 3.2.3**.: _Suppose that \(G\) detects connectivity. Then a crisp type is \(\int\)-modal if and only if it is \(\flat\)-modal, and furthermore \(\int\) is crisply left adjoint to \(\flat\):_ \[\flat(\textstyle\int A\to X)\simeq\flat(A\to\flat X)\] Proof.: This is [47, Theorem 9.15]. ## 4 Examples of Focuses To keep the various operators visually distinct, we will use completely different symbols for each focus we are interested in. The rules governing the type formers are unchanged. * \(\int\dashv\flat\dashv\sharp\) denotes _real cohesion_, where a set of real numbers (possibly the Dedekind reals or an axiomatically asserted set of "smooth reals") detects connectivity. * \(\mathsf{re}\dashv\mathsf{sk}_{0}\dashv\mathsf{csk}_{0}\) denotes _simplicial cohesion_, where the (axiomatically asserted) 1-simplex \(\Delta[1]\) detects connectivity. * \(\vee\dashv\cup\dashv\cap\) denotes _global equivariant cohesion_, where connectivity is detected by the orbi-singularities \(\cap\mathbf{B}G\) for finite groups \(G\). This notational convention follows Sati and Schreiber [43]. * Various _topological toposes_ exhibit spatial type theory, with \(\flat\dashv\sharp\) retopologizing types with the discrete and codiscrete topologies respectively. In particular, Johnstone's topological topos has a focus whose continuity is detected by the walking convergent sequence \(\mathbb{N}_{\infty}\), which may be constructed as the set of monotone functions \(\mathbb{N}\to\{0,1\}\). 
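For concreteness, the walking convergent sequence in the last example can be written down internally. One way to do this (a sketch; we take "monotone" to mean non-decreasing, which is one of the two possible conventions) is
\[\mathbb{N}_{\infty}\mathrel{\mathop{:}}\equiv\{\alpha:\mathbb{N}\to\{0,1\}\mid(n:\mathbb{N})\to(\alpha_{n}\leq\alpha_{n+1})\},\]
where \(n:\mathbb{N}\) is included as the sequence which is \(0\) below \(n\) and \(1\) from \(n\) onwards, and the limit point \(\infty\) is the constantly-\(0\) sequence.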
### Real Cohesions In _Real Cohesion_[47], Shulman gives the axiom \(\mathbb{R}\flat\), which states that a crisp type is \(\flat\)-modal if and only if it is \(\mathbb{R}_{D}\)-null, where \(\mathbb{R}_{D}\) is the set of Dedekind cuts. In the terminology of Definition 3.2.1, this says that \(\mathbb{R}_{D}\) detects continuous real connectivity. **Axiom 1** (Continuous Real Cohesion).: We assume that \(\mathbb{R}_{D}\) detects continuous real connectivity, and also Shulman's Axiom T: for every \(x:\mathbb{R}_{D}\), the proposition \((x>0)\) is \(\sharp\)-modal. **Remark 4.1.1**.: Though Shulman does not consider this axiom, we may also add the assumption that the family \(\mathbb{R}_{D}^{n}\) detects continuous real continuity. Using this assumption, we may internalize the arguments of Example 8.33 of [47] to show that the mysterious Axiom T follows from the proposition that if \(f:\mathbb{R}_{D}^{n}\to\mathbb{R}_{D}\) is crisp and \(f(x)>0\) for any crisp \(x::\mathbb{R}_{D}^{n}\), then in fact \(f(x)>0\) for all (not necessarily crisp) \(x:\mathbb{R}_{D}^{n}\). Since, by Corollary 8.28 of [47] (assuming the crisp LEM or the axiom of countable choice), any crisp Dedekind real is a Cauchy real, we are equivalently asking whether a function \(f:\mathbb{R}_{D}\to\mathbb{R}_{D}\) which is positive on all Cauchy reals is positive on all Dedekind reals. This seems obvious, but as Shulman notes in Example 8.34, this obvious statement is not always true; though assuming countable choice it is likely provable. There are continuous but non-differentiable functions \(f:\mathbb{R}_{D}\to\mathbb{R}_{D}\). If we want to work in a topos where the types have a _smooth_ structure instead of just a continuous structure, then we must work with a type of _smooth_ reals \(\mathbb{R}_{S}\). The most common way to axiomatize the type of smooth reals is using the Kock-Lawvere axiom and the other axioms of synthetic differential geometry. See, e.g., Section 4.1 of [36] for a list of these axioms. In any case, if \(\mathbb{R}_{S}\) is a type of smooth reals, then we will take differential cohesion to mean that \(\mathbb{R}_{S}\) detects connectivity. **Axiom 2** (Differential Real Cohesion).: If \(\mathbb{R}_{S}\) is a type of smooth reals (say, from synthetic differential geometry), then we assume that \(\mathbb{R}_{S}\) detects differential connectivity. ### Simplicial Cohesion There is a well known difficulty in describing simplicial types in ordinary homotopy type theory -- the infinite amount of coherence data is difficult to describe formally given the tools of type theory. This difficulty has led to extensions of type theory such as two-level type theories [5, 8, 4], which augment HoTT with strict equalities that can be used to define simplicial homotopy types satisfying the simplicial identities strictly, bypassing the problematic tower of coherences. Another approach to avoiding simplicial difficulties is to simply interpret type theory into a topos of _simplicial_ homotopy types, rather than mere homotopy types. This is the approach taken by Riehl and Shulman in [42], where they present a type theory that makes every type into a simplicial type and has as primitives the simplices \(\Delta[n]\), so that a simplex in a type \(A\) is a function \(\sigma:\Delta[n]\to A\). In this section we will also work with simplicial homotopy types, but by different means to Riehl and Shulman's type theory. Instead, we will describe _simplicial cohesion_, with adjoint modalities \(\mathsf{re}\dashv\mathsf{sk}_{0}\dashv\mathsf{csk}_{0}\). 
These are defined semantically as follows: * The ("simplicial flat") 0-skeletal comodality \(X\mapsto\operatorname{\mathsf{sk}}_{0}X\) sends a simplicial type to its 0-skeleton, the constant simplicial type on its type of 0-simplices. * The ("simplicial shape") realization modality \(X\mapsto\mathsf{re}\,X\) sends a simplicial type to its realization, considered as a constant simplicial type. * The ("simplicial sharp") 0-coskeletal modality \(X\mapsto\mathsf{csk}_{0}X\) sends a simplicial type to its 0-coskeleton, whose \(n\)-simplices are \((n+1)\)-tuples of 0-simplices of \(X\). **Axiom 3** (The 1-Simplex).: We assume a simplicially crisp type \(\Delta[1]\), the _1-simplex_, equipped with a total order \(\leq\) and distinct bottom and top elements \(0\) and \(1\). From these axioms, we may define the other simplices \(\Delta[n]\) to be the chains of length \(n\) in \(\Delta[1]\): \[\Delta[n]\mathrel{\mathop{:}}\equiv\{\vec{x}:\Delta[1]^{n}\mid x_{1}\leq x_{2}\leq\cdots\leq x_{n}\}.\] We also assume the following: * (Axiom \(\Delta\mathsf{sk}_{0}\)) \(\Delta[1]\) detects simplicial connectivity: a simplicially crisp type \(X\) is \(0\)-skeletal if and only if every map \(\Delta[1]\to X\) is constant. * The family \(\Delta[-]:\mathbb{N}\to\mathbf{Type}\) detects simplicial continuity. * (Axiom \(\partial\Delta\)) For \(i:\Delta[1]\), we have \(\mathsf{csk}_{0}((i=0)\vee(i=1))\). * Each \(\Delta[n]\) is crisply projective. That is, for a simplicially crisp \(E:\Delta[n]\to\mathbf{Type}\), we have a map \[\mathsf{sk}_{0}((i:\Delta[n])\to\exists E_{i})\to\exists\,\mathsf{sk}_{0}((i:\Delta[n])\to E_{i}).\] As there is an obvious map the other way, this map is an equivalence. Let us quickly set the stage by proving that the \(n\)-simplices have trivial geometric realization and the \(0\)-skeleton of the \(n\)-simplex \(\Delta[n]\) is the ordinal \([n]\equiv\{0,\ldots,n\}\). **Lemma 4.2.1**.: The order \(\Delta[1]\) has finite meets and joins, and they distribute over each other. Moreover, the inclusion \(\{0,1\}\hookrightarrow\Delta[1]\) is an inclusion of lattices. Proof.: Suppose that \(x,y:\Delta[1]\). Then either \(x\leq y\) or \(y\leq x\). In the former case, define \(x\wedge y\mathrel{\mathop{:}}\equiv x\) and \(x\lor y\mathrel{\mathop{:}}\equiv y\), and in the latter case \(x\wedge y\mathrel{\mathop{:}}\equiv y\) and \(x\lor y\mathrel{\mathop{:}}\equiv x\). If both hold, then \(x=y\) and the definitions agree. If \(x,y,z:\Delta[1]\), then these three may find themselves in any of \(6\) orderings. One may then check that in each of these cases, meets distribute over joins and vice versa. For example, supposing that \(x\leq y\leq z\), then \(x\wedge(y\lor z)=x\wedge z=x\), while \((x\wedge y)\vee(x\wedge z)=x\lor x=x\). **Lemma 4.2.2**.: The \(n\)-simplex \(\Delta[n]\) is a retract of the \(n\)-cube \(\Delta[1]^{n}\). Moreover, the inclusion \([n]\hookrightarrow\Delta[n]\) given by \[i\mapsto 0\leq\cdots\leq 0\leq\overbrace{1\leq\cdots\leq 1}^{i\text{ times}}\] is a retract of the inclusion \(\{0,1\}^{n}\to\Delta[1]^{n}\). Proof.: Given \(x_{1},\ldots,x_{n}:\Delta[1]\), define \(m_{1}\mathrel{\mathop{:}}\equiv\bigwedge_{i:\mathfrak{n}}x_{i}\) and let \(i_{1}:\mathfrak{n}\) be its index; then \(m_{2}\mathrel{\mathop{:}}\equiv\bigwedge_{i:\mathfrak{n}\setminus\{i_{1}\}}x_{i}\), and so on. Note that \(m_{1}\leq m_{2}\leq\cdots\leq m_{n}\), so that \(m:\Delta[n]\). Finally, if the \(x_{i}\) were already in increasing order, then \(m_{i}=x_{i}\), showing that this is a retract. We note that this retract argument works just as well on \(\{0,1\}^{n}\to[n]\), if we identify \([n]\) with the subset \(\{\vec{x}:\{0,1\}^{n}\mid x_{1}\leq\cdots\leq x_{n}\}\) of increasing sequences. Since it only makes use of the lattice structure of \(\{0,1\}^{n}\) and \(\Delta[1]^{n}\), and the inclusion is a lattice homomorphism, we conclude that the necessary squares commute. 
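To see the retraction of Lemma 4.2.2 concretely in the case \(n=2\) (a worked instance of the proof above, not an additional assumption): the retraction \(\Delta[1]^{2}\to\Delta[2]\) is the "sorting" map
\[(x,y)\mapsto(x\wedge y,\,x\vee y),\]
which restricts to the identity on \(\Delta[2]\), since \(x\leq y\) forces \(x\wedge y=x\) and \(x\vee y=y\).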
**Theorem 4.2.3**.: _The \(n\)-simplex has trivial realization: \(\mathsf{re}\,\Delta[n]=*\)._ Proof.: The realization \(\mathsf{re}\,\Delta[n]\) is a retract of the realization \(\mathsf{re}\,\Delta[1]^{n}\), and this is contractible since \(\Delta[1]\) detects simplicial connectivity. **Theorem 4.2.4**.: _The inclusion \([n]\hookrightarrow\Delta[n]\) given by_ \[i\mapsto 0\leq\cdots\leq 0\leq\overbrace{1\leq\cdots\leq 1}^{i\text{ times}}\] _is a \(\mathsf{sk}_{0}\)-equivalence, showing that \(\mathsf{sk}_{0}\,\Delta[n]\simeq[n]\)._ Proof.: We will show that \([1]\hookrightarrow\Delta[1]\) is a \(\operatorname{sk}_{0}\)-equivalence. This will imply that \([1]^{n}\hookrightarrow\Delta[1]^{n}\) is a \(\operatorname{sk}_{0}\)-equivalence; since \([n]\hookrightarrow\Delta[n]\) is a retract of this, we may conclude that it is a \(\operatorname{sk}_{0}\)-equivalence as well. Since this inclusion \(\{0,1\}\hookrightarrow\Delta[1]\) is simplicially crisp, to show that it is a \(\operatorname{sk}_{0}\)-equivalence it will suffice to show that it is a \(\operatorname{csk}_{0}\)-equivalence. We therefore need an inverse \(\operatorname{csk}_{0}\Delta[1]\to\operatorname{csk}_{0}\{0,1\}\). Since the codomain is \(0\)-coskeletal, it suffices to define this map on \(\Delta[1]\). So let \(i\colon\Delta[1]\), seeking \(\operatorname{csk}_{0}\{0,1\}\). By Axiom \(\partial\Delta\), we have \(\operatorname{csk}_{0}((i=0)\vee(i=1))\), and since our goal is \(0\)-coskeletal, we may assume that \(i=0\) or \(i=1\). If \(i=0\), then we send it to \(0^{\operatorname{csk}_{0}}\); if \(i=1\), then we send it to \(1^{\operatorname{csk}_{0}}\). To show that this map is the inverse of \(\operatorname{csk}_{0}\{0,1\}\to\operatorname{csk}_{0}\Delta[1]\), we may appeal to the fact that identities in a modal type are modal, and so we may remove the \(\operatorname{csk}_{0}\) around \(\operatorname{csk}_{0}((i=0)\vee(i=1))\) and check that the maps invert each other on these elements, which they clearly do. We can also define the type of \(n\)-simplices in a simplicially crisp type, and prove a few elementary lemmas concerning the \(n\)-simplices of types. **Definition 4.2.5**.: Let \(X\) be a simplicially crisp type. Then define the type \(X_{n}\) of \(n\)-simplices in \(X\) as \[X_{n}\coloneqq\operatorname{sk}_{0}(\Delta[n]\to X).\] If \(f:X\to Y\) is a simplicially crisp map, then it induces a map \(f_{n}:X_{n}\to Y_{n}\) by post-composition. **Lemma 4.2.6**.: Let \(f:X\to Y\) be a simplicially crisp map. If \(f_{n}:X_{n}\to Y_{n}\) is an equivalence for all \(n\), then \(f\) is an equivalence. Proof.: This is a special case of Theorem 3.1.6, noting that \(X_{0}\simeq\operatorname{sk}_{0}X\). **Lemma 4.2.7**.: Let \(f:X\to Y\) be a simplicially crisp map. 
Then for a crisp \(y:\Delta[n]\to Y\), we have \[\operatorname{fib}_{f_{n}}(y^{\operatorname{sk}_{0}})\simeq\operatorname{sk}_{0}((i:\Delta[n])\to\operatorname{fib}_{f}(y(i))).\] Proof.: We compute: \[\operatorname{fib}_{f_{n}}(y^{\operatorname{sk}_{0}}) \equiv(x:X_{n})\times(f_{n}x=y^{\operatorname{sk}_{0}})\] \[\equiv(x:\operatorname{sk}_{0}(\Delta[n]\to X))\times\mathsf{let}\;\tau^{\operatorname{sk}_{0}}:=x\;\mathsf{in}\;((f\circ\tau)^{\operatorname{sk}_{0}}=y^{\operatorname{sk}_{0}})\] \[\simeq(x:\operatorname{sk}_{0}(\Delta[n]\to X))\times\mathsf{let}\;\tau^{\operatorname{sk}_{0}}:=x\;\mathsf{in}\;\operatorname{sk}_{0}(f\circ\tau=y)\] \[\simeq\operatorname{sk}_{0}((x:\Delta[n]\to X)\times(f\circ x=y))\] \[\simeq\operatorname{sk}_{0}((x:\Delta[n]\to X)\times((i:\Delta[n])\to(f(x(i))=y(i))))\] \[\simeq\operatorname{sk}_{0}((i:\Delta[n])\to\operatorname{fib}_{f}(y(i))).\] **Lemma 4.2.8**.: Let \(f:X\to Y\) be a simplicially crisp map. Then \((\operatorname{im}f)_{n}\simeq\operatorname{im}f_{n}\). Proof.: We use the projectivity of the simplices.4 Footnote 4: This lemma is in fact equivalent to assuming the projectivity of the simplices. \[(\operatorname{im}f)_{n} \equiv\operatorname{sk}_{0}(\Delta[n]\to(y:Y)\times\exists\operatorname{fib}_{f}(y))\] \[\simeq(y:Y_{n})\times\mathsf{let}\;\sigma^{\operatorname{sk}_{0}}:=y\;\mathsf{in}\;\operatorname{sk}_{0}((i:\Delta[n])\to\exists\operatorname{fib}_{f}(\sigma i))\] \[\simeq(y:Y_{n})\times\mathsf{let}\;\sigma^{\operatorname{sk}_{0}}:=y\;\mathsf{in}\;\exists\,\operatorname{sk}_{0}((i:\Delta[n])\to\operatorname{fib}_{f}(\sigma i))\] \[\simeq(y:Y_{n})\times\exists\operatorname{fib}_{f_{n}}(y)\] \[\equiv\operatorname{im}f_{n}.\] The definition of the \(n\)-simplices that we gave above is simple, but it is not so straightforward to see that it is functorial in the ordinal \([n]\). We can give an alternative definition of the \(n\)-simplices which makes the functoriality evident. **Definition 4.2.9**.: Let \(\mathsf{Interval}\) denote the category of _intervals_: totally ordered sets with distinct top and bottom. The maps of \(\mathsf{Interval}\) are the monotone functions preserving top and bottom. Let \(\mathsf{FinOrd}_{+}\) denote the category of finite inhabited ordinals and order preserving maps between them -- the usual "simplex category". We denote by \([n]\) the ordinal \(\{0,\ldots,n\}\). We will need a standard reformulation of the category of finite ordinals in terms of intervals (see e.g. [32, §VIII.7]). **Lemma 4.2.10**.: There is a contravariant, fully faithful functor \(t:\mathsf{FinOrd}_{+}^{\mathrm{op}}\to\mathsf{Interval}\) sending \([n]\) to \([n+1]\) with top element \(n+1\) and bottom element \(0\). To a map \(f:[n]\to[m]\), we define \(tf:[m+1]\to[n+1]\) by \[tf(i)\mathrel{\mathop{:}}\equiv\begin{cases}\min\{j\mid i\leq f(j)\}&\text{if such a minimum exists,}\\ n+1&\text{otherwise.}\end{cases}\] Conversely, to a monotone map \(g:[m+1]\to[n+1]\) preserving top and bottom, we define \(t^{-1}g:[n]\to[m]\) by the dual formula \[(t^{-1}g)(j)\mathrel{\mathop{:}}\equiv\max\{i\mid g(i)\leq j\}.\] We may now define the \(n\)-simplices in a way which makes clear their functoriality in the category of finite inhabited ordinals. 
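Before doing so, it may help to see Lemma 4.2.10 in action on a small example (the choice of map is ours, purely for illustration). Take \(f:[0]\to[1]\) with \(f(0)=1\). Then \(tf:[2]\to[1]\) computes to
\[tf(0)=\min\{j\mid 0\leq f(j)\}=0,\qquad tf(1)=\min\{j\mid 1\leq f(j)\}=0,\qquad tf(2)=1\ (\text{no minimum exists}),\]
so \(tf\) is monotone and preserves bottom and top; and conversely \((t^{-1}(tf))(0)=\max\{i\mid tf(i)\leq 0\}=1=f(0)\), recovering \(f\).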
**Definition 4.2.11**.: We define the \(n\)-simplex \(\Delta[n]\) to be \[\Delta[n]\mathrel{\mathop{:}}\equiv\mathsf{Interval}(t[n],\Delta[1]).\] Therefore, \(\Delta:\mathsf{FinOrd}_{+}\to\mathbf{Set}\) gives a functor from finite inhabited ordinals to the category of sets, where \(\Delta(f):\Delta[n]\to\Delta[m]\) is given by precomposing with \(tf:[m+1]\to[n+1]\). Noting that \([n]\cong\mathsf{Interval}(t[n],[1])\) by the fully-faithfulness of \(t\), the inclusion of top and bottom elements \([1]\hookrightarrow\Delta[1]\) induces a natural inclusion \([n]\hookrightarrow\Delta[n]\) by post-composition. As we saw in Theorem 4.2.4, these inclusions are \(\mathsf{sk}_{0}\)-counits. #### 4.2.1 The Čech Complex The \(0\)-coskeleton modality \(\mathsf{csk}_{0}\) is useful for working in simplicial cohesion since it enables us to give an easy construction of the Čech complex of a map \(f:X\to Y\) between \(0\)-skeletal types. The Čech complex of such a map is, externally speaking, the simplicial type formed by repeatedly pulling back \(f\) along itself: \[\check{\mathsf{C}}(f)\mathrel{\mathop{:}}\equiv\left(\begin{array}{c}\vdots\\ X\times_{Y}X\times_{Y}X\\ X\times_{Y}X\\ X\end{array}\right)\] **Definition 4.2.12**.: Let \(f:X\to Y\) be a map. The _Čech complex_ \(\check{\mathsf{C}}(f)\) of \(f\) is defined to be its \(\mathsf{csk}_{0}\)-image: \[\check{\mathsf{C}}(f)\mathrel{\mathop{:}}\equiv(y:Y)\times\mathsf{csk}_{0}((x:X)\times(fx=y)).\] We will justify this definition by calculating the type of \(n\)-simplices of \(\check{\mathsf{C}}(f)\) when both \(X\) and \(Y\) are \(0\)-skeletal. **Proposition 4.2.13**.: Let \(f:X\to Y\) be a simplicially crisp map between \(0\)-skeletal types. Then \[\check{\mathsf{C}}(f)_{n}\simeq X\times_{Y}\cdots\times_{Y}X\simeq(y:Y)\times\left((x:X)\times(fx=y)\right)^{n+1}\] is the \((n+1)\)-fold pullback of \(f\) along itself. Proof.: We calculate: \[\check{\mathsf{C}}(f)_{n} \equiv\mathsf{sk}_{0}(\Delta[n]\to\check{\mathsf{C}}(f))\] \[\equiv\mathsf{sk}_{0}(\Delta[n]\to(y:Y)\times\mathsf{csk}_{0}((x:X)\times(fx=y)))\] \[\simeq\mathsf{sk}_{0}((\sigma:\Delta[n]\to Y)\times((i:\Delta[n])\to\mathsf{csk}_{0}((x:X)\times(fx=\sigma i))))\] Since \(Y\) is \(0\)-skeletal, any map \(\Delta[n]\to Y\) is constant, so we may continue: \[\simeq\mathsf{sk}_{0}((y:Y)\times(\Delta[n]\to\mathsf{csk}_{0}((x:X)\times(fx=y))))\] Since, by Theorem 4.2.4, \(\mathsf{sk}_{0}\Delta[n]=[n]\), we may use the adjointness of \(\mathsf{sk}_{0}\) and \(\mathsf{csk}_{0}\) to continue: \[\simeq\mathsf{sk}_{0}((y:Y)\times\mathsf{csk}_{0}([n]\to(x:X)\times(fx=y)))\] Now, we may use Lemma 6.8 of [47] to pass the \(\mathsf{sk}_{0}\) into the pair type, and then use that \(\mathsf{sk}_{0}\mathsf{csk}_{0}=\mathsf{sk}_{0}\) to continue: \[\simeq((u:\mathsf{sk}_{0}Y)\times\mathsf{let}\;y^{\mathsf{sk}_{0}}:=u\;\mathsf{in}\,\mathsf{sk}_{0}([n]\to(x:X)\times(fx=y)))\] However, all types involved are already \(0\)-skeletal, so we may remove the \(\mathsf{sk}_{0}\)s: \[\simeq((y:Y)\times([n]\to(x:X)\times(fx=y)))\] \[\simeq(y:Y)\times((x:X)\times(fx=y))^{n+1}\] This last type is the \((n+1)\)-fold pullback of \(f\) along itself, displayed in terms of its diagonal map to \(Y\). We can prove modally that the realization of the Čech nerve of a map \(f:X\to Y\) is the realization of the image \(\mathsf{im}f\) of \(f\). This follows from Theorem 10.2 of _Real Cohesion_[47]. 
**Theorem 4.2.14**.: _If \(A\) is \(0\)-coskeletal, then \(\mathsf{re}\,A\) is a proposition. As a corollary, \(\mathsf{re}(\mathsf{csk}_{0}X)\simeq\|X\|\)._ Proof.: In [47], this theorem is said to rely on the crisp Law of Excluded Middle. However, a glance at the proof reveals that this assumption is only used to assume the decidable equality of \(\mathsf{sk}_{0}\Delta[1]\). Since we know that \(\mathsf{sk}_{0}\Delta[1]\simeq\{0,1\}\) has decidable equality, the proof goes through without assuming crisp LEM. **Theorem 4.2.15**.: _For a map \(f:X\to Y\), the realization \(\mathsf{re}\,\check{\mathsf{C}}(f)\) of the Čech nerve is the realization \(\mathsf{re}\,\mathsf{im}f\) of the image of \(f\). If furthermore \(Y\) is \(0\)-skeletal, then \(\mathsf{re}\,\check{\mathsf{C}}(f)\simeq\mathsf{im}f\)._ Proof.: We compute: \[\mathsf{re}\,\check{\mathsf{C}}(f)\equiv\mathsf{re}((y:Y)\times\mathsf{csk}_{0}\,\mathsf{fib}_{f}(y))\simeq\mathsf{re}((y:Y)\times\big{\|}\mathsf{fib}_{f}(y)\big{\|})\equiv\mathsf{re}\,\mathsf{im}f,\] where the middle equivalence holds because, by Theorem 4.2.14, each map \(\mathsf{csk}_{0}\,\mathsf{fib}_{f}(y)\to\|\mathsf{fib}_{f}(y)\|\) is a \(\mathsf{re}\)-equivalence. Now if \(Y\) is \(0\)-skeletal, then by Lemma 8.17 of [47], \(\mathsf{im}f\) is also \(0\)-skeletal, since it is a subtype of a \(0\)-skeletal type. Therefore, \(\mathsf{re}\,\mathsf{im}f\simeq\mathsf{im}f\), so that in total \(\mathsf{re}\,\check{\mathsf{C}}(f)\simeq\mathsf{im}f\). As an application of Čech nerves, we can see how to extract coherence data for higher groups from their deloopings. If we take the Čech nerve of the inclusion \(\mathsf{pt}_{\mathbf{B}G}:*\to\mathbf{B}G\) of the base point of the delooping of \(G\), we recover a simplicial type whose simplicial identities give coherences for the multiplication of \(G\). **Proposition 4.2.16**.: Let \(G\) be a crisp, \(0\)-skeletal higher group -- a \(0\)-skeletal type identified with the loops of a pointed, \(0\)-connected type \(\mathbf{B}G\). Then \[\check{\mathsf{C}}(\mathsf{pt}_{\mathbf{B}G})_{n}\simeq G^{n}.\] Furthermore, \(d_{1}:\check{\mathsf{C}}(\mathsf{pt}_{\mathbf{B}G})_{2}\to\check{\mathsf{C}}(\mathsf{pt}_{\mathbf{B}G})_{1}\) is the product of the projections \(d_{0}\) and \(d_{2}:G^{2}\to G\). Proof.: By Proposition 4.2.13, we know that \[\check{\mathsf{C}}(\mathsf{pt}_{\mathbf{B}G})_{n} \simeq(e:\mathbf{B}G)\times((x:*)\times(\mathsf{pt}_{\mathbf{B}G}=e))^{n+1}\] \[\simeq(e:\mathbf{B}G)\times(\mathsf{pt}_{\mathbf{B}G}=e)^{n+1}\] \[\simeq(e:\mathbf{B}G)\times(\mathsf{pt}_{\mathbf{B}G}=e)\times(\mathsf{pt}_{\mathbf{B}G}=e)^{n}\] \[\simeq(\mathsf{pt}_{\mathbf{B}G}=\mathsf{pt}_{\mathbf{B}G})^{n}\] \[=G^{n}.\] In the second to last step, we contract \((e:\mathbf{B}G)\times(\mathsf{pt}_{\mathbf{B}G}=e)\). Now, \(d_{i}:\check{\mathsf{C}}(\mathsf{pt}_{\mathbf{B}G})_{2}\to\check{\mathsf{C}}(\mathsf{pt}_{\mathbf{B}G})_{1}\) is given by forgetting the \(i^{\text{th}}\) component of the list \((e,(a,b,c)):(e:\mathbf{B}G)\times(\mathsf{pt}_{\mathbf{B}G}=e)^{3}\). Therefore, \[d_{0}(e,(a,b,c)) =(e,(b,c))\] \[d_{1}(e,(a,b,c)) =(e,(a,c))\] \[d_{2}(e,(a,b,c)) =(e,(a,b))\] Contracting away \(e\) and the first element of the pair, we get the three equations \[d_{0}(ba^{-1},ca^{-1}) =cb^{-1}\] \[d_{1}(ba^{-1},ca^{-1}) =ca^{-1}\] \[d_{2}(ba^{-1},ca^{-1}) =ba^{-1}\] and indeed, we have \(d_{1}(ba^{-1},ca^{-1})=d_{0}(ba^{-1},ca^{-1})\,d_{2}(ba^{-1},ca^{-1})\). This is equivalent to, but not quite the same as, the standard presentation. 
It amounts to \[d_{0}(g,h) =hg^{-1}\] \[d_{1}(g,h) =h\] \[d_{2}(g,h) =g.\] Using the Čech nerve, we can extract all the coherence conditions governing a homomorphism of higher groups. We first note that the realization of the Čech nerve of a group is a delooping of it. **Proposition 4.2.17**.: Let \(G\) be a \(0\)-skeletal higher group with simplicially crisp delooping \(\mathbf{B}G\), and let \(\check{\mathsf{C}}(G)\) be the Čech nerve of the basepoint inclusion \(\mathsf{pt}_{\mathbf{B}G}:*\to\mathbf{B}G\). Then the projection \(\mathsf{fst}:\check{\mathsf{C}}(G)\to\mathbf{B}G\) is a \(\mathsf{re}\)-unit. Proof.: By Theorem 5.9 of [34], \(\mathbf{B}G\) is \(0\)-skeletal. By Axiom \(\Delta\mathsf{sk}_{0}\), it is therefore \(\mathsf{re}\)-modal. Therefore, to show that \(\mathsf{fst}:\check{\mathsf{C}}(G)\to\mathbf{B}G\) is a \(\mathsf{re}\)-unit, it suffices to show that it is \(\mathsf{re}\)-connected. Since \(\mathbf{B}G\) is \(0\)-connected, it suffices to show that the fiber over the base point is \(\mathsf{re}\)-connected, and this fiber is equivalent to \(\mathsf{csk}_{0}G\). This follows by Theorem 4.2.14: \(\mathsf{re}\,\mathsf{csk}_{0}G\simeq\|G\|\) is contractible since \(G\) is pointed. **Proposition 4.2.18**.: Let \(G\) and \(H\) be \(0\)-skeletal higher groups. Then the type of homomorphisms \(G\to H\) is equivalent to the type of pointed maps \(\check{\mathsf{C}}(G)\cdot\to\check{\mathsf{C}}(H)\). Proof.: Recall that a homomorphism of higher groups is by definition a pointed map between their deloopings. That is, a homomorphism \(\varphi:G\to H\) is equivalently a diagram as on the left, while a pointed map between the Čech nerves is a diagram as on the right: We are aiming for an equivalence between these two types, which we may present as a one-to-one correspondence. So, to \(\mathbf{B}\varphi:\mathbf{B}G\to\mathbf{B}H\) and \(\mathsf{pt}_{\mathbf{B}\varphi}:\mathsf{pt}_{\mathbf{B}H}=\mathbf{B}\varphi(\mathsf{pt}_{\mathbf{B}G})\) on one side, and \(f:\check{\mathsf{C}}(G)\to\check{\mathsf{C}}(H)\) and \(\mathsf{pt}_{f}:\mathsf{pt}_{\check{\mathsf{C}}(H)}=f(\mathsf{pt}_{\check{\mathsf{C}}(G)})\) on the other, associate the type \[(\square:(x:\check{\mathsf{C}}(G))\to(\mathbf{B}\varphi(\mathsf{fst}\,x)=\mathsf{fst}(fx)))\times(\mathsf{pt}_{\mathbf{B}\varphi}\cdot\square(\mathsf{pt}_{\check{\mathsf{C}}(G)})=\mathsf{fst}_{*}\,\mathsf{pt}_{f})\] which, diagrammatically, is the type of witnesses that the following diagram commutes: To show that this gives a one-to-one correspondence means showing that the types of diagrams are both contractible, the left for any homomorphism \(\varphi\) and the right for any pointed map \(f\). Let \(\varphi\) be a homomorphism. Since by definition \(\check{\mathsf{C}}(G)\) and \(\check{\mathsf{C}}(H)\) were the \(\mathsf{csk}_{0}\)-factorizations of the basepoint inclusions, there is a unique filler of this square: But this is precisely a rearrangement of the diagram on the left. Similarly, if \(f\) is a pointed map, then \(\mathsf{re}\,f:\mathbf{B}G\to\mathbf{B}H\) makes the diagram on the right commute, and by the universal property of the \(\mathsf{re}\)-unit this is the unique such map. ### Global Equivariant Cohesion In _Global Homotopy Theory and Cohesion_[41], Rezk shows that the \(\infty\)-topos of _global equivariant homotopy types_ is cohesive over the \(\infty\)-topos of homotopy types. While Rezk constructs his site out of all compact Lie groups, we will follow Sati and Schreiber [43] in restricting our attention to the finite groups. 
The global orbit category \(\mathsf{Glo}\) is defined to be the full subcategory of homotopy types spanned by the deloopings \(\mathbf{B}G\) of finite groups \(G\). This is a \((2,1)\)-category, and the global equivariant topos is defined to be the \(\infty\)-category of homotopy-type-valued presheaves on it. There is an adjoint quadruple connecting the global equivariant topos and the topos of homotopy types: * \(\mathsf{colim}\,X\) is the colimit of the functor \(X:\mathsf{Glo}^{\mathrm{op}}\to\mathcal{S}\), which takes the _strict quotient_ of the global equivariant homotopy type \(X\). * \(\Delta S\) is the inclusion of constant functors: \(\Delta S(\mathbf{B}G):\equiv S\). We will refer to such equivariant types as _invariant_ types. * \(\Gamma X:\equiv X(*)\) is the evaluation at the point. This is known as the _homotopy quotient_ of the global equivariant homotopy type \(X\). * \(\nabla S\) is the Yoneda embedding: \(\nabla S(\mathbf{B}G):\equiv\mathcal{S}(\mathbf{B}G,S)\). This adjoint quadruple gives rise to the cohesive modalities \(\vee\dashv\cup\dashv\cap\) of equivariant cohesion: * The ("equivariant shape") _strict quotient_ modality \(X\mapsto\vee X\) sends a global equivariant type to its strict quotient, considered as an invariant type. * The ("equivariant flat") _homotopy quotient_ modality \(X\mapsto\cup X\) sends a global equivariant type to its homotopy quotient, considered as an invariant type. Internally speaking, we say that an equivariantly crisp type is invariant when it is \(\cup\)-modal. * The ("equivariant sharp") _orbisingular_ modality \(X\mapsto\cap X\) sends a global equivariant type to its homotopy quotient, but considered with its natural equivariance via maps from the deloopings of finite groups. Our axioms for global equivariant cohesion are quite straightforward: **Axiom 4** (Global Equivariant Axioms).: The type family \(\cap\mathbf{B}:\mathsf{FinGrp}\to\mathbf{Type}\) sending a finite group \(G\) to \(\cap\mathbf{B}G\) detects equivariant continuity and connectivity. The types \(\cap\mathbf{B}G\) for finite groups \(G\) are the _orbi-singularities_. By the definition above, we may recover \(X(\mathbf{B}G)\) (considered with its natural equivariance) as \(\cap(\cap\mathbf{B}G\to X)\). **Remark 4.3.1**.: The family \(\cap\mathbf{B}G\) for finite groups \(G\) is a large family, but we may reduce it to a small family by noting that the type of finite groups is essentially small. This is a useful observation, since it allows us to conclude that \(\vee\), defined by nullifying all \(\cap\mathbf{B}G\), is an accessible modality. **Remark 4.3.2**.: Global equivariant cohesion shares a feature with Shulman's continuous real cohesion: both are _definable_ in the sense that the types which detect continuity and connectivity are definable without axioms in the type theory. This is not the case for simplicial cohesion, which appears to require postulating the 1-simplex. It is not clear to us whether there are any general features shared by definable cohesions. In _Proper Orbifold Cohomology_[43], Sati and Schreiber work with equivariant differential cohesion to give an abstract account of the differential cohomology of orbifolds. We can prove some of their lemmas easily in global equivariant cohesion; we will return to prove the lemmas relating equivariant and differential cohesion in the upcoming §6.2. The following lemma appears as Proposition 3.62 in [43]. 
**Lemma 4.3.3**.: We have the following equivalences for the generic orbi-singularities \(\cap\mathbf{B}G\). [The displayed equivalences and their proof are not recoverable here; see Proposition 3.62 of [43].] ### Topological Toposes **Axiom 5** (Topological Focus).: The topological focus is determined by asserting that \(\mathbb{N}_{\infty}\) detects topological continuity. We may also be able to determine condensed homotopy types as in [19] -- or rather the similar but more topos-theoretic pyknotic homotopy types of [10] -- using a similar axiom. Define a profinite set to be the limit of a crisp diagram of finite sets indexed by a discrete (\(\flat\)-modal) partially ordered set with decidable order. We may then assert that the family of profinite sets detects condensed continuity. However, we do not know if these axioms are sufficient for proving theorems in these topological toposes. ## 5 Multiple Focuses Now, we turn our attention to generalities on possible relationships between different focuses. For this section, fix two focuses \(\blacktriangledown\) and \(\blacktriangle\). First, we should show that the focuses do indeed commute: **Proposition 5.0.1**.: For any type \(B\), the map \(\sharp_{\blacktriangledown}\sharp_{\blacktriangle}B\to\sharp_{\blacktriangle}\sharp_{\blacktriangledown}B\) defined by \[x\mapsto{x_{\sharp\blacktriangledown\sharp\blacktriangle}}^{\sharp\blacktriangledown\sharp\blacktriangle}\] is an equivalence. Furthermore, the maps \(\sharp_{\blacktriangledown\blacktriangle}B\to\sharp_{\blacktriangledown}\sharp_{\blacktriangle}B\) and \(\sharp_{\blacktriangledown}\sharp_{\blacktriangle}B\to\sharp_{\blacktriangledown\blacktriangle}B\) defined by \[x\mapsto{x_{\sharp\blacktriangledown\blacktriangle}}^{\sharp\blacktriangle\sharp\blacktriangledown}\] \[x\mapsto{x_{\sharp\blacktriangledown\sharp\blacktriangle}}^{\sharp\blacktriangledown\blacktriangle}\] are also equivalences. Proof.: The first map is well-defined because the use of \(\sharp_{\blacktriangledown}\)- and \(\sharp_{\blacktriangle}\)-introduction means that the assumption \(x\) becomes crisp for both \(\blacktriangledown\) and \(\blacktriangle\), so we may apply \(\sharp_{\blacktriangledown}\)- and then \(\sharp_{\blacktriangle}\)-elimination to it. 
We may similarly define an inverse by \[x\mapsto{x_{\sharp\blacktriangle\sharp\blacktriangledown}}^{\sharp\blacktriangle\sharp\blacktriangledown}.\] These maps are definitional inverses by the computation rules for \(\sharp\)s. The other maps are similarly well defined, since being crisp for both \(\blacktriangledown\) and \(\blacktriangle\) means being crisp for the focus \(\blacktriangledown\blacktriangle\). The inverses may be defined in the straightforward way. We also note that the ordering of focuses is reflected in the containment of their \(\sharp\)-modal (and so also \(\flat\)-modal) types. **Corollary 5.0.2**.: Suppose that \(\blacktriangle\leq\blacktriangledown\). Then any \(\sharp_{\blacktriangle}\)-modal type is \(\sharp_{\blacktriangledown}\)-modal. Proof.: Since \(\blacktriangle\leq\blacktriangledown\) is defined to mean \(\blacktriangledown\blacktriangle\equiv\blacktriangle\), we know that \(\sharp_{\blacktriangledown\blacktriangle}A\equiv\sharp_{\blacktriangle}A\). By assumption \(\sharp_{\blacktriangle}A\simeq A\), and chaining this with the commutativity equivalences of Proposition 5.0.1, \[\sharp_{\blacktriangledown}A\simeq\sharp_{\blacktriangledown}\sharp_{\blacktriangle}A\simeq\sharp_{\blacktriangledown\blacktriangle}A\equiv\sharp_{\blacktriangle}A\simeq A.\] Tracing these simple equivalences through, this does indeed give an inverse to the \(\sharp_{\blacktriangledown}\)-unit. We can similarly show that the \(\flat\)s commute. This is made simpler through the use of crisp induction. **Proposition 5.0.3**.: Let \(A\) be a \(\blacktriangledown\blacktriangle\)-crisp type. Then \(\flat_{\blacktriangledown}\flat_{\blacktriangle}A\to\flat_{\blacktriangle}\flat_{\blacktriangledown}A\) defined by \[u\mapsto\mathsf{let}\;v^{\flat\blacktriangledown}:=u\;\mathsf{in}\,(\mathsf{let}\;w^{\flat\blacktriangle}:=v\;\mathsf{in}\,(w^{\flat\blacktriangledown})^{\flat\blacktriangle})\] is an equivalence, natural in \(A\). Proof.: In words, the map is defined as follows. Performing \(\flat_{\blacktriangledown}\)-induction on \(u:\flat_{\blacktriangledown}\flat_{\blacktriangle}A\) gives an assumption \(v:_{\blacktriangledown}\flat_{\blacktriangle}A\). A second induction on the term \(v:_{\blacktriangledown}\flat_{\blacktriangle}A\) then gives us an assumption \(w:_{\blacktriangledown\blacktriangle}A\). This second induction is performed \(\blacktriangledown\)-crisply, so that \(w\) is crisp for both focuses and we may form \((w^{\flat\blacktriangledown})^{\flat\blacktriangle}:\flat_{\blacktriangle}\flat_{\blacktriangledown}A\). [The remainder of this proof, and the opening of the following discussion of cohesive focuses -- including Lemma 5.1.1, which shows that the shape \(\int_{\heartsuit}A\) of a \(\heartsuit\)-crisp type \(A\) is \(\flat_{\heartsuit}\)-modal -- are not recoverable here.] In the other direction, because \(\int_{\heartsuit}A\) is \(\int_{\heartsuit}\)-modal, it suffices to show that the composite \[A\to\int_{\heartsuit}A\to\flat_{\heartsuit}\!\int_{\heartsuit}A\to\int_{\heartsuit}A\] is equal to the unit \(\eta_{\heartsuit}:A\to\int_{\heartsuit}A\). For this we have a commutative diagram in which the left square commutes by the definition of \(i\), the right square by the naturality of \(\varepsilon_{\heartsuit}\), and the composite along the top is the identity. **Lemma 5.1.2**.: Suppose that \(\heartsuit\) and \(\spadesuit\) are both cohesive. Then \(\int_{\heartsuit}\!\int_{\spadesuit}A\to\int_{\spadesuit}\!\int_{\heartsuit}A\) is an equivalence for any \(\heartsuit\spadesuit\)-crisp type \(A\). Proof.: The map \(\eta_{\spadesuit}\circ\eta_{\heartsuit}:A\to\int_{\heartsuit}A\to\int_{\spadesuit}\!\int_{\heartsuit}A\) factors through \(\int_{\heartsuit}\!\int_{\spadesuit}A\), because \(\int_{\spadesuit}\!\int_{\heartsuit}A\) is \(\flat_{\spadesuit}\)-modal, as a type of the form \(\flat_{\spadesuit}X\), and also \(\flat_{\heartsuit}\)-modal, by the previous lemma. The map the other way is defined similarly. 
To show these are inverses, it suffices to show that they become so after precomposition with the composites of the units, because \(\int_{\spadesuit}\!\int_{\heartsuit}A\) and \(\int_{\heartsuit}\!\int_{\spadesuit}A\) are \(\heartsuit\spadesuit\)-discrete; this is immediate by the definition of the maps. We might also hope that, say, \(\flat_{\spadesuit}\) and \(\int_{\heartsuit}\) commute in general, but there is a useful sanity check that shows this is not possible. In the bare type theory with no axioms, there is nothing that prevents interpretation in a model where \(\flat_{\heartsuit}\equiv\flat_{\spadesuit}\) and \(\int_{\heartsuit}\equiv\int_{\spadesuit}\). In ordinary cohesive type theory it is certainly not the case that \(\flat\) and \(\int\) commute, and so \(\flat_{\spadesuit}\!\int_{\heartsuit}\simeq\int_{\heartsuit}\flat_{\spadesuit}\) cannot be provable without further assumptions on \(\heartsuit\) and \(\spadesuit\). A sufficient assumption on our focuses to make \(\flat_{\spadesuit}\) and \(\int_{\heartsuit}\) commute in this way is the following: **Definition 5.1.3**.: Suppose that \(G:_{\heartsuit\spadesuit}I\to\mathbf{Type}\) and \(H:_{\heartsuit\spadesuit}J\to\mathbf{Type}\) detect \(\heartsuit\) and \(\spadesuit\) connectivity respectively. We say that focuses \(\heartsuit\) and \(\spadesuit\) are _orthogonal_ if \(G_{i}\) is \(\flat_{\spadesuit}\)-modal for all \(i\), and \(H_{j}\) is \(\flat_{\heartsuit}\)-modal for all \(j\). Our present goal is to show that this indeed makes \(\flat_{\spadesuit}\!\int_{\heartsuit}\simeq\int_{\heartsuit}\flat_{\spadesuit}\). We will in fact only use that the \(G_{i}\) are \(\flat_{\spadesuit}\)-modal; of course the dual results, flipping \(\heartsuit\) and \(\spadesuit\), require the other half of orthogonality. **Lemma 5.1.4**.: Let \(\heartsuit\) and \(\spadesuit\) be cohesive focuses that are orthogonal. Then for any \(\spadesuit\)-crisp \(A\), if \(A\) is \(\int_{\heartsuit}\)-modal, then \(\flat_{\spadesuit}A\) is still \(\int_{\heartsuit}\)-modal. Proof.: Our goal is to show that \(\flat_{\spadesuit}A\) is equivalent to \(G_{i}\to\flat_{\spadesuit}A\) via precomposition by \(G_{i}\to 1\), for any \(i:I\). We easily check that the type \(G_{i}\to\flat_{\spadesuit}A\) is \(\spadesuit\)-discrete: for any \(H_{j}\), \[H_{j}\to(G_{i}\to\flat_{\spadesuit}A) \simeq G_{i}\to(H_{j}\to\flat_{\spadesuit}A)\] \[\simeq G_{i}\to\flat_{\spadesuit}A\] because \(\flat_{\spadesuit}A\) is \(\spadesuit\)-discrete. 
Then, by the adjointness of \(\int_{\spadesuit}\) and \(\flat_{\spadesuit}\): \[(G_{i}\to\flat_{\spadesuit}A) \simeq\flat_{\spadesuit}(G_{i}\to\flat_{\spadesuit}A)\] \[\simeq\flat_{\spadesuit}(\textstyle\int_{\spadesuit}G_{i}\to A)\] \[\simeq\flat_{\spadesuit}(G_{i}\to A)\] \[\simeq\flat_{\spadesuit}A\] [The statements and proofs of Propositions 5.1.5 through 5.1.9 are not recoverable here. From their uses below: Proposition 5.1.7 shows that \(\flat_{\spadesuit}\!\int_{\heartsuit}X\simeq\int_{\heartsuit}\flat_{\spadesuit}X\) for \(\heartsuit\spadesuit\)-crisp \(X\) when \(\heartsuit\) and \(\spadesuit\) are orthogonal cohesive focuses; Corollary 5.1.8 gives the companion commutation of \(\flat_{\spadesuit}\) with \(\sharp_{\heartsuit}\); and Proposition 5.1.9 shows that \(\sharp_{\spadesuit}X\) is \(\int_{\heartsuit}\)-modal whenever the \(\heartsuit\spadesuit\)-crisp type \(X\) is. These results are cited in §6.]
On the opposite extreme of orthogonality, we can see that if the \(G_{i}\) which detect the connectivity of \(\blacktriangledown\) are \(\int_{\blacktriangle}\)-connected, then any \(\int_{\blacktriangle}\)-modal type is \(\int_{\blacktriangledown}\)-modal. **Proposition 5.1.10**.: Suppose that \(\blacktriangledown\) and \(\blacktriangle\) are cohesive focuses where \(G:I\rightarrow\mathbf{Type}\) detects the connectivity of \(\blacktriangledown\). Then the following are equivalent: 1. Every \(G_{i}\) is \(\int_{\blacktriangle}\)-connected. 2. Any \(\int_{\blacktriangle}\)-modal type is \(\int_{\blacktriangledown}\)-modal. Proof.: Suppose that \(G_{i}\) is \(\int_{\blacktriangle}\)-connected for all \(i\) and that \(X\) is \(\int_{\blacktriangle}\)-modal. We may compute: \[(1\to X)\simeq(\textstyle\int_{\blacktriangle}G_{i}\to X)\simeq(G_{i}\to X)\] In the first equivalence, we use that \(G_{i}\) is \(\int_{\blacktriangle}\)-connected, and in the second that \(X\) is \(\int_{\blacktriangle}\)-modal. We conclude that \(X\) is \(\int_{\blacktriangledown}\)-modal. Conversely, suppose that any \(\int_{\blacktriangle}\)-modal type is \(\int_{\blacktriangledown}\)-modal. Then in particular \(\int_{\blacktriangle}G_{i}\) is \(\int_{\blacktriangledown}\)-modal, so that the identity map of \(\int_{\blacktriangle}G_{i}\) factors through its \(\int_{\blacktriangledown}\)-unit \(\int_{\blacktriangle}G_{i}\to\int_{\blacktriangledown}\!\int_{\blacktriangle}G_{i}\). But by Lemma 5.1.2 we have \[\int_{\blacktriangledown}\!\int_{\blacktriangle}G_{i}\simeq\int_{\blacktriangle}\!\int_{\blacktriangledown}G_{i}\simeq\int_{\blacktriangle}*\simeq*.\] Therefore, the identity of \(\int_{\blacktriangle}G_{i}\) factors through the point, which means it is contractible. ## 6 Examples with Multiple Focuses In this section, we will see examples with multiple focuses. In particular, we will see simplicial real cohesion, equivariant differential cohesion, and supergeometric cohesion. ### Simplicial Real Cohesion We assume two basic focuses: the real (continuous or differential) focus \(\int\dashv\flat\dashv\sharp\), and the simplicial focus \(\mathsf{re}\dashv\mathsf{sk}_{0}\dashv\mathsf{csk}_{0}\). We will write \(\mathbb{R}\) for whichever flavor of real numbers is used in the real cohesive focus. We will assume both the axioms of real cohesion and simplicial cohesion, as well as the following axiom relating the two focuses. **Axiom 6** (Simplicial Real Cohesion).: We assume that the real focus and simplicial focus are orthogonal -- which is to say, \(\mathbb{R}\) is \(0\)-skeletal and \(\Delta[1]\) is discrete. Furthermore, we assume that \(\int\) is computed pointwise: for any simplicially crisp type \(X\), the action \((\eta_{\int})_{n}:X_{n}\rightarrow(\int X)_{n}\) of the \(\int\)-unit of \(X\) on \(n\)-simplices is itself a \(\int\)-unit. 
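Note that the first half of Axiom 6 puts us exactly in the situation of Definition 5.1.3: \(\mathbb{R}\) detects real connectivity, \(\Delta[1]\) detects simplicial connectivity, and each is modal for the other focus's \(\flat\). In particular, Lemma 5.1.2 applies to the two shapes, giving for any crisp type \(X\)
\[\int\mathsf{re}\,X\simeq\mathsf{re}\!\int\!X,\]
a commutation we will use when computing realizations below.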
Our goal in this section will be to prove that if \(M\) is a \(0\)-skeletal type -- to be thought of as a "manifold", having only real-cohesive structure but no simplicial structure -- and \(U\) is a _good cover_ of \(M\) -- one for which the finite intersections are \(\int\)-connected whenever they are inhabited -- then the homotopy type \(\int\!M\) of \(M\) may be constructed as the realization of a discrete simplicial set -- namely, the Čech nerve of the open cover, with each open replaced by the point. **Definition 6.1.1**.: Let \(M\) be a \(0\)-skeletal type. A _cover_ of \(M\) consists of a discrete \(0\)-skeletal index set \(I\), and a family \(U:I\rightarrow(M\rightarrow\mathbf{Prop})\) of subobjects of \(M\) so that for every \(m:M\) there is merely an \(i:I\) with \(m\in U_{i}\). We may assemble a cover into a single surjective map \(c:\bigsqcup_{i:I}U_{i}\to M\), where \[\bigsqcup_{i:I}U_{i}\coloneqq(i:I)\times(m:M)\times(m\in U_{i}).\] A cover \(U:I\rightarrow(M\rightarrow\mathbf{Prop})\) is a _good cover_ if for any \(n:\mathbb{N}\) and any \(k:[n]\to I\), the \(\int\)-shape of the intersection \[\bigcap_{i:[n]}U_{k(i)}\coloneqq(m:M)\times((i:[n])\rightarrow(m\in U_{k(i)}))\] is a proposition. That is, \(\int(U_{k(0)}\cap\cdots\cap U_{k(n)})\) is contractible whenever there is an element in the intersection. We begin with a few ground-setting lemmas. **Lemma 6.1.2**.: Let \(U:I\to(M\to\mathbf{Prop})\) be a simplicially crisp cover, and let \(c:\bigsqcup_{i:I}U_{i}\to M\) be the associated covering map. Consider the projection \(\pi:\check{\mathsf{C}}(c)\to\operatorname{csk}_{0}I\) defined by \((m,z)\mapsto(\operatorname{fst}z_{\operatorname{csk}_{0}})^{\operatorname{csk}_{0}}\). Over a simplicially crisp \(n\)-simplex \(k:\Delta[n]\to\operatorname{csk}_{0}I\), we have \[\operatorname{fib}_{\pi_{n}}(k^{\operatorname{sk}_{0}})\simeq\bigcap_{i:[n]}U_{k(i)_{\operatorname{csk}_{0}}}.\] As a corollary, we have that \[\check{\mathsf{C}}(c)_{n}\simeq(k:I^{[n]})\times\bigcap_{i:[n]}U_{k(i)}.\] Proof.: We compute: \[\operatorname{fib}_{\pi_{n}}(k^{\operatorname{sk}_{0}}) \equiv(x:\check{\mathsf{C}}(c)_{n})\times(\pi_{n}x=k^{\operatorname{sk}_{0}})\] \[\simeq((m,z):(m:M)\times((i:I)\times(m\in U_{i}))^{[n]})\times(\pi_{n}e(m,z)=k^{\operatorname{sk}_{0}})\] \[\simeq(m:M)\times(K:[n]\to I)\times(p:(i:[n])\to(m\in U_{K(i)}))\times(\pi_{n}e(m,i\mapsto(K(i),p(i)))=k^{\operatorname{sk}_{0}})\] Here, \(e(m,z)\) is the image under the equivalence from Proposition 4.2.13. When all the modal dust settles, we will be left knowing that \(\pi_{n}e(m,i\mapsto(K(i),p(i))):\operatorname{sk}_{0}(\Delta[n]\to\operatorname{csk}_{0}I)\) is the unique correspondent to \(K:[n]\to I\) under the \(\operatorname{sk}_{0}\dashv\operatorname{csk}_{0}\) adjunction. Therefore, we may contract \(K\) away with \(k^{\operatorname{sk}_{0}}\) in the above type to get: \[\simeq(m:M)\times((i:[n])\to m\in U_{k(i)_{\operatorname{csk}_{0}}}).\] For the next lemma, we will need to know that \(\int\) commutes with \(\operatorname{csk}_{0}\) on suitably crisp types. **Theorem 6.1.3**.: _For any simplicially crisp type \(X\), we have that \(\int\operatorname{csk}_{0}X\simeq\operatorname{csk}_{0}\!\int\!X\)._ Proof.: We know by Proposition 5.1.9 that \(\operatorname{csk}_{0}\!\int\!X\) is \(\int\)-modal. We therefore have a map \(\int\operatorname{csk}_{0}X\to\operatorname{csk}_{0}\!\int\!X\) given as the unique factor of \(\operatorname{csk}_{0}(-)^{\int}:\operatorname{csk}_{0}X\to\operatorname{csk}_{0}\!\int\!X\). 
We will show that this map is an equivalence. Since it is crisp, it suffices to show that it is an equivalence on \(n\)-simplices. To that end, we compute: \[\operatorname{sk}_{0}(\Delta[n]\to\int\operatorname{csk}_{0}X) \simeq\int\operatorname{sk}_{0}(\Delta[n]\to\operatorname{csk}_{0}X)\] \[\simeq\int\operatorname{sk}_{0}([n]\to X)\] \[\simeq\operatorname{sk}_{0}\!\int([n]\to X)\] \[\simeq\operatorname{sk}_{0}([n]\to\int\!X)\] \[\simeq\operatorname{sk}_{0}(\Delta[n]\to\operatorname{csk}_{0}\!\int\!X)\] It remains to show that this is indeed the right equivalence. Since the first equivalence in the series above is given as the inverse of \((-)_{n}^{\int}:(\operatorname{csk}_{0}X)_{n}\to(\int\operatorname{csk}_{0}X)_{n}\), it suffices to check that given a crisp \(z:\Delta[n]\to\operatorname{csk}_{0}X\), \((z^{\operatorname{sk}_{0}})^{\int}\) corresponds under the above equivalences to \((\operatorname{csk}_{0}(-)^{\int}\circ z)^{\operatorname{sk}_{0}}\). First, we send \((z^{\operatorname{sk}_{0}})^{\int}\) to \(((z\circ(-)_{\operatorname{sk}_{0}})^{\operatorname{sk}_{0}})^{\int}\). Then, we send it to \(((z\circ(-)_{\operatorname{sk}_{0}})^{\int})^{\operatorname{sk}_{0}}\), and then to \((i\mapsto(z(i^{\operatorname{sk}_{0}})^{\int}))^{\operatorname{sk}_{0}}\). Finally, we map this to \((i\mapsto(z(i)^{\int})^{\operatorname{csk}_{0}})^{\operatorname{sk}_{0}}\), which does equal \((\operatorname{csk}_{0}(-)^{\int}\circ z)(i)\equiv(z(i)^{\int})^{\operatorname{csk}_{0}}\) at \(i:\Delta[n]\). **Lemma 6.1.4**.: Let \(U:I\to(M\to\mathbf{Prop})\) be a simplicially crisp cover, and let \(c:\bigsqcup_{i:I}U_{i}\to M\) be the associated covering map. Consider the projection \(\pi:\check{\mathsf{C}}(c)\to\operatorname{csk}_{0}I\) defined by \((m,z)\mapsto(\operatorname{fst}z_{\operatorname{csk}_{0}})^{\operatorname{csk}_{0}}\). Then \(U\) is a good cover if and only if the restriction \(\pi:\check{\mathsf{C}}(c)\to\operatorname{im}\pi\) is a \(\int\)-unit. Proof.: Since \(\mathsf{csk}_{0}\) and \(\int\) commute by Theorem 6.1.3 and \(I\) is discrete, \(\mathsf{csk}_{0}I\) is also discrete. As the subtype of a discrete type, \(\mathsf{im}\pi\) is discrete. Therefore, it suffices to show that \(\pi:\check{\mathsf{C}}(c)\to\mathsf{im}\pi\) induces an equivalence \(\int\check{\mathsf{C}}(c)\xrightarrow{\sim}\mathsf{im}\pi\) if and only if the cover \(U\) is good. Since \(\pi\) is crisp, \(\int\check{\mathsf{C}}(c)\to\mathsf{im}\pi\) is an equivalence if and only if it is an equivalence on all \(n\)-simplices. On \(n\)-simplices, this map is equivalent to the map induced on the \(\int\)-shapes of the fibers computed in Lemma 6.1.2, which is an equivalence if and only if the cover is good. Finally, we can piece these lemmas together for our result. **Theorem 6.1.5**.: _Let \(U:I\to(M\to\mathbf{Prop})\) be a simplicially crisp good cover of a \(0\)-skeletal type \(M\). Let \(\pi:\check{\mathsf{C}}(U)\to\mathsf{csk}_{0}I\) be the projection. Then_ \[\mathsf{re}\,\mathsf{im}\pi\simeq\int\!M.\] 
The following lemma appears as Lemma 3.67 of [43], and is proven quickly with our general lemmas concerning orthogonal cohesions.

**Lemma 6.2.2**. Suppose that \(X\) is both differentially and equivariantly crisp. Then \[\cup\int X\simeq\int\cup X\qquad\cup\flat X\simeq\flat\cup X\qquad\cup\sharp X\simeq\sharp\cup X.\]

Proof. The first equivalence follows by Proposition 5.1.7. The second equivalence follows by Proposition 5.0.3. The third follows by Corollary 5.1.8.

### Supergeometric Cohesion

In his habilitation, _Differential Cohomology in a Cohesive \(\infty\)-Topos_ [44], Schreiber describes an increasing tower of adjoint modalities which appear in the setting of supergeometry. The setting for supergeometric cohesion -- called "solid cohesion" in _ibid._ -- is sheaves on the opposite of a category of super \(\mathscr{C}^{\infty}\)-algebras. Schreiber calls these sheaves _super formal smooth \(\infty\)-groupoids_. Specifically, the site is (the opposite of) the full subcategory of the category of supercommutative real algebras spanned by objects of the form \[\mathscr{C}^{\infty}(\mathbb{R}^{n})\otimes W\otimes\Lambda\mathbb{R}^{q}\] where \(W\) is a _Weil algebra_ -- a commutative nilpotent extension of \(\mathbb{R}\) which is finitely generated as an \(\mathbb{R}\)-module. The factor \(\mathscr{C}^{\infty}(\mathbb{R}^{n})\otimes W\) is even graded, while the Grassmann algebra \(\Lambda\mathbb{R}^{q}\) is odd graded. See Definition 6.6.13 of [44].

The inclusion of algebras of the form \(\mathscr{C}^{\infty}(\mathbb{R}^{n})\otimes W\) has a left and a right adjoint. The left adjoint is given by projecting out the even subalgebra, and the right adjoint is given by quotienting by the ideal generated by the odd graded elements. This gives rise to an adjoint quadruple between the resulting toposes of sheaves, and thus an adjoint triple of idempotent adjoint (co)monads on the topos of super formal smooth \(\infty\)-groupoids: \(\rightrightarrows\dashv\rightsquigarrow\dashv\operatorname{Rh}\). Of these, \(\rightrightarrows\) and \(\operatorname{Rh}\) are idempotent monads. However, \(\rightrightarrows\) does not preserve products, and so does not give an internal modality.

The action of \(\operatorname{Rh}\) is easy to define: \[\operatorname{Rh}X(\mathscr{C}^{\infty}(\mathbb{R}^{n})\otimes W\otimes\Lambda\mathbb{R}^{q}):=X(\mathscr{C}^{\infty}(\mathbb{R}^{n})\otimes W).\] That is, \(\operatorname{Rh}X\) is defined by evaluating at the even part of the superalgebra in the site. We may characterize it internally by localizing at the _odd line_ \(\mathbb{R}^{0|1}\), which is the sheaf represented by the free superalgebra on one odd generator \(\Lambda\mathbb{R}\).

We turn to the internal story now. The topos of super formal smooth \(\infty\)-groupoids also supports the differential real cohesive modalities \(\int\dashv\flat\dashv\sharp\). These destroy all geometric structure -- super and otherwise. For this reason, we will work with the lattice \(\{\operatorname{diff}<\operatorname{super}<\top\}\) of focuses.
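As a quick check of the external formula for \(\operatorname{Rh}\): the odd line is represented by \(\Lambda\mathbb{R}\), so its value on a superalgebra is the set of its odd elements, \[\mathbb{R}^{0|1}(\mathscr{C}^{\infty}(\mathbb{R}^{n})\otimes W\otimes\Lambda\mathbb{R}^{q})\cong(\mathscr{C}^{\infty}(\mathbb{R}^{n})\otimes W\otimes\Lambda\mathbb{R}^{q})_{\mathrm{odd}}.\] Applying the formula, \(\operatorname{Rh}\mathbb{R}^{0|1}\) evaluates instead at \(\mathscr{C}^{\infty}(\mathbb{R}^{n})\otimes W\), which has no nonzero odd elements; so \(\operatorname{Rh}\mathbb{R}^{0|1}\) is terminal. The rheonomic modality collapses the odd line to a point, as one expects of a localization at \(\mathbb{R}^{0|1}\).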
The modalities of the super focus are \(\rightsquigarrow\dashv\operatorname{Rh}\). We will refer to \(\rightsquigarrow X\) as the even part of \(X\), while \(\operatorname{Rh}\) is known as the _rheonomic_ modality. We assume the following axioms for supergeometric or _solid_ cohesion.

**Axiom 7** (Solid Cohesion). Solid cohesion uses the focus lattice \(\{\operatorname{diff}<\operatorname{super}<\top\}\). We use the definition of real superalgebras due to Carchedi and Roytenberg [15].

1. We assume a commutative ring \(\mathbb{R}^{1|0}\) satisfying the axioms of synthetic differential geometry (as e.g. in Section 4.1 of [36]), known as the _smooth reals_ or the _even line_.
2. We assume an \(\mathbb{R}^{1|0}\)-module \(\mathbb{R}^{0|1}\). There is furthermore a bilinear multiplication \(\mathbb{R}^{0|1}\times\mathbb{R}^{0|1}\to\mathbb{R}^{1|0}\) which satisfies \(a^{2}=0\) for all \(a:\mathbb{R}^{0|1}\). Together these axioms imply that \(\mathbb{R}^{1|1}:=\mathbb{R}^{1|0}\times\mathbb{R}^{0|1}\) is an \(\mathbb{R}^{1|0}\)-supercommutative superalgebra.
3. We assume the following odd form of the Kock-Lawvere axiom: for any function \(f:\mathbb{R}^{0|1}\to\mathbb{R}^{0|1}\) with \(f(0)=0\), there is a unique \(r:\mathbb{R}^{1|0}\) with \(f(x)=rx\) for all \(x\).
4. We assume that \(\mathbb{R}^{0|1}\) is \(\rightsquigarrow\)-connected.
5. We assume that \(\mathbb{R}^{1|0}\) detects differential connectivity.
6. We assume that a type is \(\operatorname{Rh}\)-modal if and only if it is \(\mathbb{R}^{0|1}\)-null.

**Remark 6.3.1**. It might seem prudent to instead ask that differential connectivity is detected by the family consisting of both \(\mathbb{R}^{1|0}\) and \(\mathbb{R}^{0|1}\), since we want \(\int\) to nullify all representables \(\mathbb{R}^{n|q}\), but it suffices to test with \(\mathbb{R}^{1|0}\) since \(\mathbb{R}^{0|1}\) admits an explicit contraction by its \(\mathbb{R}^{1|0}\)-module structure (appealing to Lemma 6.10 of [34]).

By Corollary 5.0.2, any \(\sharp\)-modal type is \(\operatorname{Rh}\)-modal. But also every \(\flat\)-modal type is \(\operatorname{Rh}\)-modal.

**Lemma 6.3.2**. If \(X\) is \(\int\)-modal (and, in particular, if \(X\) is \(\flat\)-modal), then \(X\) is \(\operatorname{Rh}\)-modal.

Proof. Since \(\mathbb{R}^{0|1}\) is \(\int\)-connected due to its explicit contraction by the scaling of its module structure, any \(\int\)-modal type is \(\mathbb{R}^{0|1}\)-null, and therefore \(\operatorname{Rh}\)-modal.
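To see how these axioms interact, write \(\mathbb{R}^{n|q}:=(\mathbb{R}^{1|0})^{n}\times(\mathbb{R}^{0|1})^{q}\) (a definition we adopt here for illustration). Since \(\operatorname{Rh}\) is a modality, it preserves finite products; and since \(\mathbb{R}^{0|1}\) is inhabited by \(0\) and \(\operatorname{Rh}\) nullifies it, we get \(\operatorname{Rh}\mathbb{R}^{0|1}\simeq *\). Hence \[\operatorname{Rh}\mathbb{R}^{n|q}\simeq\operatorname{Rh}\mathbb{R}^{n|0}.\] If, in addition, \(\mathbb{R}^{1|0}\) is \(\mathbb{R}^{0|1}\)-null (this holds in the intended sheaf model, where every map \(\mathbb{R}^{0|1}\to\mathbb{R}^{1|0}\) is constant; we flag it as an extra assumption here rather than a consequence of the axioms above), then \(\operatorname{Rh}\mathbb{R}^{n|q}\simeq\mathbb{R}^{n|0}\): rheonomically, a super Cartesian space is just its even part.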
2309.10440
Why do drivers and automation disengage the automation? Results from a study among Tesla users
A better understanding of automation disengagements can impact the safety and efficiency of automated systems. This study investigates the factors contributing to driver- and system-initiated disengagements by analyzing semi-structured interviews with 103 users of Tesla's Autopilot and FSD Beta. Through an examination of the data, main categories and sub-categories of disengagements were identified, which led to the development of a triadic model of automation disengagements. The model treats automation and human operators as equivalent agents. It suggests that human operators disengage automation when they anticipate failure, observe unnatural or unwanted automation behavior (e.g., erratic steering, running red lights), or believe the automation is not suited for certain environments (e.g., inclement weather, non-standard roads). Human operators' negative experiences, such as frustration, feelings of unsafety, and distrust, are also incorporated into the model, as these emotions can be triggered by (anticipated) automation behaviors. The automation, in turn, monitors human operators and may disengage itself if it detects insufficient vigilance or traffic rule violations. Moreover, human operators can be influenced by the reactions of passengers and other road users, leading them to disengage automation if they sense discomfort, anger, or embarrassment due to the system's actions. This research offers insights into the factors contributing to automation disengagements, highlighting not only the concerns of human operators but also the social aspects of the phenomenon. Furthermore, the findings provide information on potential edge cases of automated vehicle technology, which may help to enhance the safety and efficiency of such systems.
Sina Nordhoff, Joost De Winter
2023-09-19T08:59:35Z
http://arxiv.org/abs/2309.10440v1
# Why do drivers and automation disengage the automation?

###### Abstract

A better understanding of automation disengagements can impact the safety and efficiency of automated systems. This study investigates the factors contributing to driver- and system-initiated disengagements by analyzing semi-structured interviews with 103 users of Tesla's Autopilot and FSD Beta. Through an examination of the data, main categories and sub-categories of disengagements were identified, which led to the development of a triadic model of automation disengagements. The model treats automation and human operators as equivalent agents. It suggests that human operators disengage automation when they anticipate failure, observe unnatural or unwanted automation behavior (e.g., erratic steering, running red lights), or believe the automation is not suited for certain environments (e.g., inclement weather, non-standard roads). Human operators' negative experiences, such as frustration, feelings of unsafety, and distrust, are also incorporated into the model, as these emotions can be triggered by (anticipated) automation behaviors. The automation, in turn, monitors human operators and may disengage itself if it detects insufficient vigilance or traffic rule violations. Moreover, human operators can be influenced by the reactions of passengers and other road users, leading them to disengage automation if they sense discomfort, anger, or embarrassment due to the system's actions. This research offers insights into the factors contributing to automation disengagements, highlighting not only the concerns of human operators but also the social aspects of the phenomenon. Furthermore, the findings provide information on potential edge cases of automated vehicle technology, which may help to enhance the safety and efficiency of such systems.

Partial automation; automation disuse; disengagements; Tesla Autopilot; Full-Self-Driving (FSD) Beta

## 1 Introduction

Since October 2020, drivers in the United States and Canada have been using Tesla's Full Self-Driving (FSD) Beta feature, an SAE Level 2 system that expands the operational design domain of the standard Autopilot beyond highways to non-highway roads. Tesla cautioned FSD Beta program participants via email to exercise vigilance while using the system, as it may potentially malfunction at the most inopportune moments. Drivers were instructed to maintain their hands on the steering wheel, pay close attention to the road, and avoid complacency (Nordhoff et al., 2023). Research indicates that Tesla Autopilot users might misuse automation, becoming complacent and engaging in hazardous behaviors such as hands-free and mind-off driving, intentionally manipulating the steering wheel to feign attentiveness, and sleeping while the system is engaged (Nordhoff et al., 2023). The present study focuses on the disengagement (deactivation) of automation, which can be initiated by either the driver (driver-initiated) or the system itself (system-initiated). Deactivation in this study is not limited to immediate responses to environmental occurrences; it may also manifest in a more enduring manner. This enduring deactivation has been referred to as disuse, or the underutilization of automation (Parasuraman and Riley, 1997). More specifically, the term 'disuse' is commonly applied to situations in which the driver chooses not to use automation (i.e., not turning on, or turning the automation off after usage) even though usage would be beneficial for performance and safety.
Disengaging automation poses a safety concern: if drivers opt to disengage the automation in situations where it could enhance their performance and road safety, automation's potential benefits will not be realized. Conversely, it could be posited that disengaging automation might be beneficial or even necessary to avert accidents. The study of human use, misuse, disuse, and abuse of automation has been a subject of research since the 1990s. Scholars have discovered that automation disuse arises from a multitude of factors, including low perceived automation reliability and false alarms, task complexity, risk, learning about automation states, fatigue, a general negative predisposition towards automation and a resistance to innovation, missing functionality, and an aversion to automation's 'bells and whistles' (De Winter et al., 2022; Ferris et al., 2010; Lee, 2006, 2008; Parasuraman and Riley, 1997; Parasuraman et al., 2008; Reagan et al., 2018). These studies propose that users are more likely to engage automation if they perceive its reliability to be greater than their self-confidence. Conversely, if users' self-confidence surpasses their trust in automation, they are more inclined to disengage it. High workload may also prompt users to disengage reliable, accurate, and trustworthy automation. Research on automation disengagement of automated vehicles demonstrates that driver-initiated disengagements occur in anticipation of potentially hazardous situations, such as adverse weather conditions, construction zones, poor road infrastructure, the presence of emergency vehicles, or navigating curves. Other reasons for disengaging automation encompass executing lane-changing maneuvers, lack of trust or discomfort, and other road users' (reckless) driving behavior (Boggs et al., 2020; Dixit et al., 2016; Favaro et al., 2018; Kim et al., 2022; Lee, 2006; Lv et al., 2017; Wilson et al., 2020). System-initiated disengagements can arise due to missing system functionality (Gershon et al., 2021), such as failures in detection technology, communication, sensor readings, map calibration, or hardware (Dixit et al., 2016). Understanding the reasons for disengaging automated driving systems (i.e., switching them off, or not switching them on) is essential to ensure their use in situations where they offer increased safety and efficiency compared to human drivers. Knowledge about disengaging automation can also provide valuable insight into 'edge case' scenarios, namely situations in which human drivers disengage the automated driving system because its operational boundaries have been exceeded. Addressing these edge cases is pivotal to fully realizing the benefits of automation and minimizing its risks (Ryerson et al., 2020), and contributes to the widespread acceptance of automated cars. There is a scarcity of research on the factors underlying driver- and system-initiated disengagements of Autopilot and FSD Beta. Previous studies have examined various factors contributing to disengagements, such as the impact of automation capability on trust and reliance, as well as the influence of operator confidence, risk perception, and learning. However, previous research has often studied these factors in isolation. Most of the research reporting disengagements of partially automated driving relied on technical vehicle data (Alambeigi et al., 2020; Favaro et al., 2018) rather than in-depth qualitative accounts from the users of these systems.
The psychological aspects of disengaging Autopilot and FSD Beta are not well documented. Our study adopts a more comprehensive approach by examining the reasons behind disengagements of complex real-life partially automated driving systems: Tesla's Autopilot and FSD Beta.

## 2 Method

### Recruitment

We conducted semi-structured interviews with drivers of partially automated vehicles equipped with standard Autopilot and FSD Beta. The study was approved by the Human Research Ethics Committee of Delft University of Technology. We recruited users of standard Autopilot and FSD Beta through specialized online communities and forums (i.e., Discord, Facebook, Twitter, Reddit, YouTube, Instagram, Tesla Motors Club, and Tesla Motors Forum). As FSD Beta was only available to drivers in the United States and Canada during the study, we focused our recruitment efforts on these regions. Ownership of a Tesla was evaluated subjectively, using self-reported data regarding access to Autopilot and FSD Beta.

### Procedure

The interviews were conducted online via Zoom, with both audio and video recorded. The interviews followed a pre-defined protocol consisting of open-ended and closed-ended questions. To reduce the subjectivity of interview research, an interview protocol was created on Qualtrics, and a link to the questions was sent through the chat function of Zoom at the beginning of the interview. This allowed respondents to view the questions directly, enabling them to advance to the next question independently. The researcher mainly listened to respondents' answers, to minimize influencing them during the interview. Respondents were encouraged to skip questions already answered. As the questions were standardized and logically ordered, the researcher's intervention was minimal. The interview protocol comprised two main parts. Initially, respondents provided informed consent to participate in the study. The first part consisted mostly of open-ended questions, while the second part primarily involved closed-ended questions about respondents' socio-demographic profile, travel behavior (e.g., age, gender, education, frequency of Autopilot and FSD Beta use), and general attitudes towards traffic safety. The appendix presents an overview of the questions asked in the first part of the interview. The present paper only analyzed the questions Q28, "Do you disengage Autopilot and FSD Beta? Why/why not?", and Q29, "Does Autopilot and FSD Beta disengage? When/in which situations?". Respondents were asked to answer each question separately for Autopilot and FSD Beta, reflecting on any differences between the systems. The questions Q1, Q4, Q24-Q26, and Q30-Q35 were addressed in our previous study (Nordhoff et al., 2023).

### Data analysis

Data analysis was performed by the first author of the present study in four steps:

1. Interviews were recorded via Zoom and transcribed verbatim using Microsoft Teams transcription software. Transcripts were compared with audio files and corrected as necessary. Atlas.ti version 22.0.2 was used to create main categories and sub-categories for data analysis. These categories were developed using the categorization of disengagement causes from Boggs et al. (2020), which classifies them in terms of control discrepancies, environmental and other road users, hardware, software, perception, and planning discrepancies, and operator takeover.
2. Additional sub-categories were formed following principles of inductive category development by Mayring (2000) for those not fitting within the proposed classification scheme. The main categories and sub-categories were used to develop a triadic model of driver- and system-initiated disengagements.

3. All sub-categories reported in Table 2 are mentioned at least five times by respondents.

4. Illustrative quotes were chosen to convey the meaning of each sub-category. Multiple mentions of a sub-category by a respondent were not discarded but combined with other mentions of the sub-category by the respondent. As a result, some quotes represent collections of sentences mentioned by the same respondents at different points during the interview. Filler words and repetitions (e.g., "you know", "like", "uhhm") were omitted from the quotes.

## 3 Results

The majority of respondents were male (91%), with an average age of 42 years, and highly educated (52% held a Bachelor's or Master's degree); they occupied positions as engineers (30%) or managers (8%), or were retired (7%). They predominantly resided in California (20%), Colorado (8%), and Florida (7%). Eighty-two percent of respondents utilized both standard Autopilot and FSD Beta, while 18% reported having access only to standard Autopilot.

The analysis of interview data led to the development of a triadic model of automation disengagement, as depicted in Figure 1. This model was derived from the main categories and sub-categories presented in Table 1. The model considers the automation and human operator as equivalent agents since the decision to disengage can be initiated by either party. Both agents assess each other's performance and capabilities. The model identifies both relatively permanent and more transient states of the automation and human operator. A more permanent reason for human operators to disengage the automation is the preference for manual control to enjoy driving. Certain (anticipated) automation behaviors can result in negative transient states experienced by the human operator, such as frustration, stress, and embarrassment. The human operator may decide to disengage the automation in anticipation of automation failure in environments that exceed the system's capabilities or when the automation exhibits unnatural or unwanted behavior. Moreover, the human operator can apply a 'theory of mind' to deduce whether other humans might become uncomfortable or frustrated by the automation's actions.

Figure 1: Triadic model of automation disengagements.

Similarly, the automation may perceive the human operator as insufficiently vigilant (based on the hands-on-wheel sensors) or violating speed limit rules, or it may assess its own capabilities as inadequate in the given environment, leading to self-disengagement.
Table 1: Main categories and sub-categories of driver- and system-initiated disengagements of Autopilot and FSD Beta. Whether each sub-category concerns driver- and/or system-initiated disengagements, and Autopilot and/or FSD Beta, is indicated in the text of Section 3.

| Main category | Sub-categories |
| --- | --- |
| Human operator's relevant states | Permanent: fatigue and intoxication; travel trip constraints; preference for manual control to enjoy driving. Transient: frustration and (temporal) stress; embarrassment |
| Human operator's perception of automation | Software releases; anticipated automation failure; unnatural automation behavior; random disengagements; false positives; unwanted action(s) at the operational level (harsh deceleration, i.e. 'phantom braking'; erratic steering wheel movements; steering into adjacent traffic), strategic level (route unfamiliarity; taking the wrong route), and tactical level (longitudinal control; undesired lane changes; misidentifying correct lane; creeping; rolling through stop signs; running red lights / stop signs; turning-on-red; unprotected left turns; other turning situations) |
| Human operator's perception of other humans | Passenger discomfort; road rage; reckless behavior of other road users |
| Automation's perception of the human operator | Insufficient vigilance; traffic rule violations |

### Human operator's relevant states

As shown by our model in Figure 1, human operators initiate the disengagement due to permanent and transient operator states. Permanent human operator states include fatigue and intoxication, travel trip constraints, and the preference for manual control to enjoy driving. Transient human operator states include frustration and (temporal) stress as well as embarrassment.

#### 3.1.1 Fatigue and intoxication

This sub-theme covered driver-initiated disengagements resulting from drivers being physically or mentally impaired by factors such as fatigue and intoxication.
_"I stopped using FSD Beta when I'm fatigued or something, and I just don't feel like paying that close attention."_ (R012) _"If I'm tired at the end of the day, I'll actually just turn off the Beta and drive myself because I know I need to pay attention, and I don't want to have to go through this stress of figuring out 'OK, is it going to slow down in the middle of the intersection before proceeding? ""_ (R017) _"With the Beta, I didn't use it on that trip because I was tired, and it was a long trip, and Beta is one of those things you have to be on top of it. You can't use it when you're impaired in any way, when you're super tired, when you've had any alcohol."_ (R096) #### 3.1.2 Travel trip constraints Human operators reported to disengage FSD Beta when they faced travel time constraints, wanted to travel short distances, and at the beginning and end of a trip. _"If I'm in a little bit of a hurry, I'll take over myself because it is probably slower. It's more careful and cautious than I would in some ways so if I'm trying to save time, I'll take over myself."_ (R021) _"When I take my lunch - it's just a short 15-minute break - it's sort of 'OK, I'm just gonna rush really quick right around the corner, grab a sandwich, and then rush right back to my apartment', and it's such a short distance that it doesn't really warrant using it. So, that would be about the only time that I don't use it."_ (R049) _"The second time that I will disable it will be if I need to get to my destination on a less relaxed timeline, because FSD is great for obeying all the laws that it's supposed to, and there are times where you're running late, you're gonna be in a little bit of a hurry."_ (R061) _"I usually can't engage Beta from my driveway. I can't tell it 'Leave my driveway' or 'Back into my drive'. I would love to teach the car 'Hey, this is how I backed into my driveway. Just repeat that.' So, once I get on my main road, I will usually engage it."_ (R071) _"I turn off FSD Beta when I have to make a tight turn in and out of parking lots because the camera system is limited in what it can see, and it will nose out and then stop, or it will nose out and keep going, and you don't want it to."_ (R099) #### 3.1.3 Preference for manual control to enjoy driving The preference for manual control to enjoy driving was mentioned as one of the reasons leading to disengage automation. This sub-category represents a permanent human operator state as shown by our model in Figure 1. _"I stopped using Autopilot when I wanna do the driving because I'm enjoying doing the driving."_ (R012) _"The other times that I will disengage Autopilot is if I want to have a little bit of fun. The car will never use its full acceleration capability when it's in Autopilot or full self-driving, and so if I'm the lead person at a stoplight, and then I'm watching the surroundings, and see that there's nobody that's going to run a red, I'll turn off Autopilot so that I can be more aggressive, and then turn it back on once we're into our normal speeds."_ (R037) _"Almost the only time I ever turn it off is if I'm doing a joyride, or in the mountains, and really wanna experience the driving experience. 
Similar to some of the same motivations I use whenever I do auto cross or rally cross type of stuff as a hobby on closed tracks."_ (R065) _"Sometimes I'll turn it off because I wanna feel driving for an afternoon on a curvy highway road or something."_ (R085)

#### 3.1.4 Frustration and (temporal) stress

This sub-theme covered driver-initiated disengagements resulting from frustration and the driver's stress with the behavior of FSD Beta.

_"FSD Beta - I think it's less stressful to not have it enabled."_ (R003) _"The only thing that impacts whether or not I use it is 'Do I want to not have to worry about driving on a freeway?' I'll use Autopilot if I don't really care. FSD Beta - it's not an unsafe thing that stops me from using it. It's annoying to use it. I don't want to babysit the car, so I'd rather just drive myself."_ (R055) _"With FSD Beta, I see a lot of disengagement on the trip, I just disengage it, and drive away because at some point it becomes dangerous, and it's easier for me. I get frustrated. Like 'What are you? Why? Why are you doing this?' 'What's the point of doing this?' It's just unnecessary."_ (R058) _"There is something in that drive that just gets me to say 'I'm done'. I'm gonna make sure that I don't get frustrated, and that it doesn't make a mistake."_ (R061) _"With the Beta, I was frustrated, and we had gone through a couple updates, and nothing was getting better in my area. I just get angry, and I'm like 'You know what? This is not even usable.' So, I just stopped using it for a while."_ (R096)

#### 3.1.5 Embarrassment

This sub-theme addressed driver-initiated disengagements resulting from the driver's feeling of embarrassment towards other road users due to the behavior of FSD Beta.

_"I feel shame and embarrassment because it's like 'Oh, if I turned on my blinker and they gave me room, but I didn't actually need to change lanes.' Now they are confused, and I feel bad because I'm the worst driver on the road. I wanted a bumper sticker like 'It's not my fault. I'm so sorry. It's a student driver.'"_ (R007) _"Just a couple days ago, I was using it, and there was a woman and her dog standing on the corner of the road and I thought 'I'm gonna turn off Full Self-Driving Beta because even though it's probably not gonna hit this woman and her dog, it might turn really stupidly, and then this woman's gonna be like 'How, why does she make such a horrible turn?' Would be embarrassed."_ (R059) _"When it came up into that intersection, all of a sudden it jerks to the left for no reason. It just literally jerks you into the next lane with no notice, no reason, and that's the scary thing. Especially if there was somebody there, it would be like 'Was this guy drunk? Is he texting?'"_ (R069) _"If I allowed to do auto lane change, sometimes I will interject because it'll be like 'I wanna change back lanes' but then I know 5 seconds later, it's gonna come back out of that driving lane again to pass another vehicle. So sometimes I may turn off the auto lane change feature. Just keep it more consistent versus me look like a crazy driver just zipping back and forth."_ (R071) _"Most often, when it continues to go, it already stopped but the person is thanking me for stopping, and in that time that the person thanks me, the car decides 'Well, if you're not gonna go, I'm just going to go instead.' It confuses the person. It makes me look bad. It's just bad.
"_ (R074) ### Human operator's perception of automation In addition to disengaging automation due to permanent and transient operator states, human operators may also choose to disengage automation due to new software releases, in anticipation of automation failures in environments exceeding the capabilities of the automation, or when the automation behaves in an unnatural or unwanted way. #### 3.2.1 Software releases This sub-theme addressed driver-initiated disengagements caused by new software releases, with drivers reporting engaging FSD Beta after a new software update has been released to test and experience the limits of the system, or temporarily disengaging FSD Beta due to software bugs associated with the software release. _"FSD Beta - so with each update I like to test the new update on the same sections of road, and if it does something really bad a few times in one section of the road, I won't use it for that whole update, but then I'll try it again in the next update, and it might improve so that I don't keep using it. "_ (R043) "If there is a bad update, I probably just wouldn't use it at all it. If they released an update that just completely broke something, I probably just would stop using it until the next update."_ (R063) "If the FSD Beta software is just a bad release at least for my area, I just stopped using it most of the time because it's pretty much unsafe." (R078) "There was one particular update they released that had a lot of bugs, and that got people kind of like 'Ohh, no, I don't want to use it', but they fixed it within 4 hours. So, I think the only way that would make me not to want to use FSD is that if I had a buggy software that they released." (R091) "So, for me I don't use it every single day because without another update, it pretty much acts the same way. So, unless I don't have an update, there's no sense for me to keep driving the same area. I'm not gonna get a different result." (R094) #### 3.2.2 Anticipated automation failure This sub-theme addressed driver-initiated disengagements in anticipation of upcoming system failure, with drivers reporting a high number of driver-initiated disengagements. "I think the furthest I've made it so far is maybe a kilometer before I had to do something." (R045) "I don't think I could have FSD Beta running for over 10 minutes straight without me needing to disengage." (R055) "There is some point where we were going to lunch, my programmer and I, and I took it the two miles into town. It was terrible. All was terrible. It was so bad it would disengage every quarter mile, which is terrible." (R067) Drivers reported taking over control before the car executes the maneuver because of discomfort, a lack of trust in the system's capabilities, and a low feeling of safety. Some of these disengagements might have been unnecessary as it is not clear whether the system would have caused a crash if the driver had not intervened. _"Sometimes you have traffic coming towards you and there's a hole in traffic, and you can make a left turn. So, the question is how? Is the car going to accelerate through that hole in a way I'm comfortable with, and sometimes I'm just not willing to find out. I'm gonna put my foot on the accelerator because finding out the answer to this question is very significant. It's an accident."_ (R012) _"You immediately have to intervene now. You don't know if it would have turned itself because you intervened. It's likely that it would have turned. I mean, even Tom was riding with me at that point. 
I was driving, and he said 'Yeah, it would have turned, but it looks scary. "_ (R067) _"I think to myself 'Holy shit, if I didn't take over in less than a half second, I am going to have a head-on collision going 40 miles an hour with another car going 40 miles an hour. I think that it would have corrected itself but then at the same time it's the repercussions, if it didn't correct itself..."_ (R068) _"I think the first times I showed it to friends and family, it tried to kill us a couple times, and that's part of the experience. If I just completely let go that wheel, it probably would have done something that would have caused an accident."_ (R074) _"The system is going haywire and it's like 'Hey, you know, I'm Terminator. I'm gonna do my own thing', and then I feel unsafe forward to the point where I disengage, I turn off FSD, and I'm like 'I don't want to risk anything'."_ (R075) #### 3.2.3 Unnatural automation behavior As shown in our model in Figure 1, disengagements by human operators may occur due to the automation's unnatural or unhuman behavior. "I'm not always sure that's going to do the right thing. Usually it does, but again it doesn't always do it soon or not soon, or at least as early as I would do it myself so there's that divergent between what I would do and what the car is doing. I pretty much always take over at that point."_ (R003) _"I don't understand if the car is gonna behave the way I would behave or better, and this is probably because in a lot of cases when there's no risk in these scenarios, the car sometimes will do behaviors that are like 'OK, so why did you just sort of come to a stop right here?' If there were other cars around, I'm gonna wanna intervene."_ (R012) _"When I feel uncomfortable, I take over quickly. It probably would be able to make it through the corners safely. It's just... it doesn't drive like me, which I think is one of the biggest problems, and one of the things that people expect from a full self-driving car is that it would be a comfortable driving experience, but at the moment it does not achieve that."_ (R038) _"Do I disengage Autopilot, FSD Beta? FSD Beta all the time and that's just because of how I drive. I don't want it to do something that seems inhuman."_ (R055) _"It will start doing something that at least as a human, I would not do that, and that's typically when I would take over or override it."_ (R090) #### 3.2.4 Random disengagements This sub-theme addressed system-initiated disengagements caused by system error without drivers understanding why the system disengaged. _"I had a couple of situations where I was driving on the road, not a regular road, and it just started beeping, and asking me to take control, and I could not tell what exactly it was that it did not like but I just took control for 30 seconds, and then engaged it by the next corner and it was fine."_ (R001) "FSD Beta disengages more often than not for me, when the system itself is having an error. So, if it says take over immediately and I take the wheel, system will disengage, and say it aborted due to a system error."_ (R064) "There are times where both of them will disengage for no reason. I'll get the red steering wheel, and it'll say take over immediately, and I don't know why. I work on the overnight shift, so I'll be on the freeway in the middle of the night with no one around me, and the red steering wheel will come on, and it'll tell me you take over immediately, and I'll start slowing down for absolutely no reason. 
I've had been to do that a couple times where I was at a stoplight, and it just kicked me out and did the same thing. Just said take over!"_ (R076) "So just two days ago I had an Autopilot software crash. I'm not entirely sure what happened, but it does occasionally happen during this test process. I've never seen FSD Beta crash."_ (R094) #### 3.2.5 False positives Human operators reported that the system disengaged itself despite them supervising the automation. "I rest my hand on the wheel while it's dragging, and it will say my hand is not there. It said there is no pressure on the steering wheel so it will disable it. If somebody is not paying attention, you could cause an accident."_ (R008) "Autopilot itself will give you 'Please keep your hands on the wheel even though your hands are on the wheel.' It gives you that warning regardless, and when it says 'Take over immediately', it will disengage even if you don't have your hands on the wheel. It typically disengages about 530 seconds later even if you refuse to take over."_ (R032) "There's been a time where I was wearing sunglasses, and it couldn't be sure that I was actually paying attention, and so it gave one of the red steering wheel warnings to take over, and that's essentially the biggest times that it'll disengage if it thinks that you're not paying attention."_ (R037) "If I have to get yelled at by the camera all the time because it's literally tracking my eyeballs, and it's wrong sometimes. If it sees a phone, it has really good AI, even if you're looking at it, a map or whatever, it just sees the device in your hand, and then you get a strike, right? So, I have two strikes out of five already, and I didn't buy a car to get a strike out."_ (R042) #### 3.2.6 Unwanted action(s) Human operators additionally deactivated automation as a result of undesired automated system actions across operational, strategic, and tactical decision-making dimensions, as depicted in the model presented in Figure 1. #### 3.2.6.1 Operational Harsh deceleration This sub-theme addressed system-initiated disengagements caused by harsh and sudden deceleration. Drivers associated incidences of harsh braking with the shift of the sensor suite from radar to vision and considered it a safety risk, as it could potentially lead to more incidents of the Tesla being rear-ended. After encountering incidents of harsh braking, drivers were more likely to disengage the system. "I was on the Interstate one time in the middle of the day, perfect weather, and it did an automatic emergency brake, and there were some cars behind me, and it almost caused somebody to hit me because of this sudden thing. I checked the reporting, and there was nothing on it. People refer to that as phantom braking. It literally slammed on the brakes, and went from 80 miles an hour down to like 40 miles an hour almost instantly. So, for the next month or so I didn't use it at all."_ (R046) "When they went to the vision only, I was encountering a lot of phantom braking. There were elements of safety involved. If I'm on the highway, my car just decides to randomly brake hard. If there's someone behind me, that might be an accident. So, I just stopped using it unless there was no one around me. 'Nope. OK, I'm out for a while. I don't need that again. "_ (R056) "When it does that phantom braking, then it makes me feel anxious, and I immediately leave Autopilot."_ (R059) "I was driving in the middle of the desert where you could see ten miles ahead. 
There's no cars on a two-lane road, and it was just slamming on the brakes. So, at any time I have to be ready. It could just slam on the brakes. That's been my recent experience with FSD Beta."_ (R068)

#### Erratic steering wheel movements

Another unwanted automation behavior is erratic steering wheel movements by FSD Beta, leading to human operators disengaging the automation.

_"If we're approaching a junction, it starts squirming the wheel back and forth. It does it with a lot of force. If it pulls the wheel really hard, you can stop it instantly."_ (R051) _"A lot of times when I set it into FSD, and it just does dumb things like whipping in and out of lanes, I'm just gonna disengage it, and I just drive manually."_ (R058) _"It also increases my anxiety when it does unnecessary wiggles in the lane for unnecessary reasons, thus create higher anxiety because you think there's something I didn't see. Did a cat run out in front of you? You overcorrect, which could cause you to swerve into another car."_ (R071) _"You don't ever know what it's gonna do. I've had it recently. Jerks the wheel back and forth. 'What are you doing? This is not what we do.' So, you slam on the brake, and you take over and drive."_ (R084) _"I only had one hand on the wheel, my elbow was on the armrest, and it swerved. It made this maneuver so strong that it actually bent my wrist down, pulled my arm off the armrest, and I was shocked because I had never had something like that happened now. I saved the video from the dashcam thing to go back and look because I was 'Maybe I missed something. Maybe there was a person that I didn't see.' There was nothing. There was no animal there. There was absolutely no reason for it to do that maneuver."_ (R096)

#### Steering into adjacent traffic

The occurrence of FSD Beta steering towards neighboring traffic represents an unintended automated system behavior, which contributes to instances of driver-initiated deactivations of the system.

_"It was just a straight road. All it needed to do was just drive straight. Stay in the lane lines. Don't hit the car in front of me. Maintain speed. Instead, it tried to swerve left into a vehicle next to me, so I had to disengage."_ (R007) _"It will take the outside lane until the last second, and it will cut over into the inside lane, which can be an accident, and so you still have to babysit it. It's not a feature yet. It will turn into traffic if you let it."_ (R011) _"It drove into the wrong lane. It got confused, and then actually drove into the oncoming traffic lane. I've had that happen."_ (R042) _"Actually, the very first time I used FSD Beta, I turned it on my street. It goes down, and it tries to veer into a car immediately, and I was like 'Ohh no, this isn't good.'"_ (R066) _"FSD Beta is downright scary. I don't wanna be crude, but it almost makes you pass your pants sometimes when it jerks out of a lane or jerks into a lane. When it does some of that stuff, it's downright scary, and the fact that if you weren't paying attention, then actually took control of the car. You get into an accident, hurt yourself or kill somebody or kill yourself or whatever. There's nothing positive about that."_ (R069)

#### 3.2.6.2 Strategic

**Route unfamiliarity** This sub-theme addressed driver-initiated disengagements on unfamiliar roads due to drivers' lack of trust in the automation's capability to handle these areas safely, given their lack of experience with the automation in these areas, and a desire to be in control.
"Full-Self-Driving Beta - sometimes when I'm in an area where I'm not familiar with, so I'm not familiar with the roads, and that's when I would disengage it."_ (R020) "If I'm in a situation where I don't drive very often, then I'm likely to keep FSD Beta on a very short leash. I casually drive into New York City, but once it gets in a crowded situation coming through, or it starts acting a little wonky, I'm gonna take it down, and say 'Hey, I'm not comfortable using it there. It may be able to handle it, but I haven't tested it in this environment enough to feel comfortable doing it.' I intended to not use it in those situations, just because I don't have enough experience." (R026) "FSD Beta - in areas that I drive common, I use it almost all the time. If I were to go into a new city, I most likely won't use it. Maybe if I'm there for example a week, maybe the first few days I won 't use it just because I want to see what the area and roads are like and then I use FSD." (R075) So, for Autopilot, I engaged it during interchanges, or maybe an exit, or there's an unfamiliar area, and I wanna stay in control. If I'm not comfortable in an area, I will definitely not let Autopilot take the lead." (R094) #### Taking the wrong route This sub-theme addressed driver-initiated disengagements caused by the system taking the wrong route. These system errors were attributed to a misalignment of mapping and navigation data. "It can be driving along, and then be like 'I'm gonna turn left.' Turn left with no blinker or anything. So, let's say it was supposed to turn left, but it didn't, and it continued straight, and then the navigation re-navigates, and just suddenly turned left across multiple lanes." (R002) "It sometimes will do really dumb directions. It's like 'Just turn left here. You don't need to go right, and go around the square and then come back around.' You can't easily necessarily correct that in the moment, so those are instances where you have to disengage."_ (R007) _"I would say that the majority of my disengagements in 10.2 are related to navigation. So right here1, I was pulling into a sort of a strip mall with a restaurant, and you can see that it takes the turn just fine, but right here2, we see a very strange road marking where it turns into two lanes without any sort of land marking, and you can see here that it can't visualize it that way. It has its turning right into nothing. There's no parking lot over here, and so I disengage to go to the actual destination that I'm going to."_ (R011) _"One of the more problem areas is that sometimes the map data and information associated with FSD Beta is not the most accurate, and you may have to take over in order to get to the correct location."_ (R037) _"There is one specific corner on the freeway where the map data is wrong, so I never use Autopilot there. I'll switch it off, drive around this corner, and turn it back on."_ (R038) Footnote 1: Interviewee shows researcher the situation on YouTube video documenting the situation. Footnote 2: Interviewee shows researcher another situation on YouTube video. #### 3.2.6.3. Operational Longitudinal control This sub-theme addressed driver-initiated disengagements caused by FSD Beta's control over braking and acceleration, which often did not meet drivers' expectations in terms of comfort and safety. Drivers reported disengaging the system to exceed the speed limit for reasons of safety and efficiency. 
Specifically, drivers reported increasing the operating speed to decrease the gap between the Tesla and the vehicle in front or to overtake a slower vehicle, or decreasing the operating speed of FSD Beta.

_"In scenarios where I need to go above the speed limit to get around a car safely, you have to disengage Autopilot because it will not do that for you. Autopilot currently does not have the ability of going above its speed in order to get out of a bad situation. It only knows how to slow down."_ (R031) _"I will disengage it when there's a long trail of cars and everybody else has got four feet or something in between each vehicle, I'll turn off Autopilot and then scooch the car forward, and then re-engage it so that the gap between me and the other cars is similar to what everybody else has."_ (R037) _"I did not trust the FSD Beta to brake in time, so quite often I would disengage the FSD Beta because I didn't feel that the vehicle was programmed to my safety standards. I just felt that it was just going too fast, too close to other vehicles when other vehicles were at a stop sign or a stoplight. So, I'd say 90% of the time I disengaged the FSD Beta for braking purposes."_ (R041) _"The only time is when sometimes cars slow down quickly, and it feels like it won't stop until the end. It's hard for you to tell because it stops later than you would stop, but you're nearing a point where you have to make a decision. Where do you let it? Do you? Do you wait, and see if it does its thing, or are you just gonna stop on your own? So, I stop on my own 80% of the time, and that's the only time, and I'm pretty sure it would stop, but that's the only part of whatever feels unsafe."_ (R070) _"Sometimes I don't even trust that it's gonna stop. 'Am I gonna stop?' I've had plenty of times where it was a controlled left turn, and the light was red, and I was going 55, and it changed lanes, went into the turn lane, and it was gonna go right through without stopping. That car was not gonna stop, that I was just gonna blow right through that red through the controlled stop, and I don't know what would happen."_ (R076)

**Undesired lane changes** This sub-theme addressed driver-initiated disengagements caused by the system performing unexpected, unnecessary, too aggressive, or too conservative lane changes.

_"FSD Beta - I can't look away even for a half second. It just changed lanes on its own as a motorcycle was rapidly approaching. I don't want to kill the motorcyclists. It's like 'Oh, I need to change lanes.' It randomly decided to change lanes. Didn't give me time to do a head check because it just goes as soon as it wants to. Why are we changing lanes? It was unnecessary. It was unexpected. I did the head check, saw the motorcycle in my rearview mirror rapidly approaching, and took over. It likely would have been rear-ended, or the motorcycle would have had to take extreme evasive action to avoid it."_ (R007) _"So, it was like 'I'll change lanes', and I'll be thinking 'Why the heck are doing that for?' You know, 'I must say, I'm gonna right lane, I'm gonna make a right turn.' It'll go to the left lane for something, and then go back to the right lane again, and I'm thinking 'You know why I'm staying in the right lane?' You needed game theory going on in the left lane. Just turn it off because it will do things that maybe wasn't dangerous, but it just wasn't necessary."_ (R072) _"We disengage Navigate on Autopilot because sometimes it changes lanes too much, so then we'll turn it off.
It will change lanes even while you're going down the freeway, look for the best lane, but sometimes that's annoying."_ (R095) _"It's trying to make a lane change and it's just freaking the other drivers out. Sometimes it'll try to make a lane change on the freeway, and there won't be quite enough space between people, so it'll just stay there, and stick there with blinker on, either holding up traffic or confusing the people behind me so then I feel unsafe, but then I just take control of the car."_ (R098) _"I was up in North Carolina on FSD Beta, and it was a two-lane divided highway going 50 miles an hour, and all of a sudden it said 'I wanna get in the left lane' so I thought 'Well, that's weird. There's nobody around. We don't need to be in the left lane', so I corrected it and made it go back to the right lane, and it did it again, and I corrected it again, and then this poor lady drives up in the lane beside me, and is trying to pass me in it, and it tries to do it again, and I said 'No, this is not O.K.', and I turned it off. So."_ (R100)

#### Misidentifying correct lane

This sub-theme addressed driver-initiated disengagements caused by FSD Beta driving into bike, bus, or parking lanes.

_"It will almost always choose the wrong lane to be in. Sometimes it'll get in the parking lane. Sometimes it'll get in the correct lane. You never know. So, you really have to pay attention to that."_ (R010) _"It does some weird things when it's coming across a bike lane. It mainly let me try going to the bike lane. Those things get better over time as people report them, but those are all that kind of things that are weird now."_ (R034) _"They just need to program it so it can read the word 'bus lane'. In my commute, there's a bus lane, and you stay out of it, and that is on the right side of the road. It wants to go into that right lane, every time into the bus lane, and I have to cancel it."_ (R081) _"I will disengage if it's not handling an area correctly. So, for example, the white bike lanes, the car will sometimes attempt to go there, and if I see it happening, I will disengage FSD, report it, because it shouldn't be doing that."_ (R094)

#### Creeping

One of the main reasons for driver-initiated disengagements at intersections was the creeping behavior of FSD Beta, which caused confusion and annoyance among drivers.

_"If you make a right turn, and you need to get over multiple lanes immediately, the car will always do a right turn, and then take its sweet time to get over, and usually it seems it's going to be stuck getting, so I will just do it myself. It makes the narrowest right turn possible, and then slowly start stepping over."_ (R050) _"On FSD Beta, I actually put it on the accelerator more often than not because it's just really slow at intersections. So, I manually have to override it with my foot to make it go through an intersection faster, or else I'll miss my chance, and then confuse other people in intersections."_ (R060) _"The big reason that I don't use it all the time is just there's all these social norms regarding how people drive, and people do not like it when you drive differently than they do. Even in the simplest of things where the car is pausing at a four way stop and then slowly, inexplicably creeping through the intersection when it's clearly time to go. People stop, and they look at you as you drive past, raise both hands. 'Go!' 'Hey, look, no hands!'"_ (R068) _"I'm sure you've heard this from other people about the creeping.
So, when you're coming up to a stop sign, it does this thing called creeping. So, it does stop, and then it creeps forward slowly to make sure it's safe to go through the intersection. There is nobody in any direction, and it just sits there, and I just nudged the accelerator pedal, just tap it a little bit, and then it's like 'Ohh, ok', and then it'll finally go, and make the left turn."_ (R069)

#### Rolling through stop signs

Some participants brought up the National Highway Traffic Safety Administration's (NHTSA) efforts to regulate FSD Beta's practice of 'rolling through stop signs', which, although it represents an unlawful action, is a frequently observed human behavior in certain regions. _"We're already dealing with the National Highway Traffic Safety Administration regulating Tesla where before Full Self-Driving, Beta would do rolling stops at stop signs and depending on where you live, that's a natural normal thing to do, but now Autopilot always has to stop at stop signs, and that makes the driving experience less natural."_ (R038) _"Another big reason is the NHTSA. They disabled the rolling stop functionality of FSD Beta. It makes the car fully stop, which causes a huge traffic delay. If there's a clear intersection, almost everyone does rolling stops. It's becoming a little frustrating because the NHTSA stepped in, and now we have no rolling stops, which makes the car sit at a stop sign for a significant amount of time. So, I'm finding myself using the Beta less."_ (R094) _"At an intersection and there's lots of cars, a lot of times I'll just take over because it's too awkward. It hesitates too long to go, and I'm a very assertive driver. So, I get frustrated with other drivers who sit at stop signs for too long, like, 'Why are you waiting for the other person to get all the way through the intersection? There's no one else coming.'"_ (R096)

**Running red lights / stop signs**

This sub-theme addressed driver- and system-initiated disengagements resulting from FSD Beta running a red light or a stop sign. _"It actually has tried running a red light for me. I was following one of the test cars for one of the self-driving car companies, which ran a very late yellow, but then my car followed and when the light turned red, it wasn't slowing down. It wasn't reacting to it at all, so I took over."_ (R003) _"FSD basically tries to run this red light, and it basically is trying to take a right on red. This is illegal. Illegal right turn. But it's trying to do, but I stopped because it can't see beyond that truck. There's no way that it knows it has the right of way."_ (R011) _"Then you would come up to the stop sign, you would slow down to stop, and it just kept going. It was just gonna go right through it, and that's when I hit the brake. As soon as you touch the brake, the whole Autopilot turns off. So, if I had not hit the brake, it would have gone right through that stop."_ (R069) _"It can run a stop sign. If the stop sign is not on the map to let the car know you have to stop here, the vision system cannot see and recognize with enough distance to give it enough time to come to a stop. I tested yesterday at 45, and in fact it ran the stop sign and then came to a stop afterwards on the other side."_ (R096)

#### Turning-on-red

This sub-theme addressed driver-initiated disengagements in 'turning-on-red' situations. In turning-on-red situations, cars are allowed to turn into the direction of traffic at a traffic light showing a red signal (Preusser et al., 1982).
FSD Beta was sometimes not able to detect these traffic signs, resulting in drivers disengaging the system to avoid annoying other drivers or losing time while traveling. _"It still needs a lot of intervention. So, a typical example is driving up to a four-way stop sign. Sometimes you have one or two cars around. So, we need to figure out who takes the turn, and typically with FSD Beta it doesn't actually do a very good job of going up at the full stop sign. So, it needs to go right away instead of waiting for other cars, but it'll always kind of wait for other cars, and so usually I like to press the accelerator pedal to kind of get to go."_ (R003) _"Many of the intersections around me have 'no turn on red' signs, and FSD Beta doesn't recognize those signs, so previously with FSD stopping at red lights, you had to manually intervene to make it proceed on green, and now I often have to manually intervene on red."_ (R007) _"It doesn't know what 'turn on red' means. It doesn't know lane control signs. So, it is possible that if you're on those lanes, that FSD Beta might try to actually pass all the traffic and goes, 'Hey, this lane over here is pretty', but it's not supposed to be on it. I don't use it in this place because I'm pretty sure it's gonna do the wrong thing."_ (R034) _"So those areas I tend to just maybe take over myself. Anytime there is a really weird sign, there's one right turn-on-red sign in our area that a pedestrian can come up and if the pedestrian pushes the button, it'll give you a big red \(X\) over the crossbar of the traffic light. I don't know if Beta detects it yet, so I really am careful and ready to just hit the brake."_ (R100) Another unwanted FSD Beta behavior involves unprotected left turns. Such turns involve turning left at an intersection with a green light, instead of a green arrow, while oncoming traffic has the right of way. Human operators reported disengaging FSD Beta during unprotected left-hand turns. _"Waiting to turn unprotected left, there's no light there, and there's a really nice break in traffic, and it doesn't go, and I'm waiting for it to decide what it's going to do, and three cars come right toward me. I have my foot right over the brake pedal, and the car just suddenly tries to go. It just feels like it's jumping. These cars that are coming straight toward me, and a person wouldn't do that,... unless they're trying to die."_ (R018) _"It's an unprotected left turn, and the car will creep forward at the wrong time, when a car is just crossing, and that can be really scary because you're thinking, 'Oh, is the car just gonna run into this car? Why is it, why is it moving forward if there's traffic?' So sometimes it does things that a human would never do, moving forward when it's obviously not time to move forward."_ (R027) _"It was waiting to make a left-hand turn, and it started jittering, and moving forward a little bit when there was clearly traffic. A human would sit there, wait for traffic to be gone and go, whereas full self-driving realized, 'OK, I can move a little bit forward, I can move a little bit forward, and maybe there's a gap here.' There's no gap there. I wasn't willing to take that risk. The risk is if you're not paying close enough attention, it will probably crash the car."_ (R045) _"There is a lot of times where the unprotected left turns, it just doesn't work. It's kind of scary, and you either have to press the accelerator because it's just moving too slow.
So, the car will kind of creep forward, and it will wait, and when the car passes by, it's still kind of thinking. 'Should I go? Should I stay right?', and then it slowly starts moving."_ (R058) _"The only times that FSD Beta makes me feel unsafe is taking unprotected left turns mostly because there's no lights at all. Just knowing that there's vehicles coming at you, and the vehicle's slowly creeping forward scares you because you don't know if it's just gonna gun it at the back at the worst possible moment. FSD Beta is moving way too forward, and ignoring the stopping white line. So, taking those left-hand turns are a little bit nerve wracking."_ (R078)

#### Other turning situations

Human operators also reported disengaging the automation in other turning situations, such as signalized left- and right-hand turns, and U-turns. _"I think a week or two ago, I stopped at a stop sign and there's a highway in front. It was going to make a right turn onto the highway, but it stopped too far back from the sign. There are trees in the way. Instead of creeping forward, it decided that there were no cars coming and started to move out full speed into the highway, and there was another car coming, and I had to slam on the brakes because the other car might have run into me."_ (R021) _"Yesterday at a traffic light, it was gonna make a left turn. It was red, and green, cars were coming, and the wheel was moving like it was hesitating, thinking, and it was about to go. Two lanes of cars coming towards me. So, I had to jam on the brakes, and I hit the capture button like 'What on earth are you doing? You see everything else. You can't see these two lines of cars coming at you?' That would, that would have been an accident."_ (R052) _"Full-Self Driving Beta - it doesn't know how to make turns safely, even simple turns. Sometimes, it takes some weirdly fast. It'll stop. It will be like, 'OK, when it's time to go?', and I'll be like, 'OK, now!', and then it'll go really fast. It's like 'You could have just slowed down, and done that turn slowly.' So, it's not super safe."_ (R059) _"There's times it takes way too long for it to make a decision when making just a right turn, and there's times where it's an easy win. You can see there's no cars coming, no traffic, and it's still there taking side, scooting a little more, soothing a little bit more. I'm like, 'Do you have to have confidence in this turn?'"_ (R071) _"On my normal test route, maybe it always gets in the wrong turn lane. It would get in it too early, and so I would just disengage, and get into the correct turn lane."_ (R079)

### Human operator's perception of other humans

Human operators may initiate the disengagement when they perceive that other humans, such as passengers and other road users, become discomforted or frustrated by the automation's behavior.

#### 3.3.1 Passenger discomfort

Human operators reported disengaging Autopilot and FSD Beta as a result of passengers' discomfort and lack of trust in the car with standard Autopilot and FSD Beta engaged. Reasons for this included a perceived lack of control and the automation's erratic, harsh, and unpredictable behavior. _"If my wife is in the car, I can't use it because she refuses to put up with it, even with disengaging, and she doesn't like all the beeping, and the alarms going off regularly.
"_ (R007) _"I know people who say 'Hey, I'm not driving Autopilot with my kids in the car', and my wife says, 'You're not using that thing with the kids in the car."_ (R008) _"I am less likely to use it when I have passengers in the car. I'd rather have that experience when I'm alone rather than doing it when I have a car full of family, and passengers who are like 'Oh, what's that thing been about? Why is it beeping at you?', or if I need to make a sudden correction, I don't wanna do that when I have got passengers that might be discomorted from that."_ (R027) _"It's my experience in my view and my feelings, and then that of my wife. I've had three occasions where had I not been absolutely paying attention and being fully in control of the car, it would have caused an accident. My wife's comment is 'If I use the FSD Beta with her in the car again, then I should just go get the divorce papers.' It's that bad."_ (R069) _"My wife will not accept the FSD Beta being turned on when she is in the car with me because she does not trust it. She doesn't want me to turn it on, mostly because the way that it reacts to things can be very jerky and not very smooth. So, it's the harshness, and the non-warning of those situations that makes her very uncomfortable as a passenger and she won't use it herself when she's driving the car."_ (R090) #### 3.3.2 (Anticipated) frustration This sub-theme addressed driver-initiated disengagements to avoid confusion and rage of other road users interacting with Autopilot and FSD Beta on public roads. _"The second biggest risk is road rage. Infiriating other road users from weird system behaviour. That computer does weird things that humans don't like, and it makes humans mad, right? It just makes them really angry, and then they are angry at you. That's actually been my biggest issue with the system. I'll just turn the system off. Anytime someone following me closely, I'm nervous, this system is going to break erroneously, and produce either an accident or anger."_ (R002) _"Drivers behind you are going to get annoyed and frustrated and that might cause them to behave in an anxious or unsafe way. That would put me in an unsafe condition, so I would intervene by either taking over the driving, or at least accelerating more than the car would."_ (R012) _"I really don't have an intervention unless somebody's behind me pushing me, and they get frustrated because it takes a turn too slow for them, and then I'll just disengage."_ (R062) _"Most of my disengagements are because I don't like how I perceive it will make other drivers feel because I'm a very courteous driver. When Autopilot starts creeping out, I do feel it's going to move towards them, and that's going to make them feel I might dart out, and so how are they gonna react?"_ (R085) _"That indecisiveness about shifting to a lane and then shifting back when it corrects itself- it's not always the best for traffic behind you. They'll honk at you like 'Hey, what are you doing?' I don't wanna deal with that. Personally, I don't want people honking at me."_ (R094) #### 3.3.3 Reckless behavior Another reason for disengaging automation was the reckless behavior of other drivers, such as drivers following too closely from behind or swerving into the lane of the Tesla. 
_"It was only at one time that I was on the Autopilot, and a car merged in so quickly that I actually had to come take control of the vehicle because my vehicle was not going to stop because he just merged out of nowhere."_ (R020) _"Yesterday, we had about four incidents where we had 18 wheelers cross over the line. So, it's something you want to keep an eye on to make sure that they're not going to come over in the lane."_ (R024) _"Again, somebody coming up behind you too quickly is they don't expect the car to be there. You understand that's a very complex situation coming up or that there are people driving around you way too aggressively who might be impatient, not wait for the car."_ (R065) _"I typically don't have to intervene most of the trip unless it's a situation that's more caused by another driver. They're driving erratically or there's something along those lines that I don't trust the system enough."_ (R090) _"One time when I was in a highway, I did see something a little weird. There was a truck with a really long flatbed that had nothing on it, and that truck tried to turn into my lane, and the car didn't realize that it was a really long trailer. So, I had to just slow it down, hit the brake."_ (R100) ### Automation's perception of human operator The automation may initiate the disengagement itself if it perceives human operators becoming complacent. Both automation and human operators may initiate the disengagement of the automation with drivers exceeding the speed limit. #### 3.4.1 Complacency Complacency of drivers constitutes a rationale for the automation to initiate its own deactivation. _"So, if you're paying attention, it's no problem. You know when to take over. If you're not, all of a sudden, that thing starts panicking. All it really does is basically slow down, and slam on the brakes, and stopped."_ (R006) _"Autopilot never does disengage except for once. When I first dropped the car, and I was just not paying attention, and it threw me off because I wasn't paying attention."_ (R066) _"There's been a few times I forget to do the nudging because it'll flash blue, and then it'll flash brighter, and it'll beeping at you, and a couple times, I was listening to music or like on a phone call or something, and then I may have forgotten to do that, and then it will get mad at you, and beep, and then it'll kick you out. It just tells you to take over, and then you can't use it until you park the car and start again. I think that's happened twice."_ (R070) #### 3.4.2 Inappropriate amount of torque applied to steering wheel Drivers applying an inappropriate amount of torque to the steering wheel resulted in unintentionally disengaging the automation. _"It doesn't take much to disengage it. Then you practically have taken over even though you didn't need to. So sometimes, I might hover my hand over the wheel. That way, I don't accidentally come disengage it during a turn."_ (R032) "Full Self-driving Beta is really, really easy for taking over because you keep your hands on the steering wheel, and if it jerks the steering wheel with your hands on it, you're just gonna automatically disengage."_ (R049) "They could get off the steering wheel nag because actually you could apply too much torque while you're trying to make sure it knows you are paying attention, and that takes it out of Full Self-driving Beta, and that could be a shock if you aren't ready for it." (R062) "So, what happens is on a curve, it turns the steering wheel. 
So, your hands are on it, but if it wants you to nudge it on a turn, there's a chance you nudge it a little too much, and then you're on a turn with no Autopilot anymore."_ (R070) _"It's very hard to sometimes not accidentally disengage Autopilot when you're trying to show the car that I have my hands on the steering wheel."_ (R088)

#### 3.4.3 Speed rule violation

This sub-theme addressed driver- and system-initiated disengagements that occurred when drivers exceeded the speed limit. Drivers reported initiating the disengagement to exceed the speed limit for reasons of safety, efficiency, or a lack of knowledge. The system disengaged when drivers exceeded the speed limit. _"Now that I'm on the FSD Beta program, and I need to go just a little bit faster to get enough space between me and the car behind me to make a lane change, it will actually disengage, and put me in what they call Autopilot jail, so I cannot reengage Autopilot until I completely stopped and parked the car."_ (R033) _"Sometimes you'll go over the limit, and the car will start to scream at you. So, when that occurs, I have to manually turn it off. I can speed around people, and then I can turn it back on. Then I don't have to worry about getting kicked out for the remainder of the drive."_ (R056) _"When I first bought the car there was nobody else around. I simply went too fast, and Autopilot kicked me off because I noticed that I went above 90 mph."_ (R066) _"It used to kick off if you hit 80 miles an hour, and in Arizona, if you're doing 80 miles an hour, you're basically in people's way. So one time I was trying to get by a truck, and you get these flashing lights, and it kicks you off, and you're in jail until you stop, restart the car. You can't just kick it back in."_ (R082) _"One main problem I have with Autopilot just disengaging is I wanna pass the truck, and I'll start accelerating around the truck and if I go over 82 miles an hour, it disengages, which is frustrating."_ (R095)

### Automation incapability in environment

This section covers the disengagements initiated by both the automation and human drivers in environments exceeding the perceived capability of the automation.

#### 3.5.1 Weather

This sub-theme addressed driver- and system-initiated disengagements in inclement weather conditions, with respondents disengaging the system and the system asking drivers to take back control in poor visibility conditions given the current limitations of the sensor suite. _"I think the best example would be in bad weather because Autopilot heavily relies on all the cameras, and we have very bad weather here in Canada sometimes. When the cameras gonna cover it up too much with the snow, it will sometimes abruptly disable itself, and it does the warning beeps and everything, and that's a little jarring. I've never had it really lose control because it's slippery roads, but it's always in the back of my mind in bad weather."_ (R016) _"It did disengage one other time.
Last winter, there was like a slush on the freeway, and it was all stuck to the front of my car where the radar is, so Autopilot wouldn't work because all the slush was covering the radar."_ (R043) _"Whenever the road isn't visible due to poor weather, I feel much more unsafe in that situation than I would if I was driving myself manually, so probably one of the only instances where I can say that I would much rather be driving than to allow Autopilot or Full Self-Driving to be active."_ (R065) _"In really bad weather, Autopilot is going to start chirping and making noises to tell you to take over when water gets on all the cameras, and it can't see any longer. So, normally if it starts doing that more than once or twice, I'll just turn it off until I'm done driving through the rainstorm."_ (R070) _"Most of the times that it would disengage would be weather-related. So, there's certain times where you're getting rain or snow, and it detects that you can't engage FSD Beta, it'll tell you there's inclement weather, you can't use it."_ (R087)

#### 3.5.2 Non-standard roads

This sub-theme addressed driver- and system-initiated disengagements due to inconsistent lane markings and differences in lane width, which often resulted in the car re-centering itself within the lane. _"I traveled across the U.S., and there were few states where they would have a dotted line for the lane line. If there's an exit lane with no dotted line, the car thinks the lane is very wide, and it's always trying to center itself between what it thinks is the lane, so it might center itself between the old lane and the new exit lane, and then it comes to the separation point, and makes some last second crazy decision. That's definitely scary. I'll just turn the system off."_ (R007) _"If you encounter an area where the paint disappears, the car tries to center itself in. Now if it's a two-lane highway, it may try to center itself, which is nerve wracking because part of me is not sure what would happen if there's a car near me. There have been times where I've actually turned it off to get past those areas. Then I'll turn it back on."_ (R056) _"Earlier when I came to an on-ramp, off-ramp where there was no line, it could not see the line properly and then it disengaged, and the alarm went on, and it started slowing down. So, I took over."_ (R073) _"As soon as it makes the turn onto a road that has no markings, all that safety feeling goes out the window because it doesn't really know how to deal with unmarked roads well yet. So, it will try to stay in either the center of the road, and then it'll go to the right of the road, and then it'll go very slowly because it doesn't know if the edge of the road is really the edge of the road. So, I'll usually have to take over for that situation."_ (R074) _"I'm not sure that the car is gonna be able to read the lanes correctly because they're painted out, scratched out. You know you don't have clearly defined lines, and in most cases the Full Self-Driving and Autopilot disengages themselves in those situations anyway."_ (R102)

#### 3.5.3 Curves, hills

This sub-theme addressed driver- and system-initiated disengagements in curves (sharp turns) and on hills. _"The largest issues I've historically had with Autopilot is, it doesn't turn sharply enough on sharp turns, so I get quite a bit of anxiety justifiably going into sharper turns.
I'm always especially paying attention, and usually would disengage on sharper turns just because it wouldn't be steering enough so my steering would disengage Autopilot to keep it in the lane."_ (R007) _"There is one specific scenario that was so scary. Here it'll get into the turn lane, and then it suddenly lurches to the right. It's doing some sort of curve optimization for smoothness, for driver comfort but it ends up leaving the lane, and this is why whenever new Beta drivers are coming in, I always tell them 'You have to keep your hands right here because it will turn, and if you have not turned it back, it would have crashed into something, right?'"_ (R011) _"There are certain sections of freeway where there are tight turns when everyone is going really fast, and I'll just manually take over on the really hard turns."_ (R042) _"It struggles a little bit with tight turns and creeping. There's a balance, you want it to learn, but you don't wanna annoy people and you don't want to have to take over if you don't have to."_ (R100)

#### 3.5.4 Object detection

This sub-theme addressed driver- and system-initiated disengagements in response to stationary and non-stationary objects and events in the environment. Drivers decided to disengage the system in these situations because the system failed to detect and maneuver around these objects, or because of the system's lane centering, which resulted in an uncomfortably close proximity to these objects. Stationary objects and events in the environment included potholes, parked cars, road debris, construction, cones, bushes/trees, buildings, gates, and railway crossings. _"I'm always nervous going hit a curb. It's going to run them over. I've run over many pieces of trash and things like that and potholes. Yes, if there are potholes, a lot of times I'll turn it off, and go around a pothole because this system doesn't recognize potholes."_ (R002) _"Here is one of those disengagements that is dangerous, and you need to intervene. So, there's an object in the road I didn't see until my fiance said 'Hey, there's something in the road'. I had to disengage, and you can see that there's nothing visual as here. So, I would have hit that if I didn't take over."_ (R011) _"I've been on Full Self-Driving Beta since October of last year, and I was going to go to the grocery store, and my car drove me there, and that's a nice experience all the way into the parking lot where it stopped, and then drove me home, and everything was fine until it almost drove into a building. I had to grab the wheel, put on the brakes."_ (R018) _"I've had moments where even going out of my own driveway, I engaged Beta, and it immediately started making a sharp left into a tree by the side of my driveway, and if I had not intervened, it probably would have hit the tree. My driveway is straight. It doesn't get any simpler than that, so something was missing there."_ (R068) _"That same railroad crossing I was approaching, and there was a train going through. This is rural. There's no gates that come down, there's no lights that flash. There's no bells that go off. It's just a stop sign. I approached the stop manually, and while I was stopped, I decided to turn on Beta, and what it was displaying on the screen was a line of semi-trucks, and then the visualization disappeared, and it started to accelerate, and there's a train creeping forward. It was just like full normal acceleration. I'm not gonna let it get very close to a moving train.
So, I hit the brakes as soon as it started to accelerate at normal velocity."_ (R096) Drivers reported disengaging the automation in situations with non-stationary objects and events in the environment, such as other vehicles in adjacent lanes, emergency vehicles, and vulnerable road users. _"It tried to hit a woman in the crosswalk yesterday. She's waiting for the walk indicator so she can cross, and I'm waiting to turn right. I'm on full-self driving, and I'm looking at her. She's just waiting for the light. I think 'My car's gonna try to hit her I know. I was gonna try to hit her I know.' The light turns green. Ohh. Turned to hit her."_ (R018) _"I do not trust it around pedestrians and small children and dog walkers because it's too close to a human. That Tesla will approach the pedestrian and give them a very small berth, and they would be like 'What are you doing? There's plenty of room.' It's not quite human so I don't always trust it around humans because it's not as courteous as a human would be, right?"_ (R031) _"It is not that great at picking up on emergency vehicles yet, so if I see a police officer off the distance with its flashing lights, currently on the Autopilot version I have, it will start to slow down usually way too aggressively. Other people around me are not slowing down yet. I usually have to take over in those situations."_ (R074) _"FSD Beta - whenever there are many, many turns with lots of pedestrians, I'm not going to. It's too scary. I don't want it to hit anyone. I have to disable it because there's no point in risking someone's life or my freedom in order to test out something that wouldn't be able to do the job."_ (R074)

#### 3.5.5 Intersections

This sub-theme addressed driver- and system-initiated disengagements in intersections, including roundabouts. Respondents reported disengaging the system due to its limitations in handling the situation at intersections safely or their fear of getting rear-ended due to unexpected or unnatural system behavior. Respondents also reported disengaging FSD Beta at intersections due to the system's lack of negotiation and interaction skills with other road users. _"When you're at an intersection, and there's four cars at this intersection, and they're all trying to encourage the other driver to go first. The car doesn't really recognize that. So, at that point, I will have to either take over, or I'll have to hope that the car goes pretty soon."_ (R017) _"Roundabouts, for example, are a very challenging scenario for FSD Beta still, and so it will almost always come to a complete stop at a roundabout or before entering the roundabout, and so I feel a sense of social anxiety if there's a car behind me, and the car just completely comes to a stop. I will tap on the accelerator to encourage the car to go through the roundabout."_ (R050) _"If I'm at an intersection, it can be too slow, and there's a lot of cars behind me, and I'm like 'Come on, no one's there. Come on.' Sometimes I think 'Accelerate because you can do that. You can accelerate.' It'll go and it'll do the turn. I don't always do that because certain situations, certain intersections are too big. So, then I'll just disengage, do the difficult spot, and re-engage."_ (R052) _"Especially things like roundabouts. It is just far too unsure. It's very jerky at times, it'll stop and start, stop, and start, and when you've got other traffic around you, I don't wanna deal with that. So, I'll typically turn it off.
I typically only use it for highway driving."_ (R056) _"It's not a finished product. Always, every intersection I'm like, 'OK, we're gonna do this. Right. Great. Wonderful. Doing good. Next one. You can do this. Right. OK, good. Ohh. There's a little conservative here. Let me flag this.' Then the next time, I know, 'OK, it might be a little conservative here. It might get confused, and be prepared to take over.'"_ (R057)

#### 3.5.6 Discontinuities in road design

This sub-theme addressed driver- and system-initiated disengagements due to discontinuities in road design, such as in on- and off-ramp, lane merging, and splitting situations. _"Common example is you'll be driving along, and a new lane opens up, and this week, the car just went into the left turn lane for no reason at all so I immediately intervened, and took it to the correct lane."_ (R010) _"I've had it disengaged near off ramps where it just doesn't understand that the lane is diverting. I haven't really had it disengaged based on anything else."_ (R028) _"When I took the exit on the freeway, when it just shouldn't, I didn't use it for the rest of the 30-minute drive. I just was like 'No, I am not using this because we almost just got in an accident.'"_ (R043) _"It's got to adjust a few things, including exit ramps. It makes the maneuver way too aggressively, basically whipping you into that lane, and I'm not joking when I use the word 'whipping you' because you're there like 'Hey, lane change'. It literally throws you like this as it makes the maneuvering to the other lane. I have to disable it every time it needs to take that style of exit."_ (R071)

#### 3.5.7 Complex, heavy traffic

This sub-theme addressed driver-initiated disengagements in complex and heavy-traffic situations. _"So, if it was two in the morning, and there was no one else on the freeway, I would just kind of double check, and let it do its thing and not disengage, but usually there would be traffic around, so usually I'd need to disengage."_ (R007) _"If there's a lot of cars on the road, I do not use Beta, and why is obviously because sometimes it tries to kill me."_ (R023) _"If there's a lot of traffic, or if there's a lot of vehicles around, I probably won't use it because I don't want to intervene with the traffic. I don't wanna cause an unsafe situation, so I won't use it, but if it's empty, it's a dead area, then I'm gonna use it."_ (R048) _"So, I only use it in areas where it's not gonna do anything bad, right? I use it in areas where I would feel comfortable training a brand-new driver, and so I don't feel safe using it on busy streets."_ (R059)

## 4 Discussion

This study presents the findings of semi-structured interviews conducted with 103 users of standard Autopilot and FSD Beta, focusing on factors contributing to driver- and system-initiated disengagements of both systems. Analysis of the interview data led to the extraction of main categories and sub-categories representing the factors contributing to these disengagements.

### Triadic model of automation disengagement

The categories informed the development of a triadic model of automation disengagement, which treats human operators and the automation as equivalent agents. Each agent perceives the performance and capability of the other, initiating disengagement accordingly. The idea that both human operators and the automation can initiate the disengagement is consistent with existing literature (Endsley, 2019; Favaro et al., 2018).
Human operators may initiate the disengagement if they perceive the automation's capabilities to be lower than their own in handling a maneuver safely. The triadic model of automation disengagements considers not only the impact of automation behavior and environmental characteristics, such as road infrastructure and design, on human operators but also on other humans (e.g., passengers, other road users). Previous driver behavior and automation models primarily featured the driver and automation as main actors (Banks et al., 2014; Michon, 1985; Rasmussen, 1983). Our model resonates with the principle of distributed cognition, which implies that information does not only exist in the minds of individuals but also in the objects and artefacts that individuals use as well as in the interactions between individuals and objects and artefacts (Stanton et al., 2010). As road traffic increasingly incorporates sensors, computers, and communication systems, the concept of distributed cognition is expected to become even more important in understanding and managing interactions within the transportation system. The main reasons contributing to disengagement of the automation pertain to the human operator's relevant states, the human operator's perception of the automation, and the human operator's perception of other humans, as well as the automation's perception of the human operator.

### Human operator's relevant states

Our research demonstrates that human operators opt to disengage automation due to both permanent and transient states. A permanent reason for disengaging automation involves the enjoyment of manual driving. Negative transient states include frustration, stress, and embarrassment. Prior research has identified trust, mental workload, and situational awareness as factors impacting automation disengagement (Dzindolet et al., 1999; Favaro et al., 2018; Parasuraman & Riley, 1997; Reagan et al., 2021; Wilson et al., 2020). Our study introduces additional psychological states (e.g., frustration, stress, embarrassment) that influence automation disengagements.

### Human operator's perception of automation

Human operators and automation may initiate disengagement in anticipation of automation failure in situations exceeding the perceived capability of the automation. Studies have shown that drivers frequently disengage automation in an anticipatory or precautionary manner (Endsley, 2019; Favaro et al., 2018; Gershon et al., 2021; Lv et al., 2017; Morando et al., 2021; Mueller et al., 2021). Some reported driver-initiated disengagements might have been unnecessary, as it remains unclear whether the system would have handled the situation safely without driver intervention. Consistent with Gershon et al. (2021), drivers indicated initiating disengagement not to mitigate immediate risk but to perform tactical maneuvers exceeding the automation's capabilities (e.g., passing a vehicle in front). The interviews also showed that drivers may opt to disengage automation when it demonstrates unnatural, non-human, or undesirable behavior at the operational, strategic, and tactical levels of decision-making (Michon, 1985), such as harsh deceleration (or phantom braking), erratic steering wheel movements, steering into adjacent traffic, choosing the incorrect route, unanticipated lane changes, misidentifying the correct lane, or creeping at intersections.
While the phantom braking behavior associated with assisted and partially automated driving represents a known vehicle behavior (Nassi et al., 2020), our study offers new insights into additional vehicle behaviors (e.g., steering into oncoming traffic, undesired lane changes). Some of these behaviors are safety-critical and could have resulted in an accident if the driver had not intervened. Potentially safety-critical system behaviors (e.g., steering into oncoming traffic) could be mitigated through improvements in automated driving design (see Morando et al., 2021) or driver monitoring systems. These observations relate to the operational design domain (ODD) concept, which describes the conditions under which automated vehicles can function (SAE International, 2021). Our study supports existing research on external and environmental factors contributing to automation disengagement, as well as the need for human operator intervention in challenging situations (Boggs et al., 2020; Favaro et al., 2018; Lv et al., 2017).

### Human operator's perception of other humans

Our study emphasizes the significant role other humans play in shaping disengagements of automation. Human operators may choose to disengage automation if passengers and other road users feel uncomfortable or are angry due to negative automation behavior. Consequently, the human operator possesses the 'theory of mind ability' (Baron-Cohen, 1995; Leslie, 2001) to infer how other humans interacting with automation feel about its behavior. The perceived risk of accidents and loss of control may be higher for passengers than for the driver (Hauslschmid et al., 2017), and other road users who lack direct control over the actions of the automation. Our research suggests that if other humans are uncomfortable with drivers engaging the system, the anticipated safety and efficiency benefits of road vehicle automation may not be fully realized. For instance, drivers might opt to use the system alone without passengers onboard, testing and experiencing its limits, which may lead to increased single-person vehicle miles traveled (see Nordhoff et al., 2023). Car manufacturers should consider the impact of automation on passengers and other road users, designing the human-machine interaction in such a way that the perceived safety and trust of humans inside and outside automated vehicles are promoted, and frustration and stress reduced.

### Automation's perception of the human operator

Our study also revealed that automation decides to initiate disengagement if it, based on the hands-on-wheel sensor, detects that the human operator becomes complacent or violates traffic rules. The risk of complacency associated with partial automation is well-known (Banks et al., 2018; Reagan et al., 2021; Wilson et al., 2020). Analogous to the concepts of 'trust in automation' and 'trust in self', the automation should have accurate trust in humans and trust in self, initiating disengagement when the perceived reliability of itself is lower than the perceived reliability of human control. Driver education and training could play a crucial role in raising awareness of system limitations and improving system handling (e.g., the required amount of torque applied to the steering wheel).

### Limitations and implications for future research

The data on driver- and system-initiated disengagements reflects the subjective perceptions of respondents.
While disengagements are considered a safety risk (Favaro et al., 2018), it is not clear to what extent disengaging automation contributes to a decrease or increase in accident risk compared to not disengaging the automation. Respondents represented early adopters and were thus not representative of the broader population of users of partially automated cars. We recommend that future research perform studies in naturalistic driving conditions to investigate to what extent disengagements influence accident risk with a representative population of users of partially automated cars.

## 5 Conclusion

The study presents the results of semi-structured interviews with 103 users of Tesla's Full Self-Driving (FSD) Beta system and standard Autopilot, exploring the factors contributing to driver- and system-initiated disengagements of both systems. The study proposes a novel triadic model of driver- and system-initiated automation disengagements, which takes into account the impact of automation behavior not only on the human operators of automation but also on passengers and other road users. The results have shown that the contributing factors leading to driver- and system-initiated disengagement of Autopilot and FSD Beta pertain to permanent and transient human operator states, safety-critical system behaviors (e.g., steering into oncoming traffic), other road users' behavior (e.g., road rage, reckless driving behavior), and road infrastructure and design factors (e.g., missing lane markings). The findings provide new insights into the factors contributing to disengaging partial automation, with valuable information on potential edge cases of automated vehicle technology. Our paper emphasizes that automation disengagement is not solely a human operator-based concern but also a social phenomenon, in which the operator may feel a sense of embarrassment or self-consciousness about the impact of the automation on others. This social dimension of distrust highlights the importance of considering the experiences and perceptions of other road users in addition to the human operator when evaluating the effectiveness and usability of automation systems.

## 6 Acknowledgments

An extended abstract of this work was submitted to the ARTS23 (Automated Road Transportation Symposium) (https://trb.secure-platform.com/a/page/AutomatedRoadTransportationSymposium).
2305.00586
How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model
Pre-trained language models can be surprisingly adept at tasks they were not explicitly trained on, but how they implement these capabilities is poorly understood. In this paper, we investigate the basic mathematical abilities often acquired by pre-trained language models. Concretely, we use mechanistic interpretability techniques to explain the (limited) mathematical abilities of GPT-2 small. As a case study, we examine its ability to take in sentences such as "The war lasted from the year 1732 to the year 17", and predict valid two-digit end years (years > 32). We first identify a circuit, a small subset of GPT-2 small's computational graph that computes this task's output. Then, we explain the role of each circuit component, showing that GPT-2 small's final multi-layer perceptrons boost the probability of end years greater than the start year. Finally, we find related tasks that activate our circuit. Our results suggest that GPT-2 small computes greater-than using a complex but general mechanism that activates across diverse contexts.
Michael Hanna, Ollie Liu, Alexandre Variengien
2023-04-30T21:44:21Z
http://arxiv.org/abs/2305.00586v5
How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model

###### Abstract

Pre-trained language models can be surprisingly adept at tasks they were not explicitly trained on, but how they implement these capabilities is poorly understood. In this paper, we investigate the basic mathematical abilities often acquired by pre-trained language models. Concretely, we use mechanistic interpretability techniques to explain the (limited) mathematical abilities of GPT-2 small. As a case study, we examine its ability to take in sentences such as "The war lasted from the year 1732 to the year 17", and predict valid two-digit end years (years \(>32\)). We first identify a circuit, a small subset of GPT-2 small's computational graph that computes this task's output. Then, we explain the role of each circuit component, showing that GPT-2 small's final multi-layer perceptrons boost the probability of end years greater than the start year. Finally, we find related tasks that activate our circuit. Our results suggest that GPT-2 small computes greater-than using a complex but general mechanism that activates across diverse contexts.

## 1 Introduction

As pre-trained language models (LMs) have grown in both size and effectiveness, their abilities have expanded to include a wide range of tasks, even without fine-tuning [3]. Such emergent abilities can range from translation to text classification and multi-step reasoning [36]. Yet despite heavy study of these models [27; 28; 15], how LMs implement these abilities is still poorly understood. In this paper, we study one such emergent LM ability, performing mathematics. Mathematical ability has long been of interest in natural language processing: models have been trained to perform tasks such as simple arithmetic and word problems [35; 30]. Researchers have also fine-tuned pre-trained LMs on these tasks, instead of training from scratch [9; 14]. Recently, however, LMs seem to have acquired significant mathematical abilities without explicit training on such tasks [3; 23; 5; 7]. How these mathematical abilities arise in LMs is largely unknown. While studies have investigated pre-trained LMs' mathematical abilities [21; 25], existing work is behavioral: it explains _what_ models can do, rather than _how_ they do it. Most work that delves into model internals does so using models trained directly on such tasks: Hupkes et al. [13] probe such models for hierarchical structure, while Liu et al. [16] and Nanda et al. [20] study toy models trained on modular addition. Some studies do examine the structure of number representations in pre-trained models [19; 33]; however, they do not provide a causal explanation of how these models leverage these representations to perform math. The mechanisms underlying pre-trained LMs' mathematical abilities thus remain unclear. To understand the roots of these mathematical abilities, we study them in GPT-2 small [26], which we show still possesses such abilities, despite its small size. This small size enables us to investigate its mathematical abilities at a very low level. Concretely, we adopt a circuits perspective [24, 6], searching for a minimal subset of nodes in GPT-2's computational graph responsible for this ability. To do so, we use fine-grained, causal methods from mechanistic interpretability that allow us to identify nodes in GPT-2 that belong in our circuit, and then prove our circuit's correctness through carefully designed causal ablations [12].
We also use mechanistic methods to pinpoint how each circuit component contributes to the mathematical task at hand. The end result of this case study is a detailed description of GPT-2's ability to perform one simple mathematical operation: greater-than.

Footnote 1: Further references to GPT-2 refer to GPT-2 small.

Our investigation is structured as follows. We first define year-span prediction, a task that elicits mathematical behavior in GPT-2 (Section 2). We give the model input like "The war lasted from the year 1732 to the year 17"; it assigns higher probability to the set of years greater than 32. We next search for the circuit responsible for computing this task, and explain each circuit component's role (Section 3). We find a set of multi-layer perceptrons (MLPs) that computes greater-than, our operation of interest. We then investigate how these MLPs compute greater-than (Section 4). Finally, we find other tasks requiring greater-than to which this circuit generalizes (Section 5). Via these experiments, we accomplish three goals. First, we demonstrate how circuits provide a useful abstraction for understanding the structure and semantics of computation in pre-trained LMs. Second, we add to the abundant work on MLPs in pre-trained LMs [17, 10, 11] by exploring how transformer MLPs receive information from other components, and work together at the MLP and neuron level. Third, we show that our mathematical capability emerges in GPT-2 due not to simple memorization of individual answers, but to a complex mechanism that generalizes across contexts.

## 2 Year-Span Prediction in GPT-2

GPT-2's size is ideal for low-level study, especially with potentially resource-intensive techniques like those in Section 3.1. However, this small size poses a challenge: GPT-2 is less capable than larger LMs, which still often struggle with mathematical tasks [18]. With this in mind, we craft a simple task to elicit a mathematical behavior in GPT-2, and verify that GPT-2 produces said behavior.

**Task and Dataset** We focus on a simple mathematical operation, greater-than, framed as it might naturally appear in text: an incomplete sentence following the template "The <noun> lasted from the year XXYY to the year XX"; see Figure 1 for an example. The model should assign higher probability to years >YY. We can then automatically generate sentences using this template. We draw the nouns from a pool of 120 nouns that could have a duration, found using FrameNet [1]. We sample the century XX of the sentence from \(\{11,\dots,17\}\), and the start year YY from \(\{02,\dots,98\}\).

Footnote 2: Full list in Appendix F.

We impose the latter constraints because we want GPT-2 to be able to predict a target as it would naturally be tokenized. However, GPT-2 uses byte-pair encoding, in which frequent strings more often appear as single tokens [29]. Thus, more frequent years--multiples of 100 or those in the 20th century--are tokenized as single tokens; less frequent years are broken into two. This causes a problem: GPT-2 could predict "[00]" after "[17]", but "1700" is always tokenized as "[1700]" in normal data and never as "[17][00]". So, we exclude all single-token years from our year pool. Finally, we want each example to have at least one correct and one incorrect validly tokenized answer, so we exclude each century's highest and lowest validly tokenized year from the pool of start years.
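As an illustration, the prompt generation and tokenization filtering described above can be sketched in a few lines. This is our own reconstruction, not the authors' code; the noun list below is a hypothetical stand-in for the paper's 120 FrameNet-derived duration nouns.

```python
# A minimal sketch of year-span prompt generation, assuming the HuggingFace
# `transformers` GPT-2 tokenizer.
import random
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
NOUNS = ["war", "dynasty", "pilgrimage", "expedition", "famine"]  # stand-ins

def is_valid_year(century: int, yy: int) -> bool:
    # Keep only years that split into two tokens (e.g. " 1732" -> " 17", "32");
    # single-token years like " 1700" are excluded, as in the paper.
    return len(tokenizer.encode(f" {century}{yy:02d}")) == 2

def make_example(rng: random.Random) -> str:
    century = rng.choice(range(11, 18))                      # XX in {11,...,17}
    valid = [yy for yy in range(2, 99) if is_valid_year(century, yy)]
    yy = rng.choice(valid[1:-1])      # drop each century's lowest/highest year
    noun = rng.choice(NOUNS)
    return f"The {noun} lasted from the year {century}{yy:02d} to the year {century}"

rng = random.Random(0)
print(make_example(rng))  # e.g. "The war lasted from the year 1732 to the year 17"
```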
**Qualitative Evaluation** We first qualitatively analyze GPT-2's baseline behavior on this task by running it on a dataset of 10,000 examples. Each example has a noun randomly drawn from our 120 nouns, and a year drawn randomly from the 768 valid years from 1000 to 1899. For each YY in \(\{2,\ldots,98\}\), we take the average of GPT-2's probability distribution over predicted years, for all examples with start year YY; we visualize these average distributions in Figure 2.

Figure 1: Year-span prediction example (XX=17 and YY=32) with sample (in)valid output years.

GPT-2 appears to perform greater-than on our year-span prediction task: it creates a sharp cutoff between invalid end years (\(\leq\) YY), and valid end years (\(>\) YY). It assigns higher probability to the latter years, though not all of them: 15-20 years after YY, probabilities drop. The exact length of the year-span receiving higher probability likely reflects patterns in GPT-2's training data. In the real world, and likely also in GPT-2's training data, our prompts' nouns, such as a "war," "dynasty", or "pilgrimage", have average durations that GPT-2 may have learned, influencing its output.

**Quantitative Evaluation** We design two numerical measures of model performance for the purpose of quantitative assessment. Let YY \(\in\{02,\ldots,98\}\) be the start year of our sentence, and \(p_{y}\) be the probability of a two-digit output year \(y\). We define the following two metrics:

* **Probability difference**: \(\sum_{y>\texttt{YY}}p_{y}-\sum_{y\leq\texttt{YY}}p_{y}\)
* **Cutoff sharpness**: \(p_{\texttt{YY+1}}-p_{\texttt{YY-1}}\)

Probability difference verifies that model output reflects a greater-than operation by measuring the extent to which GPT-2 assigns higher probability to years \(>\)YY. It ranges from -1 to 1; higher is better. In contrast, cutoff sharpness is not intrinsically connected to greater-than. However, it quantifies an interesting behavior of GPT-2: the sharp cutoff between valid and invalid years. In doing so, it checks that the model depends on YY, and does not produce constant (but valid) output, e.g. by always outputting \(p(99)=1\). Cutoff sharpness ranges from -1 to 1; larger values indicate a sharper cutoff. We perform quantitative evaluation with our same 10,000-element dataset; on this dataset, GPT-2 achieves 81.7% probability difference (SD: 19.3%) and a cutoff sharpness of 6.0% (SD: 7.2%). Overall, both qualitative and quantitative results indicate that GPT-2 performs the greater-than operation on the year-span prediction task.

## 3 A Circuit for Year-Span Prediction

Having defined our task, we now aim to identify a circuit: a minimal computational subgraph of our model that suffices to compute the task [24, 6]. We then explain our circuit's components, discovering how GPT-2 implements the greater-than operation.

### Path Patching

To find a circuit, we use path patching, introduced by Wang et al. [34] and further described by Goldowsky-Dill et al. [12]. This technique determines how important a model component (e.g. an attention head or MLP) is to a task, by altering that component's inputs and observing model behavior post-alteration.

Figure 2: Left: Probability heatmap of GPT-2 for year-span prediction. Y-axis: the sentence's start year (YY). X-axis: the two-digit output year candidate. (X,Y): Mean probability assigned by GPT-2 to output year X given input year Y. Right: GPT-2's average output distribution when YY=41.
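For concreteness, the two metrics defined above can be computed directly from the model's next-token probabilities over two-digit years; the array layout below is our own assumption for illustration, not the paper's code.

```python
import numpy as np

def probability_difference(p_years: np.ndarray, yy: int) -> float:
    """sum_{y > YY} p_y - sum_{y <= YY} p_y, with p_years indexed by year 0..99."""
    return float(p_years[yy + 1:].sum() - p_years[:yy + 1].sum())

def cutoff_sharpness(p_years: np.ndarray, yy: int) -> float:
    """p_{YY+1} - p_{YY-1}: how sharply probability rises across the start year."""
    return float(p_years[yy + 1] - p_years[yy - 1])
```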
Path patching is much like causal mediation analysis or interchange interventions [32; 8]; however, unlike these, it allows us to constrain our intervention's effects to a specific path. To illustrate this, consider a model's forward pass on its inputs as a directed acyclic graph. Its nodes are components such as attention heads or MLPs. The input of a node \(v\) is the sum of the outputs of all nodes with a direct edge to \(v\). GPT-2 can be thought of as such a graph flowing from its input tokens to its logits (and thereafter, its probabilities), as depicted in Figure 3. In path patching, we specify new input tokens, and a path of components through which they will reach the logits. For example, if we want to ascertain the effects of MLP 10 on the logits, we can patch the direct path (MLP 10, logits) with new input, which we call the 01-input: "The war lasted from the year 1701 to the year 17". We thus alter MLP 10's direct effects on the logits without changing its output to the attention and MLP of layer 11 (Figure 3). If the model's behavior (as indicated by its logits) changes, we can be sure that this is because MLP 10 is important to that behavior; it is not due to downstream components. Earlier methods like interchange interventions lack this distinction--when they alter a component, they affect all components downstream from it. The specificity of path patching allows us to test detailed hypotheses. For example, imagine that we know that MLP 10 affects the logits both directly and via its effects on MLP 11. We want to know how important layer 10's attention is to the circuit via MLP 10. We can test this by patching two paths at once: (Attn 10, MLP 10, logits) and (Attn 10, MLP 10, MLP 11, logits), as in Figure 3. This allows us to pinpoint the relationship between precisely these two components, Attn 10 and MLP 10. This technique underpins our circuits approach: we search for a path starting in the inputs and ending in the logits that explains how our model performs the greater-than task. To perform path patching, we need a new dataset that replaces a node's original inputs. To this end, we create the "01-dataset": we take each example in the original dataset and replace the last two digits YY of the start year with "01". If a component normally boosts logits of years \(>\)YY, patching it with the 01-dataset will cause it to boost the logits of years \(>01\), inducing a larger error in the model.

### Circuit Components

**MLPs** We search for a circuit by identifying components that perform year-span prediction via their direct connection to the logits. We consider as potential circuit components GPT-2's 144 attention heads (12 heads/layer \(\times\) 12 layers), and 12 MLPs (1 per layer). We do so because the residual stream [6] that serves as input to the logits is simply the sum of these components' direct contributions (along with the token embeddings; we ignore these as they contain no YY information). If we consider each of these, we will not miss any components that contribute to this task. For details, see Appendix B. We iteratively path patch each component's direct contributions to the logits, replacing its inputs with the 01-dataset. In our earlier notation, for a component of interest \(C\), we patch the path (\(C\), logits), as in Figure 3 B, where \(C\) = MLP 10. We patch only one component at a time, and only at the end of the sentence; at other positions, these components cannot affect the logits directly.
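The intervention itself can be sketched with forward hooks. The snippet below uses the TransformerLens library (our choice of tooling; the paper does not prescribe an implementation) and performs plain activation patching of MLP 10's output -- the simpler interchange intervention that, unlike true path patching, also alters MLP 10's output as seen by downstream components rather than only the (MLP 10, logits) edge.

```python
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
clean = model.to_tokens("The war lasted from the year 1732 to the year 17")
corrupt = model.to_tokens("The war lasted from the year 1701 to the year 17")

_, cache_01 = model.run_with_cache(corrupt)      # activations on the 01-input

def patch_mlp10(mlp_out, hook):
    # Overwrite MLP 10's output at the end position with its 01-run value.
    mlp_out[:, -1, :] = cache_01["blocks.10.hook_mlp_out"][:, -1, :]
    return mlp_out

patched_logits = model.run_with_hooks(
    clean, fwd_hooks=[("blocks.10.hook_mlp_out", patch_mlp10)]
)
# A drop in probability difference relative to the clean run indicates that
# MLP 10 matters for year-span prediction.
```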
### Circuit Components

**MLPs.** We search for a circuit by identifying components that perform year-span prediction via their direct connection to the logits. We consider as potential circuit components GPT-2's 144 attention heads (12 heads/layer \(\times\) 12 layers), and 12 MLPs (1 per layer). We do so because the residual stream [6] that serves as input to the logits is simply the sum of these components' direct contributions (along with the token embeddings; we ignore these as they contain no YY information). If we consider each of these, we will not miss any components that contribute to this task. For details, see Appendix B.

We iteratively path patch each component's direct contributions to the logits, replacing its inputs with the 01-dataset. In our earlier notation, for a component of interest \(C\), we patch the path (\(C\), logits), as in Figure 3 B, where \(C\) = MLP 10. We patch only one component at a time, and only at the end of the sentence; at other positions, these components cannot affect the logits directly. After we patch a component, we run the model and record the probability difference, comparing it to that of the unpatched model. If patching a component causes model performance to change significantly, that component contributed to the model's computation of year-span prediction.

Figure 3: A. The computational graph of GPT-2, run on our normal dataset. B: GPT-2, where the (MLP 10, logits) path is patched to receive 01-input. C. GPT-2, where the (Attn 10, MLP 10, logits) and (Attn 10, MLP 10, MLP 11, logits) paths receive 01-input. Nodes receiving normal input have blue output; nodes receiving 01-input have red output; nodes receiving both have purple output.

Figure 4 shows the results of this experiment; for computational reasons we run it using a smaller dataset (490 datapoints, 5 per year YY). The heatmap indicates that MLPs 8-11 are the most important direct contributors to the logits, along with a9.h1: attention layer 9's head 1. However, the MLPs cannot act alone: to compute years \(>\)YY, these MLPs at the end of the sentence must know the value of YY. But unlike attention heads, MLPs cannot attend to earlier tokens such as the YY token. Thus, we search for nodes that contribute to the circuit via these MLPs.

**Attention Heads.** We find components that contribute to the circuit via the MLPs using more path patching. We start by patching components through MLP 11, since it is the furthest downstream; for a component of interest \(C\), we patch (\(C\), MLP 11, logits). We find that MLP 11 relies mostly on the 3 MLPs upstream of it (Figure 4), so we search for components that act via those MLPs. We next find components that contribute to the circuit through MLP 10. For a given \(C\), we patch (\(C\), MLP 10, logits) and (\(C\), MLP 10, MLP 11, logits), as in Figure 3 C. We do so because MLP 10 contributes directly to two nodes in our circuit, the logits and MLP 11, and we want to know which nodes contribute via MLP 10 to the entire circuit. We repeat this procedure for MLPs 9 and 8.

The results in Figure 4 indicate that MLPs rely heavily on other MLPs upstream of them. MLPs 8 and 9, the furthest upstream of our MLPs, also rely on attention heads. MLP 9 relies on a9.h1, while MLP 8 relies on a8.h11, a8.h8, a7.h10, a6.h9, a5.h5, and a5.h1; we add these to our circuit. Many of these attention heads can be seen to contribute to the logits directly, though more weakly than the MLPs do. For this reason, we also add these heads' direct connections to the logits to our circuit. Figure 5 visualizes the circuit we have found. We could further develop this by specifying a circuit from the token inputs to the logits; indeed, we do so in Appendix A. However, the present circuit already captures the most interesting portion of the model: the MLPs that compute greater-than. So, we instead provide evidence that our circuit is correct, and then analyze its constituent parts.

Figure 4: Iterative path-patching (IPP) heatmaps. Y-axis: layer of the component. X-axis: attention head number, or MLP. (X,Y): Change in probability difference induced by patching the corresponding component. A: Heatmap for the path ((X,Y), logits). B: Heatmaps for MLPs 8-11.

Figure 5: Left: Diagram of the year-span prediction circuit. Center: Diagram showing which GPT-2 components receive our standard dataset vs. our 01-dataset in the circuit evaluation experiment. Right: The probability heatmap (as in Figure 2) for the patched model.
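The search loop in this experiment is straightforward; below is a toy sketch of the sweep, with invented components and an invented scalar score standing in for the real model and the probability difference.

```python
import numpy as np

# Patch one component's direct path at a time and record the score change.
rng = np.random.default_rng(0)
n_components = 6
weights = rng.normal(size=n_components)  # invented per-component effects

def component_out(i, x):
    return weights[i] * x  # stand-in for component i's direct contribution

x_clean, x_01 = 1.0, -1.0  # stand-ins for normal vs. 01-dataset inputs
baseline = sum(component_out(i, x_clean) for i in range(n_components))
for c in range(n_components):
    patched = sum(component_out(i, x_01 if i == c else x_clean)
                  for i in range(n_components))
    print(f"component {c}: score change = {patched - baseline:+.3f}")
```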
**Evaluation.** Having defined our circuit, we perform another path-patching experiment to ensure it is correct. In this experiment, we give most of the model inputs from the 01-dataset. The model only receives our standard dataset via the paths specified in our circuit. So, our attention heads' contributions to the logits are backed by the standard dataset, as are their contributions to the MLPs, and the MLPs' contributions to one another. But, some components of the MLPs' inputs (those that come from model components not in the circuit) receive input from the 01-dataset as well. We stress that this is a difficult task, where the large majority of the model receives input that should push it to poor performance. For a diagram of the circuit and our evaluation, see Figure 5.

We perform this evaluation using the larger dataset, and almost entirely recover model performance. The probability difference is 72.7% (89.5% of the original) and the cutoff sharpness is 8%--sharper than pre-patching. This indicates that our circuit is mostly sufficient to compute this task. The circuit is also necessary: performing the opposite of the prior experiment, giving nodes in our circuit the 01-dataset, and those outside it the normal dataset, leaves GPT-2 unable to perform the task: it achieves a probability difference of -36.6%.

### Circuit Semantics

Now, we interpret each circuit component, starting with the attention heads. We first perform a simple attention-pattern analysis of the heads in our circuit. Figure 6 shows which tokens our attention heads attend to at which positions. At the relevant (end) position, in-circuit attention heads attend to the YY position, suggesting that they detect the year which the output year must be greater than.

Next, we examine the contributions of attention heads using the logit lens approach [22]: we multiply each head's output by GPT-2's unembedding matrix, translating this output into unembedding (vocabulary) space. Note that here, we do not only view the logit lens as a tool for obtaining intermediate estimates of model predictions [2]. Rather, we use it to understand components' outputs more generally: the logit lens can capture how such outputs shape model predictions, but it can also capture how these outputs add information to the residual stream in unembedding space. We visualize the heads' outputs for each sentence in our small dataset in Figure 6. Attention head outputs for a sentence with start year YY have a high dot product with the embedding vector for YY, as indicated by the blue diagonal in the plots. Given our earlier analysis, we hypothesize that these heads may identify the start year (at the YY position), and indicate it via a spike in unembedding space of the residual stream at the end position; they thus communicate YY to downstream components.

Figure 6: (Left) Attention patterns for attention heads a7.h11 and a8.h10. <bos> is GPT-2's start of sentence token, <|endoftext|>. (Right) Logit lens of a7.h11 and a8.h10. Axes as in Figure 2; blue indicates that the module upweights the output year, and red, that it downweights the year.

Figure 7: (Left to right) Logit lens of MLPs 11, 10, 9, and 8; labels as in Figure 6.
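As a concrete sketch of this operation, the following projects one component's output into vocabulary space. The matrices are small random stand-ins (GPT-2 small's real sizes would be a 768-dimensional residual stream and a 50,257-token vocabulary), and layer norm is ignored.

```python
import numpy as np

d_model, d_vocab = 64, 100  # illustrative sizes, not GPT-2's
rng = np.random.default_rng(0)
W_U = rng.normal(size=(d_model, d_vocab))  # unembedding matrix (stand-in)
head_out = rng.normal(size=d_model)        # a head's output at the end position

# Logit lens: dot product of the component's output with each token's
# unembedding vector, i.e. its (unnormalized) contribution in vocab space.
lens = head_out @ W_U
print("most-upweighted token ids:", np.argsort(lens)[-5:][::-1])
```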
We similarly apply the logit lens to the outputs of MLPs 8-11 (Figure 7). The results indicate that MLPs 9 and 10 directly specify which years are greater than YY: the logit lens of each layer's output has an upper triangular pattern, indicating that they upweight precisely those years greater than YY. MLP 11 plays a similar role, but seems to upweight roughly the first 50 years after YY, enforcing a maximum duration for the event in the sentence. However, MLP 8 is unusual: its logit lens shows a diagonal pattern, but no upper triangular pattern that would indicate that it computes greater-than. We claim that this is because MLP 8 contributes mainly indirectly, via the other MLPs in our circuit. We confirm this by patching MLP 8's direct contributions to the logits with the 01-dataset; we do so again with its indirect contributions, through the other MLPs. In the former case, model performance drops only 14%, while in the latter case, it drops by 39%. So MLP 8 does not contribute much to the logits directly, but it does contribute indirectly. Other MLPs also have mixed effects: MLP 9 has roughly equal direct and indirect contributions (28% vs. 32%), while MLP 10 contributes mostly directly (56% vs. 16%). MLP 11 can only contribute directly.

Our full picture of the circuit so far is this: the attention heads communicate the start year YY in embedding space. MLP 8 mostly influences downstream MLPs. However, MLPs 9, 10, and 11 appear to compute the greater-than operation in tandem, and in steps. We conclude that while the attention heads identify the important year YY, it is the MLPs that effect the greater-than computation.

## 4 Explaining Greater-Than in the Year-Span Prediction Circuit

Our prior experiments show that MLPs 9-11 directly compute greater-than. But how do they do so? We cannot provide a conclusive answer, but identify avenues by which MLPs might compute this. We first examine their inputs, finding structure that might enable greater-than computation. Then, we examine MLP internals, showing how neuron composition could enable greater-than computation.

### Input Structure

To understand how MLPs compute greater-than, we analyze various model representations using 2D Principal Component Analysis (PCA). For each of the 97 datapoints in our small dataset, each with a unique start year, we analyze the input residual stream to our MLPs, as well as the output of relevant attention heads. As a control, we also analyze representations from irrelevant model components, and the static year embeddings. We take all component representations from the end position.

In Figure 8, PCA reveals that the input residual stream to MLP 8 (and indeed all of our MLPs, though not all are shown) is highly structured: representations are ordered by the start year of the sentence they are from, increasing clockwise. The same is true of the outputs of relevant attention heads (a7.h10), which serve as inputs to the MLPs, but not of outputs of irrelevant heads (a7.h8). This suggests that it is specifically the relevant attention heads that transmit this structured information to relevant MLPs. But while the heads seem to transmit this structured information to the MLPs, they need not have created this structure from scratch. We find, as in Wallace et al. [33], that structure already exists in the static year embeddings, though the years 02-09 are clustered apart from the rest. The heads need only unify these groups and move this information from the YY position to the end.

Figure 8: PCA of MLP 8's input, a7.h10's and a7.h8's output, and the static year embeddings. Each point corresponds to one datapoint's representation, and is labeled with, and colored by, its YY.
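To illustrate the analysis, here is a minimal PCA sketch over synthetic stand-in representations. The circular structure is baked into the fake data purely to mimic the clockwise ordering reported above; nothing here comes from GPT-2 itself.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2, 99)  # the 97 start years YY
# Fake end-position representations: a circle in a random 2D subspace of a
# 64-dimensional space, plus noise, mimicking the reported structure.
angle = 2 * np.pi * (years - 2) / 97
reps = np.stack([np.cos(angle), np.sin(angle)], axis=1) @ rng.normal(size=(2, 64))
reps += 0.05 * rng.normal(size=reps.shape)

centered = reps - reps.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)  # PCA via SVD
pcs = centered @ vt[:2].T  # top-2 principal components: one 2D point per YY
print(pcs.shape)  # (97, 2)
```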
Structured number representations have been implicated in mathematical capability before: Liu et al. [16] train a toy transformer model on modular addition, and find that its number representations become structured only after it moves from overfitting to generalization. This suggests that GPT-2's structured number representations may be relevant to its greater-than ability. However, when we ablate the dimensions found through PCA, to test their importance to the greater-than task, we find little change in model performance, indicating that GPT-2 does not rely on them alone.

### Neuron-Level Processing

In order to understand MLPs better, we turn to their internals, zooming in on their individual neurons. We choose one MLP--MLP 10, which we know directly contributes the greater-than operation to the logits--and study it closely. First, we find which of MLP 10's 3072 neurons are important, by path patching each neuron's direct contributions to the logits with the 01-dataset. As before, we record the change in model performance, as measured by probability difference, compared to the unpatched model. We find that neuron contributions to the task are sparse: most neurons can be patched (ablated) with near zero effect on our model performance, as observed in prior work [32].

We then analyze those neurons that contribute most to model performance using the logit lens. To do this, we take advantage of the fact that each neuron has a corresponding row in the MLP output weight matrix. As noted by Geva et al. [10], multiplying this row by the unembedding weights yields an (unnormalized) distribution over the logits, indicating which outputs the neuron upweights when activated. Taking the outer product of this logit distribution with the neuron's activations yields the logit lens, indicating which output years the neuron upweights for each input sentence's YY.

Figure 9 shows the logit lens of the 3 most important neurons in MLP 10; more neurons can be found in Appendix C. Each neuron up- or down-weights certain output years depending on the input year YY, but no individual neuron computes greater-than. No one neuron can do so, as each neuron's activation for each input only scales that neuron's distribution over the logits, without changing its overall shape. In contrast, the correct shape of the logits differs depending on the example's start year.

Many neurons can compute greater-than when combined, though. We perform logit lens on the sum of the top-10 neurons' contributions (see footnote 4) in Figure 9. Though they do not do so individually, the top-10 neurons perform an imperfect greater-than when summed together as a group. The logit lens of MLP 10 as a whole can be thought of as the logit lens of the sum of all 3072 neurons' contributions; we partially recreate this with the contributions of just the top-10 neurons. Including more top neurons produces sharper approximations of greater-than; see Appendix C for examples.

Footnote 4: We can confirm this using path patching as well; see Appendix D for details.

In summary, we find that even inside one MLP, the greater-than operation is spread across multiple neurons, whose outputs compose in a sum to form the correct answer. Even the contributions of a small number of relevant neurons, composed, begin to roughly form the correct operation. We study this in MLP 10, but observe it in the other MLPs as well. Section 3.3 suggested that GPT-2 computed greater-than across multiple important MLPs; these results suggest that, moreover, multiple important neurons in each MLP compose to allow the MLP to compute greater-than.
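The per-neuron logit lens and its composition can be sketched directly from this description. Every array below is a random stand-in (GPT-2's MLP actually has 3072 neurons and a 50,257-token vocabulary), and the "top" neuron indices are hypothetical.

```python
import numpy as np

n_neurons, d_model, d_vocab, n_examples = 128, 32, 100, 97
rng = np.random.default_rng(0)
w_out = rng.normal(size=(n_neurons, d_model))    # MLP output rows (stand-in)
W_U = rng.normal(size=(d_model, d_vocab))        # unembedding (stand-in)
acts = rng.normal(size=(n_examples, n_neurons))  # activations per example

def neuron_lens(i):
    # Neuron i's logit distribution, scaled by its activation on each example:
    # rows index examples (their YY), columns index output years.
    return np.outer(acts[:, i], w_out[i] @ W_U)

top = [3, 17, 42]  # hypothetical "most important" neurons
summed = sum(neuron_lens(i) for i in top)  # composition across neurons
print(neuron_lens(3).shape, summed.shape)
```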
Figure 9: Left: The logit lens of the 3 MLP 10 neurons most important to year-span prediction. Right: The logit lens of the top-10 MLP 10 neurons. Blue indicates that the neuron upweights logits at the given input year (y-axis), output year (x-axis) combination, while red indicates downweighting.

## 5 Does The Circuit Generalize?

We now possess a detailed circuit for year-span prediction. But one thing remains unclear: is this a general circuit for the greater-than operation? Or does it only apply to this specific, toy task? Answering this question fully would require an in-depth exploration of circuit similarity that falls outside the scope of this paper. For the purposes of this investigation, we perform primarily qualitative analyses of tasks that preserve the format and output space of year-span prediction.

To start, we focus on three increasingly different prompts: "The <noun> started in the year 17YY and ended in the year 17", "The price of that <luxury good> ranges from $ 17YY to $ 17", and "1599, 1607, 1633, 1679, 17YY, 17". In all cases, a two-digit number greater than YY would be a reasonable next token. The model performs greater-than given all of these prompts (\(\geq 72\%\) probability difference). Moreover, the circuits found via path patching are similar to those in Sections 3.2 and 3.3. When we ablate all non-circuit nodes, we achieve 98.4%, 96.4%, and 90.3% loss recovery on each task respectively; for more details, see Appendix E. As before, these tasks depend on MLPs 8-11 to compute greater-than; these MLPs depend on attention heads that transmit information about YY.

That said, these tasks' circuits are not identical to our circuit. For the last task, ablating all non-circuit nodes yielded only 67.8% of the model's original probability difference. But, a closer look at path patching results indicated that on this task, GPT-2 used not only the entire circuit from before, but also MLP 7 and two extra attention heads. Expanding our hypothesis to include those nodes allowed us to recover 82.8% of performance. So, similar tasks seem to use similar, but not identical, circuits.

GPT-2 produced unusual output for some tasks requiring other mathematical operations. It produced roughly symmetric distributions around YY on the task "1799, 1753, 1733, 1701, 16YY, 16", which might yield predictions smaller than YY. It behaved similarly on examples suggesting an exact answer, such as "1695, 1697, 1699, 1701, 1703, 17", which could yield 05. GPT-2 even failed at some tasks that were solvable using the greater-than circuit, like "17YY is smaller than 17"; it always predicted YY. Across all such tasks, we found that GPT-2 relied on another set of heads and MLPs entirely. So GPT-2 does not use our circuit for all math; sometimes it does not rely on it even when it should.

We also observe the opposite phenomenon: inappropriate activation of the greater-than circuit, triggered by prompts like "The <noun> ended in the year 17YY and started in the year 17" and "The <noun> lasted from the year 7YY BC to the year 7". In these cases, GPT-2 ought to predict numbers smaller than YY; however, it predicts numbers greater than YY. This is because it is using the exact same circuit used in the greater-than case! GPT-2 thus overgeneralizes the use of our circuit. Our results suggest that our circuit generalizes to some new scenarios.
But what does a generalizing circuit imply about the origins of GPT-2's greater-than capabilities--do they stem from memorization [31, 4], or rich, generalizable representations of numbers [16]? The presence of a greater-than circuit does not preclude memorization. Our circuit could function internally as a lookup table, although our PCA experiments hint that GPT-2 may be relying instead on structured number representations to perform its task. But even if we view this circuit as simply retrieving memorized facts, the retrieval mechanism at work is sophisticated. GPT-2 can identify--albeit imperfectly--greater-than scenarios and the relevant operand; it then activates a dedicated mechanism that retrieves the correct answer. GPT-2's mathematical abilities thus extend beyond a simple, exact memorization of answers.

## 6 Conclusion

In this paper, we attempted to bridge the gap between our understanding of mathematical abilities in toy models, and the mystery of such abilities in larger pre-trained LMs. To do so, we outlined a circuit in GPT-2 with interpretable structure and semantics, adding to the evidence that circuits are a useful way of understanding pre-trained LMs [34]. Our circuit is coarser-grained than findings in toy models for mathematical tasks [20], but much finer-grained than existing work on mathematics in pre-trained LMs. Moreover, we showed that this circuit generalizes to a set of distinct greater-than-adjacent tasks, suggesting that GPT-2 possesses a generalized mechanism that it uses across tasks and contexts.

We note that our conclusions are limited by the small size of our model and dataset, and the simple phenomenon studied. Our study is very model-centric: data-driven interpretability techniques would strengthen our work. Studying circuit performance across diverse tasks could better measure the degree to which our circuit generalizes to all greater-than tasks. Similarly, studying larger models would confirm that our results hold for the models that dominate natural language processing today. Despite these limitations, we believe that this study lays the groundwork for future work. Our small study hints at the potential for circuits as a lens for the study of memorization and generalization in pre-trained LMs. More broadly, we hope that our finding that not only attention heads, but also MLPs and their neurons can be analyzed jointly as a complex system will motivate circuits work to come.

## Author Contributions

**MH** and **OL** performed the initial experiments that formed the basis of this project. **MH** developed and performed the final set of experiments shown here, and drafted the paper. **AV** supervised the empirical work and writing process.

## Acknowledgments and Disclosure of Funding

The authors thank Buck Shlegeris, Chris MacLeod, and Arthur Conmy for their valuable feedback on earlier drafts of this work. They also thank Dani Yogatama, as well as members of the Amsterdam and Technion NLP groups, for their helpful comments. They finally thank Redwood Research for both running the REMIX program, which provided many of the ideas, techniques, and collaborations behind this paper, and providing continued support thereafter.
2309.09111
Reducing sequential change detection to sequential estimation
We consider the problem of sequential change detection, where the goal is to design a scheme for detecting any changes in a parameter or functional $\theta$ of the data stream distribution that has small detection delay, but guarantees control on the frequency of false alarms in the absence of changes. In this paper, we describe a simple reduction from sequential change detection to sequential estimation using confidence sequences: we begin a new $(1-\alpha)$-confidence sequence at each time step, and proclaim a change when the intersection of all active confidence sequences becomes empty. We prove that the average run length is at least $1/\alpha$, resulting in a change detection scheme with minimal structural assumptions~(thus allowing for possibly dependent observations, and nonparametric distribution classes), but strong guarantees. Our approach bears an interesting parallel with the reduction from change detection to sequential testing of Lorden (1971) and the e-detector of Shin et al. (2022).
Shubhanshu Shekhar, Aaditya Ramdas
2023-09-16T23:48:47Z
http://arxiv.org/abs/2309.09111v2
# Reducing sequential change detection to sequential estimation

###### Abstract

We consider the problem of sequential change detection, where the goal is to design a scheme for detecting any changes in a parameter or functional \(\theta\) of the data stream distribution that has small detection delay, but guarantees control on the frequency of false alarms in the absence of changes. In this paper, we describe a simple reduction from sequential change detection to sequential estimation using confidence sequences: we begin a new \((1-\alpha)\)-confidence sequence at each time step, and proclaim a change when the intersection of all active confidence sequences becomes empty. We prove that the average run length is at least \(1/\alpha\), resulting in a change detection scheme with minimal structural assumptions (thus allowing for possibly dependent observations, and nonparametric distribution classes), but strong guarantees. We also describe an interesting parallel with Lorden's reduction from change detection to sequential testing and connections to the recent "e-detector" framework.

## 1 Introduction

We consider the following problem of _sequential change detection_ (SCD): for a general space \(\mathcal{X}\), given a stream of \(\mathcal{X}\)-valued observations \(X_{1},X_{2},\ldots\), our goal is to design a method to detect any changes in a prespecified parameter or functional \(\theta\) (possibly infinite dimensional) associated with the source generating this stream. Let \(\mathcal{P}\) denote a class of probability distributions on the infinite product space \(\Omega=\mathcal{X}^{\infty}\), and let \(\theta:\mathcal{P}\to\Theta\) denote a mapping from probability distributions to some (possibly infinite dimensional) parameter space \(\Theta\). The data distribution satisfies the following for some \(T\in\mathbb{N}\cup\{\infty\}\):

* For \(n\leq T\), the observations are the first \(T\) elements of a trajectory of \(P_{0}\in\mathcal{P}\), with \(\theta(P_{0})=\theta_{0}\).
* For \(n>T\), the observations are drawn from another distribution \(P_{1}\in\mathcal{P}\), such that \(\theta(P_{1})=\theta_{1}\neq\theta_{0}\).

In particular, \(P_{0},P_{1}\) are distributions over an infinite sequence of observations, and the data is not assumed to be i.i.d. (independent and identically distributed), meaning that we do not assume that \(P_{0},P_{1}\) take the form \(p^{\infty}\) for some distribution \(p\) over \(\mathcal{X}\). Further, \(P_{1}\) (and hence \(\theta_{1}\)) is allowed to depend on \(X_{1},\ldots,X_{T}\). Under the above model, our goal is to design a data-driven method for detecting any change in the value of \(\theta\), without any knowledge of \(T,P_{0},P_{1}\). In technical terms, our objective is to define a stopping time \(\tau\), at which we stop collecting more data, and declare that a changepoint has previously occurred. There are two problem settings to consider: non-partitioned and partitioned. By default, define the filtration \(\mathcal{F}\equiv(\mathcal{F}_{n})_{n\geq 0}\) as given by \(\mathcal{F}_{n}\coloneqq\sigma(X_{1},\ldots,X_{n})\) and \(\mathcal{F}_{0}=\{\emptyset,\Omega\}\).

**Problem 1.1** (Non-partitioned SCD).: _For some unknown triple \((T,P_{0},P_{1})\), suppose \(X_{1},X_{2},\ldots\) denote a data stream, such that \((X_{n})_{n\leq T}\) are drawn from \(P_{0}\), and \((X_{n})_{n>T}\) are drawn from \(P_{1}\), such that \(\theta(P_{1})=\theta_{1}\neq\theta_{0}=\theta(P_{0})\).
We must define a stopping time \(\tau\), adapted to \(\mathcal{F}\), satisfying:_

* _If_ \(T=\infty\)_, and there is no changepoint, we require that_ \(\mathbb{E}_{\infty}[\tau]\geq 1/\alpha\)_, for a prespecified_ \(\alpha\in(0,1)\)_. The term_ \(\mathbb{E}_{\infty}[\tau]\) _is called the average run length (ARL), and represents the frequency of false alarms._
* _If_ \(T<\infty\)_, and there is a changepoint at which the distribution changes from_ \(P_{0}\) _to_ \(P_{1}\)_, we desire the detection delay,_ \(\mathbb{E}_{T}[(\tau-T)^{+}]\)_, to be as small as possible._

The non-partitioned SCD problem stated above does not require any pre-specified partitioning of \(\mathcal{P}\) into pre- and post-change distribution classes. That is, it assumes that the data generating distribution changes from one unknown distribution (\(P_{0}\)) in \(\mathcal{P}\) to another distribution (\(P_{1}\)). This is in contrast to another formulation of the SCD problem, where it is assumed that we know some partition of \(\mathcal{P}\) into \(\mathcal{P}_{0},\mathcal{P}_{1}\) such that \(P_{0},P_{1}\) are known to respectively lie in \(\mathcal{P}_{0},\mathcal{P}_{1}\) (in other words, we are not interested in changes within \(\mathcal{P}_{0}\), only changes from \(\mathcal{P}_{0}\) to \(\mathcal{P}_{1}\)). This additional knowledge about the two partitioned distribution classes, which is not available in the first formulation, is often critical in designing optimal SCD schemes, especially in parametric problems. The strategy we develop in this paper is also applicable to an intermediate variant of the SCD problem, which only assumes the knowledge of the pre-change parameter class, \(\Theta_{0}\).

**Problem 1.2** (Partitioned SCD).: _For some unknown triple \((T,P_{0},P_{1})\), suppose \(X_{1},X_{2},\ldots\) denote a stream of \(\mathcal{X}\)-valued observations, satisfying the following assumptions: The observations \((X_{n})_{n\leq T}\) are drawn according to a process \(P_{0}\in\mathcal{P}\) with parameter \(\theta_{0}\in\Theta_{0}\subset\Theta\), and this parameter set \(\Theta_{0}\) is assumed to be known. The observations \((X_{n})_{n>T}\) are drawn from \(P_{1}\) with parameter \(\theta_{1}\), such that \(\theta_{1}\not\in\Theta_{0}\). The parameter \(\theta_{1}\) is not assumed to be known. Our objective is to design a stopping time \(\tau\), satisfying the two bulleted criteria stated in Problem 1.1._

Sequential changepoint detection is a very well-studied problem in the sequential analysis literature, going back to the early works by Shewhart (1925, 1930); Page (1954); Shiryaev (1963). These initial papers developed computationally efficient likelihood-based schemes for known pre- and post-change distributions, which have since been extended to more general cases of composite, but parametric, pre- and post-change distribution classes. We refer the reader to the book by Tartakovsky et al. (2014) for a detailed discussion of the parametric SCD problem. Unlike the parametric case, there exist very few general principles (analogous to the likelihood and generalized likelihood based schemes) for designing SCD methods with nonparametric distribution classes. Two recent exceptions to this trend include the paper on e-detectors by Shin et al. (2024) for Problem 1.2 (partitioned), and the BCS-Detector scheme proposed by Shekhar and Ramdas (2023) for Problem 1.1 (non-partitioned), which is the subject of this paper.
The BCS-Detector scheme of Shekhar and Ramdas (2023) is also a reduction from changepoint detection to estimation, but in this paper we propose a different, and even simpler, reduction. To elaborate, the BCS-Detector uses a single confidence sequence (CS) in the forward direction, but with every new observation, it constructs a new CS in the backward direction (the so-called "backward CS" or BCS, constructed using observations with their time indices reversed). The scheme stops and declares a detection as soon as the intersection of the CS and all active BCSs becomes empty. Since it is critical to our simplified scheme as well, we recall the definition of CSs below.

**Definition 1.3** (Confidence Sequences).: Given observations \(X_{1},X_{2},\ldots\) drawn from a distribution \(P\) with associated parameter \(\theta\equiv\theta(P)\), a level-\((1-\alpha)\) CS for \(\theta\) is a sequence of sets \((C_{n})_{n\geq 1}\), satisfying:

* For every \(n\geq 1\), the set \(C_{n}\subset\Theta\) is \(\mathcal{F}_{n}=\sigma(X_{1},\ldots,X_{n})\)-measurable. In words, the set \(C_{n}\) can be constructed using the information contained in the first \(n\) observations.
* The sets satisfy a uniform coverage guarantee: \(\mathbb{P}\left(\forall n\in\mathbb{N}:\theta(P)\in C_{n}\right)\geq 1-\alpha\). Equivalently, for any \(\mathcal{F}\)-stopping time \(\tau\), \(\mathbb{P}\left(\theta(P)\in C_{\tau}\right)\geq 1-\alpha\).

**Remark 1.4**.: Due to the uniform coverage guarantee, if \((C_{n})\) is a CS, then so is \((\cap_{m\leq n}C_{m})\). Thus, we can assume without loss of generality that the sets involved in a CS are nested; that is, \(C_{n}\subset C_{n^{\prime}}\) for all \(n^{\prime}<n\).
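As a concrete illustration of Definition 1.3, the sketch below builds a deliberately crude CS for the mean of \([0,1]\)-valued i.i.d. observations (fixed-time Hoeffding intervals combined with a union bound over time) and checks its uniform coverage by simulation. This construction is an assumption for illustration; it is valid but far looser than the betting CSs used later in the paper.

```python
import math
import random

def hoeffding_cs(xs, alpha):
    """Level-(1-alpha) CS for a mean in [0,1]: Hoeffding intervals with
    per-time error alpha/(n*(n+1)), which sums to alpha over all n."""
    total, out = 0.0, []
    for n, x in enumerate(xs, start=1):
        total += x
        eps = math.sqrt(math.log(2.0 * n * (n + 1) / alpha) / (2.0 * n))
        out.append((max(0.0, total / n - eps), min(1.0, total / n + eps)))
    return out

# Monte Carlo check of *uniform* coverage: the truth must lie in every C_n
# simultaneously with probability at least 1 - alpha.
random.seed(0)
theta, alpha, misses = 0.3, 0.1, 0
for _ in range(500):
    xs = [1.0 if random.random() < theta else 0.0 for _ in range(200)]
    if any(not (lo <= theta <= hi) for lo, hi in hoeffding_cs(xs, alpha)):
        misses += 1
print(f"uniform miscoverage rate: {misses / 500:.3f} (should be <= {alpha})")
```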
The BCS-Detector scheme of Shekhar and Ramdas (2023) satisfies several favorable properties: it can be instantiated for a large class of parametric and nonparametric problems, it provides non-asymptotic control over the ARL, and it has strong guarantees on the detection delay. However, a closer look at their scheme reveals that it implicitly makes a "bidirectional" assumption about the data generating process: at any \(n\geq 1\), the BCS-Detector assumes the ability to construct a CS in the forward direction (based on \(X_{1},\ldots,X_{n}\)), as well as in the backward direction (using \(Y_{1}=X_{n},Y_{2}=X_{n-1},\ldots,Y_{n}=X_{1}\)). Most methods for constructing CSs proceed by designing martingales or supermartingales adapted to the natural (forward) filtration of the observations. Hence, the BCS-Detector implicitly involves constructing martingales (or supermartingales) in both the forward and reverse directions; this in turn is typically only possible if the observations are independent. This restriction limits the applicability of the BCS-Detector scheme, and this paper's simplified scheme addresses this weakness.

**Contributions.** Our main contribution in this paper is a new SCD scheme, which we refer to as the repeated-FCS-Detector. This scheme proceeds by constructing a new forward CS with each observation (Definition 2.1), and stops as soon as the intersection of all the active CSs becomes empty. Since it relies only on forward CSs, it eliminates the aforementioned weakness of the BCS-Detector of Shekhar and Ramdas (2023). Further, as we show in Theorem 2.2, our scheme achieves a tighter bound (by a factor of 2) on the ARL, as compared to the BCS-Detector, while matching its detection delay guarantees. Finally, we note in Section 3 that our reduction from change detection to constructing CSs significantly generalizes a famous reduction by Lorden (1971) from parametric change detection to one-sided sequential tests.

## 2 Our proposed reduction

We now describe our new SCD scheme that proceeds by starting a new CS in the forward direction with each new observation.

**Definition 2.1** (repeated-FCS-Detector).: Suppose we are given a stream of observations \(X_{1},X_{2},\ldots\), and a functional \(\theta\) associated with the source. For Problem 1.1 (non-partitioned), define \(C_{n}^{(0)}=\Theta\) for all \(n\geq 1\), while for Problem 1.2 (partitioned), set \(C_{n}^{(0)}=\Theta_{0}\) for all \(n\geq 1\). Proceed as follows for \(n=1,2,\ldots\):

* Observe the next data-point \(X_{n}\).
* Initialize a new level-\((1-\alpha)\) forward CS, \(C^{(n)}\equiv(C_{t}^{(n)})_{t\geq n}\), which will be formed using data \(X_{n},X_{n+1},\ldots\).
* Update all CSs initialized so far using \(X_{n}\), meaning that we form the sets \(\{C_{n}^{(m)}:1\leq m\leq n\}\).
* If the intersection of all initialized CSs becomes empty, meaning \(\cap_{m=0}^{n}C_{n}^{(m)}=\emptyset\), then set \(\tau\gets n\), and declare a detection.

(In the last step, we have implicitly used the nestedness discussed in Remark 1.4, but if the CSs are not nested, we can use the stopping criterion \(\cap_{m=0}^{n}\cap_{i=m}^{n}C_{i}^{(m)}=\emptyset\).)

Compared to the BCS-Detector of Shekhar and Ramdas (2023), the main change in the above scheme is that it creates a new forward CS with each new observation, instead of a new backward CS. We now show that our new SCD scheme defined above admits a nonasymptotic lower bound on the ARL.

**Theorem 2.2** (ARL control).: _The changepoint detection scheme described in Definition 2.1 controls the average run length (ARL) at level \(1/\alpha\). That is, when \(T=\infty\), our proposed stopping time \(\tau\) satisfies \(\mathbb{E}_{\infty}[\tau]\geq 1/\alpha\)._

The proof of this result is in Section 4.1. Note that for the BCS-Detector, used with level-\((1-\alpha)\) CSs, Shekhar and Ramdas (2023) obtained a lower bound of \(1/(2\alpha)-3/2\) on the ARL. Thus, our repeated-FCS-Detector achieves an improved (approximately by a factor of 2) lower bound, while matching the detection delay guarantees of the BCS-Detector, as we show in the next section.

**Remark 2.3**.: An alternative performance measure to the ARL is the _probability of false alarms (PFA)_, which is equal to the probability that the stopping time \(\tau\) is finite; that is, \(\mathbb{P}_{\infty}(\tau<\infty)\). If we modify our repeated-FCS-Detector to use a level-\((1-6\alpha/(n^{2}\pi^{2}))\) CS in rounds \(n\geq 1\), then the resulting scheme ensures

\[\mathbb{P}_{\infty}(\tau<\infty)\leq\sum_{n\geq 1}\mathbb{P}_{\infty}\left(\left\{(C_{t}^{(n)})_{t\geq n}\text{ miscovers }\theta_{0}\right\}\right)\leq\alpha\sum_{n\geq 1}\frac{6}{\pi^{2}n^{2}}=\alpha.\]

This implies that the ARL of the above modified repeated-FCS-Detector scheme is infinity, since \(\mathbb{E}_{\infty}[\tau]\geq(1-\alpha)\times\infty=\infty\). This significantly stronger control over false alarms comes at the cost of an increase in detection delay. In particular, we can show that for most CSs, the detection delay of this modified scheme will have a logarithmic dependence on \(T\). This means that the worst case (over all \(T\) values) detection delay of this scheme is usually unbounded.
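Definition 2.1 translates almost line-by-line into code. Below is a minimal sketch for detecting a change in the mean of \([0,1]\)-valued observations, using the same crude union-bound Hoeffding CS as above as an illustrative stand-in for the CSs one would actually use; the naive implementation costs \(O(n)\) CS updates per observation.

```python
import math
import random

def cs_interval(total, n, alpha):
    # One step of a union-bound Hoeffding CS for a mean in [0,1] (stand-in).
    eps = math.sqrt(math.log(2.0 * n * (n + 1) / alpha) / (2.0 * n))
    return max(0.0, total / n - eps), min(1.0, total / n + eps)

def repeated_fcs_detector(stream, alpha=0.05):
    """Start a new CS at every time step; declare a change when the
    intersection of all active CSs becomes empty. Returns the stopping
    time (1-indexed), or None if the stream ends without a detection."""
    active = []  # per start time m: [running sum, count, lo, hi]
    for n, x in enumerate(stream, start=1):
        active.append([0.0, 0, 0.0, 1.0])  # initialize CS C^{(n)}
        glo, ghi = 0.0, 1.0  # C^{(0)} = Theta = [0, 1] for Problem 1.1
        for cs in active:
            cs[0] += x
            cs[1] += 1
            lo, hi = cs_interval(cs[0], cs[1], alpha)
            cs[2], cs[3] = max(cs[2], lo), min(cs[3], hi)  # nested (Rem. 1.4)
            glo, ghi = max(glo, cs[2]), min(ghi, cs[3])
        if glo > ghi:  # empty intersection: declare a detection
            return n
    return None

random.seed(1)  # mean changes from 0.2 to 0.8 at time T = 300
data = [random.uniform(0.0, 0.4) for _ in range(300)] + \
       [random.uniform(0.6, 1.0) for _ in range(300)]
print("declared change at:", repeated_fcs_detector(data, alpha=0.01))
```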
### Analyzing the detection delay We now state an assumption under which we will analyze the detection delay of our SCD scheme. **Assumption 2.4**.: _Letting \(d\) denote a metric on \(\Theta\), \(X^{n}\) be shorthand for \((X_{1},\ldots,X_{n})\), and \((C_{n})_{n\geq 0}\) be a given confidence sequence, we assume that the width of the set \(C_{n}\equiv C_{n}(X^{n},\alpha)\) has a deterministic bound_ \[\sup_{\theta^{\prime},\theta^{\prime\prime}\in C_{n}}d(\theta^{\prime},\theta^{ \prime\prime})\stackrel{{ a.s.}}{{\leq}}w(n,P,\alpha),\] _such that \(\lim_{n\to\infty}w(n,P,\alpha)=0\), for all \(P\in\mathcal{P}\) and \(\alpha\in(0,1]\)._ The above assumption requires the existence of a deterministic envelope function for the diameter of the confidence sequence, which converges to zero pointwise for every \((P,\alpha)\), as \(n\) increases. This is a very weak assumption, and essentially all known CSs satisfy it. We now analyze the detection delay of our SCD scheme for Problem 1.1 under Assumption 2.4. **Theorem 2.5**.: _Consider the SCD problem with observations \(X_{1},X_{2},\ldots\) such that \((X_{n})_{n\leq T}\) are drawn from a distribution \(P_{0}\) (with parameter \(\theta_{0}\)), while \((X_{n})_{n>T}\) are drawn from a product distribution \(P_{1}\) (with parameter \(\theta_{1}\neq\theta_{0}\)), and are independent of the pre-change observations. Suppose the repeated-FCS-Detector from Definition 2.1 is applied to this problem, with the CSs satisfying Assumption 2.4. Let \(\mathcal{E}:=\{\theta_{0}\in\cap_{n=1}^{T}C_{n}^{(1)}\}\) denote the "good event" (having at least \((1-\alpha)\) probability) that the first CS covers the true parameter up to the changepoint. For Problem 1.1 (non-partitioned), if \(T\) is large enough to ensure that \(w(T,P_{0},\alpha)<d(\theta_{0},\theta_{1})\), then the detection delay of our proposed scheme satisfies_ \[\mathbb{E}_{T}[(\tau-T)^{+}|\mathcal{E}]\leq\frac{3}{1-\alpha}u( \theta_{0},\theta_{1},T),\] \[\text{where}\quad u(\theta_{0},\theta_{1},T)\coloneqq\min\{n\geq 1 :w(T,P_{0},\alpha)+w(n,P_{1},\alpha)<d(\theta_{0},\theta_{1})\}.\] _For Problem 1.2 (partitioned), with a known \(\Theta_{0}\), the repeated-FCS-Detector satisfies_ \[\mathbb{E}_{T}[(\tau-T)^{+}]\leq\frac{3}{1-\alpha}u(\Theta_{0}, \theta_{1}),\] \[\text{where}\quad u(\Theta_{0},\theta_{1})\coloneqq\min\{n\geq 1 :w(n,P_{1},\alpha)<\inf_{\theta^{\prime}\in\Theta_{0}}d(\theta^{\prime}, \theta_{1})\}. \tag{1}\] _for all values of \(T<\infty\)._ The proof of this result adapts the arguments developed by Shekhar and Ramdas (2023) for analyzing the BCS-Detector, and we present the details in Section 4.2. **Remark 2.6**.: The above detection delay bound _exactly_ matches that obtained by the BCS-Detector of Shekhar and Ramdas (2023), which had an ARL guarantee of \(\text{ARL}\geq 1/(2\alpha)-3/2\). Recalling Theorem 2.2, our new scheme achieves a significantly improved (by a factor of 2) nonasymptotic lower bound on the ARL as compared to the BCS-Detector, while matching their detection delay performance. This result provides a general detection delay bound applicable to a large class of problems. However, this guarantee can be sharpened when the repeated-FCS-Detector is applied to problems with additional structure. We demonstrate this next, through an application of our scheme to the problem of detecting changes in mean of bounded random variables. 
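As a small numerical illustration of the delay quantity \(u(\theta_{0},\theta_{1},T)\) from Theorem 2.5, the sketch below computes it for a hypothetical width envelope \(w(n,\alpha)=\sqrt{2\log(2/\alpha)/n}\); this envelope is an assumption for illustration, not one derived from the paper's CSs.

```python
import math

def w(n, alpha):
    # Hypothetical envelope w(n, P, alpha), decreasing to 0 as n grows.
    return math.sqrt(2.0 * math.log(2.0 / alpha) / n)

def u(d, T, alpha):
    """Smallest n with w(T, alpha) + w(n, alpha) < d, as in Theorem 2.5."""
    assert w(T, alpha) < d, "T too small for this change magnitude"
    n = 1
    while w(T, alpha) + w(n, alpha) >= d:
        n += 1
    return n

print(u(d=0.3, T=1000, alpha=0.05))  # delay scale for a change of size 0.3
```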
### A nonparametric example: change in mean for bounded random variables

We now analyze the performance of our changepoint detection scheme on the problem of detecting changes in the mean of bounded real-valued random variables supported on \(\mathcal{X}=[0,1]\). Note that despite the simple observation space, the class of distributions on this \(\mathcal{X}\) is highly composite and nonparametric. In particular, there does not exist a common dominating measure for all distributions in this class, which renders likelihood-based techniques inapplicable to this problem.

Formally, we consider the instances of Problem 1.1 and Problem 1.2 with \(\mathcal{X}=[0,1]\), and the parameter space \(\Theta=[0,1]\) with metric \(d(\theta,\theta^{\prime})=|\theta-\theta^{\prime}|\). For Problem 1.2, we assume that the pre-change mean \(\theta_{0}\) lies in a known set \(\Theta_{0}\subset\Theta\). For an unknown value \(T\in\mathbb{N}\cup\{\infty\}\), the distribution generating the observations changes from \(P_{0}\), with mean \(\theta_{0}\), to another distribution \(P_{1}\), with mean \(\theta_{1}\neq\theta_{0}\). The (unknown) change magnitude is denoted by \(d(\theta_{0},\theta_{1})=\Delta=|\theta_{1}-\theta_{0}|\). For this problem, we employ our repeated-FCS-Detector strategy using an instance of the betting-based construction of CSs for the means of bounded random variables (details in Section 4.3) proposed by Waudby-Smith and Ramdas (2023). Our next result analyzes its performance.

**Proposition 2.7**.: _Consider the above problem of detecting changes in mean with bounded observations, under these additional conditions: (i) the post-change observations are independent of the pre-change observations, and (ii) both the pre- and post-change observations are i.i.d. (that is, \(P_{0},P_{1}\) are infinite products of some distributions \(p_{0},p_{1}\) on \(\mathcal{X}\)). For Problem 1.1 (non-partitioned), if \(T\geq 64\log(64/\Delta^{2}\alpha)/\Delta^{2}\), the repeated-FCS-Detector instantiated with the betting CS (details in Section 4.3) satisfies:_

\[\mathbb{E}_{\infty}[\tau]\geq\frac{1}{\alpha},\quad\text{and}\quad\mathbb{E}_{T}[(\tau-T)^{+}|\mathcal{E}]=\mathcal{O}\left(\frac{\log(1/\alpha K_{1})}{K_{1}}\right),\tag{2}\]
\[\text{where}\quad K_{1}=K_{1}(P_{1},\theta_{0})\coloneqq\inf_{P_{\theta}:\theta\in\Theta_{0}}d_{\text{KL}}(p_{1}\parallel p_{\theta}).\]

_In the display above, \(\mathcal{E}\) is the "good event" in Theorem 2.5, having probability at least \(1-\alpha\)._

_For Problem 1.2 (partitioned), the repeated-FCS-Detector satisfies the following:_

\[\mathbb{E}_{\infty}[\tau]\geq\frac{1}{\alpha},\quad\text{and}\quad\mathbb{E}_{T}[(\tau-T)^{+}]=\mathcal{O}\left(\frac{\log(1/\alpha K_{2})}{K_{2}}\right),\tag{3}\]
\[\text{where}\quad K_{2}\equiv K_{2}(P_{1},\Theta_{0})=\inf_{P_{\theta}:\theta\in\Theta_{0}}d_{\text{KL}}(p_{1}\parallel p_{\theta}).\]

_In the statements above, \(P_{\theta}=p_{\theta}^{\infty}\) denotes any product distribution on \(\mathcal{X}^{\infty}\) with mean \(\theta\)._

The proof of this result is in Section 4.3, and relies on a careful analysis of the behavior of betting CSs. If the pre-change distribution \(P_{0}\) is also i.i.d. (say \(P_{0}=p_{0}^{\infty}\)), and is known, then \(K_{2}\) in (3) reduces to \(d_{\text{KL}}(p_{1}\parallel p_{0})\). The resulting detection delay is order optimal, according to Lorden (1971, Theorem 3), and furthermore, this optimality is achieved for an unknown \(P_{1}\) lying in a nonparametric distribution class.
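For intuition, here is a minimal sketch of a betting CS for a bounded mean. It uses a naive predictable betting rule and a finite grid of candidate means, so it is only a numerical approximation; in particular, it is not the mixture-method strategy analyzed in the paper.

```python
import numpy as np

def betting_cs(xs, alpha, grid_size=200):
    """Approximate betting CS for a mean in [0,1]: for each candidate mean s
    on a grid, maintain wealth W_n(s) with a predictable bet lambda_t(s);
    C_n = {s : W_n(s) < 1/alpha}. Any predictable bet in the allowed range
    keeps W_n(s) a nonnegative test martingale under mean s."""
    grid = np.linspace(0.005, 0.995, grid_size)
    wealth = np.ones(grid_size)
    mean, intervals = 0.5, []  # running mean estimate (predictable)
    for t, x in enumerate(xs, start=1):
        # Naive bet: move toward the running mean, truncated to half the
        # allowed range [-1/(1-s), 1/s] so that wealth stays positive.
        lam = np.clip(mean - grid, -0.5 / (1.0 - grid), 0.5 / grid)
        wealth *= 1.0 + lam * (x - grid)
        alive = grid[wealth < 1.0 / alpha]
        intervals.append((alive.min(), alive.max()) if alive.size else None)
        mean += (x - mean) / (t + 1)  # update only after betting
    return intervals

rng = np.random.default_rng(1)
xs = rng.uniform(0.2, 0.8, size=500)   # i.i.d. with true mean 0.5
print(betting_cs(xs, alpha=0.05)[-1])  # final confidence interval
```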
**Remark 2.8**.: By an application of Pinsker's inequality (Tsybakov, 2009, Lemma 2.5), we know that both \(K_{1}\) and \(K_{2}\) are \(\Omega(\Delta^{2})\), which gives us the weaker upper bound on the detection delay, \(\mathcal{O}\left(\log(1/\alpha\Delta)/\Delta^{2}\right)\). This is the upper bound on the detection delay derived by Shekhar and Ramdas (2023) for the change-of-mean detection problem, using the empirical-Bernstein CS of Waudby-Smith and Ramdas (2023), and a direct application of the general delay bound of Theorem 2.5.

## 3 Connection to Lorden's reduction from SCD to testing

Using the duality between confidence sequences and sequential hypothesis tests, we now show that our repeated-FCS-Detector strategy is a generalization of a well-known result of Lorden (1971), which reduces the problem of SCD (with separated distribution classes) to that of repeated sequential tests. Lorden's work built upon the interpretation of the CuSum algorithm as repeated sequential probability ratio tests (SPRTs) for known pre- and post-change distributions by Page (1954). In particular, Lorden (1971) considered a parametric SCD problem with a known pre-change distribution \(P_{0}\), and a parametric composite class of post-change distributions \(\{P_{\theta_{1}}:\theta_{1}\in\Theta_{1}\}\). Then, given a sequential test, or equivalently, an extended stopping time, \(\{N(\alpha):\alpha\in(0,1)\}\), satisfying \(\mathbb{P}_{P_{0}}(N(\alpha)<\infty)\leq\alpha\), Lorden (1971) proposed the following SCD strategy:

* For every \(m\geq 1\), define \(N^{(m)}(\alpha)\) as the stopping rule \(N(\alpha)\) applied to the observations \(X_{m},X_{m+1},\ldots\).
* Using these, declare the changepoint at the time \(N^{*}(\alpha)\), defined as
\[N^{*}(\alpha)=\inf_{m\geq 1}\,\{N^{(m)}(\alpha)+m-1\}.\tag{4}\]

In words, this scheme can be summarized as: _initiate a new sequential level-\(\alpha\) test with every new observation, and stop and declare a detection as soon as one of the active tests rejects the null_. For this scheme, Lorden (1971) established the ARL control; that is, \(\mathbb{E}_{P_{0}}[N^{*}(\alpha)]\geq 1/\alpha\), for the specified \(\alpha\in(0,1)\). Furthermore, under certain assumptions on the expected stopping time of the test \(N(\alpha)\) under the alternative, Lorden (1971) also established the minimax optimality of the scheme in the regime of \(\alpha\to 0\).

Due to the duality between confidence sequences and sequential tests, it is easy to verify that the SCD scheme defined in (4) is a special instance of our general strategy. In particular, let us consider the problem studied by Lorden (1971), with a known pre-change distribution \(P_{0}\) with parameter \(\theta_{0}\) (which could, for instance, be the distribution function of \(P_{0}\)), and a class of post-change distributions \(\{P_{\theta_{1}}:\theta_{1}\in\Theta_{1}\}\). Given a method for constructing confidence sequences for \(\theta\), denoted by \(\mathcal{C}\), we can define a sequential test

\[N(\alpha)=\inf\{n\geq 1:\theta_{0}\not\in C_{n}\},\quad\text{where}\quad C_{n}=\mathcal{C}(X_{1},\ldots,X_{n};\alpha).\]

By the uniform coverage guarantee of confidence sequences, we have \(\mathbb{P}_{P_{0}}\left(N(\alpha)<\infty\right)=\mathbb{P}_{P_{0}}\left(\exists n\in\mathbb{N}:\theta_{0}\not\in C_{n}\right)\leq\alpha\). Thus, \(N(\alpha)\) is a valid level-\(\alpha\) sequential test for the simple null \(\{P_{0}\}\).
Similarly, \(N^{(m)}\) for \(m\geq 1\) can be defined as the stopping time \(N(\alpha)\) constructed using observations \((X_{n})_{n\geq m}\) starting at time \(m\). More specifically, we have

\[N^{(m)}(\alpha)=\inf\{n-m+1:\theta_{0}\not\in C_{n}^{(m)},\;n\geq m\},\quad\text{where}\quad C_{n}^{(m)}=\mathcal{C}\left(X_{m},X_{m+1},\ldots,X_{n};\alpha\right).\]

Using this, we can define \(N^{*}(\alpha)=\inf_{m\geq 1}\{N^{(m)}(\alpha)+m-1\}\). We now observe that this \(N^{*}(\alpha)\) is exactly the same as our proposed stopping time, \(\tau\), in Definition 2.1, with \(\Theta_{0}=\{\theta_{0}\}\). In particular, for all \(n\geq 1\), we have

\[\{N^{*}(\alpha)\leq n\} =\{\exists n^{\prime}\leq n,\exists m\leq n^{\prime}:N^{(m)}(\alpha)=n^{\prime}-m+1\}\]
\[=\{\exists n^{\prime}\leq n,\exists m\leq n^{\prime}:\theta_{0}\not\in C_{n^{\prime}}^{(m)}\}=\{\exists n^{\prime}\leq n:\big(\cap_{m=1}^{n^{\prime}}C_{n^{\prime}}^{(m)}\big)\cap\{\theta_{0}\}=\emptyset\}\]
\[=\{\cap_{n^{\prime}=1}^{n}\cap_{m=0}^{n^{\prime}}C_{n^{\prime}}^{(m)}=\emptyset\}=\{\cap_{m=0}^{n}\cap_{n^{\prime}=m}^{n}C_{n^{\prime}}^{(m)}=\emptyset\}=\{\tau\leq n\}.\]

In the last two equalities, we have used the fact that \(C_{n}^{(0)}\) for all \(n\geq 0\) is equal to \(\Theta_{0}=\{\theta_{0}\}\). Thus, our general strategy proposed in Definition 2.1 subsumes the SCD scheme of Lorden (1971). We end our discussion with the following two remarks:

* While we instantiated our scheme above for the case of a singleton null, \(\{P_{0}\}\), the exact same construction also applies to the case of a composite null \(\{P_{\theta_{0}}:\theta_{0}\in\Theta_{0}\}\), with \(\Theta_{0}\cap\Theta_{1}=\emptyset\). The only modification needed is to update the stopping time \(N(\alpha)\) to be equal to \(\inf\{n\geq 1:\Theta_{0}\cap C_{n}=\emptyset\}\). By Theorem 2.2, the resulting SCD scheme still controls the ARL at the required level \(1/\alpha\).
* Note that the e-detector framework, developed by Shin et al. (2024), also generalizes Lorden's scheme to work for composite, and nonparametric, pre- and post-change distribution classes (\(\mathcal{P}_{0}\) and \(\mathcal{P}_{1}\) respectively). However, the e-detectors were developed explicitly for Problem 1.2 (partitioned); that is, for a known class of pre-change distributions \(\mathcal{P}_{0}\) (although the general idea could be suitably adapted for the non-partitioned formulation in some cases). This is unlike our scheme, which is easily applicable to both the partitioned and non-partitioned formulations of the SCD problem.

## 4 Deferred proofs from Section 2

In this section, we present the proofs of the three technical results stated in Section 2.

### Proof of Theorem 2.2

We prove this statement in three steps. First, we define an e-process (recalled in Definition 4.1) corresponding to every confidence sequence \((C_{n}^{(m)})_{n\geq m}\) involved in our scheme. Then, using these e-processes, we introduce an e-detector \((M_{n})_{n\geq 1}\), that is, a process adapted to the natural filtration \(\mathcal{F}\) that satisfies \(\mathbb{E}_{\infty}[M_{\tau^{\prime}}]\leq\mathbb{E}_{\infty}[\tau^{\prime}]\) for all stopping times \(\tau^{\prime}\). Finally, we show that our stopping time \(\tau\), introduced in Definition 2.1, is larger than \(\tau^{\prime}=\inf\{n\geq 1:M_{n}\geq 1/\alpha\}\), defined using the e-detector. This allows us to leverage Shin et al.
(2024, Proposition 2.4) to conclude that \(\mathbb{E}_{\infty}[\tau^{\prime}]\geq 1/\alpha\), which implies the required statement about the ARL of \(\tau\). Since we prove this result by attaching an e-process to every CS, we recall their definition below.

**Definition 4.1** (e-processes).: Given a class of probability measures \(\mathcal{P}\), and a filtration \(\mathcal{F}\equiv(\mathcal{F}_{n})_{n\geq 1}\) defined on some measurable space, a \(\mathcal{P}\)-e-process is a collection of nonnegative random variables \((E_{n})_{n\geq 1}\) adapted to \(\mathcal{F}\), satisfying \(\mathbb{E}_{P}[E_{\tau^{\prime}}]\leq 1\) for all \(P\in\mathcal{P}\), and for all stopping times \(\tau^{\prime}\) (adapted to the same filtration).

**Step 1. Construct an e-process for every CS.** For every CS starting with the \(m^{th}\) observation, denoted by \((C_{n}^{(m)})_{n\geq m}\), we associate a process defined as

\[E_{n}^{(m)}=\begin{cases}0,&\text{if }n<m,\text{ or if }n\geq m\text{ and }\theta_{0}\in C_{n}^{(m)},\\ \frac{1}{\alpha},&\text{if }n\geq m\text{ and }\theta_{0}\not\in C_{n}^{(m)}.\end{cases}\]

It is easy to verify that for every \(m\geq 1\), the process \(\{E_{n}^{(m)}:n\geq 1\}\) is an e-process:

* For every \(n\geq 1\), the value of \(E_{n}^{(m)}\) is \(\mathcal{F}_{n}=\sigma(X_{1},\ldots,X_{n})\)-measurable.
* For any stopping time \(\tau^{\prime}\), adapted to the filtration \(\mathcal{F}\), we have
\[\mathbb{E}_{\infty}[E_{\tau^{\prime}}^{(m)}]=\mathbb{E}_{\infty}\left[0\times\mathbf{1}_{\tau^{\prime}<m}+\frac{1}{\alpha}\times\mathbf{1}_{\tau^{\prime}\geq m}\mathbf{1}_{\theta_{0}\not\in C_{\tau^{\prime}}^{(m)}}\right]=\frac{1}{\alpha}\times\mathbb{E}_{\infty}\left[\mathbf{1}_{\tau^{\prime}\geq m}\mathbf{1}_{\theta_{0}\not\in C_{\tau^{\prime}}^{(m)}}\right]\leq\frac{1}{\alpha}\times\mathbb{E}_{\infty}\left[\mathbf{1}_{\theta_{0}\not\in C_{\tau^{\prime}}^{(m)}}\right]=\frac{1}{\alpha}\times\mathbb{P}_{\infty}\left(\theta_{0}\not\in C_{\tau^{\prime}}^{(m)}\right)\leq\frac{1}{\alpha}\times\alpha=1.\]
The last inequality uses the fact that \((C_{n}^{(m)})_{n\geq m}\) is a level-\((1-\alpha)\) CS for \(\theta_{0}\).

Thus, for every \(m\geq 1\), the process \((E_{n}^{(m)})_{n\geq 1}\) is a valid e-process.

**Step 2. Construct an e-detector.** For every \(n\geq 1\), we define \(M_{n}\) to be equal to \(\sum_{m=1}^{n}E_{n}^{(m)}\), and observe that the process \((M_{n})_{n\geq 1}\) is an _e-detector_, as defined by Shin et al. (2024, Definition 2.2), because it satisfies the following two properties:

* \((M_{n})_{n\geq 1}\) is adapted to \((\mathcal{F}_{n})_{n\geq 1}\): since for any \(n\geq 1\), all the \(E_{n}^{(m)}\) are \(\mathcal{F}_{n}\)-measurable by construction.
* For any stopping time \(\tau^{\prime}\), we have \(\mathbb{E}_{\infty}[M_{\tau^{\prime}}]\leq\mathbb{E}_{\infty}[\tau^{\prime}]\), as noted by Shin et al. (2024, Definition 2.6).

**Step 3. Bound the ARL using the e-detector.** Finally, we translate the stopping criterion of our proposed scheme (stated as the non-intersection of the confidence sequences) in terms of the e-detector \((M_{n})_{n\geq 1}\). In particular, we have

\[\{\tau\leq n\}=\{\cap_{m=0}^{n}C_{n}^{(m)}=\emptyset\}\subset\{\exists m\in[n]:\theta_{0}\not\in C_{n}^{(m)}\}=\{\exists m\in[n]:E_{n}^{(m)}=1/\alpha\}=\{M_{n}\geq 1/\alpha\}.\tag{5}\]

In words, when \(T=\infty\), if the intersection of the CSs is empty prior to some time \(n\), it means that at least one of the CSs constructed prior to \(n\) must miscover.
This in turn implies that the value of at least one of the e-processes at \(n\) is equal to \(1/\alpha\), and hence the value of the e-detector \(M_{n}\) is at least \(1/\alpha\). Recall that, in the first equality above, we have assumed that the sets in the confidence sequences are nested; that is, \(C_{n}^{(m)}\subset C_{n^{\prime}}^{(m)}\) for every \(m\leq n^{\prime}<n\). This allows us to look only at the intersection of the most recent sets to define the stopping condition. We now define a new stopping time

\[\tau^{\prime}=\inf\{n\geq 1:M_{n}\geq 1/\alpha\},\]

and observe that it is stochastically dominated by \(\tau\); that is, (5) implies that

\[\{\tau^{\prime}>n\}=\{M_{n}<1/\alpha\}\subset\{\tau>n\},\quad\text{which implies}\quad\mathbb{E}_{\infty}[\tau^{\prime}]\leq\mathbb{E}_{\infty}[\tau].\]

From Shin et al. (2024, Proposition 2.4), we know that \(\mathbb{E}_{\infty}[\tau^{\prime}]\geq 1/\alpha\), and we conclude the result by noting that \(\mathbb{E}_{\infty}[\tau]\geq\mathbb{E}_{\infty}[\tau^{\prime}]\) since \(\tau\) stochastically dominates \(\tau^{\prime}\).

### Proof of Theorem 2.5

The proof of this result follows the general argument developed by Shekhar and Ramdas (2023) for analyzing their BCS-Detector strategy, with some modifications due to the use of forward CSs (instead of the backward CSs used in the BCS-Detector). In particular, we consider blocks of the post-change observations, each of length \(u\equiv u(\theta_{0},\theta_{1},T)\), starting at time \(T_{j}=T+ju\) for \(j\geq 0\). Note that all these blocks are independent of each other (since \(P_{1}\) is a product distribution), and also independent of the event \(\mathcal{E}=\{\theta_{0}\in C_{T}^{(1)}\}\). Now, observe that for \(k\geq 1\), we have \(\{\tau>T_{k}\}=\cap_{j=1}^{k}\{\tau>T_{j}\}\), which furthermore implies

\[\{\tau>T_{k}\}\cap\mathcal{E}\subset\cap_{j=1}^{k}\{C_{T_{j}}^{(T_{j-1})}\cap C_{T}^{(1)}\neq\emptyset\}\cap\mathcal{E}\subset\cap_{j=1}^{k}\{C_{T_{j}}^{(T_{j-1})}\text{ miscovers }\theta_{1}\}\cap\mathcal{E}.\]

The last inclusion follows from the definition of \(u\equiv u(\theta_{0},\theta_{1},T)\), and the event \(\mathcal{E}\). Using this, we obtain:

\[\sum_{t=T_{k}+1}^{T_{k+1}}\mathbb{P}_{T}\left(\tau\geq t|\mathcal{E}\right)\leq\sum_{t=T_{k}+1}^{T_{k+1}}\mathbb{P}_{T}\left(\tau\geq T_{k}|\mathcal{E}\right)=u\,\mathbb{P}_{T}\left(\tau>T_{k}|\mathcal{E}\right)\leq u\times\mathbb{P}_{T}\left(\cap_{j=1}^{k}\{C_{T_{j}}^{(T_{j-1})}\text{ miscovers }\theta_{1}\}\cap\mathcal{E}\,\big{|}\,\mathcal{E}\right)\]
\[\overset{(i)}{\leq}\frac{u}{\mathbb{P}_{T}(\mathcal{E})}\times\prod_{j=1}^{k}\mathbb{P}_{T}\left(\{C_{T_{j}}^{(T_{j-1})}\text{ miscovers }\theta_{1}\}\cap\mathcal{E}\right)\leq\frac{u\alpha^{k}}{1-\alpha}.\]

The inequality \((i)\) uses the fact that \(\mathcal{E}\) only depends on the pre-change observations, and hence is independent of the post-change CSs. We now fix some \(k_{0}>1\), and observe that

\[\mathbb{E}_{T}[(\tau-T)^{+}]\leq k_{0}u+\sum_{k=k_{0}}^{\infty}\frac{u\alpha^{k}}{1-\alpha}=u\left(k_{0}+\frac{\alpha^{k_{0}}}{1-\alpha}\right).\]

By setting \(k_{0}=\lceil\log(1/(1-\alpha))/\log(1/\alpha)\rceil\), we get the required statement for Problem 1.1.

To prove the second part of Theorem 2.5, we proceed as above, considering blocks of post-change observations of length \(u\equiv u(\Theta_{0},\theta_{1})\) as defined in (1).
We then define \(T_{j}=T+ju\) for \(j\geq 0\), and note that

\[\{\tau>T_{k}\}\subset\cap_{j=1}^{k}\{C_{T_{j}}^{(T_{j-1})}\cap\Theta_{0}\neq\emptyset\}\subset\cap_{j=1}^{k}\{C_{T_{j}}^{(T_{j-1})}\text{ miscovers }\theta_{1}\}.\]

The rest of the argument then proceeds exactly as in the first part.

### Proof of Proposition 2.7

Before presenting the proof of Proposition 2.7, we first recall some details of the betting CS first proposed by Waudby-Smith and Ramdas (2023).

**Background on betting CS.** Given observations \(X_{1},X_{2},\ldots\) drawn from an independent process with mean \(\theta\), the betting CS is defined as

\[C_{n}=\{s\in[0,1]:W_{n}(s)<1/\alpha\},\quad\text{with}\quad W_{n}(s):=\prod_{t=1}^{n}(1+\lambda_{t}(s)(X_{t}-s)),\quad\text{for all }s\in[0,1],\]

where \(\{\lambda_{t}(s):t\geq 1,s\in[0,1]\}\) are predictable bets, taking values in \([-1/(1-s),1/s]\). For certain betting strategies, such as the _mixture method_ (Hazan, 2016, SS 4.3), the _regret_ is logarithmic for all \(s\). In particular, this implies that

\[\sup_{\lambda\in\left[\frac{-1}{1-s},\frac{1}{s}\right]}\sum_{t=1}^{n}\log\left(1+\lambda(X_{t}-s)\right)-\sum_{t=1}^{n}\log\left(1+\lambda_{t}(s)(X_{t}-s)\right)\leq 2\log n,\quad\text{for all }n\geq 13.\]

Note that this idea of using the mixture method with known regret guarantees, in the specific context of betting CSs, was first considered by Orabona and Jun (2024+). We now present the details of the proof of Proposition 2.7. First, we show that under the condition \(T\geq 64\log(64/\Delta^{2}\alpha)/\Delta^{2}\), the analysis of the first setting (i.e., with unknown \(\Theta_{0}\)) can be reduced to the second case (with known \(\Theta_{0}\)). Then, we present the details of the proof for the second setting.

**Proof of (2).** Using the fact that \(\log(1+x)\geq x-x^{2}/2\), we can further lower bound \(\log W_{n}(s)\) with

\[\log W_{n}(s)\geq\sup_{\lambda\in\left[\frac{-1}{1-s},\frac{1}{s}\right]}\sum_{t=1}^{n}\left(\lambda(X_{t}-s)-\frac{\lambda^{2}}{2}(X_{t}-s)^{2}\right)-\log(n^{2}).\]

By setting the value of \(\lambda\) to \(\frac{1}{n}\sum_{t=1}^{n}X_{t}-s\), and on simplifying, we can show that the betting CS after \(n\) observations satisfies \(|C_{n}^{(1)}|\leq 4\sqrt{\log(n/\alpha)/n}\). This implies that for \(T\geq 64\log(64/\Delta^{2}\alpha)/\Delta^{2}\), the width of the CS starting at time \(1\) must be smaller than \(\Delta/2=|\theta_{1}-\theta_{0}|/2\). If the event \(\mathcal{E}=\{\theta_{0}\in C_{T}^{(1)}\}\) happens (recall that this is a probability \(1-\alpha\) event), then we know that \(\theta_{0}\in\widetilde{\Theta}_{0}\coloneqq\{\theta:|\theta-\theta_{0}|\leq\Delta/2\}\). This set \(\widetilde{\Theta}_{0}\) plays the role of the known pre-change parameter class in the analysis. Hence the rest of the proof to obtain the upper bound stated in (2) proceeds exactly as in the case when the pre-change distribution class is known, and we present the details for the latter case next.

**Proof of (3).** Since the proof of this result is long, we break it down into four simpler steps.

_Step 1: Bound \((\tau-T)\) with the maximum of a class of stopping times \((N_{\theta})_{\theta\in\Theta_{0}}\)._ Introduce the stopping times \(N_{m}=\inf\{n\geq m:C_{n}^{(m)}\cap\Theta_{0}=\emptyset\}\), and note that

\[\tau\leq\min_{m\geq 1}N_{m},\quad\text{which implies that}\quad\mathbb{E}_{T}[\tau]\leq\min_{m\geq T+1}\mathbb{E}_{T}[N_{m}].\]

Since the post-change observations are assumed to be drawn i.i.d.
from a distribution with mean \(\theta_{1}\), all the \(N_{m}\) for \(m\geq T+1\) have the same distribution, and thus the same expected value. Hence, it suffices to get an upper bound on \(N_{T+1}\). To simplify the ensuing argument, we will use \(C_{n}\) to denote \(C_{n+T+1}^{(T+1)}\) for \(n\geq 1\). Furthermore, we also assume that \(\theta_{1}<\theta\) for all \(\theta\in\Theta_{0}\), and \(\inf_{\theta\in\Theta_{0}}\theta=\theta_{1}+\Delta\). By the definition of the betting CS, the stopping time \(N_{T+1}\) can be written as the supremum of a collection of stopping times: \[N_{T+1}-T=\sup_{\theta\in\Theta_{0}}N_{\theta},\quad\text{where}\quad N_{ \theta}=\inf\{n\geq 1:W_{n}(\theta)\geq 1/\alpha\}.\] 

_Step 2: Bound \((N_{\theta})_{\theta\in\Theta_{0}}\) with a monotonic class of stopping times._ Next, we will upper bound each \(N_{\theta}\) with another stopping time \(\gamma_{\theta}\), which has the property that \(\gamma_{\theta^{\prime}}<\gamma_{\theta}\) for \(\theta^{\prime}>\theta\). In particular, using the regret guarantee of the betting strategy, observe the following: \[\log(W_{n}(\theta)) \geq\sup_{\lambda\in\left[\frac{-1}{1-\theta},\frac{1}{\theta}\right]}\sum _{t=1}^{n}\log(1+\lambda(X_{t}-\theta))-2\log n\] \[\geq\sup_{\lambda\in\left[0,\frac{1}{1-\theta}\right]}\sum_{t=1}^{n} \log\left(1+\lambda(\theta-X_{t})\right)-2\log n\coloneqq Z_{n}(\theta)-2\log n.\] Define a new stopping time \(\gamma_{\theta}=\inf\{n\geq 1:Z_{n}(\theta)-2\log n\geq\log(1/\alpha)\}\), and note that the above display implies \(\gamma_{\theta}\geq N_{\theta}\), and thus we have \(N_{T+1}-T\leq\sup_{\theta\in\Theta_{0}}\gamma_{\theta}\). We now show the monotonicity of \(\gamma_{\theta}\). For any \(\theta^{\prime}>\theta\), we have \(\lambda(\theta^{\prime}-X_{t})\geq\lambda(\theta-X_{t})\) for any \(\lambda>0\), which implies that \(\sum_{t=1}^{n}\log(1+\lambda(\theta^{\prime}-X_{t}))\geq\sum_{t=1}^{n}\log(1+ \lambda(\theta-X_{t}))\). Thus, we have the following relation (for \(\theta^{\prime}\geq\theta\)): \[Z_{n}(\theta^{\prime}) =\sup_{\lambda\in\left[0,\frac{1}{1-\theta^{\prime}}\right]}\sum_{t=1}^{n}\log( 1+\lambda(\theta^{\prime}-X_{t}))\geq\sup_{\lambda\in\left[0,\frac{1}{1-\theta} \right]}\sum_{t=1}^{n}\log(1+\lambda(\theta^{\prime}-X_{t}))\] \[\geq\sup_{\lambda\in\left[0,\frac{1}{1-\theta}\right]}\sum_{t=1}^{n} \log(1+\lambda(\theta-X_{t}))=Z_{n}(\theta).\] Thus, \(Z_{n}(\theta^{\prime})\geq Z_{n}(\theta)\), which implies that \(\gamma_{\theta^{\prime}}\leq\gamma_{\theta}\), and in particular, \(\gamma_{\theta}\leq\gamma_{\theta_{1}+\Delta}\) for all \(\theta\in\Theta_{0}\). This leads to the required conclusion \[N_{T+1}-T\leq\sup_{\theta\in\Theta_{0}}N_{\theta}\leq\sup_{\theta\in\Theta_{0}} \gamma_{\theta}\leq\gamma_{\theta_{1}+\Delta}.\] This is a crucial step, as it reduces the task of analyzing the supremum of a large collection of stopping times into that of analyzing a single stopping time \(\gamma_{\theta_{1}+\Delta}\). 

_Step 3: Bound \(\gamma_{\theta_{1}+\Delta}\) with the 'oracle' stopping time \(\rho^{*}\)._ Let \(\lambda^{*}\equiv\lambda^{*}(\theta_{1}+\Delta)\) denote the log-optimal betting fraction, defined as \(\operatorname*{argmax}_{\lambda\in[0,1/(1-\theta_{1}-\Delta)]}\mathbb{E}[\log(1 +\lambda(\theta_{1}+\Delta-X))]\), where \(X\) is drawn from the post-change distribution. 
By definition then, we have \[Z_{n}(\theta_{1}+\Delta)\geq Z_{n}^{*}(\theta_{1}+\Delta)\coloneqq\sum_{t=1}^ {n}\log(1+\lambda^{*}(\theta_{1}+\Delta-X_{t})),\] which immediately implies \[\gamma_{\theta_{1}+\Delta}\leq\rho^{*}\coloneqq\inf\{n\geq 1:Z_{n}^{*}(\theta_{1 }+\Delta)\geq\log(n^{2}/\alpha)\}.\] The stopping time \(\rho^{*}\) is much easier to analyze, as it is the first crossing of the boundary \(\log(n^{2}/\alpha)\) by the random walk \(Z_{n}^{*}(\theta_{1}+\Delta)\) with i.i.d. increments. 

_Step 4: Evaluate the expectation of \(\rho^{*}\)._ Observe that \(Z_{n}^{*}\equiv Z_{n}^{*}(\theta_{1}+\Delta)=\sum_{t=1}^{n}V_{t}\), with \(V_{t}=\log(1+\lambda^{*}(\theta_{1}+\Delta-X_{t}))\). Without loss of generality, we can assume that \(\lambda^{*}<1/(1-\theta_{1}-\Delta)\) (if not, we simply repeat the argument with \(\lambda^{*}-\epsilon\) for an arbitrarily small \(\epsilon>0\)), and hence \((V_{t})_{t\geq 1}\) are i.i.d. and bounded increments, which means that \(\mathbb{E}[V_{t}]<\infty\). In fact, by the dual definition of the information projections (Honda and Takemura, 2010), we have \(\mathbb{E}[V_{t}]=K_{2}\equiv K_{2}(P_{1},\Theta_{0})\). Next, with \(n_{0}\coloneqq\inf\{n\geq 1:\log(n^{2}/\alpha)/n<K_{2}/2\}\), we have for \(n\geq n_{0}\), by an application of Hoeffding's inequality: \[\mathbb{P}\left(\rho^{*}>n\right)\leq\mathbb{P}\left(\frac{1}{n}\sum_{t=1}^{ n}V_{t}-K_{2}\leq-\frac{K_{2}}{2}\right)\leq\exp\left(-c^{\prime\prime}n \right),\] for some \(c^{\prime\prime}>0\). Hence, the expectation of \(\rho^{*}\) satisfies \[\mathbb{E}[\rho^{*}]=\sum_{n\geq 0}\mathbb{P}\left(\rho^{*}>n\right)\leq n_{0}+ \sum_{n\geq n_{0}}\exp\left(-c^{\prime\prime}n\right)=n_{0}+\frac{e^{-c^{ \prime\prime}n_{0}}}{1-e^{-c^{\prime\prime}}}<\infty.\] Thus, both \(\rho^{*}\) and \((V_{t})_{t\geq 1}\) have bounded expectations, and we can appeal to Wald's lemma (Durrett, 2019, Theorem 2.6.2) to obtain \(\mathbb{E}[Z_{\rho^{*}}^{*}]=\mathbb{E}[\rho^{*}]K_{2}\). Furthermore, by the definition of \(\rho^{*}\), and the boundedness of \((V_{t})_{t\geq 1}\), we can upper bound \(\mathbb{E}[Z_{\rho^{*}}^{*}]\) with \(\log(1/\alpha)+2\log(\mathbb{E}[\rho^{*}])+c^{\prime}\), where \(c^{\prime}=\max\{\log(1+\lambda_{\theta_{1}+\Delta}^{*}),\,\log(1-\lambda_{ \theta_{1}+\Delta}^{*})\}\). In other words, we have \[\mathbb{E}[\rho^{*}]\leq\frac{\log(1/\alpha)+2\log(\mathbb{E}[\rho^{*}])+c^{ \prime}}{K_{2}},\quad\text{which implies}\quad\mathbb{E}[\rho^{*}]=\mathcal{O} \left(\frac{\log(1/(\alpha K_{2}))}{K_{2}}\right).\] This completes the proof. 

## 5 Conclusion

In this paper, we proposed a changepoint detection scheme that constructs a new CS with every observation, and declares a detection as soon as the intersection of the active CSs becomes empty. The design of our scheme was motivated by the BCS-Detector of Shekhar and Ramdas (2023), which proceeds by initializing new "backward CSs" with each new observation. We showed that our new scheme matches the detection delay performance of the BCS-Detector, while improving the ARL lower bound by a factor of 2. Furthermore, our scheme achieves this improvement under weaker model assumptions (i.e., without needing the ability to construct CSs in both forward and backward directions). Interestingly, our proposed scheme can be seen as a nonparametric generalization of Lorden's reduction from SCD to repeated sequential testing, due to the duality between sequential testing and CSs. 
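To make the scheme concrete, the following is a minimal, self-contained sketch of the forward-CS detector for bounded observations. It is illustrative only: it uses a fixed two-sided bet in place of a regret-minimizing mixture strategy, a finite grid of candidate means in place of \([0,1]\), and running exclusion to enforce the nestedness assumed in the proofs; none of these choices are prescribed by the paper.

```python
import numpy as np

def run_detector(xs, grid, alpha=0.01, lam=0.5):
    """Forward-CS changepoint detector (illustrative sketch).

    A new betting CS is started at every time m; a candidate mean theta is
    excluded from the CS started at m once its wealth crosses 1/alpha
    (running exclusion keeps the CSs nested).  A change is declared at the
    first n where the intersection of all active CSs (over the grid) is empty.
    """
    wealths = []                                  # one (|grid|, 2) wealth array per CS
    excluded = np.zeros(len(grid), dtype=bool)    # theta excluded by *some* active CS
    for n, x in enumerate(xs, start=1):
        wealths.append(np.ones((len(grid), 2)))   # start a new CS at time n
        for w in wealths:
            w[:, 0] *= 1 + lam * (x - grid)       # bet that the mean exceeds theta
            w[:, 1] *= 1 - lam * (x - grid)       # bet that the mean is below theta
            excluded |= (w >= 1.0 / alpha).any(axis=1)
        if excluded.all():                        # intersection of active CSs is empty
            return n
    return None                                   # no change declared

rng = np.random.default_rng(0)
xs = np.concatenate([rng.binomial(1, 0.2, 300), rng.binomial(1, 0.8, 300)])
print(run_detector(xs, grid=np.linspace(0.05, 0.95, 19)))  # fires shortly after 300
```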
### Acknowledgement The authors acknowledge support from NSF grants IIS-2229881 and DMS-2310718
2302.14554
FPCD: An Open Aerial VHR Dataset for Farm Pond Change Detection
Change detection for aerial imagery involves locating and identifying changes associated with the areas of interest between co-registered bi-temporal or multi-temporal images of a geographical location. Farm ponds are man-made structures belonging to the category of minor irrigation structures used to collect surface run-off water for future irrigation purposes. Detection of farm ponds from aerial imagery and their evolution over time helps in land surveying to analyze the agricultural shifts, policy implementation, seasonal effects and climate changes. In this paper, we introduce a publicly available object detection and instance segmentation (OD/IS) dataset for localizing farm ponds from aerial imagery. We also collected and annotated the bi-temporal data over a time-span of 14 years across 17 villages, resulting in a binary change detection dataset called \textbf{F}arm \textbf{P}ond \textbf{C}hange \textbf{D}etection Dataset (\textbf{FPCD}). We have benchmarked and analyzed the performance of various object detection and instance segmentation methods on our OD/IS dataset and the change detection methods over the FPCD dataset. The datasets are publicly accessible at this page: \textit{\url{https://huggingface.co/datasets/ctundia/FPCD}}
Chintan Tundia, Rajiv Kumar, Om Damani, G. Sivakumar
2023-02-28T13:19:11Z
http://arxiv.org/abs/2302.14554v1
# FPCD: An Open Aerial VHR Dataset for Farm Pond Change Detection

###### Abstract

Change detection for aerial imagery involves locating and identifying changes associated with the areas of interest between co-registered bi-temporal or multi-temporal images of a geographical location. Farm ponds are man-made structures belonging to the category of minor irrigation structures, used to collect surface run-off water for future irrigation purposes. Detection of farm ponds from aerial imagery and their evolution over time helps in land surveying to analyze agricultural shifts, policy implementation, seasonal effects and climate changes. In this paper, we introduce a publicly available object detection and instance segmentation (OD/IS) dataset for localizing farm ponds from aerial imagery. We also collected and annotated bi-temporal data over a time-span of 14 years across 17 villages, resulting in a binary change detection dataset called the **F**arm **P**ond **C**hange **D**etection Dataset (**FPCD**). We have benchmarked and analyzed the performance of various object detection and instance segmentation methods on our OD/IS dataset, and of change detection methods over the FPCD dataset. The datasets are publicly accessible at this page: [https://huggingface.co/datasets/ctundia/FPCD](https://huggingface.co/datasets/ctundia/FPCD). 

Object Detection, Instance Segmentation, Change Detection, Remote Sensing. 

## 1 Introduction

Accurate and timely detection of geographical changes on the Earth's surface gives extensive information about the various activities and phenomena happening on Earth. The change detection task helps in analyzing and understanding co-registered images for change information. A change instance between two images refers to the semantic-level differences in appearance between the two images, in association with the regions of interest captured at different points in time. On geographical images, it helps to keep track of changes, to analyze the evolution of land geography or land objects, and to mitigate hazards at local and global scales. The availability of high-resolution aerial imagery has enabled land use and land cover monitoring to detect objects such as wells, farm ponds, check dams, etc. at the instance level. 1 

Footnote 1: Authors _a,b_ made equal contribution 

Change detection can be bi-temporal, when two points in time are compared, or multi-temporal, when multiple points in time are compared. When multi-temporal data is captured by satellites, drones or aerial vehicles, it is constrained by spatial, spectral and temporal elements, in addition to atmospheric conditions, resolution, etc. Cloud cover, shadows and seasonal changes become visible in many aerial images, affecting the overall appearance of the images and causing the co-registered images to appear to be from different domains. Moreover, acquiring and annotating images from satellite imagery to build change detection datasets is a costly process that involves many underlying tasks. The absence of paired images of the same location captured at different times makes it difficult to obtain a useful change detection dataset. Even when paired bi-temporal images are available, there may be no changes present, or the bi-temporal images may have been captured at very short intervals. Farm ponds have become popular as private irrigation sources in developing countries like India over the last two decades. 
A farm pond is an artificial dug-out structure having an inlet and an outlet for collecting the surface runoff water flowing from the farm area (Tundia et al., 2020). They are used to collect and store rainwater so as to provide irrigation to crops during periods of water scarcity.

Figure 1: Different categories of farm ponds. From left to right: Wet farm pond (lined), Dry farm pond (lined), Wet farm pond (unlined) and Dry farm pond (unlined).

A farm pond is one of the many minor irrigation structures (Tundia et al., 2022), with a cultivable command area of up to 2000 hectares. Farm ponds can be classified into two categories based on the presence or absence of water: wet farm ponds and dry farm ponds. Also, farm ponds can be either lined or unlined, depending on the use of plastic lining to prevent water seepage into groundwater. Overall, farm ponds can be classified into 4 sub-categories (see Fig 1): lined wet farm pond, unlined wet farm pond, lined dry farm pond and unlined dry farm pond, based on their structure and the presence of water. Though efforts have been made to promote farm ponds, serious concerns have been raised over their implementation and usage. The purpose of building farm ponds has long drifted from their original objective of storing rainwater for protective irrigation to being used as storage tanks for pumped-out groundwater, exposing the underground water to evaporation losses. Over time, farm ponds have accelerated the rate of groundwater exploitation manyfold (Prasad et al., 2022). The objective of our work is to develop change detection models to detect and visualize changes associated with farm ponds and to compute percentage differences in the presence/absence of wet/dry farm ponds. This can help in making policy decisions and in analyzing the impacts of farm ponds on aspects like shifts in agricultural practice, policies being implemented, seasonal effects and changes, etc. 

### Contributions

Our contributions in this paper are: 1. **Farm Pond Change Detection Dataset (FPCD)**: A publicly available dataset for change detection tasks on farm pond categories, for different purposes and stakeholders. 2. A small-scale public dataset for object detection and instance segmentation of four farm pond categories. 

The paper is organized as follows: section 1.3 covers the related work, section 2 covers the details of the proposed dataset, section 3 covers the experiments, section 4 covers the results and observations, and finally the conclusion is in section 5. 

### Problem Formulation

Generally, change detection tasks involve an input set of multi-temporal images and the corresponding ground truth mask, with most change detection datasets having bi-temporal images mapped to binary mask labels. A general assumption in most change detection tasks is that there is a pixel-to-pixel correspondence between the two images and these correspondences are registered to the same point on a geographical area. Based on the correspondences, each pixel in the change mask can be assigned a label indicating whether there is a change or not. In other words, pixels belonging to the change mask are assigned a change label if the corresponding area of interest has geographical changes, and are not assigned a change label when there are no changes. 
In a binary change detection setting, the paired input images are \(T_{0}\in\mathbb{R}^{C\times H\times W}\) and \(T_{1}\in\mathbb{R}^{C\times H\times W}\), where \(H\) and \(W\) are the spatial dimensions and \(C=3\) is the number of input image channels. The ground truth can be represented as a pixel-wise mask label \(M\in\mathbb{R}^{H\times W}\) for the bi-temporal input images of the change detection task. 

### Related Work

#### 1.3.1 Datasets

Change detection datasets generally use RGB images (Chen and Shi, 2020) or hyperspectral images (Daudt et al., 2018), with some datasets (Van Etten et al., 2021) having up to 11 million change instances. The image resolution of CD datasets ranges from a few centimeters (Ji et al., 2019), (Shao et al., 2021), (Tian et al., 2020) to 10 meters (Van Etten et al., 2021), (Daudt et al., 2018). Some of these datasets have image pairs ranging from a few hundred (Liu et al., 2022) to tens of thousands (Shi et al., 2022), and some have very few input image pairs of very high resolution. Most of the CD datasets have modest image sizes, with the exception of a few (Wu et al., 2017), (Ji et al., 2019). We summarize the different aspects of various change detection datasets along with their details in Table 1. 

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline **Dataset** & **Image Pairs** & **Res. (m)** & **Image Size** & **Bands** & **Mask Type** & **Change Inst.** \\ \hline \hline CLED (Ji et al., 2022) & 600 & 0.5 to 2 & 512 \(\times\) 512 & RGB & Binary & – \\ \hline S2Looking (Shen et al., 2021) & 5000 & 0.5 to 0.8 & 1024 \(\times\) 1024 & RGB & Binary & 65,920 \\ \hline SYSU-CD (Shi et al., 2022) & 20000 & 0.5 & 256 \(\times\) 256 & RGB & Binary & – \\ \hline DSIFN (Zhang et al., 2020) & 3988 & – & 512 \(\times\) 512 & RGB & Binary & – \\ \hline LEVIR-CD (Chen and Shi, 2020) & 637 & 0.5 & 1024 \(\times\) 1024 & RGB & Binary & 31,333 \\ \hline WHU Building CD (Ji et al., 2019) & 1 & – & 32507 \(\times\) 15354 & RGB & Binary & 2297 \\ \hline SZTAKI & 13 & 1.5 & 952 \(\times\) 640 & RGB & Binary & 382 \\ \hline **FPCD** & **684** & **0.156** & **1024 \(\times\) 768** & **RGB** & **Binary** & **–** \\ \hline \end{tabular} \end{table} Table 1: Comparison of change detection datasets. 

#### 1.3.2 Change Detection Techniques

Deep learning based change detection methods can be classified into feature-based, patch-based and image-based deep learning change detection. Earlier CNN architectures used siamese networks with triplet loss (Zhang et al., 2019) and weighted contrastive loss (Zhan et al., 2017) to learn discriminative features between change and no-change images, and used Euclidean distances between the image features to generate the difference images. Recent developments in deep learning have led to the usage of attention mechanisms and feature fusion at various scales, improving feature extraction capabilities. We compare and list some of the existing encoder-decoder based models used for change detection below. DeepLabv3+ (Chen et al., 2018) uses a spatial pyramid pooling module to encode multi-scale context at multiple effective fields-of-view and an encoder-decoder structure to capture sharper object boundaries. It improves upon DeepLabv3 (Chen et al., 2017) by applying depthwise separable convolution to both the Atrous Spatial Pyramid Pooling and decoder modules. 
Pyramid Scene Parsing Network (PSPNet) (Zhao et al., 2017), along with its pyramid pooling module, applies different-region-based context aggregation to produce a global prior representation for pixel-level prediction tasks. Unified Perceptual Parsing Network (UPerNet) (Xiao et al., 2018) is a multi-task framework that can recognize visual concepts from a given image, using a training strategy developed to learn from heterogeneous image annotations. Multi-scale Attention Net (MA-Net) (Fan et al., 2020) uses multi-scale feature fusion to improve segmentation performance by introducing a self-attention mechanism to integrate local features with their global dependencies. It uses a Position-wise Attention Block (PAB) to model the feature inter-dependencies in the spatial dimensions and a Multi-scale Fusion Attention Block (MFAB) to capture the channel dependencies between feature maps through multi-scale semantic feature fusion. LinkNet (Chaurasia and Culurciello, 2017) proposes a novel deep neural network architecture with only 11.5 million parameters, enabling learning without any significant increase in the number of parameters. Pyramid Attention Network (PAN) (Li et al., 2018) uses a Feature Pyramid Attention module and a Global Attention Upsample module to combine attention mechanisms and spatial pyramids to extract precise dense features for pixel labeling. UNet++ (Zhou et al., 2018) improves on UNet (Ronneberger et al., 2015) with new skip pathways, and is based on the idea that the optimizer can learn more easily with semantically similar feature maps, by reducing the gap between the feature maps of the encoder and decoder sub-networks. Bitemporal Image Transformer (BiT) (Hao Chen and Shi, 2021) models contexts within the spatial-temporal domain in a deep feature differencing-based CD framework. The bi-temporal images are encoded as tokens, and a transformer encoder models the contexts in the compact token-based space-time. A transformer decoder then refines the original features from the learned context-rich tokens back to the pixel space. 

#### 1.3.3 Object Detection and Instance Segmentation Techniques

Object detection is a computer vision task that localizes and classifies objects of interest in an image. Instance segmentation, on the other hand, provides a detailed inference for every single pixel in the input image. Since the advent of deep learning (Kumar et al., 2021), object detection has vastly benefited from sophisticated architectures and image representations. Over the years, deep learning based detectors have evolved into one-stage and two-stage detectors. In one-stage detection, the input image is divided into regions simultaneously with the probabilistic prediction of objects, while in two-stage object detection the object proposals are classified in the second stage from a sparse set of candidate object proposals generated in the first stage. A few of the well-known two-stage detectors include FasterRCNN (Ren et al., 2017) and GridRCNN (Lu et al., 2019), while YOLOv3 (Redmon and Farhadi, 2018), Generalized Focal Loss (Li et al., 2020) and Gradient Harmonized SSD (Li et al., 2019) are one-stage detectors. Detecting objects from aerial imagery is affected by adversities like viewpoint variation, illumination, occlusion, etc., and becomes difficult due to objects being small, sparse and non-uniform in high-resolution aerial images. 
Table 2: Temporal object instances and their respective change classes (image examples). Row 1 corresponds to T0 images, row 2 corresponds to T1 images, row 3 corresponds to the change mask. Columns - (a,b) No Change, (c) Farm pond constructed, (d) Farm pond demolished, (e) Farm pond dried and (f) Farm pond wetted. 

## 2 Farm Pond Change Detection (FPCD) Dataset

FPCD is a novel publicly available change detection dataset that focuses on changes associated with irrigation structures in India. The details of the multi-temporal images, including the district, village name and the time-interval over which the images were collected, are summarized in Table 5. 

### Dataset Collection Technique

The images were collected by following the steps given below:

1. **Area and Timestamp Selection:** The ground truth information about the farm pond locations and the list of villages were collected from different sources like news articles, reports and web-pages, along with Jalyukt Shivar data (Maharashtra Remote Sensing Application Centre (MRSAC), 2015). The locations of the farm ponds span across the different districts of Maharashtra, India. The villages were then rigorously filtered and chosen based on the presence or absence of farm ponds, and bi-temporal pairs were decided by visual inspection of various timestamps.

2. **Grid Formation:** For collecting images of a fixed size for each village, grids of latitude and longitude at a fixed zoom level of Google Maps were needed. We selected zoom level 18, which provides sub-meter resolution. Grids of size 1024 \(\times\) 768 pixels were created by using the village boundaries provided by the Indian Village Boundaries Project (DataMeet, 2019) in the geo-location dimensions, i.e. latitudes and longitudes.

3. **Image Collection:** The Google Earth Pro desktop software was used for collecting historical imagery. For each grid cell, the map was set such that it would cover the grid cell boundaries with the view. Then the bi-temporal images were saved by setting the timestamps one at a time using the Google Earth Historical Imagery tool.

4. **Object & Instance Annotation:** Once we had collected all the image pairs for each village, we annotated the objects individually in each image. Depending on the types of farm ponds, there are four main classes, as given in Table 4. We use the annotation tool LabelMe (Russell et al., 2008) to annotate and label the above mentioned classes. We further convert them into the COCO (Lin et al., 2014) format required for the object detection and instance segmentation tasks, thus forming the Farm Pond OD/IS dataset.

5. **Change Mask Generation:** In this step, we take a bi-temporal pair of images and the corresponding farm pond annotations. Based on the location and farm pond category, we generate different types of change masks, with change classes as given in Table 2. More details on the types of changes are given in subsection 2.2.

### Dataset Details

A total of 694 images of size 1024 \(\times\) 768 pixels at zoom level 18 were collected from Google Earth images using the technique described in subsection 2.1. The regions of Maharashtra in India were chosen, since it is a largely groundwater-dependent region in western India. The images collected at zoom level 18 are at a very high resolution scale, up to 1 meter. 
The details of the villages collected and their respective timestamps are given in Table 5. Most of the villages have timestamps during the months of Jan-April; the minimum year difference between bi-temporal images is 2 years and the maximum year difference is 9 years, the earliest being 2007 and the latest being 2021. The FPCD dataset consists of image pairs, change masks and object annotations of farm ponds as polygons set in the COCO (Lin et al., 2014) format. 

\begin{table} \begin{tabular}{|c|c|} \hline **Change Class** & **No. of Inst.** \\ \hline Farm pond constructed & 431 \\ \hline Farm pond demolished & 47 \\ \hline Farm pond dried & 90 \\ \hline Farm pond wetted & 90 \\ \hline \end{tabular} \end{table} Table 3: Class distribution of change objects. 

\begin{table} \begin{tabular}{|c|c|} \hline **Annotation Class** & **No. of Inst.** \\ \hline Wet farm pond (lined) & 207 \\ \hline Dry farm pond (lined) & 203 \\ \hline Wet farm pond (unlined) & 90 \\ \hline Dry farm pond (unlined) & 608 \\ \hline \end{tabular} \end{table} Table 4: Class distribution of object instances in the OD/IS dataset. 

Table 5: The district, village name, timestamp pair and number of image pairs for each village. 

For farm pond change detection, we identify four change classes, i.e. Farm pond constructed, Farm pond demolished, Farm pond dried and Farm pond wetted. Let us consider the bi-temporal pair as T0-T1, T0 being the image captured at the old timestamp and T1 being the image captured at the new timestamp. We identify a binary change as FP constructed when there is no farm pond in the T0 image, but a farm pond is observed at the same location in the T1 image. Likewise, we identify a binary change as FP demolished when a farm pond existed at any location in the T0 image and there is no farm pond at the same location, or it is replaced by different terrain, in the T1 image. We identify a binary change as farm pond dried when there existed a wet farm pond in the T0 image and the farm pond at the same location in the T1 image becomes dry, due to the absence of water or the presence of a low water level. We identify a binary change as farm pond wetted when there existed a dry farm pond in the T0 image and the farm pond at the same location in the T1 image is filled with water, or the water reaches the surface level. Some farmers may pump groundwater and fill the farm ponds instead of using surface-water runoff due to rain, leading to water loss by further evaporation. This leads to depletion of groundwater, and the practice of farm pond based irrigation becoming unsustainable (Prasad et al., 2022). Thus, grouping the farm pond constructed and farm pond demolished classes helps in identifying the increase/decrease in farm ponds. This helps stakeholders like researchers and policy makers to monitor the impacts due to an increase/decrease in farm ponds, like changes in agricultural patterns, impact on ground water, etc. We classify this task as Task-1. In certain dry and semi-arid regions, due to uncertain climatic conditions, yearly and seasonal rainfall is erratic. Agricultural officers and researchers often correlate climatic conditions and ground water levels with such changes to infer further observations, eventually leading to necessary interventions for agricultural sustainability. Thus we group Farm pond dried and Farm pond wetted into Task-2. 
We also combined the above grouped changes so as to provide overall statistics. We call this Task-3. We further explain the experimental details of Task-1, Task-2 and Task-3 in Section 3. The details of the change class distribution are given in Table 3. 

## 3 Experiments and Evaluation

Tasks like instance segmentation (Lin et al., 2014) and image segmentation (Lin et al., 2014) involve a pipeline similar to that of change detection. We used the change detection framework (Kaiyu Li, 2021) for conducting most of our CD-based experiments. This PyTorch-based change detection framework supports many models, encoders and deep architectures. In this change detection pipeline, the bi-temporal images are encoded into feature vectors using two encoders. The encoders can be of either siamese or non-siamese type. The input images are encoded into feature vectors, which are fused either by concatenating, summing, subtracting or taking the absolute difference of the vectors. The resulting feature vectors are decoded into an output, which is compared to the ground truth mask. We apply simple transforms like flipping, scaling and cropping for augmentation of the dataset images at the global image level. Unlike other tasks, for change detection the same transform has to be applied to the multi-temporal images. 

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{**Task 1**} & \multicolumn{2}{c|}{**Task 2**} & \multicolumn{2}{c|}{**Task 3**} \\ \hline **Type** & Neg & Pos & Neg & Pos & Neg & Pos \\ \hline **Train** & 363 & 237 & 534 & 66 & 257 & 17 \\ \hline **Validation** & 52 & 4 & 83 & 11 & 45 & 49 \\ \hline \end{tabular} \end{table} Table 6: Distribution of positive and negative image pairs for the various change detection tasks. 

Tables 7–9: Precision, recall and F-score of the change detection models on Task-1, Task-2 and Task-3 with ResNet-18, ResNet-50 and ResNet-101 encoders (the values are discussed in Section 4). 

### Evaluation Criteria

We report the precision, recall and F-score values as the metrics to compare performance on the different change detection tasks under various encoder backbone settings, defined as below: \[Precision=\frac{TP}{TP+FP} \tag{1}\] \[Recall=\frac{TP}{TP+FN} \tag{2}\] \[F\text{-}Score=\frac{2\times Precision\times Recall}{Precision+Recall} \tag{3}\] where TP is the number of true positives, FP is the number of false positives, TN is the number of true negatives and FN is the number of false negatives. We report the bounding box mean average precision (bbox mAP) and segmentation mean average precision (segm mAP) as the performance metrics to compare the various object detection and instance segmentation methods under different backbone settings in the benchmark. Intersection Over Union (IOU) is a measure that evaluates the overlap between the ground truth and predicted bounding boxes, and helps to determine whether a detection is a true positive. _Average Precision_ (AP) is obtained by interpolating the precision at each recall level \(r\), taking the maximum precision whose corresponding recall value is greater than or equal to \(r\). Finally, the mean of the AP over all classes gives the _mean Average Precision_ (mAP), the metric used to compare different detectors. 
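For concreteness, the following minimal sketch (an illustration, not part of the released benchmark code) computes the pixel-wise metrics of Eqs. (1)-(3), together with IoU, for a pair of binary change masks.

```python
import numpy as np

def binary_change_metrics(pred, gt):
    """Pixel-wise precision, recall, F-score and IoU for binary change masks.

    `pred` and `gt` are HxW arrays with 1 = change and 0 = no change;
    the formulas mirror Eqs. (1)-(3) above.
    """
    tp = np.logical_and(pred == 1, gt == 1).sum()
    fp = np.logical_and(pred == 1, gt == 0).sum()
    fn = np.logical_and(pred == 0, gt == 1).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return precision, recall, f_score, iou

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt = np.array([[1, 0, 0], [0, 1, 1]])
print(binary_change_metrics(pred, gt))  # (0.667, 0.667, 0.667, 0.5)
```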
### Experiments

We benchmarked various object detection (OD) and instance segmentation (IS) methods on the farm pond OD/IS dataset. We use an existing encoder-decoder based change detection framework [10] for the binary change detection tasks corresponding to farm pond construction/demolition (Task 1) and farm pond dried/wetted (Task 2). The encoder is of siamese type, and the features from each branch are fused by concatenation. A few experiments were conducted to analyze the effect of various components of the change detection pipeline. We combined the masks from Task 1 and Task 2 to understand the impact of multiple change classes on performance. Combining the tasks leads to masks having more change objects than either of the tasks (refer to Fig 2). The train-validation split for each task is given in Table 6. We split the farm ponds dataset for the OD/IS task in an 80/20 train/test proportion. We used the MMDetection [1] framework for implementing the various methods in the OD/IS benchmark. We used pretrained backbones of ResNet-50, ResNet-101 and MobileNetv2, primarily trained on ImageNet, COCO and Cityscapes, for the object detection experiments, and ResNet-50, ResNet-101 and Swin Transformer pretrained backbones for the instance segmentation experiments. 

## 4 Results and Observations

The results of change detection on farm pond construction/demolition (Task-1), taking ResNet-18, ResNet-50 and ResNet-101 as encoders, are given in Table 7. The BiT model, despite not having the best precision or recall values, performs the best in terms of F-score, with a score of **0.8704**. MANet, despite having the highest precision, scores poorly on recall, leading to a poor F-score, while FPN scores the highest recall value. FPN has the best recall with ResNet-101 and is consistently better than its smaller counterparts, ResNet-18 and ResNet-50. The results of change detection on farm pond dried/wetted (Task-2), taking ResNet-18, ResNet-50 and ResNet-101 as encoders, are given in Table 8. We can notice that UNet has the best F-score of **0.9171** and the highest precision for ResNet-18. For the ResNet-50 encoder, we can observe that FPN has the best F-score of **0.9125**, though DeepLabV3 has the best recall and UPerNet the best precision values. DeepLabV3+ has the highest recall value, with comparatively smaller precision than the best performing model. If we observe carefully, Task-2 has higher overall scores irrespective of the encoder architecture. This may be due to the presence of farm ponds at the same locations in both temporal images where the change is registered, and also due to the fact that the number of positives is much smaller than the number of negative images for the task. 

Figure 2: T0 and T1 input image pair in the top row, and the corresponding image masks for Task 1, Task 2 and Task 3 from left to right in the bottom row. 

The results of combining the change detection tasks (Task-1 and Task-2) as Task-3, taking ResNet-18, ResNet-50 and ResNet-101 as encoders, are given in Table 9. For the ResNet-18 encoder, we can notice that BiT has the best F-score of **0.8741** and the highest recall. For the ResNet-50 encoder, we can notice that PSPNet has the best precision, but with poor recall values affecting the F-score. We also conducted an ablation study on Task 3 to check whether the absence of negative examples in the training set could affect the change detection task performance. 
We can see that BiT achieves the best F-score with ResNet-18 as the encoder, but we can also see that there is a decrease in precision when compared to the best model on the complete dataset, despite better recall values. The results for F-scores also show a similar trend, suggesting that even with a high ratio, the performance is comparable even in the absence of negative images in the train set. Similarly, we observe that UNet++, with the best F-score, trails the model trained on the entire dataset by only a few points; this further supports the idea that positive samples aid in learning, while negative images may increase the model's robustness. In this section, we also analyze the performance of the object detection and instance segmentation tasks on the farm pond OD/IS dataset. The results of the comparison of various existing object detection models are given in Table 11. For the object detection task, the Probabilistic Anchor Assignment (PAA) model achieves the best performance of **0.575** mAP at IoU(0.5:0.95) with the ResNet-50 backbone. We can notice that Empirical Attention with the attention component (0010) achieves the best mAP under IoU(0.75). Guided Anchoring with FasterRCNN performs the best in bbox mAP(0.5) using the ResNet-50 backbone. For the instance segmentation task, many models have the capability to generate segmentation masks along with bounding box results, and the COCO metrics for both are given in Table 10. We can notice that CascadeRCNN with ResNet-50 performs the best, with **0.577** bbox mAP(0.5:0.95) and **0.694** bbox mAP(0.75), indicating it to be a robust detector. Swin Transformer has the best bbox mAP(0.5) with its own independent backbone. For instance segmentation, Deformable ConvNetsv2 with MRCNN using ResNet-50 as the backbone performs the best on the segm mAP metric with IoU(0.5:0.95), while MaskRCNN with ResNet-50 performs the best in segm mAP(0.75) and Swin Transformer performs the best under the segm mAP(0.5) metric. 

## 5 Conclusion

The availability of high-resolution images due to advances in remote sensing capabilities has led to the use of deep learning models for change detection. In this paper, we introduced **FPCD**, a publicly available dataset for change detection tasks for a minor irrigation structure - farm ponds. We also introduced a small-scale public dataset for object detection and instance segmentation of four farm pond categories. Future work can address the issues associated with limited data availability and the class imbalance in the dataset.
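As a companion to the fusion-based pipeline described in Section 3 (siamese encoders, concatenation fusion, decoding to a change mask), the following is a minimal PyTorch sketch. It is a toy illustration under assumed channel sizes, not the benchmarked framework [10].

```python
import torch
import torch.nn as nn

class SiameseCD(nn.Module):
    """Toy siamese change detector: a shared encoder embeds T0 and T1,
    features are fused by concatenation, and a decoder outputs per-pixel
    change logits (channel sizes are illustrative)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(      # shared (siamese) encoder branch
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(      # fuse-then-decode head
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),           # logits for the binary change mask
        )

    def forward(self, t0, t1):
        fused = torch.cat([self.encoder(t0), self.encoder(t1)], dim=1)
        return self.decoder(fused)

# A downsized stand-in for a 1024 x 768 FPCD bi-temporal pair.
t0, t1 = torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256)
logits = SiameseCD()(t0, t1)
print(logits.shape)  # torch.Size([1, 1, 256, 256])
```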
2309.16031
DynaCon: Dynamic Robot Planner with Contextual Awareness via LLMs
Mobile robots often rely on pre-existing maps for effective path planning and navigation. However, when these maps are unavailable, particularly in unfamiliar environments, a different approach become essential. This paper introduces DynaCon, a novel system designed to provide mobile robots with contextual awareness and dynamic adaptability during navigation, eliminating the reliance of traditional maps. DynaCon integrates real-time feedback with an object server, prompt engineering, and navigation modules. By harnessing the capabilities of Large Language Models (LLMs), DynaCon not only understands patterns within given numeric series but also excels at categorizing objects into matched spaces. This facilitates dynamic path planner imbued with contextual awareness. We validated the effectiveness of DynaCon through an experiment where a robot successfully navigated to its goal using reasoning. Source code and experiment videos for this work can be found at: https://sites.google.com/view/dynacon.
Gyeongmin Kim, Taehyeon Kim, Shyam Sundar Kannan, Vishnunandan L. N. Venkatesh, Donghan Kim, Byung-Cheol Min
2023-09-27T21:21:40Z
http://arxiv.org/abs/2309.16031v1
# DynaCon: Dynamic Robot Planner with Contextual Awareness via LLMs

###### Abstract

Mobile robots often rely on pre-existing maps for effective path planning and navigation. However, when these maps are unavailable, particularly in unfamiliar environments, a different approach becomes essential. This paper introduces DynaCon, a novel system designed to provide mobile robots with contextual awareness and dynamic adaptability during navigation, eliminating the reliance on traditional maps. DynaCon integrates real-time feedback with an object server, prompt engineering, and navigation modules. By harnessing the capabilities of Large Language Models (LLMs), DynaCon not only understands patterns within given numeric series but also excels at categorizing objects into matched spaces. This facilitates a dynamic path planner imbued with contextual awareness. We validated the effectiveness of DynaCon through an experiment where a robot successfully navigated to its goal using reasoning. Source code and experiment videos for this work can be found at: [https://sites.google.com/view/dynacon](https://sites.google.com/view/dynacon). 

## I Introduction

Humans commonly refer to maps to gauge the distance from their current location to their intended destination when finding a path. Similarly, a mobile robot acquires data about a given environment beforehand, determines its location, and navigates to its endpoint [1]. However, challenges arise when the robot's performance is suboptimal, or when the map is unavailable or contains noise, making standard path planning difficult [2]. In such situations, it is imperative for the robot to infer unknown information using its memory and to predict the desired goal's position. This inference is often called **Contextual Awareness** in navigation. Recently, Large Language Models (LLMs) have garnered attention due to their proficiency in reading and generalizing the contexts of input sentences. Inspired by the rising trend of LLM applications across various studies, we tackle harnessing LLMs for context-aware robot navigation. Our objective is to integrate an LLM into a mobile robot, empowering it to understand the context of nearby objects by processing information in sentence form. Nevertheless, it is essential to prioritize real-time object detection and responses to environmental changes for successful navigation without collision [3][4]. Therefore, our ultimate aim is to equip the mobile robot with the ability to comprehend its surroundings, thereby enabling efficient navigation towards its destination, even when deprived of pre-existing world information. In this work, we introduce DynaCon, a Dynamic path planner with Contextual awareness via LLMs (Fig. 1). This path planner continually updates the list of surrounding objects whenever changes occur in their presence, thus significantly improving the accuracy of real-time planning for mobile robots. More importantly, it empowers the mobile robot to engage in contextual navigation, enabling it to reach its destination in an unknown environment. To enhance the versatility of DynaCon, we employ two distinct approaches for contextual estimation: pattern recognition, or _pattern-based reasoning_, and classification, or _categorical reasoning_. The contributions of this paper are as follows: * We propose DynaCon, an innovative framework that integrates prompt engineering, real-time feedback, and navigation operations. DynaCon empowers effective robot navigation, even in challenging unknown and dynamic environments. 
* We introduce a prompt engineering structure, categorized into Role, Main Task, and Instruction components (see the sketch after this list). This structure enhances the learning capabilities of the LLM, making it adept at handling complex information. * We craft our prompts for the LLM to enable reasoning in both _pattern-based_ and _categorical_ manners. This approach enhances the LLM's ability for generalized robot navigation, bolstering its adaptability and effectiveness across diverse scenarios. 

Fig. 1: Given input sentences, DynaCon can estimate goals through either _pattern-based reasoning_ (top: identifying patterns within numerical sequences) or _categorical reasoning_ (bottom: categorizing objects into specific rooms). In both scenarios, DynaCon can execute navigation with periodic updates in detection, even in the absence of a map.
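The sketch below illustrates how a Role / Main Task / Instruction prompt could be assembled from the robot's detected objects. The function name and prompt wording are illustrative assumptions, not DynaCon's actual prompts.

```python
def build_dynacon_style_prompt(role, main_task, instruction, detected_objects):
    """Assemble a prompt in the Role / Main Task / Instruction structure
    described above; the wording is a hypothetical example."""
    object_list = ", ".join(detected_objects)
    return (
        f"Role: {role}\n"
        f"Main Task: {main_task}\n"
        f"Instruction: {instruction}\n"
        f"Currently detected objects: {object_list}\n"
        "Answer with the most likely room for the goal object."
    )

prompt = build_dynacon_style_prompt(
    role="You are a mobile robot navigating an unknown building.",
    main_task="Find the refrigerator without a pre-built map.",
    instruction="Categorize the detected objects into rooms and reason "
                "about which room most likely contains the goal.",
    detected_objects=["sofa", "television", "sink", "oven"],
)
print(prompt)
```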
2303.18047
Differentially Private Stochastic Convex Optimization in (Non)-Euclidean Space Revisited
In this paper, we revisit the problem of Differentially Private Stochastic Convex Optimization (DP-SCO) in Euclidean and general $\ell_p^d$ spaces. Specifically, we focus on three settings that are still far from well understood: (1) DP-SCO over a constrained and bounded (convex) set in Euclidean space; (2) unconstrained DP-SCO in $\ell_p^d$ space; (3) DP-SCO with heavy-tailed data over a constrained and bounded set in $\ell_p^d$ space. For problem (1), for both convex and strongly convex loss functions, we propose methods whose outputs could achieve (expected) excess population risks that are only dependent on the Gaussian width of the constraint set rather than the dimension of the space. Moreover, we also show the bound for strongly convex functions is optimal up to a logarithmic factor. For problems (2) and (3), we propose several novel algorithms and provide the first theoretical results for both cases when $1<p<2$ and $2\leq p\leq \infty$.
Jinyan Su, Changhong Zhao, Di Wang
2023-03-31T13:29:27Z
http://arxiv.org/abs/2303.18047v1
# Differentially Private Stochastic Convex Optimization in (Non)-Euclidean Space Revisited

###### Abstract

In this paper, we revisit the problem of Differentially Private Stochastic Convex Optimization (DP-SCO) in Euclidean and general \(\ell_{p}^{d}\) spaces. Specifically, we focus on three settings that are still far from well understood: (1) DP-SCO over a constrained and bounded (convex) set in Euclidean space; (2) unconstrained DP-SCO in \(\ell_{p}^{d}\) space; (3) DP-SCO with heavy-tailed data over a constrained and bounded set in \(\ell_{p}^{d}\) space. For problem (1), for both convex and strongly convex loss functions, we propose methods whose outputs could achieve (expected) excess population risks that depend only on the Gaussian width of the constraint set, rather than on the dimension of the space. Moreover, we also show the bound for strongly convex functions is optimal up to a logarithmic factor. For problems (2) and (3), we propose several novel algorithms and provide the first theoretical results for both the case when \(1<p<2\) and the case when \(2\leq p\leq\infty\). 

## 1 Introduction

Learning from data that contains sensitive information has become a critical consideration. It requires machine learning algorithms to not only learn effectively from the training data but also provide a certain level of guarantee on privacy preservation. To address the privacy concern, differential privacy (DP) [12], a rigorous notion of statistical data privacy, has received much attention in the past few years and has become a de facto technique for private data analysis. As two of the most fundamental models in machine learning, Stochastic Convex Optimization (SCO) [34] and its empirical form, Empirical Risk Minimization (ERM), find numerous applications, such as in biomedicine and healthcare. However, as these applications always involve sensitive data, it is essential to design DP algorithms for SCO and ERM, which correspond to the problems of DP-SCO and DP-ERM, respectively. DP-SCO and DP-ERM have been extensively studied for over a decade, starting from [9]. For example, [7] presents the optimal rates of general DP-ERM for both convex and strongly convex loss functions. [5, 14] later study the optimal rates of general DP-SCO, which are later extended by [30, 3] to loss functions that satisfy the growth condition. [6, 2] provide the first study on DP-SCO over non-Euclidean space, i.e., the \(\ell_{p}\) space with \(1\leq p\leq\infty\). While there is a vast number of studies on DP-SCO/DP-ERM, there are still several open problems left, especially the constrained case in Euclidean space where the convex constraint set has some specific geometric structures, and the case where the space is non-Euclidean. In detail, while it has been shown that the optimal rate of DP-ERM over the \(\ell_{2}\)-norm ball depends on \(O(\sqrt{d})\) and \(O(d)\) for convex and strongly convex loss, respectively [7], [31] show that for a general constraint set \(\mathcal{C}\) the dependence on \(d\) could be improved to \(O(G_{\mathcal{C}})\) and \(O(G_{\mathcal{C}}^{2})\) for these two classes of functions, where \(G_{\mathcal{C}}\) is the Gaussian width of the set \(\mathcal{C}\) (see Definition 12 for details), which could be far less than the dimension \(d\). However, compared to DP-ERM with Gaussian width, DP-SCO with Gaussian width is far from well understood. The best-known result cannot even recover the optimal rate of the \(\ell_{2}\)-norm ball case [1]. 
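To make the role of the Gaussian width concrete, the following short Monte-Carlo sketch estimates \(G_{\mathcal{C}}=\mathbb{E}_{g\sim\mathcal{N}(0,\mathbb{I}_d)}[\sup_{w\in\mathcal{C}}\langle g,w\rangle]\) for the unit \(\ell_{2}\) and \(\ell_{1}\) balls. This is an illustration (the helper `gaussian_width` is hypothetical), using the standard facts that the support function of the unit \(\ell_{2}\) ball is \(\|g\|_{2}\) and that of the unit \(\ell_{1}\) ball is \(\|g\|_{\infty}\); for the latter, the width scales like \(\sqrt{\log d}\), far below the dimension \(d\).

```python
import numpy as np

def gaussian_width(support_fn, d, n_samples=2000, seed=0):
    """Monte-Carlo estimate of G_C = E[ sup_{w in C} <g, w> ] for g ~ N(0, I_d),
    where support_fn(g) evaluates the support function sup_{w in C} <g, w>."""
    rng = np.random.default_rng(seed)
    samples = rng.standard_normal((n_samples, d))
    return float(np.mean([support_fn(g) for g in samples]))

d = 1000
# Unit l2 ball: support function is ||g||_2, so G_C is about sqrt(d) ~ 31.6.
print(gaussian_width(lambda g: np.linalg.norm(g), d))
# Unit l1 ball: support function is ||g||_inf, so G_C is about sqrt(2 log d) ~ 4,
# which is why width-dependent bounds can beat dimension-dependent ones.
print(gaussian_width(lambda g: np.abs(g).max(), d))
```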
For the non-Euclidean case, [6] only study the constrained case where the constraint set has a bounded diameter. The theoretical behaviors for the unconstrained case are still unknown. Moreover, in the Euclidean case, there has recently been a line of work focusing on DP-SCO where the distribution of loss gradients is heavy-tailed rather than uniformly bounded [36, 19, 23]. However, non-Euclidean DP-SCO with heavy-tailed data has not been studied so far. In this paper, we study the theoretical behaviors of three problems: (1) DP-SCO (with Lipschitz loss) over a convex constraint set \(\mathcal{C}\) in Euclidean space; (2) unconstrained DP-SCO in \(\ell_{p}^{d}\) space; (3) DP-SCO with heavy-tailed data over a convex constraint set \(\mathcal{C}\) in \(\ell_{p}^{d}\) space. Specifically, our contributions can be summarized as follows. 

**1.** For problem (1), we consider both convex and strongly convex (smooth) loss functions. We show that for convex functions, there is an \((\epsilon,\delta)\)-DP algorithm whose output could achieve an (expected) excess population risk of \(O(\frac{G_{\mathcal{C}}\sqrt{\log(1/\delta)}}{\epsilon n}+\frac{1}{\sqrt{n}})\), where \(n\) is the sample size. The rate could be improved to \(O(\frac{G_{\mathcal{C}}^{2}\log(1/\delta)}{n^{2}\epsilon^{2}}+\frac{1}{n})\) for strongly convex functions. Moreover, we also show that the bound for strongly convex functions is optimal up to a factor of \(\text{Poly}(\log d)\) if \(\mathcal{C}\) is contained in the unit \(\ell_{2}\)-norm ball. To the best of our knowledge, this is the first lower bound for DP-SCO that depends on the Gaussian width. 

**2.** We then study problem (2). Specifically, when \(1<p<2\), we propose a novel method named Noisy Regularized Mirror Descent, which adds regularization terms and Generalized Gaussian noise to Mirror Descent. By analyzing its stability, we show the output could achieve an excess population risk of \(\tilde{O}(\kappa^{\frac{4}{5}}(\frac{\sqrt{d\log(1/\delta)}}{n\epsilon})^{ \frac{2}{5}})\), where \(\kappa=\min\{\frac{1}{p-1},2\log d\}\). We also discuss the case when \(2\leq p\leq\infty\). 

**3.** Finally, we consider problem (3), assuming that the second-order moment of the \(\|\cdot\|_{*}\)-norm of the loss gradient is bounded. When \(1<p<2\), through a noisy, shuffled, and truncated version of Mirror Descent, we show a bound of \(\tilde{O}(\frac{\sqrt[4]{\kappa^{2}d\log(1/\delta)}}{\sqrt{n\epsilon}})\) in the high privacy regime \(\epsilon=\tilde{O}(n^{-\frac{1}{2}})\), and a bound of \(O(\frac{\kappa^{\frac{2}{5}}(d\log(1/\delta))^{\frac{1}{5}}}{(n\epsilon)^{ \frac{1}{5}}})\) for general \(0<\epsilon<1\). We also study the case when \(2\leq p\leq\infty\). 

## 2 Related Work

As there is a long list of work on DP-SCO/DP-ERM, here we just mention the work closest to the problems we study in this paper. See Tables 1 and 2 for detailed comparisons. 

**DP-SCO/DP-ERM with Gaussian width.** For DP-ERM over the \(\ell_{2}\)-norm ball, although [7] show the optimal rates of \(O(\frac{\sqrt{d\log(1/\delta)}}{n\epsilon})\) and \(O(\frac{d\log(1/\delta)}{n^{2}\epsilon^{2}})\) for convex and strongly convex loss, respectively, [31] show that for a general constraint set \(\mathcal{C}\) it is possible to improve the factor \(d\) to the Gaussian width of \(\mathcal{C}\). After that, [24] further improve the rate for generalized linear functions, [39] provide an accelerated algorithm, and [37] extend to non-convex loss functions. 
However, all of them only study the problem of DP-ERM, and their methods cannot be generalized to DP-SCO directly. For DP-SCO, the only known result is given by [1], which studies general convex loss under the setting where there is some public data. As we can see from Table 1, our result significantly improves theirs. Moreover, we show a nearly optimal rate for strongly convex functions, which is the first lower bound of DP-SCO/DP-ERM that depends on the Gaussian width. **DP-SCO in \(\ell_{p}^{d}\) space.** Compared to the Euclidean space case, there is little work on DP-SCO in non-Euclidean (\(\ell_{p}^{d}\)) space. [6] provide the first study of the problem for \(1\leq p\leq\infty\) and propose several results for \(p=1\), \(1<p<2\) and \(2\leq p\leq\infty\). Later [17] further extend to the online setting. However, all the previous algorithms and utility analyses highly rely on the assumption that the diameter of the constrained set is bounded and known, i.e., their results will not hold in the unconstrained case, which is more difficult than the constrained case. In this paper, we fill the gap by providing the first results for unconstrained DP-SCO in \(\ell_{p}^{d}\) space by proposing several new methods. \begin{table} \begin{tabular}{|l|l|l|l|l|} \hline Methods & Problem & Assumption & Convex Bound & Strongly Convex Bound \\ \hline [31] & ERM & Lipschitz & \(\tilde{O}(\frac{G_{\mathcal{C}}}{n\epsilon})\) & \(\tilde{O}(\frac{G_{\mathcal{C}}^{2}}{n^{2}\epsilon^{2}})\) \\ \hline [24] & ERM & Lipschitz and GLM & \(\tilde{O}(\frac{\sqrt{G_{\mathcal{C}}}}{\sqrt{n\epsilon}})\) & — \\ \hline [1] & SCO & Lipschitz & \(\tilde{O}(\frac{\sqrt{G_{\mathcal{C}}}}{\sqrt{n\epsilon}}+\frac{1}{\sqrt{n}})\) & — \\ \hline **This paper** & SCO & Lipschitz & \(\tilde{O}(\frac{G_{\mathcal{C}}}{n\epsilon}+\frac{1}{\sqrt{n}})\) & \(\tilde{O}(\frac{G_{\mathcal{C}}^{2}}{n^{2}\epsilon^{2}}+\frac{1}{n})\) (*) \\ \hline \end{tabular} \end{table} Table 1: Comparisons on the results for \((\epsilon,\delta)\) DP-SCO/DP-ERM in Euclidean space with bounded constraint set \(\mathcal{C}\) (dependence on other parameters are omitted). Here \(G_{\mathcal{C}}\) is the Gaussian width of \(\mathcal{C}\), \(n\) is the sample size, and \(n_{public}\) is the size of public data. \(\tilde{O}\) hides other logarithmic factors. (*): We also show such a bound is nearly optimal when \(\mathcal{C}\) is contained in unit \(\ell_{2}\) ball. \begin{table} \begin{tabular}{|l|l|l|l|l|} \hline Methods & Constrained & Assumption & Bound for \(\ell_{p}^{d}\;(1<p<2)\) & Bound for \(\ell_{p}^{d}\;(2\leq p\leq\infty)\) \\ \hline [6] & Yes & Lipschitz & \(\tilde{O}(\sqrt{\frac{\kappa}{n}}+\frac{\kappa\sqrt{d}}{n\epsilon})\) & \(\tilde{O}(\frac{d^{\frac{1}{2}-\frac{1}{p}}}{\sqrt{n}}+\frac{d^{1-\frac{1}{p} }}{n\epsilon})\) \\ \hline **This paper** & No & Lipschitz & \(\tilde{O}(\kappa^{\frac{4}{3}}\cdot(\frac{\sqrt{d}}{n\epsilon})^{\frac{2}{3}})\) & \(\tilde{O}(d^{1-\frac{2}{p}}(\frac{1}{\sqrt{n}}+\frac{\sqrt{d}}{\epsilon n}))\) \\ \hline **This paper** & Yes & Heavy-tailed & \(\tilde{O}(\frac{\sqrt{\kappa^{2}d}}{\sqrt{n\epsilon}})/\tilde{O}(\frac{\kappa^ {\frac{2}{3}}(d)^{\frac{1}{6}}}{(n\epsilon)^{\frac{2}{3}}})\) (*) & \(\tilde{O}(\frac{d^{\frac{2}{3}-\frac{1}{p}}}{\sqrt{n}}+\frac{d^{\frac{2}{3}- \frac{1}{p}}}{\sqrt{n\epsilon}})\) \\ \hline \end{tabular} \end{table} Table 2: Comparisons on the results for \((\epsilon,\delta)\) DP-SCO in \(\ell_{p}^{d}\) space with \(1<p\leq\infty\) (dependence on other parameters are omitted). 
Here \(d\) is the dimension, \(n\) is the sample size, and \(\kappa=\min\{\frac{1}{p-1},2\log d\}\). \(\tilde{O}\) hides other logarithmic factors. (*): The first bound is for the case of \(\epsilon=\tilde{O}(n^{-\frac{1}{2}})\) and the second one is for general \(0<\epsilon<1\). 

## 3 Preliminaries

In this section, we recall some definitions and lemmas that will be used throughout the paper. 

**Definition 1** (Differential Privacy [12]).: Given a data universe \(\mathcal{X}\), we say that two datasets \(D,D^{\prime}\subseteq\mathcal{X}\) are neighbors if they differ by only one data sample, which is denoted as \(D\sim D^{\prime}\). A randomized algorithm \(\mathcal{A}\) is \((\epsilon,\delta)\)-differentially private (DP) if for all neighboring datasets \(D,D^{\prime}\) and for all events \(S\) in the output space of \(\mathcal{A}\), we have \(\Pr(\mathcal{A}(D)\in S)\leq e^{\epsilon}\Pr(\mathcal{A}(D^{\prime})\in S)+\delta\). 

**Lemma 1** (Advanced Composition Theorem [13]).: Given target privacy parameters \(0<\epsilon<1\) and \(0<\delta<1\), to ensure \((\epsilon,T\delta^{\prime}+\delta)\)-DP over \(T\) mechanisms, it suffices that each mechanism is \((\epsilon^{\prime},\delta^{\prime})\)-DP, where \(\epsilon^{\prime}=\frac{\epsilon}{2\sqrt{2T\ln(2/\delta)}}\) and \(\delta^{\prime}=\frac{\delta}{T}\). 

**Definition 2** (DP-SCO in General Normed Space [6]).: Given a dataset \(D=\{x_{1},\cdots,x_{n}\}\) from a data universe \(\mathcal{X}\), where \(\{x_{i}=(z_{i},y_{i})\}_{i}\) with a feature vector \(z_{i}\) and a label/response \(y_{i}\) are i.i.d. samples from some unknown distribution \(\mathcal{D}\), a normed space \((\mathbf{E},\|\cdot\|)\) of dimension \(d\), a convex constraint set \(\mathcal{C}\subseteq\mathbf{E}\), and a convex loss function \(\ell:\mathcal{C}\times\mathcal{X}\mapsto\mathbb{R}\), Differentially Private Stochastic Convex Optimization (DP-SCO) is to find \(\theta^{\mathrm{priv}}\) minimizing the population risk, _i.e._, \(\mathcal{L}(\theta)=\mathbb{E}_{x\sim\mathcal{D}}[\ell(\theta,x)]\), with the guarantee of being differentially private.1 The utility of the algorithm is measured by the (expected) excess population risk, that is, \(\mathcal{L}(\theta^{\mathrm{priv}})-\mathcal{L}(\theta^{*})\), where \(\theta^{*}=\arg\min_{\theta\in\mathcal{C}}\mathcal{L}(\theta)\). Besides the population risk, we can also measure the _empirical risk_ of dataset \(D\): \(\hat{\mathcal{L}}(\theta,D)=\frac{1}{n}\sum_{i=1}^{n}\ell(\theta,x_{i})\). 

Footnote 1: Note that in this paper we consider the proper learning case, that is, \(\theta^{\mathrm{priv}}\) should be in \(\mathcal{C}\). 

In Definition 2, we consider DP-SCO in a general normed space with a convex set \(\mathcal{C}\subseteq\mathbf{E}\). In this paper, we mainly focus on two cases: 1) the constrained Euclidean case, where \(\mathbf{E}=\mathbb{R}^{d}\), \(\|\cdot\|\) is the \(\ell_{2}\)-norm, and \(\mathcal{C}\) is a bounded set whose diameter is denoted as \(\|\mathcal{C}\|_{2}=\max_{\theta,\theta^{\prime}\in\mathcal{C}}\|\theta- \theta^{\prime}\|_{2}\); 2) the \(\ell_{p}^{d}\) case, where \(\mathbf{E}=\mathbb{R}^{d}\) and \(\|\cdot\|\) is the \(\ell_{p}\)-norm \(\|\cdot\|_{p}\) with \(1<p\leq\infty\) (where \(||x||_{p}=(\sum_{j=1}^{d}|x_{j}|^{p})^{\frac{1}{p}}\)), and \(\mathcal{C}\) could be either bounded or unbounded. Since \(\ell_{p}^{d}\) spaces are regular, we first introduce regular spaces to better illustrate our idea. 
Let \((\mathbf{E},||\cdot||)\) be a normed space of dimension \(d\) and let \(\langle\cdot,\cdot\rangle\) be an arbitrary inner product over \(\mathbf{E}\) (not necessarily inducing the norm \(\|\cdot\|\)). The dual norm over \(\mathbf{E}\) is defined as \(||y||_{*}=\max_{||x||\leq 1}\langle y,x\rangle\). So \((\mathbf{E},||\cdot||_{*})\) is also a \(d\)-dimensional normed space. For example, let \(\ell_{p}^{d}=(\mathbb{R}^{d},||\cdot||_{p})\) with \(1\leq p\leq\infty\); the dual norm of \(\ell_{p}^{d}\) is \(\ell_{q}^{d}\), where \(\frac{1}{p}+\frac{1}{q}=1\). We call a normed space regular if its dual norm is sufficiently smooth. In detail, we have the following definition. **Definition 3** (\(\kappa\)-regular Space [21]).: Given \(\kappa\geq 1\), we say a normed space \((\mathbf{E},||\cdot||)\) is \(\kappa\)-regular if there exists a \(\kappa_{+}\), s.t., \(1\leq\kappa_{+}\leq\kappa\), and there exists a norm \(||\cdot||_{+}\) such that \((\mathbf{E},||\cdot||_{+})\) is \(\kappa_{+}\)-smooth, i.e., for all \(x,y\in\mathbf{E}\), \[||x+y||_{+}^{2}\leq||x||_{+}^{2}+\langle\nabla(||\cdot||_{+}^{2})(x),y\rangle+\kappa_{+}||y||_{+}^{2}.\] Moreover, \(||\cdot||\) and \(||\cdot||_{+}\) are equivalent with the following constraint: \(||x||^{2}\leq||x||_{+}^{2}\leq\frac{\kappa}{\kappa_{+}}||x||^{2}\) (\(\forall x\in\mathbf{E}\)). The \(\ell_{p}^{d}\) space with \(2\leq p\leq\infty\) is \(\kappa\)-regular with \(\kappa=\min\{p-1,2e\log d\}\). In this case we have \(\|x\|_{+}=\|x\|_{r}\) with \(r=\min\{p,2\log d+1\}\) and \(\kappa_{+}=r-1\) [11]. Thus, for the \(\ell_{p}\) spaces with \(1<p<2\) on which we focus, the dual spaces are \(\kappa\)-regular with \(\kappa=\min\{\frac{1}{p-1},2\log d\}\). In the following, we introduce the mechanisms that will be used in the later sections. **Lemma 2** (Gaussian Mechanism).: Given a dataset \(D\in\mathcal{X}^{n}\) and a function \(q:\mathcal{X}^{n}\to\mathbb{R}^{d}\), the Gaussian mechanism is defined as \(q(D)+\xi\) where \(\xi\sim\mathcal{N}(0,\frac{2\Delta_{2}^{2}(q)\log(1.25/\delta)}{\varepsilon^{2}}\mathbb{I}_{d})\), where \(\Delta_{2}(q)\) is the \(\ell_{2}\)-sensitivity of the function \(q\), _i.e.,_ \(\Delta_{2}(q)=\sup_{D\sim D^{\prime}}\|q(D)-q(D^{\prime})\|_{2}\). The Gaussian mechanism preserves \((\epsilon,\delta)\)-DP. Note that the Gaussian mechanism is tailored to the case where the query has bounded \(\ell_{2}\)-norm sensitivity. [6] propose a Generalized Gaussian mechanism that leverages the regularity of the dual space \((\mathbf{E},\|\cdot\|_{*})\). **Definition 4** (Generalized Gaussian distribution [6]).: Let \((\mathbf{E},||\cdot||_{*})\) be a \(d\)-dimensional \(\kappa\)-regular space with smooth norm \(||\cdot||_{+}\). Define the generalized Gaussian distribution \(\mathcal{GG}_{||\cdot||_{+}}(\mu,\sigma^{2})\) as the one with density \(g(z)=C(\sigma,d)\cdot e^{-\frac{||z-\mu||_{+}^{2}}{2\sigma^{2}}}\), where \(C(\sigma,d)=[\text{Area}(\{||x||_{+}=1\})\frac{(2\sigma^{2})^{d/2}}{2}\Gamma(\frac{d}{2})]^{-1}\), and Area is the \(d-1\) dimensional surface measure on \(\mathbb{R}^{d}\). **Lemma 3** (Generalized Gaussian mechanism [6]).: Given a dataset \(D\in\mathcal{X}^{n}\), and a query \(q:\mathcal{X}^{n}\to\mathbf{E}\) with bounded \(||\cdot||_{*}\)-sensitivity: \(s=\sup_{D\sim D^{\prime}}||q(D)-q(D^{{}^{\prime}})||_{*}\), the Generalized Gaussian mechanism is defined as \(q(D)+\xi\) where \(\xi\sim\mathcal{GG}_{||\cdot||_{+}}(0,\frac{2\kappa\log(1/\delta)s^{2}}{\varepsilon^{2}})\).
The Generalized Gaussian mechanism preserves \((\epsilon,\delta)\)-DP. **Lemma 4** (Prop 4.2 in [6]).: For any \(m\geq 1\), if \(z\sim\mathcal{GG}_{||\cdot||_{+}}(0,\sigma^{2})\), then \(\mathbb{E}[\|z\|_{+}^{m}]\leq(2\sigma^{2})^{\frac{m}{2}}\Gamma(\frac{m+d}{2})/\Gamma(\frac{d}{2})\). Specifically, \(\mathbb{E}[\|z\|_{*}^{2}]\leq\mathbb{E}[\|z\|_{+}^{2}]\leq d\sigma^{2}\), where \(\Gamma(\cdot)\) is the Gamma function. In the following, we recall some terminology on the properties of the loss function and the constraint set \(\mathcal{C}\). **Definition 5**.: (\(L\)-Lipschitz) Given the loss function \(\ell(\cdot,\cdot):\mathcal{C}\times\mathcal{X}\to\mathbb{R}\), it is \(L\)-Lipschitz w.r.t. the norm \(||\cdot||\) if for all \(x\in\mathcal{X}\) and \(w_{1},w_{2}\in\mathcal{C}\) we have \[|\ell(w_{1},x)-\ell(w_{2},x)|\leq L\cdot||w_{1}-w_{2}||.\] **Definition 6**.: (\(\beta\)-Smooth) Given the loss function \(\ell(\cdot,\cdot):\mathcal{C}\times\mathcal{X}\to\mathbb{R}\), it is \(\beta\)-smooth w.r.t. the norm \(||\cdot||\) if its gradient is \(\beta\)-Lipschitz w.r.t. \(||\cdot||\), namely, for all \(x\in\mathcal{X}\) and \(w_{1},w_{2}\in\mathcal{C}\) we have \[||\nabla\ell(w_{1},x)-\nabla\ell(w_{2},x)||_{*}\leq\beta\cdot||w_{1}-w_{2}||.\] **Definition 7**.: (Strongly convex) Given the loss function \(\ell(\cdot,\cdot):\mathcal{C}\times\mathcal{X}\to\mathbb{R}\), it is \(\alpha\)-strongly convex w.r.t. the norm \(||\cdot||\) if for all \(x\in\mathcal{X}\) and \(w_{1},w_{2}\in\mathcal{C}\), \[\langle\nabla\ell(w_{1},x)-\nabla\ell(w_{2},x),w_{1}-w_{2}\rangle\geq\alpha\cdot||w_{1}-w_{2}||^{2}.\] **Definition 8**.: (Bregman divergence) For a convex function \(\Phi:\mathbf{E}\to\mathbb{R}\), the Bregman divergence is defined as \[D_{\Phi}(y,x)=\Phi(y)-\Phi(x)-\langle\nabla\Phi(x),y-x\rangle.\] Notice that the Bregman divergence is always nonnegative, and it is convex in the first argument. **Definition 9**.: (Relative strongly convex [26]) A function \(f:\mathbf{E}\to\mathbb{R}\) is \(\alpha\)-strongly convex **relative** to \(\Phi:\mathbf{E}\to\mathbb{R}\) if for all \(x,y\in\mathbf{E}\), \[f(x)+\langle\nabla f(x),y-x\rangle+\alpha D_{\Phi}(y,x)\leq f(y).\] **Definition 10**.: (Relative smooth [26]) A function \(f:\mathbf{E}\to\mathbb{R}\) is \(\beta\)-smooth **relative** to \(\Phi:\mathbf{E}\to\mathbb{R}\) if \(\forall x,y\in\mathbf{E}\), \(f(x)+\langle\nabla f(x),y-x\rangle+\beta D_{\Phi}(y,x)\geq f(y)\). Next, we introduce some basic concepts on the Minkowski norm of a symmetric, closed, and convex set \(\mathcal{C}\). **Definition 11** (Minkowski norm).: For a centrally symmetric convex set \(\mathcal{C}\subseteq\mathbb{R}^{d}\), the Minkowski norm (denoted by \(||\cdot||_{\mathcal{C}}\)) is defined as follows. For any vector \(v\in\mathbb{R}^{d}\), \[||v||_{\mathcal{C}}=\min\{r\in\mathbb{R}^{+}:v\in r\mathcal{C}\}.\] The dual norm of \(||\cdot||_{\mathcal{C}}\) is denoted as \(||\cdot||_{\mathcal{C}^{*}}\), and for any vector \(v\in\mathbb{R}^{d}\), \(||v||_{\mathcal{C}^{*}}=\max\limits_{w\in\mathcal{C}}\lvert\langle w,v\rangle\rvert\). Note that by Hölder's inequality, for any pair of dual norms \(||\cdot||\) and \(||\cdot||_{*}\), and any \(x,y\in\mathbb{R}^{d}\), \(|\langle x,y\rangle|\leq||x||\cdot||y||_{*}\). In particular, \(|\langle x,y\rangle|\leq||x||_{\mathcal{C}}\cdot||y||_{\mathcal{C}^{*}}\). In the constrained Euclidean case, our work relies on appropriately quantifying the size of a convex body, which leads to the following definition of Gaussian width.
**Definition 12**.: (Gaussian width) Let \(\xi\sim\mathcal{N}(0,\mathbb{I}_{d})\) be a Gaussian random vector in \(\mathbb{R}^{d}\); for a set \(\mathcal{C}\), the Gaussian width is defined as \(G_{\mathcal{C}}=\mathbb{E}_{\xi}[\sup\limits_{w\in\mathcal{C}}\langle\xi,w\rangle]\). Compared to the dimension \(d\), the Gaussian width of a convex set \(\mathcal{C}\subset\mathbb{R}^{d}\) could be much smaller. For example, when \(\mathcal{C}\) is the unit \(\ell_{1}\)-norm ball, \(G_{\mathcal{C}}=O(\sqrt{\log d})\); and when \(\mathcal{C}\) is the set of all unit \(s\)-sparse vectors in \(\mathbb{R}^{d}\), \(G_{\mathcal{C}}=O(\sqrt{s\log\frac{d}{s}})\). We refer readers to [31] for details. ## 4 DP-SCO in Euclidean Space In this section, we focus on the Euclidean case with a closed, bounded, and convex constraint set \(\mathcal{C}\); the loss function may be either convex or strongly convex. ### General Convex Case Before presenting our idea, we need to discuss the weaknesses of previous approaches. Note that since our goal is to obtain an upper bound that depends on the Gaussian width of the constraint set \(\mathcal{C}\), we will not discuss the approaches that achieve upper bounds that are polynomial in \(d\). In general, all methods can be categorized into two classes: gradient perturbation and objective function perturbation. In gradient perturbation methods [31], the key idea is modifying Mirror Descent by adding noise to the gradients. While this approach can achieve satisfactory bounds for the empirical risk [39, 37], when considering the population risk we need to use batched gradients at each iteration, which induces a sub-optimal rate [1]. Instead of perturbing the gradient, [31] show that the objective function perturbation method in [10] could also achieve an upper bound that only depends on the Gaussian width, instead of \(d\). However, this approach has two weaknesses: first, [31] only show the bound for the empirical risk, and whether the excess population risk is satisfactory or not is unknown; secondly, it is well-known that the objective perturbation approach needs to obtain the exact minimizer of the perturbed objective function, which is inefficient in practice. Motivated by the objective perturbation method in [31], our algorithm is the approximate version proposed in [5]. See the detailed procedure in Algorithm 1. In detail, first, similar to standard objective perturbation, we add a random linear term \(\frac{\langle{\bf G},\theta\rangle}{n}\) with Gaussian noise \({\bf G}\) and an \(\ell_{2}\) regularization to the original empirical risk function to obtain a new objective function \(\mathcal{J}(\theta,D)\). Then we obtain an approximate minimizer \(\theta_{2}\) of the perturbed empirical risk \(\mathcal{J}(\theta,D)\) via any efficient optimization method (such as proximal SVRG [40] or projected SGD) which ensures that \(\mathcal{J}(\theta_{2},D)-\min_{\theta\in\mathcal{C}}\mathcal{J}(\theta,D)\) is at most \(\alpha\). Formally, we can define such an optimization method as an optimizer function \(\mathcal{O}:\mathcal{F}\times[0,1]\rightarrow\mathcal{C}\), where \(\mathcal{F}\) is the class of objectives and the other argument is the optimization accuracy. Finally, we perturb \(\theta_{2}\) with Gaussian noise to fuzz the difference between \(\theta_{2}\) and the true minimizer, and then project the perturbed \(\theta_{2}\) onto the set \(\mathcal{C}\).
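To make the pipeline concrete, the following is a minimal NumPy sketch of this approximate objective perturbation procedure, with projected gradient descent standing in for the optimizer \(\mathcal{O}\) and an \(\ell_{2}\)-ball standing in for \(\mathcal{C}\); the step-size schedule and the proxy for the accuracy \(\alpha\) are illustrative assumptions, and Algorithm 1 below records the exact noise scales and the choice of \(\lambda\).

```python
import numpy as np

def approx_objective_perturbation(grad_emp_risk, d, n, L, beta, r, eps, delta,
                                  radius, steps=2000, rng=None):
    """A minimal sketch of approximate objective perturbation.

    grad_emp_risk(theta): gradient of the empirical risk hat{L}(theta, D).
    radius: radius of an l2-ball used here as a stand-in constraint set C.
    """
    rng = rng or np.random.default_rng(0)
    sigma1 = np.sqrt(128 * L**2 * np.log(2.5 / delta)) / eps   # scale of G
    G = rng.normal(0.0, sigma1, size=d)
    lam = r * beta / (eps * n)                                 # regularizer

    def proj(theta):
        """Projection onto the l2-ball of the given radius."""
        nrm = np.linalg.norm(theta)
        return theta if nrm <= radius else theta * (radius / nrm)

    # the optimizer O: projected gradient descent on J(theta, D)
    theta = np.zeros(d)
    for k in range(1, steps + 1):
        g = grad_emp_risk(theta) + G / n + 2.0 * lam * theta
        # classic 1/(lam * k) step size for a lam-strongly-convex objective
        theta = proj(theta - g / (lam * k))

    alpha = 1.0 / steps                       # crude proxy for the accuracy
    sigma2 = np.sqrt(64 * alpha * np.log(2.5 / delta) / lam) / eps
    H = rng.normal(0.0, sigma2, size=d)       # output perturbation
    return proj(theta + H)
```

For instance, for least squares one could pass `grad_emp_risk = lambda th: X.T @ (X @ th - y) / n` for data `(X, y)`; these names and the simple inner loop are assumptions for illustration, not the paper's prescribed optimizer.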
Since the algorithm itself is not new, here we highlight our contributions: First, with some specific parameters, we show such an algorithm could achieve an excess population risk of \(O(\frac{G_{\mathcal{C}}}{n\epsilon}+\frac{1}{\sqrt{n}})\), while [5] only show an upper bound of \(O(\frac{\sqrt{d}}{n\epsilon}+\frac{1}{\sqrt{n}})\); second, we extend the algorithm to the strongly convex case (see Section 4.2 for details). In the following, we show the theoretical guarantees of our algorithm. First, we need the following assumption on the loss function \(\ell\). **Assumption 1**.: The loss function \(\ell\) is twice differentiable, \(L\)-Lipschitz and \(\beta\)-smooth w.r.t. the Euclidean norm \(||\cdot||_{2}\) over \(\mathcal{C}\). ``` 1:Input: Dataset \(D\), loss function \(\ell\), regularization parameter \(\lambda\), optimizer \(\mathcal{O}:\mathcal{F}\times[0,1]\rightarrow\mathcal{C}\), where \(\mathcal{F}\) is the class of objectives, and the other argument is the optimization accuracy. \(\alpha\in[0,1]:\) optimization accuracy. 2:Sample \({\bf G}\sim\mathcal{N}(0,\sigma_{1}^{2}\mathbb{I}_{d})\) where \(\sigma_{1}^{2}=\frac{128L^{2}\log(2.5/\delta)}{\epsilon^{2}}\). Set \(\lambda\geq\frac{r\beta}{\epsilon n}\), where \(r=\min\{d,2\cdot\text{rank}(\nabla^{2}\ell(\theta,x))\}\) with \(\text{rank}(\nabla^{2}\ell(\theta,x))\) being the maximal rank of the Hessian of \(\ell\) for all \(\theta\in\mathcal{C}\) and \(x\sim\mathcal{P}\). 3:Let \(\mathcal{J}(\theta,D)=\hat{\mathcal{L}}(\theta,D)+\frac{\langle{\bf G},\theta\rangle}{n}+\lambda||\theta||_{2}^{2}\). 4:return\(\hat{\theta}=\text{Proj}_{\mathcal{C}}[\mathcal{O}(\mathcal{J},\alpha)+{\bf H}]\) where \({\bf H}\sim\mathcal{N}(0,\sigma_{2}^{2}\mathbb{I}_{d})\) with \(\sigma_{2}^{2}=\frac{64\alpha\log(2.5/\delta)}{\lambda\epsilon^{2}}\) ``` **Algorithm 1**\(\mathcal{A}_{\text{App-ObjP}}\): Approximate Objective perturbation **Theorem 1**.: Suppose that Assumption 1 holds and that the smoothness parameter \(\beta\) satisfies \(\beta\leq\frac{\epsilon n\lambda}{r}\). Then for any \(0<\epsilon,\delta<1\), \(\mathcal{A}_{\text{App-ObjP}}\) (Algorithm 1) is \((\epsilon,\delta)\)-DP. It is notable that although we need to assume \(\beta\) is not too large, as we will see in Theorem 2, the assumption always holds when \(n\) is sufficiently large. **Theorem 2**.: Suppose that Assumption 1 holds. When \(n\) is large enough such that \(n\geq\frac{r^{2}\beta^{2}||\mathcal{C}||_{2}^{2}}{\epsilon^{2}L^{2}}\) and \(n\geq O\left(\frac{\sqrt{d\log(1/\delta)}}{\epsilon}\right)\), take \(\lambda=\frac{L}{\sqrt{n}||\mathcal{C}||_{2}}\) and \(\alpha\leq\min\left\{\frac{L||\mathcal{C}||_{2}}{n^{\frac{1}{2}}},\frac{\epsilon^{2}L||\mathcal{C}||_{2}^{3}}{G_{\mathcal{C}}^{2}\log(1/\delta)n^{\frac{5}{2}}}\right\}\) in Algorithm 1; then we have \[\mathbb{E}[\mathcal{L}(\hat{\theta})]-\mathcal{L}(\theta^{*})\leq O\left(\frac{L\cdot G_{\mathcal{C}}\sqrt{\log(1/\delta)}}{\epsilon n}+\frac{L||\mathcal{C}||_{2}}{\sqrt{n}}\right),\] where the expectation is taken over the internal randomness of the algorithm. **Remark 1**.: While we consider the same algorithm as in [5], there are several crucial differences. First, to achieve the upper bound of \(O(\frac{\sqrt{d}}{n\epsilon}+\frac{1}{\sqrt{n}})\), [5] only need to set \(\alpha\leq O(\frac{1}{n^{2}}\max\{\frac{1}{\sqrt{n}},\frac{d}{n\epsilon}\})\), while we need to be more aggressive by choosing \(\alpha\leq O(\epsilon^{2}n^{-\frac{5}{2}})\). This is reasonable: as we aim for an improved upper bound, we have to obtain a more accurate estimate.
Secondly, besides enforcing the perturbed approximation to lie in the set \(\mathcal{C}\) as in [5], the projection operator in Step 4 of Algorithm 1 plays a more critical role in our analysis for achieving a bound that depends on \(G_{\mathcal{C}}\): the bound in [5] still holds even if there is no projection step, while this is not true in our case. Specifically, although the noise \(\mathbf{H}\) is a \(d\)-dimensional Gaussian noise, we can show that due to the projection operator, the error introduced by the noise depends only on \(G_{\mathcal{C}}\) rather than \(\sqrt{d}\), i.e., \(||\hat{\theta}-\theta_{2}||_{2}^{2}\leq O(\sqrt{\frac{\alpha\log(1/\delta)}{\lambda}}\cdot\frac{G_{\mathcal{C}}}{\epsilon})\). A similar idea has also been used in privately answering multiple linear queries [28]. ### Strongly Convex Case We now extend the above idea to the strongly convex case. First, we impose the following assumption. **Assumption 2**.: We assume the loss is twice differentiable, \(L\)-Lipschitz and \(\beta\)-smooth w.r.t. \(||\cdot||_{2}\), and that it is \(\Delta\)-strongly convex w.r.t. \(||\cdot||_{\mathcal{C}}\) over the set \(\mathcal{C}\). Note that we can relax the assumption to strong convexity w.r.t. \(\|\cdot\|_{2}\), since \(\|v\|_{2}\geq\mathcal{C}_{\min}\cdot\|v\|_{\mathcal{C}}\), where \(\mathcal{C}_{\min}\) is defined in Theorem 5. See the proof of Theorem 5 for details. Our method is shown in Algorithm 2. Note that, compared with Algorithm 1, the main difference is the regularization parameter \(\lambda\). This is because the loss function is already \(\Delta\)-strongly convex, so a smaller \(\lambda\) is sufficient to make \(\mathcal{J}\) \(\frac{r\beta}{\epsilon n}\)-strongly convex. Moreover, when \(n\) is large enough, we have \(\lambda=0\), indicating that we can get an improved excess population risk compared to the convex case. ``` 1:Input: Dataset \(D\), loss function \(\ell\), regularization parameter \(\lambda\), optimizer \(\mathcal{O}:\mathcal{F}\times[0,1]\rightarrow\mathcal{C}\), where \(\mathcal{F}\) is the class of objectives and the other argument is the optimization accuracy. \(\alpha\in[0,1]\) : optimization accuracy. 2:Sample \(\mathbf{G}\sim\mathcal{N}(0,\sigma_{1}^{2}\mathbb{I}_{d})\) where \(\sigma_{1}^{2}=\frac{128L^{2}\log(2.5/\delta)}{\epsilon^{2}}\). Set \(\lambda=\max\left\{\frac{r\beta}{\epsilon n}-\Delta,0\right\}\), where \(r=\min\{d,2\cdot\text{rank}(\nabla^{2}\ell(\theta,x))\}\) with \(\text{rank}(\nabla^{2}\ell(\theta,x))\) being the maximal rank of the Hessian of \(\ell\) for all \(\theta\in\mathcal{C}\) and \(x\sim\mathcal{P}\). 3:Let \(\mathcal{J}(\theta,D)=\hat{\mathcal{L}}(\theta,D)+\frac{\langle\mathbf{G},\theta\rangle}{n}+\lambda||\theta||_{2}^{2}\). 4:return\(\hat{\theta}=\text{Proj}_{\mathcal{C}}[\mathcal{O}(\mathcal{J},\alpha)+\mathbf{H}]\) where \(\mathbf{H}\sim\mathcal{N}(0,\sigma_{2}^{2}\mathbb{I}_{d})\) with \(\sigma_{2}^{2}=\frac{64\alpha\log(2.5/\delta)\cdot||\mathcal{C}||_{2}^{2}}{\Delta\epsilon^{2}}\) ``` **Algorithm 2**\(\mathcal{A}_{\text{App-ObjP-SC}}\): Approximate Objective perturbation for strongly convex function **Theorem 3**.: Suppose the loss function satisfies Assumption 2. Then for any \(0<\epsilon,\delta<1\), \(\mathcal{A}_{\text{App-ObjP-SC}}\) (Algorithm 2) is \((\epsilon,\delta)\)-DP. **Theorem 4**.: Suppose that Assumption 2 holds.
If \(n\) is large enough such that \(n\geq O(\max\{\frac{L^{2}||\mathcal{C}||_{2}^{2}}{\Delta^{2}},\frac{||\mathcal{C}||_{2}^{2}r^{2}\beta^{2}}{L^{2}\epsilon^{2}}\})\) and \(n\geq O\left(\frac{\sqrt{d\log(1/\delta)}}{\epsilon}\right)\), then by setting \(\alpha\leq O\left(\min\left\{\frac{L^{2}||\mathcal{C}||_{2}^{2}}{\Delta n^{2}},\frac{L^{4}||\mathcal{C}||_{2}^{6}\epsilon^{2}}{\Delta^{3}n^{4}G_{\mathcal{C}}^{2}\log(1/\delta)}\right\}\right)\), we have \[\mathbb{E}[\mathcal{L}(\hat{\theta})]-\mathcal{L}(\theta^{*})\leq O\left(\frac{L^{2}||\mathcal{C}||_{2}^{2}}{\Delta n}+\frac{G_{\mathcal{C}}^{2}L^{2}\log(1/\delta)}{\Delta n^{2}\epsilon^{2}}\right),\] where the expectation is taken over the internal randomness of the algorithm. **Remark 2**.: First, it is notable that an objective perturbation method for strongly convex losses has also been presented in [31]. However, there are two major differences: (1) the method in [31] needs to solve the perturbed objective function exactly, indicating that it is inefficient; (2) [31] only provide the excess empirical risk, and it is unknown whether their method could achieve the same bound as ours for the excess population risk. Secondly, when \(\mathcal{C}\) is an \(\ell_{2}\)-norm ball, the bounds in Theorem 2 and Theorem 4 recover the optimal rates of DP-SCO over the \(\ell_{2}\)-norm ball for convex and strongly convex loss functions, respectively [5]. Thirdly, the terms \(O(\frac{G_{\mathcal{C}}}{n\epsilon})\) and \(O(\frac{G_{\mathcal{C}}^{2}}{n^{2}\epsilon^{2}})\) match the best-known results on the excess empirical risk for the convex and strongly convex cases, respectively [31]. In Remark 2, we showed that our results are optimal when \(\mathcal{C}\) is an \(\ell_{2}\)-norm ball and are comparable to the best results of DP-ERM with Gaussian width. A natural question is whether we can further improve these two upper bounds. In the following, we partially answer this question by providing a lower bound for strongly convex loss functions. **Theorem 5**.: Let \(\mathcal{C}\) be a symmetric body contained in the unit Euclidean ball \(\mathcal{B}_{2}^{d}\) in \(\mathbb{R}^{d}\) that satisfies \(\|\mathcal{C}\|_{2}=1\). For any \(n=O(\frac{\sqrt{d\log(1/\delta)}}{\epsilon})\), \(\epsilon=O(1)\) and \(2^{-\Omega(n)}\leq\delta\leq 1/n^{1+\Omega(1)}\), there exist a loss \(\ell\) which is \(1\)-Lipschitz w.r.t. \(\|\cdot\|_{2}\) and \(\mathcal{C}_{\min}^{2}\)-strongly convex w.r.t. \(\|\cdot\|_{\mathcal{C}}\), and a dataset \(D=\{x_{1},\cdots,x_{n}\}\subseteq\mathcal{C}^{n}\), such that for any \((\epsilon,\delta)\)-differentially private algorithm minimizing the empirical risk function \(\hat{\mathcal{L}}(\theta,D)\) over \(\mathcal{C}\), its output \(\theta^{priv}\in\mathcal{C}\) satisfies \[\mathbb{E}[\mathcal{L}(\theta^{priv})]-\mathcal{L}(\theta^{*})=\Omega\left(\max\left\{\frac{G_{\mathcal{C}}^{2}\log(1/\delta)}{(\log(2d))^{4}\epsilon^{2}n^{2}},\frac{1}{n}\right\}\right),\] where the expectation is taken over the internal randomness of the algorithm \(\mathcal{A}\). Here \(\mathcal{C}_{\min}=\min\{\|v\|_{2}:v\in\partial\mathcal{C}\}\) with \(\partial\mathcal{C}\) the boundary of the set \(\mathcal{C}\), i.e., it is the distance from the origin to the boundary of \(\mathcal{C}\). Taking \(\Delta=\mathcal{C}_{\min}^{2}\) and \(L=1\) in Theorem 4, we can see that the rate of the excess population risk in Theorem 4 for strongly convex loss functions is nearly optimal, up to a factor of \(\tilde{O}(\mathcal{C}_{\min}^{-2})\).
It is unknown whether we can further close the gap, and we leave it as an open problem. ## 5 DP-SCO in \(\ell_{p}^{d}\) Space In this section, we focus on DP-SCO in \(\ell_{p}^{d}\) space where \(1<p\leq\infty\). As mentioned in the Introduction, we study two settings: (1) \(\mathcal{C}\) is \(\mathbb{R}^{d}\) and the gradient of the loss function is bounded (i.e., the loss is Lipschitz); (2) \(\mathcal{C}\) is bounded, and the distribution of the gradient of the loss is heavy-tailed. Similar to the previous study [6], for each setting there are two cases: \(1<p<2\) and \(2\leq p\leq\infty\). Notice that, unlike the previous section, we only study the case where the loss functions are convex. The reason is that, except in the Euclidean space, the condition number of a strongly convex function, i.e., the ratio between its smoothness and strong convexity parameters, will depend on the dimension of \(\mathbf{E}\). For example, in the \(\ell_{1}^{d}\) space, it has been shown that there is no function whose condition number is less than \(d\) [22]. ### Unconstrained Case In this part, we study the Lipschitz loss under the following assumption, which is commonly used in the related work on general stochastic convex optimization. **Assumption 3**.: We assume \(\ell(\cdot,x)\) is convex, \(\beta\)-smooth and \(L\)-Lipschitz w.r.t. \(||\cdot||\) over \(\mathbb{R}^{d}\). We first consider the more difficult case where \(1<p<2\); see Algorithm 3 for details. Algorithm 3 can be considered a noisy and regularized version of standard Mirror Descent: at each iteration, we first linearize \(\hat{\mathcal{L}}(w_{t},D)\); we then add generalized Gaussian noise to its gradient to privatize the algorithm, and add to the linearization a Bregman divergence term and a regularization term \(\alpha\Phi(\cdot)\) with some specific \(\alpha\). Then we solve the resulting perturbed and regularized optimization problem. The final output is a linear combination of the intermediate iterates. It is notable that although our method is a noisy modification of Mirror Descent, it is completely different from the previous private Mirror Descent based methods in [31, 39, 6, 1]. First, instead of directly adding noise to the gradient in standard Mirror Descent, here we have an additional regularization term, which is crucial for making the algorithm stable, and stability in turn yields a bound on the excess population risk. To be more specific, by the definition of \(\|\cdot\|_{+}\) and the duality between strong convexity and smoothness, we can easily see that \(\Phi\) is \(1\)-strongly convex w.r.t. \(\|\cdot\|\). This indicates that the function \(\hat{\mathcal{L}}(w,D)+\alpha\Phi(w)\) is relatively strongly convex and relatively smooth (note that it is not smooth in the standard sense, as the regularization term is not smooth when \(1<p<2\)). And the update step is just a noisy version of Mirror Descent for \(\hat{\mathcal{L}}(w,D)+\alpha\Phi(w)\). Recently, it has been shown that Mirror Descent is stable for relatively strongly convex and smooth functions. Thus, we can also show that Algorithm 3 is stable, which gives a bound on the excess population risk. From this intuition, we can also see that the parameter \(\alpha\) needs to be carefully tuned to balance the stability and the excess empirical risk. The second difference is that, instead of using the last iterate or the average of the iterates, our output is a linear combination of the intermediate iterates, which is due to the noise we added.
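The following is a minimal sketch of the update just described, specialized to \(\Phi(w)=\frac{1}{2}\|w\|_{2}^{2}\) (the Euclidean special case, where the generalized Gaussian noise reduces to ordinary Gaussian noise and the regularized update has a closed form); the plain averaging at the end and the fixed noise scale are illustrative assumptions, not the paper's exact choices for Algorithm 3.

```python
import numpy as np

def noisy_regularized_mirror_descent(grad_hat, d, T, gamma, alpha, sigma, rng=None):
    """Euclidean-special-case sketch of the noisy regularized mirror descent
    update: w_{t+1} = argmin_w <g_t, w> + gamma * D_Phi(w, w_t) + alpha * Phi(w),
    with Phi(w) = 0.5 * ||w||^2, so D_Phi(w, w_t) = 0.5 * ||w - w_t||^2.

    grad_hat(w): gradient of the empirical risk at w.
    sigma: standard deviation of the Gaussian noise added to each gradient.
    """
    rng = rng or np.random.default_rng(0)
    w = np.zeros(d)
    iterates = []
    for _ in range(T):
        g = grad_hat(w) + rng.normal(0.0, sigma, size=d)   # privatized gradient
        # stationarity: g + gamma * (w_new - w) + alpha * w_new = 0
        w = (gamma * w - g) / (gamma + alpha)              # closed-form argmin
        iterates.append(w)
    # a linear combination of the iterates; simple averaging stands in for the
    # specific weights used in the paper
    return np.mean(iterates, axis=0)
```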
In the following, we show the main results. **Theorem 6**.: For the \(\ell_{p}^{d}\) space with \(1<p<2\), suppose Assumption 3 holds; then for any \(0<\epsilon,\delta<1\), Algorithm 3 is \((\epsilon,\delta)\)-DP. **Theorem 7**.: For the \(\ell_{p}^{d}\) space with \(1<p<2\), suppose Assumption 3 holds. In Algorithm 3, take \(\alpha=\frac{4\beta}{T}\log_{2}\frac{n}{T}\) and \(T=O((\frac{n\epsilon}{\sqrt{d\log(1/\delta)}})^{\frac{2}{5}})\), and assume \(n\) is sufficiently large such that \(n\geq O\left(\frac{\epsilon^{4}}{(d\log(1/\delta))^{2}\kappa^{1/2}}\right)\); then we have \[\mathbb{E}[\mathcal{L}(\hat{w})]-\mathcal{L}(\theta^{*})\leq\tilde{O}(\kappa^{\frac{4}{5}}\cdot(\frac{\sqrt{d\log(1/\delta)}}{n\epsilon})^{\frac{2}{5}}),\] where \(\tilde{O}\) hides \(\beta,L\) and a factor of \(\mathbb{E}_{D}[\tilde{C}_{D}^{2}]\) with \(\tilde{C}_{D}^{2}=\|\tilde{w}^{*}\|_{\kappa_{+}}^{2}\leq\|\tilde{w}^{*}\|^{2}\) and \(\tilde{w}^{*}=\underset{w\in\mathbf{E}}{\operatorname{arg\,min}}\hat{\mathcal{L}}(w,D)\). The key idea in proving Theorem 7 is to show that Algorithm 3 is uniformly stable (w.r.t. \(\|\cdot\|\)) by bounding the term \(\mathbb{E}[\|w_{t+1}-w_{t+1}^{\prime}\|]\), where \(w_{t+1}^{\prime}\) is the corresponding iterate of the algorithm when the input is \(D^{\prime}\), a neighboring dataset of \(D\). To show this, rather than analyzing the stability of \(w_{t+1}\) directly via the approach in [18], our strategy is to bound \(\|w_{t+1}-w_{\alpha}^{*}\|\), where \(w_{\alpha}^{*}=\arg\min\hat{\mathcal{L}}(w,D)+\alpha\Phi(w)\). As the regularized function \(\hat{\mathcal{L}}(w,D)+\alpha\Phi(w)\) is now relatively smooth and relatively strongly convex, the stability of \(w_{\alpha}^{*}\) is \(O(\frac{1}{n})\). Thus we can bound the sensitivity of \(w_{t+1}\), and then the sensitivity of \(\hat{w}\). **Remark 3**.: In the constrained case, [6] show that it is possible to achieve an upper bound of \(\tilde{O}((M+M^{2})(\frac{\sqrt{\kappa}}{\sqrt{n}}+\frac{\kappa\sqrt{d\log 1/\delta}}{n\epsilon}))\), where \(M\) is the diameter of the set \(\mathcal{C}\). Thus, we can see there is still a gap between the unconstrained case and the constrained case. Next, we study the case where \(2\leq p\leq\infty\). The key idea is to reduce the \(\ell_{p}^{d}\) space to the Euclidean space by leveraging the relationship between the \(\ell_{p}\) norm and the Euclidean norm. Thus, here we adopt the Phased DP-SGD algorithm proposed by [14]. As the parameters in the original Phased DP-SGD depend on the diameter, we modify them for the unconstrained case. Specifically, we have the following result. **Theorem 8**.: For the \(\ell_{p}^{d}\) space with \(2\leq p\leq\infty\), suppose Assumption 3 holds. Then for any \(0<\epsilon,\delta<1\), there is an \((\epsilon,\delta)\)-DP algorithm whose output \(\theta\) satisfies \[\mathbb{E}[\mathcal{L}(\theta)]-\mathcal{L}(\theta^{*})\leq O(d^{1-\frac{2}{p}}\|\theta^{*}\|^{2}(\frac{1}{\sqrt{n}}+\frac{\sqrt{d\log(1/\delta)}}{\epsilon n})).\] In the constrained case, [6] show the optimal rate of \(O(Md^{\frac{1}{2}-\frac{1}{p}}(\frac{1}{\sqrt{n}}+\frac{\sqrt{d\log 1/\delta}}{n\epsilon}))\), where \(M\) is the diameter of the set \(\mathcal{C}\) w.r.t. \(\|\cdot\|\). Thus, we can see there is a difference of \(O(d^{\frac{1}{2}-\frac{1}{p}})\).
This is because, rather than being linear in \(M\) as in the constrained case, in the Euclidean and unconstrained case we can show that the excess population risk depends on \(\|\theta^{*}\|_{2}^{2}\), which is at most \(d^{1-\frac{2}{p}}\|\theta^{*}\|^{2}\). ### Heavy-tailed and Constrained Case In the above section, we studied DP-SCO with Lipschitz loss functions, i.e., the \(\|\cdot\|_{*}\) norm of the loss gradient is uniformly bounded by \(L\). Next, we relax this assumption to a heavy-tailed distribution, i.e., we only assume that the variance of the loss gradient w.r.t. \(\|\cdot\|_{*}\) is finite. As we have discussed the difficulty of the unconstrained case compared to the constrained case, throughout this section we focus on the constrained case with \(\|\cdot\|\)-norm diameter \(M\). **Assumption 4**.: We assume \(\ell(\cdot,x)\) is convex and \(\beta\)-smooth w.r.t. \(||\cdot||\) over \(\mathcal{C}\). Moreover, for all \(w\in\mathcal{C}\) there exists a known constant \(\sigma>0\) such that \(\mathbb{E}[||\nabla\ell(w,x)-\nabla\mathcal{L}(w)||_{*}^{2}]\leq\sigma^{2}\). It is noteworthy that the heavy-tailedness assumption is commonly used in previous related work, such as [35]. Besides the norm of the gradient, there is another line of work that only assumes the second-order moment of each coordinate of the gradient is bounded [19, 23, 36, 38, 32]. We leave such a relaxed assumption as future work. As in the previous section, we first study the case where \(1<p<2\). We present our algorithm in Algorithm 4, which can be considered a shuffled, truncated, and noisy version of one-pass Mirror Descent. Specifically, in the first step, we shuffle the dataset and divide it into several batches (we use one batch per iteration). Using the by-now standard method of privacy amplification by shuffling [15], we can amplify the overall privacy guarantee by a factor of \(\tilde{O}(\frac{1}{\sqrt{n}})\) compared to the analysis for the unshuffled dataset. Next, motivated by [27], at each iteration we first apply a truncation step to each sample gradient \(\nabla\ell(w_{t-1},x)\). Such an operator not only removes outliers, but also bounds the \(\|\cdot\|_{*}\)-sensitivity of the truncated gradients by \(O(\beta M+\lambda)\). Then we perform the Mirror Descent update with these truncated and perturbed sample gradients. In the following, we show the privacy and utility guarantees of our algorithm. ``` 1:Input: Dataset \(D\), loss function \(\ell\), initial point \(w_{0}=0\), smoothness parameter \(\beta\) and \(\lambda\). 2:Randomly permute the data and denote the permuted data as \(\{x_{1},\cdots,x_{n}\}\). 3:Divide the permuted data into \(T\) batches \(\{B_{i}\}_{i=1}^{T}\) where \(|B_{i}|=\frac{n}{T}\) for all \(i=1,\cdots,T\) 4:for\(t=1,\cdots,T\)do 5:for each \(x\in B_{t}\)do 6:\(g_{x}=\begin{cases}\nabla\ell(w_{t-1},x)&\text{if }||\nabla\ell(w_{t-1},x)||_{*}\leq\beta M+\lambda\\ 0&\text{otherwise}\end{cases}\) 7:endfor 8: Update \[w_{t}=\operatorname*{arg\,min}_{w\in\mathcal{C}}\left\{\left\langle\frac{\sum\limits_{x\in B_{t}}g_{x}+Z_{x}^{t}}{|B_{t}|},w\right\rangle+\gamma_{t}\cdot D_{\Phi}(w,w_{t-1})\right\},\] where \(Z_{x}^{t}\sim\mathcal{GG}_{||\cdot||_{+}}(0,\sigma_{1}^{2})\) with \(\sigma_{1}^{2}=O\left(\frac{\log(\frac{n}{\delta})\cdot\kappa(\beta M+\lambda)^{2}\cdot\log(1/\delta)}{n\epsilon^{2}}\right)\), \(||\cdot||_{+}\) is the smooth norm for \((\mathbf{E},||\cdot||_{*})\),
\(\kappa=\min\{\frac{1}{p-1},2\log d\}\), and \(\Phi(x)=\frac{\kappa}{2}||x||_{\kappa_{+}}^{2}\) with \(\kappa_{+}=\frac{\kappa}{\kappa-1}\). 9:endfor 10:return\(\hat{w}=(\sum_{t=1}^{T}\gamma_{t}^{-1})^{-1}\cdot\sum_{t=1}^{T}\gamma_{t}^{-1}w_{t}\) ``` **Algorithm 4** Shuffled Truncated DP Mirror Descent **Theorem 9**.: For the \(\ell_{p}^{d}\) space with \(1<p<2\), suppose Assumption 4 holds. Algorithm 4 is \((\epsilon,\delta)\)-DP if \(\epsilon=O(\sqrt{\frac{\log(n/\delta)}{n}})\) and \(0<\delta<1\). **Theorem 10**.: For the \(\ell_{p}^{d}\) space with \(1<p<2\), suppose Assumption 4 holds and assume \(n\) is sufficiently large such that \(n\geq O(\frac{\max\{\beta^{2},1\}M^{2}\sqrt{d\kappa^{2}\log(1/\delta)}}{\epsilon})\). Given a failure probability \(\delta^{\prime}>0\), in Algorithm 4 take \(T=O(\frac{M^{2}n^{2}\epsilon^{2}}{\lambda^{2}d\log(1/\delta)})\), \(\{\gamma_{t}\}_{t=1}^{T}=\tilde{\gamma}=\sqrt{T}\), and \(\lambda=O(\frac{\sqrt{n\epsilon}}{\sqrt[4]{\kappa^{2}d\log(1/\delta)}})\); then the output \(\hat{w}\) satisfies the following with probability \(1-\delta^{\prime}\): \[\mathbb{E}[\mathcal{L}(\hat{w})]-\mathcal{L}(w^{*})\leq\tilde{O}(\frac{M\sqrt[4]{\kappa^{2}d\log(1/\delta)}\log(1/\delta^{\prime})}{\sqrt{n\epsilon}}),\] where the expectation is taken over the randomness of the noise, and the probability is w.r.t. the dataset \(D\sim\mathcal{D}^{n}\). **Remark 4**.: First, note that due to the privacy amplification, the noise added to each sample gradient here is \(\tilde{O}(\frac{\beta M+\lambda}{\sqrt{n}\epsilon})\) rather than the \(\tilde{O}(\frac{\beta M+\lambda}{\epsilon})\) that would be required without shuffling. Secondly, note that the truncation step is quite different from the previous work on DP-SCO with heavy-tailed data [36], i.e., we force a sample gradient to zero if its norm exceeds the threshold. Finally, compared to the best-known result \(O(\sqrt{\frac{\kappa}{n}})\) in the non-private and heavy-tailed case [27] and the bound \(\tilde{O}(\sqrt{\frac{\kappa}{n}}+\frac{\kappa\sqrt{d}}{n\epsilon})\) for the private and Lipschitz case [6], we can see there may still be room to improve our bound. There are two limitations in Theorem 10. First, Algorithm 4 is \((\epsilon,\delta)\)-DP only for \(\epsilon=\tilde{O}(n^{-\frac{1}{2}})\), which cannot be generalized to the moderate- or low-privacy regimes. Secondly, Theorem 10 only holds for the case \(1<p<2\). To address the first issue, we can slightly modify the algorithm by using batched Mirror Descent without shuffling, at the cost of a worse upper bound. For the second one, similar to Theorem 8, we can reduce the problem to the Euclidean case. Informally, we have the following two results. **Theorem 11** (Informal).: For the \(\ell_{p}^{d}\) space with \(1<p<2\), suppose Assumption 4 holds. For all \(0<\epsilon,\delta<1\), there is an \((\epsilon,\delta)\)-DP algorithm whose output achieves an excess population risk of \(O\left(M^{\frac{4}{3}}\cdot\frac{\kappa^{\frac{2}{3}}(d\log(1/\delta))^{\frac{1}{6}}}{(n\epsilon)^{\frac{2}{3}}}\right)\). **Theorem 12** (Informal).: For the \(\ell_{p}^{d}\) space with \(2\leq p\leq\infty\), suppose Assumption 4 holds (along with some additional mild assumptions). For all \(0<\epsilon,\delta<1\), there is an \((\epsilon,\delta)\)-DP algorithm whose output achieves an expected excess population risk of \(O\left(\frac{d^{\frac{2}{3}-\frac{1}{p}}}{\sqrt{n}}+\frac{d^{\frac{2}{3}-\frac{1}{p}}}{\sqrt{n\epsilon}}\right)\).
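As a concrete illustration of the truncation step in Algorithm 4 above, here is a minimal sketch; the norm argument is a stand-in for the dual norm \(\|\cdot\|_{*}\), which for \(\ell_{p}^{d}\) with \(1<p<2\) is the \(\ell_{q}\)-norm with \(\frac{1}{p}+\frac{1}{q}=1\), and the example values below are arbitrary.

```python
import numpy as np

def truncate_gradient(g, beta, M, lam, q):
    """Truncation step of Algorithm 4: a per-sample gradient is kept only if
    its dual norm is at most beta * M + lam, and is zeroed out otherwise."""
    threshold = beta * M + lam
    return g if np.linalg.norm(g, ord=q) <= threshold else np.zeros_like(g)

# e.g., for p = 1.5 the dual exponent is q = p / (p - 1) = 3
g = np.array([2.0, -1.0, 0.5])
print(truncate_gradient(g, beta=1.0, M=1.0, lam=0.5, q=3))  # zeroed out here
```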
## Acknowledgements Di Wang was supported in part by the baseline funding BAS/1/1689-01-01, funding from the CRG grant URF/1/4663-01-01, and FCC/1/1976-49-01 from CBRC. He was also supported by funding from the SDAIA-KAUST Center of Excellence in Data Science and Artificial Intelligence (SDAIA-KAUST AI).
2309.13149
Imaging the ring and jet in M87 at 85 GHz with the ngEHT and ngEHT+ngVLA
The Global mm-VLBI Array (GMVA) has demonstrated the ability to resolve what may be the general relativistic shadow of the supermassive black hole in M87 at 86 GHz, as well as delineate the inner jet to $\sim 1$~mas distance. We investigate the ability of the planned ngEHT, and the ngEHT + ngVLA at 85 GHz, to image such a nuclear 'ring', and the associated jet, using a constructed model based on the current estimate of the ring size, and a scaled version of the best VLBA image of the M87 jet at 43 GHz. While the resolution does not improve due to the limit set by the diameter of the Earth, the ngEHT alone should provide both a higher fidelity image of the ring on scales $\le 0.1$~mas, and a good image of a more extended jet to $\sim 1$~mas. Adding the ngVLA improves substantially the dynamic range (factor 3.5), as well as adds the ability to image structures on larger scales, in this case out to at least 5~mas, and potentially to much larger scales given the $\sim 10^5$ range in spatial scales covered by the ngVLA itself. Both arrays provide good image fidelity ($\le 0.1$) in the inner $\sim 1$~mas, but the ngEHT-only image does not reproduce the outer jet well, or at all, with fidelity values greater than unity. The combined array reproduces much of the outer jet with good fidelity ($\le 0.3$). Adding the ngVLA also decreases the susceptibility to antenna-based phase errors by a similar factor, and improves the ability for fringe fitting and subsequent phase and amplitude self-calibration. As for scales $< 100~\mu$as, i.e. the ring itself, adding the ngVLA makes little change for very bright sources, where uniform weighting can be employed. But for faint sources, adding the ngVLA adds potentially an order-of-magnitude sensitivity improvement (Issaoun et al. 2023).
C. L. Carilli, R. C. Walker, E. Murphy, B. Mason
2023-09-22T19:15:31Z
http://arxiv.org/abs/2309.13149v1
# Next Generation Very Large Array Memo No. 118 September 2023 ###### Abstract The Global mm-VLBI Array (GMVA) has demonstrated the ability to resolve what may be the general relativistic shadow of the supermassive black hole in M87 at 86 GHz, as well as delineate the inner jet to \(\sim 1\) mas distance. We investigate the ability of the planned ngEHT, and the ngEHT + ngVLA at 85 GHz, to image such a nuclear 'ring', and the associated jet, using a constructed model based on the current estimate of the ring size, and a scaled version of the best VLBA image of the M87 jet at 43 GHz. While the resolution does not improve due to the limit set by the diameter of the Earth, the ngEHT alone should provide both a higher fidelity image of the ring on scales \(\leq 0.1\) mas, and a good image of a more extended jet to \(\sim 1\) mas. Adding the ngVLA improves substantially the dynamic range (factor 3.5), as well as adds the ability to image structures on larger scales, in this case out to at least 5 mas, and potentially to much larger scales given the \(\sim 10^{5}\) range in spatial scales covered by the ngVLA itself. Both arrays provide good image fidelity (\(\leq 0.1\)) in the inner \(\sim 1\) mas, but the ngEHT-only image does not reproduce the outer jet well, or at all, with fidelity values greater than unity. The combined array reproduces much of the outer jet with good fidelity (\(\leq 0.3\)). Adding the ngVLA also decreases the susceptibility to antenna-based phase errors by a similar factor, and improves the ability for fringe fitting and subsequent phase and amplitude self-calibration. As for scales \(<100~\mu\)as, i.e. the ring itself, adding the ngVLA makes little change for very bright sources, where uniform weighting can be employed. But for faint sources, adding the ngVLA adds potentially an order-of-magnitude sensitivity improvement (Issaoun et al. 2023).
2301.00071
Spherical functions and Stolarsky's invariance principle
In the previous paper [25], Stolarsky's invariance principle, known for point distributions on the Euclidean spheres [27], has been extended to the real, complex, and quaternionic projective spaces and the octonionic projective plane. Geometric features of these spaces as well as their models in terms of Jordan algebras have been used very essentially in the proof. In the present paper, we give a new pure analytic proof of the extended Stolarsky's invariance principle, relying on the theory of spherical functions on compact symmetric Riemannian manifolds of rank one.
Maksim Skriganov
2022-12-30T23:28:41Z
http://arxiv.org/abs/2301.00071v2
# Spherical functions and Stolarsky's invariance principle ###### Abstract. In the previous paper [25], Stolarsky's invariance principle, known for point distributions on the Euclidean spheres [27], has been extended to the real, complex, and quaternionic projective spaces and the octonionic projective plane. Geometric features of these spaces as well as their models in terms of Jordan algebras have been used very essentially in the proof. In the present paper, a new pure analytic proof of the extended Stolarsky's invariance principle is given, relying on the theory of spherical functions on compact symmetric Riemannian manifolds of rank one. Key words and phrases:Geometry of distances, discrepancies, spherical functions, projective spaces, Jacobi polynomials 2010 Mathematics Subject Classification: 11K38, 22F30, 52C99 ## 1. Introduction and main results _1.1 Introduction._ In 1973 Kenneth B. Stolarsky [27] established the following remarkable formula for point distributions on the Euclidean spheres. Let \(S^{d}=\{x\in\mathbb{R}^{d+1}:\|x\|=1\}\) be the standard \(d\)-dimensional unit sphere in \(\mathbb{R}^{d+1}\) with the geodesic (great circle) metric \(\theta\) and the Lebesgue measure \(\mu\) normalized by \(\mu(S^{d})=1\). We write \(\mathcal{C}(y,t)=\{x\in S^{d}:(x,y)>t\}\) for the spherical cap of height \(t\in[-1,1]\) centered at \(y\in S^{d}\). Here we write \((\cdot,\cdot)\) and \(\|\cdot\|\) for the inner product and the Euclidean norm in \(\mathbb{R}^{d+1}\). For an \(N\)-point subset \(\mathcal{D}_{N}\subset S^{d}\), the spherical cap quadratic discrepancy is defined by \[\lambda^{cap}[\mathcal{D}_{N}]=\int_{-1}^{1}\int_{S^{d}}\left(\,\#\{\mathcal{C}(y,t)\cap\mathcal{D}_{N}\}-N\mu(\mathcal{C}(y,t))\,\right)^{2}\mathrm{d}\mu(y)\,\mathrm{d}t. \tag{1.1}\] We introduce the sum of pairwise Euclidean distances between points of \(\mathcal{D}_{N}\) \[\tau[\mathcal{D}_{N}]=\frac{1}{2}\sum\nolimits_{x_{1},x_{2}\in\mathcal{D}_{N}}\|x_{1}-x_{2}\|=\sum\nolimits_{x_{1},x_{2}\in\mathcal{D}_{N}}\sin\frac{1}{2}\theta(x_{1},x_{2}), \tag{1.2}\] and write \(\langle\tau\rangle\) for the average value of the Euclidean distance on \(S^{d}\), \[\langle\tau\rangle=\frac{1}{2}\iint_{S^{d}\times S^{d}}\|y_{1}-y_{2}\|\,\mathrm{d}\mu(y_{1})\,\mathrm{d}\mu(y_{2}). \tag{1.3}\] The study of the quantities (1.1) and (1.2) falls within the subjects of discrepancy theory and geometry of distances, see [1, 6] and references therein. It turns out that the quantities (1.1) and (1.2) are not independent and are intimately related by the following remarkable identity \[\gamma(S^{d})\lambda^{cap}[\mathcal{D}_{N}]+\tau[\mathcal{D}_{N}]=\langle\tau\rangle N^{2}, \tag{1.4}\] for an arbitrary \(N\)-point subset \(\mathcal{D}_{N}\subset S^{d}\). Here \(\gamma(S^{d})\) is a positive constant independent of \(\mathcal{D}_{N}\), \[\gamma(S^{d})=\frac{d\,\sqrt{\pi}\ \Gamma(d/2)}{2\,\Gamma((d+1)/2)}\,. \tag{1.5}\] The identity (1.4) is known in the literature as _Stolarsky's invariance principle_. Its original proof given in [27] has been simplified in [7, 10]. In our previous paper [25], Stolarsky's invariance principle (1.4) has been extended to the real \(\mathbb{R}P^{n}\), the complex \(\mathbb{C}P^{n}\), the quaternionic \(\mathbb{H}P^{n}\) projective spaces, and the octonionic \(\mathbb{O}P^{2}\) projective plane. Geometric features of such spaces as well as their models in terms of Jordan algebras have been used very essentially in the proof.
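As a quick numerical illustration of (1.4), the following sketch (a minimal check, not part of the original paper) verifies the identity on \(S^{2}\) for a random point set, estimating \(\lambda^{cap}\) by Monte Carlo over caps \(\mathcal{C}(y,t)\); here \(\gamma(S^{2})=2\) by (1.5), \(\mu(\mathcal{C}(y,t))=(1-t)/2\), and \(\langle\tau\rangle=2/3\), half the mean chord length \(4/3\) of \(S^{2}\). The two sides agree up to Monte Carlo error.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere_points(n):
    """Uniform random points on the unit sphere S^2."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

N = 20
D = sphere_points(N)

# tau[D_N]: half the sum of pairwise Euclidean distances, as in (1.2)
tau = 0.5 * np.linalg.norm(D[:, None, :] - D[None, :, :], axis=2).sum()

# Monte Carlo estimate of the cap discrepancy (1.1): sample caps C(y, t)
M = 200_000
y = sphere_points(M)
t = rng.uniform(-1.0, 1.0, size=M)
counts = (D @ y.T > t).sum(axis=0)        # #{C(y,t) cap D_N} for each sample
mu_cap = (1.0 - t) / 2.0                  # normalized cap measure on S^2
lam = 2.0 * np.mean((counts - N * mu_cap) ** 2)   # factor 2 = length of [-1, 1]

gamma_S2 = 2.0        # (1.5) with d = 2
avg_tau = 2.0 / 3.0   # (1.3) on S^2

print(gamma_S2 * lam + tau)   # left-hand side of (1.4)
print(avg_tau * N ** 2)       # right-hand side of (1.4)
```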
The aim of the present paper is to give an alternative pure analytic proof relying on the theory of spherical functions. _1.2 Discrepancies and metrics. \(L_{1}\)-invariance principles_. Let us consider Stolarsky's invariance principle in a broader context. Let \(\mathcal{M}\) be a compact metric measure space with a fixed metric \(\theta\) and a finite Borel measure \(\mu\), normalized, for convenience, by \[\operatorname{diam}(\mathcal{M},\theta)=\pi,\quad\mu(\mathcal{M})=1, \tag{1.6}\] where \(\operatorname{diam}(\mathcal{E},\rho)=\sup\{\rho(x_{1},x_{2}):x_{1},x_{2}\in\mathcal{E}\}\) denotes the diameter of a subset \(\mathcal{E}\subseteq\mathcal{M}\) with respect to a metric \(\rho\). We write \(\mathcal{B}(y,r)=\{x\in\mathcal{M}:\theta(x,y)<r\}\) for the ball of radius \(r\in\mathcal{I}\) centered at \(y\in\mathcal{M}\) and of volume \(v(y,r)=\mu(\mathcal{B}(y,r))\). Here \(\mathcal{I}=\{r=\theta(x_{1},x_{2}):x_{1},x_{2}\in\mathcal{M}\}\) denotes the set of all possible radii. If the space \(\mathcal{M}\) is connected, we have \(\mathcal{I}=[\,0,\pi\,]\). We consider distance-invariant metric spaces for which the volume of a ball \(v(r)=v(y,r)\) is independent of \(y\in\mathcal{M}\), see [20, p. 504]. The typical examples of distance-invariant spaces are homogeneous spaces \(\mathcal{M}=G/H\) with \(G\)-invariant metrics \(\theta\) and measures \(\mu\). For an \(N\)-point subset \(\mathcal{D}_{N}\subset\mathcal{M}\), the ball quadratic discrepancy is defined by \[\lambda[\xi,\mathcal{D}_{N}]=\int_{\mathcal{I}}\int_{\mathcal{M}}\left(\,\#\{\mathcal{B}(y,r)\cap\mathcal{D}_{N}\}-Nv(r)\right)^{2}\mathrm{d}\mu(y)\,\mathrm{d}\xi(r), \tag{1.7}\] where \(\xi\) is a finite measure on the set of radii \(\mathcal{I}\). Notice that for \(S^{d}\) spherical caps and balls are related by \(\mathcal{C}(y,t)=\mathcal{B}(y,r)\), \(t=\cos r\), and the discrepancies (1.1) and (1.7) are related by \(\lambda^{cap}[\mathcal{D}_{N}]=\lambda[\xi^{\natural},\mathcal{D}_{N}]\), where \(\mathrm{d}\xi^{\natural}(r)=\sin r\,\mathrm{d}r\), \(r\in\mathcal{I}=[0,\pi]\). The ball quadratic discrepancy (1.7) can be written in the form \[\lambda[\xi,\mathcal{D}_{N}]=\sum\nolimits_{x_{1},x_{2}\in\mathcal{D}_{N}}\lambda(\xi,x_{1},x_{2}) \tag{1.8}\] with the kernel \[\lambda(\xi,x_{1},x_{2})=\int_{\mathcal{I}}\int_{\mathcal{M}}\Lambda(\mathcal{B}(y,r),x_{1})\,\Lambda(\mathcal{B}(y,r),x_{2})\,\mathrm{d}\mu(y)\,\mathrm{d}\xi(r)\,, \tag{1.9}\] where \[\Lambda(\mathcal{B}(y,r),x)=\chi(\mathcal{B}(y,r),x)-v(r), \tag{1.10}\] and \(\chi(\mathcal{E},\cdot)\) denotes the characteristic function of a subset \(\mathcal{E}\subseteq\mathcal{M}\). For an arbitrary metric \(\rho\) on \(\mathcal{M}\) we introduce the sum of pairwise distances \[\rho[\mathcal{D}_{N}]=\sum\nolimits_{x_{1},x_{2}\in\mathcal{D}_{N}}\rho(x_{1},x_{2}) \tag{1.11}\] and the average value \[\langle\rho\rangle=\int_{\mathcal{M}\times\mathcal{M}}\rho(y_{1},y_{2})\,\mathrm{d}\mu(y_{1})\,\mathrm{d}\mu(y_{2}).
\tag{1.12}\] We introduce the following symmetric difference metrics on the space \(\mathcal{M}\) \[\theta^{\Delta}(\xi,y_{1},y_{2}) =\frac{1}{2}\int_{\mathcal{I}}\mu(\mathcal{B}(y_{1},r)\Delta\mathcal{B}(y_{2},r))\,\mathrm{d}\xi(r)\] \[=\frac{1}{2}\int_{\mathcal{I}}\int_{\mathcal{M}}\chi(\mathcal{B}(y_{1},r)\Delta\mathcal{B}(y_{2},r),y)\,\mathrm{d}\mu(y)\,\mathrm{d}\xi(r), \tag{1.13}\] where \(\mathcal{B}(y_{1},r)\Delta\mathcal{B}(y_{2},r)=\mathcal{B}(y_{1},r)\cup\mathcal{B}(y_{2},r)\setminus\mathcal{B}(y_{1},r)\cap\mathcal{B}(y_{2},r)\) is the symmetric difference of the balls \(\mathcal{B}(y_{1},r)\) and \(\mathcal{B}(y_{2},r)\). In line with the definitions (1.11) and (1.12), we put \[\theta^{\Delta}[\xi,\mathcal{D}_{N}]=\sum\nolimits_{x_{1},x_{2}\in\mathcal{D}_{N}}\theta^{\Delta}(\xi,x_{1},x_{2})\] and \[\langle\theta^{\Delta}(\xi)\rangle=\int_{\mathcal{M}\times\mathcal{M}}\theta^{\Delta}(\xi,y_{1},y_{2})\,\mathrm{d}\mu(y_{1})\,\mathrm{d}\mu(y_{2})\,.\] A direct calculation leads to the following result. **Proposition 1.1**.: _Let a compact metric measure space \(\mathcal{M}\) be distance-invariant, then we have_ \[\lambda(\xi,y_{1},y_{2})+\theta^{\Delta}(\xi,y_{1},y_{2})=\langle\theta^{\Delta}(\xi)\rangle. \tag{1.14}\] _In particular, we have the following invariance principle_ \[\lambda[\,\xi,\mathcal{D}_{N}\,]+\theta^{\Delta}[\,\xi,\mathcal{D}_{N}\,]=\langle\theta^{\Delta}(\xi)\rangle\,N^{2} \tag{1.15}\] _for an arbitrary \(N\)-point subset \(\mathcal{D}_{N}\subset\mathcal{M}\)._ **Proof.** In view of the symmetry of the metric \(\theta\), we have \[\chi(\mathcal{B}(x,r),y)=\chi(\mathcal{B}(y,r),x)=\chi_{r}(\theta(y,x))\,, \tag{1.16}\] where \(\chi_{r}(\cdot)\) is the characteristic function of the segment \([0,r]\), \(0\leqslant r\leqslant\pi\). Therefore, \[\chi(\mathcal{B}(y_{1},r)\Delta\mathcal{B}(y_{2},r),y)\] \[\qquad\qquad=\chi(\mathcal{B}(y_{1},r),y)+\chi(\mathcal{B}(y_{2},r),y)-2\chi(\mathcal{B}(y_{1},r)\cap\mathcal{B}(y_{2},r),y)\,,\] and \(\int_{\mathcal{M}}\chi(\mathcal{B}(x,r),y)\mathrm{d}\mu(x)=\int_{\mathcal{M}}\chi(\mathcal{B}(x,r),y)\mathrm{d}\mu(y)=v(r)\). Using these relations, we obtain \[\left.\begin{aligned} \lambda(\xi,x_{1},x_{2})=&\int_{\mathcal{I}}\left(\mu(\mathcal{B}(x_{1},r)\cap\mathcal{B}(x_{2},r))-v(r)^{2}\right)\mathrm{d}\xi(r)\,,\\ \theta^{\Delta}(\xi,y_{1},y_{2})=&\int_{\mathcal{I}}\left(v(r)-\mu(\mathcal{B}(y_{1},r)\cap\mathcal{B}(y_{2},r))\right)\mathrm{d}\xi(r)\,,\\ \langle\theta^{\Delta}(\xi)\rangle=&\int_{\mathcal{I}}\left(v(r)-v(r)^{2}\right)\mathrm{d}\xi(r)\,.\end{aligned}\right\} \tag{1.17}\] Summing the first two relations in (1.17), the terms with \(\mu(\mathcal{B}(y_{1},r)\cap\mathcal{B}(y_{2},r))\) cancel, and the third relation then gives (1.14). In the case of spheres \(S^{d}\), relations of the type (1.14) and (1.15) were given in [27]. Their extensions to more general metric measure spaces are given in [24, Eq. (1.30)] and [25, Proposition 1.1]. A probabilistic version of the invariance principle (1.15) is given in [23, Theorem 2.1]. Notice that \[\chi(\mathcal{B}(y_{1},r)\Delta\mathcal{B}(y_{2},r),y)=\left|\chi(\mathcal{B}(y_{1},r),y)-\chi(\mathcal{B}(y_{2},r),y)\right|. \tag{1.18}\] Therefore, \[\theta^{\Delta}(\xi,y_{1},y_{2})=\frac{1}{2}\int_{\mathcal{I}}\int_{\mathcal{M}}\left|\chi(\mathcal{B}(y_{1},r),y)-\chi(\mathcal{B}(y_{2},r),y)\right|\mathrm{d}\mu(y)\,\mathrm{d}\xi(r) \tag{1.19}\] is an \(L_{1}\)-metric.
Recall that a metric space \(\mathcal{M}\) with a metric \(\rho\) is called isometrically \(L_{q}\)-embeddable (\(q=1\) or \(2\)) if there exists a mapping \(\varphi:\mathcal{M}\ni x\to\varphi(x)\in L_{q}\), such that \(\rho(x_{1},x_{2})=\left\|\varphi(x_{1})-\varphi(x_{2})\right\|_{L_{q}}\) for all \(x_{1}\), \(x_{2}\in\mathcal{M}\). Notice that the \(L_{2}\)-embeddability is stronger and implies the \(L_{1}\)-embeddability, see [14, Sec. 6.3]. It follows from (1.19) that the space \(\mathcal{M}\) with the symmetric difference metrics \(\theta^{\Delta}(\xi)\) is isometrically \(L_{1}\)-embeddable by the formula \[\mathcal{M}\ni x\to\chi(\mathcal{B}(x,r),y)\in L_{1}(\mathcal{M}\times\mathcal{I})\,. \tag{1.20}\] The identity (1.15) can be called the \(L_{1}\)-invariance principle, while Stolarsky's invariance principle (1.4) should be called the \(L_{2}\)-invariance principle, because it involves the Euclidean metric. Identities of this type, involving correspondingly \(L_{1}\) and \(L_{2}\) metrics, could also be called weak and strong invariance principles. _1.3 \(L_{2}\)-invariance principles_. Recall the definition and necessary facts on two-point homogeneous spaces. Let \(G=G(\mathcal{M})\) be the group of isometries of a metric space \(\mathcal{M}\) with a metric \(\theta\), _i.e._ \(\theta(gx_{1},gx_{2})=\theta(x_{1},x_{2})\) for all \(x_{1}\), \(x_{2}\in\mathcal{M}\) and \(g\in G\). The space \(\mathcal{M}\) is called _two-point homogeneous_ if for any two pairs of points \(x_{1}\), \(x_{2}\) and \(y_{1}\), \(y_{2}\) with \(\theta(x_{1},x_{2})=\theta(y_{1},y_{2})\) there exists an isometry \(g\in G\) such that \(y_{1}=gx_{1}\), \(y_{2}=gx_{2}\). In this case, the group \(G\) is obviously transitive on \(\mathcal{M}\), and \(\mathcal{M}=G/H\) is a homogeneous space, where the subgroup \(H\subset G\) is the stabilizer of a point \(x_{0}\in\mathcal{M}\). Furthermore, the homogeneous space \(\mathcal{M}\) is symmetric, _i.e._ for any two points \(y_{1}\), \(y_{2}\in\mathcal{M}\) there exists an isometry \(g\in G\) such that \(gy_{1}=y_{2}\), \(gy_{2}=y_{1}\). There is a large number of two-point homogeneous spaces. For example, all Hamming spaces, known in coding theory, are two-point homogeneous. We will consider connected spaces. This assumption turns out to be a strong restriction. All compact connected two-point homogeneous spaces \(\mathcal{Q}=G/H\) are known. By Wang's theorem they are the following, see [18, 19, 22, 31, 32]: (i) The \(d\)-dimensional Euclidean spheres \(S^{d}=SO(d+1)/SO(d)\times\{1\}\), \(d\geqslant 2\), and \(S^{1}=O(2)/O(1)\times\{1\}\). (ii) The real projective spaces \(\mathbb{R}P^{n}=O(n+1)/O(n)\times O(1)\). (iii) The complex projective spaces \(\mathbb{C}P^{n}=U(n+1)/U(n)\times U(1)\). (iv) The quaternionic projective spaces \(\mathbb{H}P^{n}=Sp(n+1)/Sp(n)\times Sp(1)\). (v) The octonionic projective plane \(\mathbb{O}P^{2}=F_{4}/\operatorname{Spin}(9)\). Here we use the standard notation from the theory of Lie groups; in particular, \(F_{4}\) is one of the exceptional Lie groups in Cartan's classification. All these spaces are Riemannian symmetric manifolds of rank one. Geometrically, this means that all geodesic sub-manifolds in \(\mathcal{Q}\) are one-dimensional and coincide with geodesics. From the spectral standpoint, this also means that all operators on \(\mathcal{Q}\) commuting with the action of the group \(G\) are functions of the Laplace-Beltrami operator on \(\mathcal{Q}\), see [18, 19, 31, 32] for more details.
The spaces \(\mathbb{F}P^{n}\) as Riemannian manifolds have dimensions \[d=\dim_{\mathbb{R}}\mathbb{F}P^{n}=nd_{0},\quad d_{0}=\dim_{\mathbb{R}}\mathbb{F}, \tag{1.21}\] where \(d_{0}=1,2,4,8\) for \(\mathbb{F}=\mathbb{R}\), \(\mathbb{C}\), \(\mathbb{H}\), \(\mathbb{O}\), correspondingly. For the spheres \(S^{d}\) we put \(d_{0}=d\) by definition. Projective spaces of dimension \(d_{0}\) (\(n=1\)) are homeomorphic to the spheres \(S^{d_{0}}\): \(\mathbb{R}P^{1}\approx S^{1}\), \(\mathbb{C}P^{1}\approx S^{2}\), \(\mathbb{H}P^{1}\approx S^{4}\), \(\mathbb{O}P^{1}\approx S^{8}\). We can conveniently agree that \(d>d_{0}\) (\(n\geqslant 2\)) for projective spaces, while the equality \(d=d_{0}\) holds only for spheres. Under this convention, the dimensions \(d=nd_{0}\) and \(d_{0}\) define uniquely (up to homeomorphism) the corresponding homogeneous space, which we denote by \(\mathcal{Q}=\mathcal{Q}(d,d_{0})\). In what follows we always assume that \(n=2\) if \(\mathbb{F}=\mathbb{O}\), since projective spaces \(\mathbb{O}P^{n}\) over octonions do not exist for \(n>2\). We consider \(\mathcal{Q}\) as a metric measure space with the metric \(\theta\) and measure \(\mu\) proportional to the invariant Riemannian distance and measure on \(\mathcal{Q}\). The coefficients of proportionality are defined to satisfy (1.6). Any space \(\mathcal{Q}\) is distance-invariant, and the volume of balls in the space is given by \[v(r) =\kappa\int_{0}^{r}(\sin\frac{1}{2}u)^{d-1}(\cos\frac{1}{2}u)^{d_{0}-1}\,\mathrm{d}u,\quad r\in[\,0,\pi\,]\] \[=\kappa\,2^{1-d/2-d_{0}/2}\int_{\cos r}^{1}(1-t)^{\frac{d}{2}-1}\,(1+t)^{\frac{d_{0}}{2}-1}\,\mathrm{d}t, \tag{1.22}\] where \(\kappa=\kappa(d,d_{0})=B(d/2,d_{0}/2)^{-1}\); \(B(a,b)=\Gamma(a)\Gamma(b)/\Gamma(a+b)\) and \(\Gamma(a)\) are the beta and gamma functions. Equivalent forms of (1.22) can be found in the literature, see, for example, [16, pp. 177-178], [19, pp. 165-168], [20, pp. 508-510]. For even \(d_{0}\), the integrals (1.22) can be calculated explicitly, which gives convenient expressions for \(v(r)\) in the case of \(\mathbb{C}P^{n}\), \(\mathbb{H}P^{n}\) and \(\mathbb{O}P^{2}\), see, for example, [22]. For instance, for \(S^{2}\) (\(d=d_{0}=2\)) we have \(\kappa=B(1,1)^{-1}=1\), and (1.22) gives \(v(r)=\int_{0}^{r}\sin\frac{1}{2}u\,\cos\frac{1}{2}u\,\mathrm{d}u=(1-\cos r)/2\), the normalized area of a spherical cap. The chordal metric on the spaces \(\mathcal{Q}\) is defined by \[\tau(x_{1},x_{2})=\sin\frac{1}{2}\theta(x_{1},x_{2})\,,\quad x_{1},x_{2}\in\mathcal{Q}. \tag{1.23}\] The formula (1.23) defines a metric because the function \(\varphi(\theta)=\sin\theta/2\), \(0\leqslant\theta\leqslant\pi\), is concave, increasing, and \(\varphi(0)=0\), which implies the triangle inequality, see [14, Lemma 9.0.2]. For the sphere \(S^{d}\) we have \[\tau(x_{1},x_{2})=\sin\frac{1}{2}\theta(x_{1},x_{2})=\frac{1}{2}\,\|x_{1}-x_{2}\|,\quad x_{1},x_{2}\in S^{d}. \tag{1.24}\] **Lemma 1.1**.: _The space \(\mathcal{Q}=\mathcal{Q}(d,d_{0}),\,d=nd_{0},\) can be embedded into the unit sphere_ \[\Pi:\mathcal{Q}\ni x\to\Pi(x)\in S^{m-1}\subset\mathbb{R}^{m},\quad m=\frac{1}{2}(n+1)(d+2)-1, \tag{1.25}\] _such that_ \[\tau(x_{1},x_{2})=\left(\frac{d}{2(d+d_{0})}\right)^{1/2}\|\Pi(x_{1})-\Pi(x_{2})\|,\quad x_{1},x_{2}\in\mathcal{Q}, \tag{1.26}\] _where \(\|\cdot\|\) is the Euclidean norm in \(\mathbb{R}^{m}\)._ Hence, the metric \(\tau(x_{1},x_{2})\) is proportional to the Euclidean length of the segment joining the corresponding points \(\Pi(x_{1})\) and \(\Pi(x_{2})\) on the unit sphere. The chordal metric \(\tau\) on the complex projective space \(\mathbb{C}P^{n}\) is known as the Fubini-Study metric.
An interesting discussion of the properties of chordal metrics for projective spaces can be found in the papers [12, 13]. Lemma 1.1 will be proved in Section 2, and the embedding (1.25) will be described explicitly in terms of spherical functions on the space \(\mathcal{Q}\). Note that the embedding (1.25) can be described in different ways, see, for example, [25, 29]. The following general result has been established in [25, Theorems 1.1 and 1.2]. **Theorem 1.1**.: _For each space \(\mathcal{Q}=\mathcal{Q}(d,d_{0}),\,d=nd_{0}\), we have the equality_ \[\tau(x_{1},x_{2})=\gamma(\mathcal{Q})\;\theta^{\Delta}(\xi^{\natural},x_{1},x_{2}), \tag{1.27}\] _where \(\mathrm{d}\xi^{\natural}(r)=\sin r\,\mathrm{d}r\), \(r\in[0,\pi]\), and_ \[\gamma(\mathcal{Q})=\frac{\sqrt{\pi}}{4}\left(d+d_{0}\right)\frac{\Gamma(d_{0}/2)}{\Gamma((d_{0}+1)/2)}=\frac{n+1}{2}\,\gamma(S^{d_{0}})\,, \tag{1.28}\] _where \(\gamma(S^{d_{0}})\) is defined by (1.5). Therefore,_ \[\left.\begin{aligned} \gamma(\mathbb{R}P^{n})&=\frac{n+1}{2}\,\gamma(S^{1})=\frac{\pi}{4}\left(n+1\right),\\ \gamma(\mathbb{C}P^{n})&=\frac{n+1}{2}\,\gamma(S^{2})=n+1\,,\\ \gamma(\mathbb{H}P^{n})&=\frac{n+1}{2}\,\gamma(S^{4})=\frac{4}{3}\left(n+1\right),\\ \gamma(\mathbb{O}P^{2})&=\ \frac{3}{2}\,\gamma(S^{8})=\frac{192}{35}\,.\end{aligned}\right\}\] Comparing Theorem 1.1 with Proposition 1.1, we arrive at the following. **Corollary 1.1**.: _We have the following \(L_{2}\)-invariance principle_ \[\gamma(\mathcal{Q})\,\lambda[\xi^{\natural},\mathcal{D}_{N}]+\tau[\mathcal{D}_{N}]=\langle\tau\rangle N^{2} \tag{1.29}\] _for an arbitrary \(N\)-point subset \(\mathcal{D}_{N}\subset\mathcal{Q}\)._ The constant \(\gamma(\mathcal{Q})\) has the following geometric interpretation \[\gamma(\mathcal{Q})=\frac{\langle\tau\rangle}{\langle\theta^{\Delta}(\xi^{\natural})\rangle}=\frac{\mathrm{diam}(\mathcal{Q},\tau)}{\mathrm{diam}(\mathcal{Q},\,\theta^{\Delta}(\xi^{\natural}))}\,. \tag{1.30}\] Indeed, it suffices to calculate the average values (1.12) of both metrics in (1.27) to obtain the first equality in (1.30). Similarly, writing (1.27) for any pair of antipodal points \(x_{1}\), \(x_{2}\), \(\theta(x_{1},x_{2})=\pi\), we obtain the second equality in (1.30). The average value \(\langle\tau\rangle\) of the chordal metric \(\tau\) can be easily calculated with the help of the formulas (1.12) and (1.22): \[\langle\tau\rangle=B(d/2,d_{0}/2)^{-1}\,B((d+1)/2,d_{0}/2)\,. \tag{1.31}\] In the case of spheres \(S^{d}\), the identity (1.29) coincides with (1.4). The identity (1.29) can be thought of as an extension of Stolarsky's invariance principle to all projective spaces. Applications of \(L_{1}\)- and \(L_{2}\)-invariance principles and similar identities to discrepancy theory, the geometry of distances, and information and probability theory have been given in many papers, see, for example, [1, 3, 4, 5, 6, 7, 8, 9, 10, 23, 24, 25, 27]. The above discussion raises the following _open questions_: - Are there measures \(\xi\) on the set of radii for spaces \(\mathcal{Q}\) (for spheres \(S^{d}\), say), other than the measure \(\xi^{\natural}\), such that the corresponding symmetric difference metrics \(\theta^{\Delta}(\xi)\) are \(L_{2}\)-metrics? - Are there compact metric measure spaces other than the spheres \(S^{d}\) and projective spaces \(\mathbb{F}P^{n}\) for which the \(L_{2}\)-invariance principle is also true? _1.4 Proof of Theorem 1.2._ In the present paper we use the theory of spherical functions to prove the following result.
**Theorem 1.2**.: _The equality (1.27) is equivalent to the following series of formulas for Jacobi polynomials_ \[\int_{-1}^{1}\left(P_{l}^{(d/2,d_{0}/2)}(t)\right)^{2}\,(1-t)^{d}\,(1+t)^{d_{0}}\,\,\mathrm{d}t\] \[=\,\frac{2^{d+d_{0}+1}\,(1/2)_{l}}{(l!)^{2}}\,B(d+1,d_{0}+1)\,T_{l}(d/2,d_{0}/2) \tag{1.32}\] _for all \(l\geqslant 0\), where_ \[T_{l}(d/2,d_{0}/2)=\frac{\Gamma(d/2+l+1)\,\Gamma(d_{0}/2+l+1)\,\Gamma(d/2+d_{0}/2+3/2)}{\Gamma(d/2+1)\,\Gamma(d_{0}/2+1)\,\Gamma(d/2+d_{0}/2+3/2+l)}\,. \tag{1.33}\] Here \(P_{l}^{(\alpha,\beta)}(t),\,t\in[-1,1],\,\alpha>-1,\,\beta>-1\), are the standard Jacobi polynomials of degree \(l\) normalized by \[P_{l}^{(\alpha,\beta)}(1)=\binom{\alpha+l}{l}=\frac{\Gamma(\alpha+l+1)}{\Gamma(l+1)\Gamma(\alpha+1)}\,. \tag{1.34}\] The polynomials \(P_{l}^{(\alpha,\beta)}\) can be given by Rodrigues' formula \[P_{l}^{(\alpha,\beta)}(t)=\frac{(-1)^{l}}{2^{l}l!}(1-t)^{-\alpha}(1+t)^{-\beta}\frac{\mathrm{d}^{l}}{\mathrm{d}t^{l}}\left\{(1-t)^{l+\alpha}(1+t)^{l+\beta}\right\}. \tag{1.35}\] Notice that \(|P_{l}^{(\alpha,\beta)}(t)|\leqslant P_{l}^{(\alpha,\beta)}(1)\) for \(t\in[-1,1]\). Recall that \(\{P_{l}^{(\alpha,\beta)},l\geqslant 0\}\) form a complete orthogonal system in \(L_{2}\) on the segment \([-1,1]\) with the weight \((1-t)^{\alpha}(1+t)^{\beta}\) and the following orthogonality relations \[\int\limits_{0}^{\pi}P_{l}^{(\alpha,\beta)}(\cos u)P_{l^{\prime}}^{(\alpha,\beta)}(\cos u)(\sin\frac{1}{2}u)^{2\alpha+1}(\cos\frac{1}{2}u)^{2\beta+1}\,du \tag{1.36}\] \[=2^{-\alpha-\beta-1}\int\limits_{-1}^{1}P_{l}^{(\alpha,\beta)}(t)P_{l^{\prime}}^{(\alpha,\beta)}(t)(1-t)^{\alpha}(1+t)^{\beta}\,dt=M_{l}(\alpha,\beta)^{-1}\delta_{ll^{\prime}}, \tag{1.37}\] where \(\delta_{ll^{\prime}}\) is Kronecker's symbol, \(M_{0}=B(\alpha+1,\beta+1)^{-1}\) and \[M_{l}(\alpha,\beta)=(2l+\alpha+\beta+1)\frac{\Gamma(l+1)\Gamma(l+\alpha+\beta+1)}{\Gamma(l+\alpha+1)\Gamma(l+\beta+1)},\quad l\geqslant 1. \tag{1.38}\] All necessary facts about Jacobi polynomials can be found in [2, 28]. We also use the notation \[(a)_{0}=1,\quad(a)_{k}=a(a+1)\ldots(a+k-1)=\frac{\Gamma(a+k)}{\Gamma(a)} \tag{1.39}\] for the rising factorial powers and \[\langle a\rangle_{0}=1,\quad\langle a\rangle_{k}=a(a-1)\ldots(a-k+1)=(-1)^{k}\,(-a)_{k} \tag{1.40}\] for the falling factorial powers. Theorem 1.2 reduces the proof of Theorem 1.1 to the proof of the formulas (1.32). Perhaps such formulas are known, but I could not find them in the literature. Not much is known about the integrals \(\int_{-1}^{1}\left(P_{l}^{(\alpha,\beta)}(t)\right)^{2}(1-t)^{\nu}(1+t)^{\sigma}\,\mathrm{d}t\) for general Jacobi polynomials \(P^{(\alpha,\beta)}\). Only for the spheres \(S^{d}\) do the Jacobi polynomials \(P_{l}^{(d/2,d/2)}\) coincide (up to constant factors) with Gegenbauer polynomials, and in this case very general formulas for such integrals are given in [11]. In the present paper we will prove the following statement.
**Lemma 1.2**.: _For all \(l\geqslant 0\), \(\operatorname{Re}\alpha>-1/2\) and \(\operatorname{Re}\beta>-1/2\), we have_ \[\int_{-1}^{1}\left(P_{l}^{(\alpha,\beta)}(t)\right)^{2} (1-t)^{2\alpha}(1+t)^{2\beta}\,\mathrm{d}t\] \[=\,\frac{2^{2\alpha+2\beta+1}\,(1/2)_{l}}{(l!)^{2}}\,B(2\alpha+1,2\beta+1)\,T_{l}(\alpha,\beta), \tag{1.41}\] _where_ \[T_{l}(\alpha,\beta) =\,\frac{(\alpha+1)_{l}\,(\beta+1)_{l}}{(\alpha+\beta+3/2)_{l}}\] \[=\,\frac{\Gamma(\alpha+l+1)\,\Gamma(\beta+l+1)\,\Gamma(\alpha+\beta+3/2)}{\Gamma(\alpha+1)\,\Gamma(\beta+1)\,\Gamma(\alpha+\beta+3/2+l)} \tag{1.42}\] _is a rational function of \(\alpha\) and \(\beta\)._ The integral (1.41) converges for \(\operatorname{Re}\alpha>-1/2\) and \(\operatorname{Re}\beta>-1/2\) and represents in this region a holomorphic function of two complex variables. The equality (1.41) defines an analytic continuation of this integral to all \(\alpha\in\mathbb{C}\) and \(\beta\in\mathbb{C}\). For \(\alpha=d/2\) and \(\beta=d_{0}/2\), the equality (1.41) coincides with (1.32). This proves Theorem 1.1. Lemma 1.2 will be proved in Section 3. The crucial point in the proof is Watson's theorem on the value of the hypergeometric series \({}_{3}F_{2}(1)\). ## 2. Spherical functions. Proofs of Lemma 1.1 and Theorem 1.2 ### Invariant kernels and spherical functions The general theory of spherical functions on homogeneous spaces can be found in [18, 19, 30, 32]. The homogeneous spaces \(\mathcal{Q}\) of interest to us belong to the class of so-called commutative spaces and symmetric Gelfand pairs. In this case the theory becomes significantly simpler. For Euclidean spheres \(S^{d}\) this theory is well known, see, for example, [15, 21]. However, the theory of spherical functions on general spaces \(\mathcal{Q}\) is probably not commonly known. In this section we describe the basic facts about spherical functions on spaces \(\mathcal{Q}\) in a form convenient for our purposes. Let us consider the quasi-regular representation \(U(g)f(x)=f(g^{-1}x)\), \(f\in L_{2}(\mathcal{Q})\), \(x\in\mathcal{Q}\), \(g\in G\), and its decomposition into the orthogonal sum \[U(g)=\widehat{\bigoplus}_{l\geqslant 0}\;U_{l}(g),\quad L_{2}(\mathcal{Q})=\widehat{\bigoplus}_{l\geqslant 0}\;V_{l}\,, \tag{2.1}\] of irreducible representations \(U_{l}(g)\) in mutually orthogonal subspaces \(V_{l}\) of dimensions \(m_{l}<\infty\). Let \(\mathcal{A}\) denote the algebra of Hilbert-Schmidt operators \(K\) in \(L_{2}(\mathcal{Q})\) commuting with the action of the group \(G\): \(KU(g)=U(g)K,\ g\in G\). Each \(K\in\mathcal{A}\) is an integral operator \(Kf(x)=\int_{\mathcal{Q}}\,K(x,y)\,f(y)\,\mathrm{d}\mu(y)\) with the _invariant kernel_ \[K(gx_{1},gx_{2})=K(x_{1},x_{2}),\ x_{1},x_{2}\in\mathcal{Q},\ g\in G, \tag{2.2}\] and the Hilbert-Schmidt norm \(\|K\|_{HS}\) defined by \[\|K\|_{HS}^{2}= \,\mathrm{Tr}\ KK^{*}\] \[= \int_{\mathcal{Q}\times\mathcal{Q}}\,|K(x,y)|^{2}\,\mathrm{d}\mu(x)\mathrm{d}\mu(y)=\int_{\mathcal{Q}}\,|K(x,y)|^{2}\,\mathrm{d}\mu(x)<\infty, \tag{2.3}\] where \(\mathrm{Tr}\) denotes the trace of an operator, and the second integral is independent of \(y\) in view of (2.2). Since the space \(\mathcal{Q}\) is two-point homogeneous, the condition (2.2) implies that the kernel \(K(x_{1},x_{2})\) depends only on the distance \(\theta(x_{1},x_{2})\), and can be written as \[K(x_{1},x_{2})=K(\theta(x_{1},x_{2}))=k(\cos\theta(x_{1},x_{2})),\ x_{1},x_{2}\in\mathcal{Q}, \tag{2.4}\] with functions \(K(u),\,u\in[0,\pi]\), and \(k(t),\,t\in[-1,1]\).
The cosine appears here for convenience in subsequent calculations. It is useful to keep in mind that the formula (2.4) can also be written as \[K(x_{1},x_{2})=K(\theta(x,x_{0}))=k(\cos\theta(x,x_{0})), \tag{2.5}\] where \(x_{0}\in\mathcal{Q}\) is the fixed point of the subgroup \(H,\,x_{1}=g_{1}x_{0},\,x_{2}=g_{2}x_{0},\,x=g_{2}^{-1}g_{1}x_{0},\,g_{1},\,g_{2}\in G\), and \(K(hx,x_{0})=K(x,x_{0}),\,h\in H\). Therefore, invariant kernels can be thought of as functions on the double cosets \(H\setminus G/H\). In terms of the functions \(K(\cdot)\) and \(k(\cdot)\), the Hilbert-Schmidt norm (2.3) takes the form \[\|K\|_{HS}^{2}=\int_{0}^{\pi}\,|K(u)|^{2}\,\mathrm{d}v(u)\] \[= \kappa\int_{0}^{\pi}\,|K(u)|^{2}(\sin\frac{1}{2}u)^{d-1}(\cos\frac{1}{2}u)^{d_{0}-1}\,\mathrm{d}u\] \[= \kappa\,2^{1-d/2-d_{0}/2}\,\int_{-1}^{1}\,|k(z)|^{2}\,(1-z)^{\frac{d}{2}-1}\,(1+z)^{\frac{d_{0}}{2}-1}\,\mathrm{d}z, \tag{2.6}\] where \(v(\cdot)\) is the volume function (1.22). We conclude from (2.2) and (2.4) that for \(K\in\mathcal{A}\) its kernel is symmetric, \(K(x_{1},x_{2})=K(x_{2},x_{1})\), the value \(K(x,x)=k(1)\) is independent of \(x\in\mathcal{Q}\), and if an operator \(K\) is self-adjoint, then its kernel is real-valued. It follows from (2.2) and (2.4) that the algebra \(\mathcal{A}\) is commutative. Indeed, \[(K_{1}K_{2})(x_{1},x_{2})=\int_{\mathcal{Q}}\,K_{1}(x_{1},x)K_{2}(x,x_{2})\mathrm{d}\mu(x)\] \[= \int_{\mathcal{Q}}\,K_{2}(x_{2},x)K_{1}(x,x_{1})\mathrm{d}\mu(x)=(K_{2}K_{1})(x_{2},x_{1})=(K_{2}K_{1})(x_{1},x_{2}).\] Therefore, the decomposition (2.1) is multiplicity-free, that is, any two representations \(U_{l}\) and \(U_{l^{\prime}}\), \(l\neq l^{\prime}\), are non-equivalent, because otherwise the algebra \(\mathcal{A}\) could not be commutative. Let \(P_{l}\) denote the orthogonal projectors in \(L_{2}(\mathcal{Q})\) onto the subspaces \(V_{l}\) in (2.1), \[P_{l}^{*}=P_{l}\,,\quad P_{l}\,P_{l^{\prime}}=\delta_{l,l^{\prime}}\,P_{l}\,,\quad\sum\nolimits_{l\geqslant 0}P_{l}=I\,, \tag{2.7}\] where \(\delta_{l,l^{\prime}}\) is Kronecker's symbol and \(I\) is the identity operator in \(L_{2}(\mathcal{Q})\). By Schur's lemma, we have for \(K\in\mathcal{A}\) \[P_{l}\,K\,P_{l^{\prime}}=\delta_{l,l^{\prime}}\,c_{l}(K)\,P_{l}\,, \tag{2.8}\] where \(c_{l}(K)\) is a constant. Calculating the trace of both sides of the equality (2.8), we find \(c_{l}(K)=m_{l}^{-1}\operatorname{Tr}KP_{l}\). Therefore, we have the expansions \[K=\sum\nolimits_{l,l^{\prime}\geqslant 0}P_{l}\,K\,P_{l^{\prime}}=\sum\nolimits_{l\geqslant 0}c_{l}(K)\,P_{l}, \tag{2.9}\] and \[K_{1}\,K_{2}=\sum\nolimits_{l\geqslant 0}c_{l}(K_{1})\,c_{l}(K_{2})\,P_{l}, \tag{2.10}\] for \(K_{1},K_{2}\in\mathcal{A}\). It follows from (2.10) with \(K_{1}=K\) and \(K_{2}=K^{*}\) that \[||K||_{HS}^{2}=\sum\nolimits_{l\geqslant 0}m_{l}\,|c_{l}(K)|^{2}\,<\infty. \tag{2.11}\] The equality (2.11) implies that the series (2.9) converges in the norm (2.3); the series (2.10) also converges in the norm (2.3), since the product \(K_{1}K_{2}\) of two Hilbert-Schmidt operators belongs to the subclass of nuclear operators. Since \(V_{l}\) are invariant subspaces, \(P_{l}\in\mathcal{A}\), their kernels \(P_{l}(\cdot,\cdot)\) are symmetric and real-valued, and can be written as follows \[P_{l}(x_{1},x_{2})=p_{l}(\cos\theta(x_{1},x_{2}))=\sum\nolimits_{1}^{m_{l}}\,\psi_{l,j}(x_{1})\,\psi_{l,j}(x_{2}), \tag{2.12}\] where \(\{\psi_{l,j}(\cdot)\}_{1}^{m_{l}}\) is an orthonormal and real-valued basis in \(V_{l}\).
Hence, the subspaces \(V_{l}\) and the irreducible representations \(U_{l}\) in (2.1) can be thought of as defined over the field of reals; this means that all representations \(U_{l}\) in (2.1) are of real type. Using (2.12), we obtain the formulas \[\|P_{l}\|_{HS}^{2}=m_{l},\quad\operatorname{Tr}P_{l}=p_{l}(1)=m_{l}>0. \tag{2.13}\] Furthermore, \[P_{l}(x,x)=p_{l}(1)=\sum\nolimits_{1}^{m_{l}}\,\psi_{l,j}(x)^{2} \tag{2.14}\] is independent of \(x\in\mathcal{Q}\). Applying the Cauchy-Schwarz inequality to (2.12) and taking (2.14) into account, we obtain the bound \[|P_{l}(x_{1},x_{2})|=|p_{l}(\cos\theta(x_{1},x_{2}))|\leqslant p_{l}(1). \tag{2.15}\] It follows from (2.14) and (2.13) that the mapping \[\Pi_{l}:\,\mathcal{Q}\ni x\to(m_{l}^{-1/2}\psi_{l,1}(x)\ldots m_{l}^{-1/2}\psi_{l,m_{l}}(x))\in S^{m_{l}-1}\subset\mathbb{R}^{m_{l}} \tag{2.16}\] defines an embedding of the space \(\mathcal{Q}\) into the unit sphere in \(\mathbb{R}^{m_{l}}\). By definition, the (zonal) _spherical functions_ are the kernels of the operators \(\Phi_{l}=m_{l}^{-1}P_{l}\): \[\Phi_{l}(x_{1},x_{2})=\phi_{l}(\cos\theta(x_{1},x_{2}))=\frac{p_{l}(\cos\theta(x_{1},x_{2}))}{p_{l}(1)}. \tag{2.17}\] From (2.14) and (2.17) we conclude that \(|\phi_{l}(\cos\theta(x_{1},x_{2}))|\leqslant\phi_{l}(1)=1\). Comparing (2.13), (2.14) and (2.17), we find the formulas for the dimensions \[m_{l} =||\Phi_{l}||_{HS}^{-2}=\left(\kappa\int_{0}^{\pi}\,|\phi_{l}(\cos u)|^{2}(\sin\frac{1}{2}u)^{d-1}(\cos\frac{1}{2}u)^{d_{0}-1}\,\mathrm{d}u\right)^{-1}\] \[=\left(\kappa\,2^{1-d/2-d_{0}/2}\int_{-1}^{1}\,|\phi_{l}(t)|^{2}\,(1-t)^{\frac{d}{2}-1}\,(1+t)^{\frac{d_{0}}{2}-1}\,\,\mathrm{d}t\right)^{-1}. \tag{2.18}\] In terms of spherical functions the formulas (2.9), (2.10) and (2.11) take the form \[K(x_{1},x_{2})=k(\cos\theta(x_{1},x_{2}))=\sum\nolimits_{l\geqslant 0}c_{l}(K)\,m_{l}\,\phi_{l}(\cos\theta(x_{1},x_{2})), \tag{2.19}\] where \[c_{l}(K) =\operatorname{Tr}K\Phi_{l}=\int_{Q}K(x,x_{0})\,\Phi_{l}(x,x_{0})\,\mathrm{d}\mu(x)\] \[=\kappa\int_{0}^{\pi}\,K(u)\,\phi_{l}(\cos u)\,(\sin\frac{1}{2}u)^{d-1}(\cos\frac{1}{2}u)^{d_{0}-1}\,\mathrm{d}u\] \[=\kappa\,2^{1-d/2-d_{0}/2}\int_{-1}^{1}\,k(t)\,\phi_{l}(t)\,(1-t)^{\frac{d}{2}-1}\,(1+t)^{\frac{d_{0}}{2}-1}\,\,\mathrm{d}t, \tag{2.20}\] and \[(K_{1}K_{2})(x_{1},x_{2})= \int_{Q}K_{1}(\theta(x_{1},y))\,K_{2}(\theta(y,x_{2}))\,\mathrm{d}\mu(y)\] \[= \int_{Q}k_{1}(\cos\theta(x_{1},y))\,k_{2}(\cos\theta(y,x_{2}))\,\mathrm{d}\mu(y)\] \[= \sum\nolimits_{l\geqslant 0}c_{l}(K_{1})\,c_{l}(K_{2})\,m_{l}\,\phi_{l}(\cos\theta(x_{1},x_{2})), \tag{2.21}\] and \[||K||_{HS}^{2}=\sum\nolimits_{l\geqslant 0}m_{l}\,|c_{l}(K)|^{2}<\infty\,. \tag{2.22}\] The facts listed above are valid for all compact two-point homogeneous spaces. Since the spaces \(Q\) are also symmetric Riemannian manifolds of rank one, the invariant kernels \(p_{l}(\cos\theta(x,x_{0}))\) are eigenfunctions of the radial part of the Laplace-Beltrami operator on \(Q\) (in the spherical coordinates centered at \(x_{0}\)). This leads to the following explicit formula for the spherical functions \[\Phi_{l}(x_{1},x_{2})=\phi_{l}(\cos\theta(x_{1},x_{2}))=\frac{P_{l}^{(\frac{d}{2}-1,\frac{d_{0}}{2}-1)}(\cos\theta(x_{1},x_{2}))}{P_{l}^{(\frac{d}{2}-1,\frac{d_{0}}{2}-1)}(1)},\quad l\geqslant 0, \tag{2.23}\] where \(P_{l}^{(\alpha,\beta)}(t),\,t\in[-1,1]\), are the Jacobi polynomials (1.35). For more details, we refer to [16, p. 178], [19, Chap. V, Theorem 4.5], [20, pp. 512-514, 543-544], [30, Chapters 2 and 17]; [32, Theorem 11.4.21].
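As an aside (our own numerical illustration, assuming SciPy), the dimension formula (2.18) together with (2.23) can be tested directly: the computed \(m_{l}\) must be positive integers. For \(\mathcal{Q}(4,2)=\mathbb{C}P^{2}\) the formula (2.26) below gives \(m_{l}=(l+1)^{3}\), and quadrature reproduces exactly these values:

```python
# Dimensions m_l recovered numerically from (2.18), with phi_l given by the
# Jacobi-polynomial formula (2.23).  For Q(4,2) = CP^2 one obtains (l+1)^3.
import numpy as np
from scipy.integrate import quad
from scipy.special import beta, eval_jacobi

def m_dim(l, d, d0):
    kappa = 1.0 / beta(d / 2, d0 / 2)
    norm = eval_jacobi(l, d / 2 - 1, d0 / 2 - 1, 1.0)
    phi = lambda t: eval_jacobi(l, d / 2 - 1, d0 / 2 - 1, t) / norm
    weight = lambda t: (1 - t) ** (d / 2 - 1) * (1 + t) ** (d0 / 2 - 1)
    value, _ = quad(lambda t: phi(t) ** 2 * weight(t), -1.0, 1.0)
    return 1.0 / (kappa * 2 ** (1 - d / 2 - d0 / 2) * value)

for l in range(5):
    print(l, m_dim(l, 4, 2))   # 1.0, 8.0, 27.0, 64.0, 125.0
```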
From (1.34) and (1.38) we obtain \[P_{l}^{(\frac{d}{2}-1,\frac{d_{0}}{2}-1)}(1)=\frac{\Gamma(l+d/2)}{\Gamma(l+1)\Gamma(d/2)}\,, \tag{2.24}\] and \(M_{l}=M_{l}(\frac{d}{2}-1,\frac{d_{0}}{2}-1)\), where \(M_{0}=B(d/2,d_{0}/2)^{-1}\) and \[M_{l}=(2l-1+(d+d_{0})/2)\frac{\Gamma(l+1)\Gamma(l-1+(d+d_{0})/2)}{\Gamma(l+d/2)\Gamma(l+d_{0}/2)}\,,\ \ l\geqslant 1. \tag{2.25}\] Substituting (2.23) into (2.18) and using (2.24) and (2.25), we obtain the following explicit formulas for the dimensions of the irreducible representations (2.1): \(m_{0}=1\) and \[m_{l}=M_{l}\,\kappa^{-1}\,(P_{l}^{(\frac{d}{2}-1,\frac{d_{0}}{2}-1)}(1))^{2}\] \[= (2l-1+(d+d_{0})/2)\frac{\Gamma(l-1+(d+d_{0})/2)\Gamma(l+d/2)\Gamma(d_{0}/2)}{\Gamma((d+d_{0})/2)\Gamma(l+d_{0}/2)\Gamma(d/2)\Gamma(l+1)},\quad l\geqslant 1. \tag{2.26}\] ### Spherical functions and metrics In the following Lemma 2.1 we describe the spherical function expansions for the chordal and symmetric difference metrics. Originally these expansions were established in [25, Lemma 4.1] and [24, Theorem 4.1(ii)]. For completeness, we give a short proof of these results. **Lemma 2.1**.: _(i) For the chordal metric (1.23), we have_ \[\tau(x_{1},x_{2})=\frac{1}{2}\sum\nolimits_{l\geqslant 1}\,M_{l}\,C_{l}\,\left[\,1-\phi_{l}(\cos\theta(x_{1},x_{2}))\,\right], \tag{2.27}\] _where_ \[C_{l}=B((d+1)/2,l+d_{0}/2)\,\Gamma(l+1)^{-1}\,(1/2)_{l-1}\,P_{l}^{(\frac{d}{2}-1,\frac{d_{0}}{2}-1)}(1)\,. \tag{2.28}\] _(ii) For the symmetric difference metrics (1.13), we have_ \[\theta^{\Delta}(\xi,x_{1},x_{2})=\kappa\,\sum\nolimits_{l\geqslant 1}\,l^{-2}\,M_{l}\,A_{l}(\xi)\,\left[\,1-\phi_{l}(\cos\theta(x_{1},x_{2}))\,\right], \tag{2.29}\] _where_ \[A_{l}(\xi)=\int_{0}^{\pi}\left\{P_{l-1}^{(\frac{d}{2},\frac{d_{0}}{2})}(\cos r)\right\}^{2}(\sin\frac{1}{2}r)^{2d}(\cos\frac{1}{2}r)^{2d_{0}}\,\mathrm{d}\xi(r). \tag{2.30}\] _The series (2.27) and (2.29) converge absolutely and uniformly._ Proof.: _(i)_ Let us consider the expansion (2.19) for the chordal metric (1.23). Since \[\tau(x_{1},x_{2})=\sin\frac{1}{2}\theta(x_{1},x_{2})=\sqrt{\frac{1-\cos\theta(x_{1},x_{2})}{2}}\,, \tag{2.31}\] we put \(k(t)=\sqrt{(1-t)/2}\) in the formula (2.20). This gives \[c_{l}(K)=\kappa\,2^{1/2-d/2-d_{0}/2}\left(P_{l}^{(\frac{d}{2}-1,\frac{d_{0}}{2}-1)}(1)\right)^{-1}\,I_{l}(K)\,, \tag{2.32}\] where \[I_{l}(K) =\int_{-1}^{1}P_{l}^{(\frac{d}{2}-1,\frac{d_{0}}{2}-1)}(t)\,(1-t)^{\frac{d}{2}-\frac{1}{2}}\,(1+t)^{\frac{d_{0}}{2}-1}\,\mathrm{d}t\] \[=2^{d/2+d_{0}/2-1/2}(l!)^{-1}\,(-1/2)_{l}\,B(d/2+1/2,d_{0}/2+l). \tag{2.33}\] The formula (2.33) can be found in the tables [17, Sec.7.391, Eq.(4)] or derived directly, using Rodrigues' formula (1.35) and integrating \(l\) times by parts. Notice that the symbol \((-1/2)_{l}\) in (2.33) takes the values \((-1/2)_{0}=1\) and \((-1/2)_{l}=-\frac{1}{2}\,(1/2)_{l-1}\) for \(l\geqslant 1\). Using (2.33), (2.32), (2.26) and (2.25), we find that \(m_{l}\,c_{l}(K)=-\frac{1}{2}\,M_{l}\,C_{l}\) for \(l\geqslant 1\), where \(C_{l}\) are given in (2.28). Therefore, the expansion (2.19) takes the form \[\tau(x_{1},x_{2})=c_{0}-\frac{1}{2}\sum\nolimits_{l\geqslant 1}\,M_{l}\,C_{l}\,\phi_{l}(\cos\theta(x_{1},x_{2}))\,. \tag{2.34}\] Putting \(x_{1}=x_{2}\) here, we obtain \(c_{0}=\frac{1}{2}\sum\nolimits_{l\geqslant 1}\,M_{l}\,C_{l}\). Substituting this equality into (2.34), we arrive at the expansion (2.27). Applying Stirling's approximation to the gamma functions in \(M_{l}\) and \(C_{l}\), we observe that the terms in (2.27) are of the order \(O(l^{-2})\). Therefore, the series (2.27) converges absolutely and uniformly.
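The expansion (2.27) is easy to test numerically (our sketch, assuming SciPy; not part of the proof): truncating the series reproduces \(\tau=\sin\frac{1}{2}\theta\).

```python
# Partial sums of the chordal-metric expansion (2.27) on Q(4,2) = CP^2:
# tau = (1/2) * sum_{l>=1} M_l * C_l * (1 - phi_l(cos(theta))).
import numpy as np
from math import gamma
from scipy.special import beta, eval_jacobi, poch

d, d0 = 4, 2
a, b = d / 2 - 1, d0 / 2 - 1

def M(l):    # normalizing constants (2.25)
    return ((2 * l - 1 + (d + d0) / 2) * gamma(l + 1)
            * gamma(l - 1 + (d + d0) / 2) / (gamma(l + d / 2) * gamma(l + d0 / 2)))

def C(l):    # coefficients (2.28)
    return (beta((d + 1) / 2, l + d0 / 2) / gamma(l + 1)
            * poch(0.5, l - 1) * eval_jacobi(l, a, b, 1.0))

def phi(l, t):   # spherical functions (2.23)
    return eval_jacobi(l, a, b, t) / eval_jacobi(l, a, b, 1.0)

theta = 1.0
s = 0.5 * sum(M(l) * C(l) * (1 - phi(l, np.cos(theta))) for l in range(1, 101))
print(s, np.sin(theta / 2))   # the partial sum approaches sin(theta/2)
                              # as the cutoff grows (terms decay like l^-2)
```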
_(ii)_ Let us consider the expansion (2.19) for the symmetric difference metric (1.13). We have \[\theta^{\Delta}(\xi,x_{1},x_{2})=\int_{0}^{\pi}\Big{(}v(r)-\mu(\mathcal{B}(x_{1},r)\cap\mathcal{B}(x_{2},r))\Big{)}\,\mathrm{d}\xi(r)\,, \tag{2.35}\] see (1.17). In view of (1.16) the term \(\mu(\mathcal{B}(x_{1},r)\cap\mathcal{B}(x_{2},r))\) can be written as \[\mu(\mathcal{B}(x_{1},r)\cap\mathcal{B}(x_{2},r))=\int_{\mathcal{Q}}\chi_{r}(\theta(x_{1},y))\,\chi_{r}(\theta(y,x_{2}))\,\mathrm{d}\mu(y), \tag{2.36}\] where \(\chi_{r}(\cdot)\) is the characteristic function of the segment \([0,r]\), \(0\leqslant r\leqslant\pi\). Let us consider the expansion (2.19) for the invariant kernel \(\chi_{r}(\theta(x_{1},x_{2}))\). We put \(K(u)=\chi_{r}(u)\), \(u\in[0,\pi]\), to calculate the corresponding coefficients (2.20). We obtain \(c_{0}(K)=v(r)\) and \[c_{l}(K)=\kappa\,\left(P_{l}^{(\frac{d}{2}-1,\frac{d_{0}}{2}-1)}(1)\right)^{-1}\,I_{l}(K)\,,\quad l\geqslant 1, \tag{2.37}\] where \[I_{l}(K) =\int_{0}^{r}\,P_{l}^{(\frac{d}{2}-1,\frac{d_{0}}{2}-1)}(\cos u)\,(\sin\frac{1}{2}u)^{d-1}(\cos\frac{1}{2}u)^{d_{0}-1}\,\mathrm{d}u\] \[=2^{1-d/2-d_{0}/2}\int_{\cos r}^{1}\,P_{l}^{(\frac{d}{2}-1,\frac{d_{0}}{2}-1)}(t)\,(1-t)^{\frac{d}{2}-1}\,(1+t)^{\frac{d_{0}}{2}-1}\,\,\mathrm{d}t\] \[=l^{-1}\,(\sin\frac{1}{2}r)^{d}\,(\cos\frac{1}{2}r)^{d_{0}}\,P_{l-1}^{(\frac{d}{2},\frac{d_{0}}{2})}(\cos r)\,. \tag{2.38}\] The last formula in (2.38) can be extracted from the tables [17, Sec.7.391, Eq.(11)] or derived directly, using Rodrigues' formula (1.35). Using the formula (2.21) together with (2.37) and (2.38), we find that \[\mu(\mathcal{B}(x_{1},r)\cap\mathcal{B}(x_{2},r))=\,\,\,\,v(r)^{2}+\kappa\,\sum\nolimits_{l\geqslant 1}\,l^{-2}\,M_{l}\,a_{l}(r)\,\phi_{l}(\cos\theta(x_{1},x_{2}))\,, \tag{2.39}\] where \[a_{l}(r)=\left\{P_{l-1}^{(\frac{d}{2},\frac{d_{0}}{2})}(\cos r)\right\}^{2}(\sin\frac{1}{2}r)^{2d}(\cos\frac{1}{2}r)^{2d_{0}}. \tag{2.40}\] Substituting (2.39) into (2.35), we obtain the expansion \[\theta^{\Delta}(\xi,x_{1},x_{2})=\langle\theta^{\Delta}(\xi)\rangle-\kappa\,\sum\nolimits_{l\geqslant 1}\,l^{-2}\,M_{l}\,A_{l}(\xi)\,\phi_{l}(\cos\theta(x_{1},x_{2}))\,, \tag{2.41}\] where \(\langle\theta^{\Delta}(\xi)\rangle=\int_{0}^{\pi}\Big{(}v(r)-v(r)^{2}\Big{)}\,\mathrm{d}\xi(r)\) is the average value of the metric and \(A_{l}(\xi)=\int_{0}^{\pi}\,a_{l}(r)\,\mathrm{d}\xi(r)\) are given in (2.30). Since \(\theta^{\Delta}(\xi)\) is a metric, we put \(x_{1}=x_{2}\) to obtain \(\langle\theta^{\Delta}(\xi)\rangle=\kappa\,\sum\nolimits_{l\geqslant 1}\,l^{-2}\,M_{l}\,A_{l}(\xi)\,.\) Substituting this equality into (2.41), we arrive at the expansion (2.29). The series (2.39) and (2.29) converge absolutely and uniformly in view of (2.22).
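The closed-form evaluation (2.38), the key ingredient of part (ii), can also be confirmed by direct quadrature (our own illustration, assuming SciPy):

```python
# Check of the closed form (2.38) for the truncated Jacobi integral
# against direct numerical quadrature, here for Q(4,2) = CP^2.
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_jacobi

d, d0 = 4, 2
a, b = d / 2 - 1, d0 / 2 - 1

def lhs(l, r):
    f = lambda u: (eval_jacobi(l, a, b, np.cos(u))
                   * np.sin(u / 2) ** (d - 1) * np.cos(u / 2) ** (d0 - 1))
    value, _ = quad(f, 0.0, r)
    return value

def rhs(l, r):
    return (np.sin(r / 2) ** d * np.cos(r / 2) ** d0
            * eval_jacobi(l - 1, d / 2, d0 / 2, np.cos(r)) / l)

for l in [1, 2, 5]:
    for r in [0.5, 1.5, 3.0]:
        assert abs(lhs(l, r) - rhs(l, r)) < 1e-7
print("formula (2.38) confirmed numerically")
```

### Proof of Lemma 1.1 Let us consider the embedding (2.16) for \(l=1\). From the formula (2.26) we find \[m_{1}=\frac{d(d+d_{0}+2)}{2d_{0}}=\frac{(n+1)(d+2)}{2}-1,\quad d=nd_{0}, \tag{2.42}\] and for \(x_{1},x_{2}\in\mathcal{Q}\), we have \[\|\Pi_{1}(x_{1})-\Pi_{1}(x_{2})\|^{2}=2-2(\Pi_{1}(x_{1}),\Pi_{1}(x_{2}))=2(1-\phi_{1}(\cos\theta(x_{1},x_{2}))), \tag{2.43}\] where \(\|\cdot\|\) and \((\cdot,\cdot)\) are the Euclidean norm and inner product in \(\mathbb{R}^{m_{1}}\).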
On the other hand, from Rodrigues' formula (1.35) we obtain \[P_{1}^{(\frac{d}{2}-1,\frac{d_{0}}{2}-1)}(t)=((d+d_{0})t+d-d_{0})/4,\] \(P_{1}^{(\frac{d}{2}-1,\frac{d_{0}}{2}-1)}(1)=d/2\), and \[\frac{1-t}{2}=\frac{d}{d+d_{0}}\left[1-\frac{P_{1}^{(\frac{d}{2}-1,\frac{d_{0}}{2}-1)}(t)}{P_{1}^{(\frac{d}{2}-1,\frac{d_{0}}{2}-1)}(1)}\right].\] Therefore, \[\tau(x_{1},x_{2})^{2}=\frac{1-\cos\theta(x_{1},x_{2})}{2}=\frac{d}{d+d_{0}}\Big{[}1-\phi_{1}(\cos\theta(x_{1},x_{2}))\Big{]}. \tag{2.44}\] Comparing (2.43) and (2.44), we complete the proof. ### Proof of Theorem 1.2 Comparing the expansions (2.27) and (2.29), we conclude that the equality (1.27) is equivalent to the series of formulas \[\gamma(Q)\,l^{-2}\,B(d/2,d_{0}/2)^{-1}\,A_{l}(\xi^{\natural})=C_{l}/2\,,\quad l\geqslant 1\,. \tag{2.45}\] The integral (2.30) with the special measure \(\mathrm{d}\xi^{\natural}(r)=\sin r\,\mathrm{d}r\) takes the form \[A_{l}(\xi^{\natural}) =\int_{0}^{\pi}\left\{P_{l-1}^{(\frac{d}{2},\frac{d_{0}}{2})}(\cos r)\right\}^{2}(\sin\frac{1}{2}r)^{2d}(\cos\frac{1}{2}r)^{2d_{0}}\,\sin r\,\mathrm{d}r\] \[=2^{-d-d_{0}}\int_{-1}^{1}\left(P_{l-1}^{(d/2,d_{0}/2)}(t)\right)^{2}\,(1-t)^{d}\,(1+t)^{d_{0}}\,\,\mathrm{d}t\,. \tag{2.46}\] Hence, the formulas (2.45) can be written as follows \[\int_{-1}^{1}\left(P_{l-1}^{(d/2,d_{0}/2)}(t)\right)^{2}\,(1-t)^{d}\,(1+t)^{d_{0}}\,\,\mathrm{d}t\] \[=\frac{2^{d+d_{0}+1}\,(1/2)_{l-1}}{((l-1)!)^{2}}\,B(d+1,d_{0}+1)\,T^{*}, \tag{2.47}\] where \[T^{*}=\frac{(l!)^{2}\,B(d/2,d_{0}/2)\,\,C_{l}}{4\,(1/2)_{l-1}\,B(d+1,d_{0}+1)\,\gamma(Q)}\,. \tag{2.48}\] On the other hand, using (2.24) and (2.28), we find \[C_{l}=(l!)^{-2}\,(1/2)_{l-1}\frac{\Gamma(d/2+1/2)\,\Gamma(l+d/2)\,\Gamma(l+d_{0}/2)}{\Gamma(l+1/2+d/2+d_{0}/2)\,\Gamma(d/2)}\,. \tag{2.49}\] Substituting (2.49) and (1.28) into (2.48), we obtain \[T^{*}= \pi^{-1/2}\,(d+d_{0})^{-1}\,\frac{\Gamma(d+d_{0}+2)}{\Gamma(d+1)\,\Gamma(d_{0}+1)}\times\] \[\times\frac{\Gamma(d/2+1/2)\,\Gamma(l+d/2)\,\Gamma(d_{0}/2+1/2)\,\Gamma(l+d_{0}/2)}{\Gamma(d/2+d_{0}/2)\,\Gamma(l+d/2+d_{0}/2+1/2)}\,. \tag{2.50}\] Applying the duplication formula for the gamma function \[\Gamma(2z)=\pi^{-1/2}\,2^{2z-1}\,\Gamma(z)\,\Gamma(z+1/2) \tag{2.51}\] to the first factor in (2.50), we find \[\pi^{-1/2}\,(d+d_{0})^{-1}\,\frac{\Gamma(d+d_{0}+2)}{\Gamma(d+1)\,\Gamma(d_{0}+1)}\] \[=\,\frac{\Gamma(d/2+d_{0}/2)\,\Gamma(d/2+d_{0}/2+3/2)}{\Gamma(d/2+1/2)\,\Gamma(d/2+1)\,\Gamma(d_{0}/2+1/2)\,\Gamma(d_{0}/2+1)}\,, \tag{2.52}\] where the relation \(\Gamma(z+1)=z\Gamma(z)\) with \(z=d/2+d_{0}/2\) has also been used. Substituting (2.52) into (2.50), we find that \(T^{*}=T_{l-1}(d/2,d_{0}/2)\). Replacing \(l-1\), \(l\geqslant 1\), with \(l\geqslant 0\), we complete the proof. ## 3. Proof of Lemma 1.2 Lemma 1.2 follows from Lemma 3.1 and Lemma 3.2 given below.
**Lemma 3.1**.: _For all \(l\geqslant 0\), \(\operatorname{Re}\alpha>-1/2\) and \(\operatorname{Re}\beta>-1/2\), we have_ \[\int_{-1}^{1}\Big{(}P_{l}^{(\alpha,\beta)}(t)\Big{)}^{2} (1-t)^{2\alpha}(1+t)^{2\beta}\,\mathrm{d}t\] \[=\,\frac{2^{2\alpha+2\beta+1}}{(l!)^{2}}\,B(2\alpha+1,2\beta+1)\,\frac{W_{l}(\alpha,\beta)}{(2\alpha+2\beta+2)_{2l}}\,, \tag{3.1}\] _where_ \[W_{l}(\alpha,\beta)=\sum\nolimits_{k=0}^{2l}\frac{(-1)^{l+k}}{k!}\langle 2l\rangle_{k}\,\langle\alpha+l\rangle_{k}\,\langle\beta+l\rangle_{2l-k}\,(2\alpha+1)_{2l-k}\,(2\beta+1)_{k} \tag{3.2}\] _is a polynomial in \(\alpha\) and \(\beta\)._ Proof.: Using Rodrigues' formula (1.35), we can write \[\int_{-1}^{1}\Big{(}P_{l}^{(\alpha,\beta)}(t)\Big{)}^{2}\,(1-t)^{2\alpha}(1+t)^{2\beta}\,\mathrm{d}t=\Big{(}\,\frac{1}{2^{l}\,l!}\,\Big{)}^{2}\,I_{l}(\alpha,\beta)\,, \tag{3.3}\] where \[I_{l}(\alpha,\beta)=\int_{-1}^{1}\Big{(}\frac{\mathrm{d}^{l}}{\mathrm{d}t^{l}}\,\big{[}(1-t)^{l+\alpha}(1+t)^{l+\beta}\big{]}\,\Big{)}^{2}\,\mathrm{d}t\,. \tag{3.4}\] Integrating \(l\) times by parts in (3.4), we obtain \[I_{l}(\alpha,\beta)=(-1)^{l}\,\int_{-1}^{1}\big{(}(1-t)^{l+\alpha}(1+t)^{l+\beta}\big{)}\,\,\frac{\mathrm{d}^{2l}}{\mathrm{d}t^{2l}}\,\big{(}(1-t)^{l+\alpha}(1+t)^{l+\beta}\big{)}\,\,\mathrm{d}t\,, \tag{3.5}\] since all terms outside the integral vanish. By Leibniz's rule, \[\frac{\mathrm{d}^{2l}}{\mathrm{d}t^{2l}}\left((1-t)^{l+\alpha}\left(1+t\right)^{l+\beta}\right)=\sum\nolimits_{k=0}^{2l}\,\binom{2l}{k}\,\frac{\mathrm{d}^{k}}{\mathrm{d}t^{k}}(1-t)^{l+\alpha}\,\frac{\mathrm{d}^{2l-k}}{\mathrm{d}t^{2l-k}}\left(1+t\right)^{l+\beta},\] where \(\binom{2l}{k}=\langle 2l\rangle_{k}/k!\) and \[\frac{\mathrm{d}^{k}}{\mathrm{d}t^{k}}(1-t)^{l+\alpha}=(-1)^{k}\,\langle\alpha+l\rangle_{k}\,(1-t)^{l-k+\alpha}\,,\] \[\frac{\mathrm{d}^{2l-k}}{\mathrm{d}t^{2l-k}}(1+t)^{l+\beta}=\langle\beta+l\rangle_{2l-k}\,(1+t)^{-l+k+\beta}\,.\] Substituting these formulas into (3.5), we obtain \[I_{l}(\alpha,\beta)=\,2^{2\alpha+2\beta+2l+1}\sum\nolimits_{k=0}^{2l}\frac{(-1)^{l+k}}{k!}\langle 2l\rangle_{k}\,\langle\alpha+l\rangle_{k}\,\langle\beta+l\rangle_{2l-k}\,\,I_{l}^{(k)}(\alpha,\beta)\,, \tag{3.6}\] where \[I_{l}^{(k)}(\alpha,\beta)=B(2\alpha+2l-k+1,2\beta+k+1). \tag{3.7}\] Here we have used Euler's integral \[2^{1-a-b}\int_{-1}^{1}(1-t)^{a-1}\,(1+t)^{b-1}\,\mathrm{d}t=B(a,b)=\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)} \tag{3.8}\] with \(\mathrm{Re}\,a>0\), \(\mathrm{Re}\,b>0\). The formula (3.7) can be written as follows \[I_{l}^{(k)}(\alpha,\beta)=\frac{\Gamma(2\alpha+2l-k+1)\,\Gamma(2\beta+k+1)}{\Gamma(2\alpha+2\beta+2l+2)}\] \[= \frac{\Gamma(2\alpha+2l-k+1)}{\Gamma(2\alpha+1)}\frac{\Gamma(2\beta+k+1)}{\Gamma(2\beta+1)}\frac{\Gamma(2\alpha+1)\,\Gamma(2\beta+1)}{\Gamma(2\alpha+2\beta+2)}\frac{\Gamma(2\alpha+2\beta+2)}{\Gamma(2\alpha+2\beta+2l+2)}\] \[= \frac{(2\alpha+1)_{2l-k}\,(2\beta+1)_{k}}{(2\alpha+2\beta+2)_{2l}}\,B(2\alpha+1,2\beta+1)\,. \tag{3.9}\] Combining the formulas (3.9), (3.6) and (3.3), we obtain (3.1).
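The identity just proved is easy to check numerically (our sketch, assuming SciPy; the helper names are ours):

```python
# Check of Lemma 3.1: the integral (3.1) against the finite sum (3.2),
# using <a>_k = (-1)^k * (-a)_k from (1.40) for the falling factorials.
from math import factorial
from scipy.integrate import quad
from scipy.special import eval_jacobi, beta, poch

def falling(a, k):
    return (-1) ** k * poch(-a, k)

def W(l, al, be):   # the polynomial (3.2)
    return sum((-1) ** (l + k) / factorial(k)
               * falling(2 * l, k) * falling(al + l, k)
               * falling(be + l, 2 * l - k)
               * poch(2 * al + 1, 2 * l - k) * poch(2 * be + 1, k)
               for k in range(2 * l + 1))

al, be = 0.7, 1.3
for l in range(4):
    lhs, _ = quad(lambda t: eval_jacobi(l, al, be, t) ** 2
                  * (1 - t) ** (2 * al) * (1 + t) ** (2 * be), -1, 1)
    rhs = (2 ** (2 * al + 2 * be + 1) / factorial(l) ** 2
           * beta(2 * al + 1, 2 * be + 1)
           * W(l, al, be) / poch(2 * al + 2 * be + 2, 2 * l))
    print(l, lhs, rhs)   # the two columns agree
```

The next Lemma 3.2 is more specific; it relies on Watson's theorem for generalized hypergeometric series, see [2, 26]. We consider series of the form \[{}_{3}F_{2}(a,b,c;d,e;z)=\sum\nolimits_{k\geqslant 0}\frac{(a)_{k}\,(b)_{k}\,(c)_{k}}{(d)_{k}\,(e)_{k}\,k!}\,z^{k}, \tag{3.10}\] where neither \(d\) nor \(e\) is a negative integer. The series converges absolutely for \(|z|\leqslant 1\) if \(\mathrm{Re}(d+e)>\mathrm{Re}(a+b+c)\).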
The series (3.10) terminates if one of the numbers \(a,b,c\) is a negative integer. **Watson's theorem.**_We have_ \[{}_{3}F_{2}(a, b,c;(a+b+1)/2,2c;1)\] \[= \frac{\Gamma(1/2)\,\Gamma(c+1/2)\,\Gamma((a+b+1)/2)\,\Gamma(c-(a+b-1)/2)}{\Gamma((a+1)/2)\,\Gamma((b+1)/2)\,\Gamma(c-(a-1)/2)\,\Gamma(c-(b-1)/2)}\,, \tag{3.11}\] _provided that_ \[\operatorname{Re}\left(2c-a-b+1\right)>0. \tag{3.12}\] The condition (3.12) ensures the convergence of the hypergeometric series in (3.11). Furthermore, this condition is necessary for the validity of the equality (3.11) even in the case of terminating series. The proof of Watson's theorem can be found in [2, Theorem 3.5.5], [26, p.54, Eq.(2.3.3.13)]. **Lemma 3.2**.: _For all \(l\geqslant 0\), \(\alpha\in\mathbb{C}\) and \(\beta\in\mathbb{C}\), the polynomial (3.2) is equal to_ \[W_{l}(\alpha,\beta)= 2^{2l}\,(1/2)_{l}\,(\alpha+1)_{l}\,(\beta+1)_{l}\,(\alpha+\beta+1)_{l}\] \[= 2^{2l}\,(1/2)_{l}\,\frac{\Gamma(\alpha+1+l)\,\Gamma(\beta+1+l)\,\Gamma(\alpha+\beta+1+l)}{\Gamma(\alpha+1)\,\Gamma(\beta+1)\,\Gamma(\alpha+\beta+1)}\,. \tag{3.13}\] _In particular,_ \[\frac{W_{l}(\alpha,\beta)}{(2\alpha+2\beta+2)_{2l}}=\frac{(1/2)_{l}\,(\alpha+1)_{l}\,(\beta+1)_{l}}{(\alpha+\beta+3/2)_{l}}\,. \tag{3.14}\] Proof.: Since \(W_{l}(\alpha,\beta)\) is a polynomial, it suffices to check the equality (3.13) for \(\alpha\) and \(\beta\) in an open subset of \(\mathbb{C}^{2}\). As such a subset we will take the following region \[\mathcal{O}=\{\,\alpha,\,\beta\,:\operatorname{Re}\alpha<0,\,\,\operatorname{Re}\beta<0,\,\,\operatorname{Im}\alpha>0,\,\,\operatorname{Im}\beta>0\,\}. \tag{3.15}\] For \(\alpha\) and \(\beta\) in \(\mathcal{O}\), the factors in the terms of (3.2) may be rearranged as follows: \[\left.\begin{aligned} &\langle 2l\rangle_{k}=(-1)^{k}\,(-2l)_{k}\,,\quad\langle\alpha+l\rangle_{k}=(-1)^{k}\,(-\alpha-l)_{k}\,,\\ &\langle\beta+l\rangle_{2l-k}=(-1)^{k}\,(-\beta-l)_{2l-k}=\frac{(-\beta-l)_{2l}}{(\beta+1-l)_{k}}\,,\\ &(2\alpha+1)_{2l-k}=\frac{(-1)^{k}(2\alpha+1)_{2l}}{(-2\alpha-2l)_{k}}\,.\end{aligned}\right\} \tag{3.16}\] Here we have used the following elementary relation for the rising factorial powers \[(a)_{m-k}=\frac{(-1)^{k}\,(a)_{m}}{(1-a-m)_{k}}\,,\quad m\geqslant 0\,,\,\,0\leqslant k\leqslant m\,. \tag{3.17}\] Substituting (3.16) into (3.2), we find that \[W_{l}(\alpha,\beta) =(-1)^{l}\,(2\alpha+1)_{2l}\,(-\beta-l)_{2l}\,\mathcal{F}_{l}(\alpha,\beta)\] \[=\frac{(-1)^{l}\,\Gamma(2\alpha+1+2l)\,\Gamma(-\beta+l)}{\Gamma(2\alpha+1)\,\Gamma(-\beta-l)}\,\,\mathcal{F}_{l}(\alpha,\beta)\,, \tag{3.18}\] where \[\mathcal{F}_{l}(\alpha,\beta)=\sum\nolimits_{k=0}^{2l}\frac{(-2l)_{k}\,(2\beta+1)_{k}\,(-\alpha-l)_{k}}{(\beta+1-l)_{k}\,(-2\alpha-2l)_{k}\,k!} \tag{3.19}\] In view of the definition (3.10), we have \[\mathcal{F}_{l}(\alpha,\beta)=\,_{3}F_{2}\,(-2l,2\beta+1,-\alpha-l;\beta+1-l,-2\alpha-2l;1)\,. \tag{3.20}\] The parameters in the hypergeometric series (3.20) are identical with those in (3.11) for \(a=-2l\), \(b=2\beta+1\), \(c=-\alpha-l\), and in this case, \((a+b+1)/2=\beta+1-l\), \(2c=-2\alpha-2l\). The condition (3.12) also holds for \(\alpha\) and \(\beta\) in the region \(\mathcal{O}\), since \(\operatorname{Re}\left(2c-a-b+1\right)=\operatorname{Re}\left(-2\alpha-2\beta\right)>0\). Therefore, Watson's theorem (3.11) can be applied to obtain \[\mathcal{F}_{l}(\alpha,\beta)=\,\frac{\Gamma(1/2)\,\Gamma(-\alpha-l+1/2)\,\Gamma(\beta+1-l)\,\Gamma(-\alpha-\beta)}{\Gamma(-l+1/2)\,\Gamma(\beta+1)\,\Gamma(-\alpha+1/2)\,\Gamma(-\alpha-\beta-l)}\,.
\tag{3.21}\] Substituting the expression (3.21) into (3.18), we may write \[W_{l}(\alpha,\beta)=\,c_{0}\,\,c_{1}(\alpha)\,\,c_{2}(\beta)\,\,c_{3}(\alpha+\beta)\,, \tag{3.22}\] where \[\left.\begin{array}{l}c_{0}=\frac{(-1)^{l}\,\Gamma(1/2)}{\Gamma(-l+1/2)}\,,\\ c_{1}(\alpha)=\frac{\Gamma(2\alpha+2l+1)\,\Gamma(-\alpha-l+1/2)}{\Gamma(2\alpha+1)\,\Gamma(-\alpha+1/2)}\,,\\ c_{2}(\beta)=\frac{\Gamma(\beta+1-l)\,\Gamma(-\beta+l)}{\Gamma(\beta+1)\,\Gamma(-\beta-l)}\,,\\ c_{3}(\alpha+\beta)=\frac{\Gamma(-\alpha-\beta)}{\Gamma(-\alpha-\beta-l)}\,.\end{array}\right\} \tag{3.23}\] Using the duplication formula (2.51) and the reflection formulas, see [2, Sec. 1.2], \[\Gamma(1-z)\Gamma(z)\,=\,\frac{\pi}{\sin\pi z}\,,\qquad\Gamma(1/2-z)\Gamma(1/2+z)\,=\,\frac{\pi}{\cos\pi z}\,, \tag{3.24}\] we may rearrange the expressions in (3.23) as follows. For \(c_{0}\), we have \[c_{0}=\frac{(-1)^{l}\,\Gamma(1/2)^{2}}{\Gamma(-l+1/2)\,\Gamma(l+1/2)}\,\frac{\Gamma(l+1/2)}{\Gamma(1/2)}=(1/2)_{l}\,,\] since \(\Gamma(1/2)=\sqrt{\pi}\). For \(c_{1}(\alpha)\) and \(c_{2}(\beta)\), we have \[c_{1}(\alpha)= 2^{2l}\,\frac{\Gamma(\alpha+l+1)\,\Gamma(\alpha+l+1/2)\,\Gamma(-\alpha-l+1/2)}{\Gamma(\alpha+1)\,\Gamma(\alpha+1/2)\,\Gamma(-\alpha+1/2)}\] \[= 2^{2l}\,\frac{\cos\pi\alpha\,\Gamma(\alpha+l+1)}{\cos\pi(\alpha+l)\,\Gamma(\alpha+1)}=2^{2l}\,(-1)^{l}\,(\alpha+1)_{l}\] and \[c_{2}(\beta)=\frac{\Gamma(\beta+1-l)\,\Gamma(-\beta+l)}{\Gamma(\beta+1)\,\Gamma(-\beta-l)}=\frac{\sin\pi(\beta+l)\,\Gamma(\beta+1+l)}{\sin\pi(\beta-l)\,\Gamma(\beta+1)}=(\beta+1)_{l}\,.\] Finally, \[c_{3}(\alpha+\beta)=\frac{\sin\pi(\alpha+\beta)\,\Gamma(\alpha+\beta+1+l)}{\sin\pi(\alpha+\beta+l)\,\Gamma(\alpha+\beta+1)}=(-1)^{l}\,(\alpha+\beta+1)_{l}\,.\] Substituting these expressions into (3.22), we obtain (3.13). It follows from (1.39) and the duplication formula (2.51) that \[(2\alpha+2\beta+2)_{2l}=2^{2l}\,(\alpha+\beta+1)_{l}\,(\alpha+\beta+3/2)_{l}\,. \tag{3.25}\] Using (3.13) together with (3.25), we obtain (3.14). Now it suffices to substitute (3.14) into (3.1) to obtain the formulas (1.41). The proof of Lemma 1.2 is complete.
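As a final numerical confirmation (ours, assuming SciPy; not part of the proof), Lemma 1.2 can be tested directly against quadrature at generic real parameters:

```python
# End-to-end check of Lemma 1.2: the integral (1.41) against the closed form
# with the factor (1/2)_l and T_l from (1.42).
from math import factorial
from scipy.integrate import quad
from scipy.special import eval_jacobi, beta, poch

def T(l, al, be):   # the rational function (1.42)
    return poch(al + 1, l) * poch(be + 1, l) / poch(al + be + 1.5, l)

for al, be in [(0.7, 1.3), (2.0, 1.0), (4.5, 0.25)]:
    for l in range(5):
        lhs, _ = quad(lambda t: eval_jacobi(l, al, be, t) ** 2
                      * (1 - t) ** (2 * al) * (1 + t) ** (2 * be), -1, 1)
        rhs = (2 ** (2 * al + 2 * be + 1) * poch(0.5, l) / factorial(l) ** 2
               * beta(2 * al + 1, 2 * be + 1) * T(l, al, be))
        assert abs(lhs - rhs) < 1e-8 * max(1.0, abs(rhs))
print("Lemma 1.2 confirmed numerically")
```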
2309.14140
Magnetic States and Electronic Properties of Manganese-Based Intermetallic Compounds Mn$_2$YAl and Mn$_3$Z (Y = V, Cr, Fe, Co, Ni; Z = Al, Ge, Sn, Si, Pt)
We present a brief review of experimental and theoretical papers on studies of electron transport and magnetic properties in manganese-based compounds Mn$_2$YZ and Mn$_3$Z (Y = V, Cr, Fe, Co, Ni, etc.; Z = Al, Ge, Sn, Si, Pt, etc.). It has been shown that in the electronic subsystem of Mn$_2$YZ compounds, the states of a half-metallic ferromagnet and a spin gapless semiconductor can arise with the realization of various magnetic states, such as a ferromagnet, a compensated ferrimagnet, and a frustrated antiferromagnet. Binary compounds Mn$_3$Z have the properties of a half-metallic ferromagnet and a topological semimetal with a large anomalous Hall effect, spin Hall effect, spin Nernst effect, and thermal Hall effect. Their magnetic states are also very diverse: from a ferrimagnet and an antiferromagnet to a compensated ferrimagnet and a frustrated antiferromagnet, as well as an antiferromagnet with a kagome-type lattice. It has been demonstrated that the electronic and magnetic properties of such materials are very sensitive to external influences (temperature, magnetic field, external pressure), as well as the processing method (cast, rapidly quenched, nanostructured, etc.). Knowledge of the regularities in the behavior of the electronic and magnetic characteristics of Mn$_2$YAl and Mn$_3$Z compounds can be used for applications in micro- and nanoelectronics and spintronics.
V. V. Marchenkov, V. Yu. Irkhin
2023-09-25T13:51:30Z
http://arxiv.org/abs/2309.14140v1
Magnetic States and Electronic Properties of Manganese-Based Intermetallic Compounds Mn\({}_{2}\)YAl and Mn\({}_{3}\)Z ###### Abstract We present a brief review of experimental and theoretical papers on studies of electron transport and magnetic properties in manganese-based compounds Mn\({}_{2}\)_Y_Z_ and Mn\({}_{3}\)_Z_ (_Y_ = V, Cr, Fe, Co, Ni, etc.; \(Z=\) Al, Ge, Sn, Si, Pt, etc.). It has been shown that in the electronic subsystem of Mn\({}_{2}\)_Y_Z compounds, the states of a half-metallic ferromagnet and a spin gapless semiconductor can arise with the realization of various magnetic states, such as a ferromagnet, a compensated ferrimagnet, and a frustrated antiferromagnet. Binary compounds Mn\({}_{3}\)_Z_ have the properties of a half-metallic ferromagnet and a topological semimetal with a large anomalous Hall effect, spin Hall effect, spin Nernst effect, and thermal Hall effect. Their magnetic states are also very diverse: from a ferrimagnet and an antiferromagnet to a compensated ferrimagnet and a frustrated antiferromagnet, as well as an antiferromagnet with a kagome-type lattice. It has been demonstrated that the electronic and magnetic properties of such materials are very sensitive to external influences (temperature, magnetic field, external pressure), as well as the processing method (cast, rapidly quenched, nanostructured, etc.). Knowledge of the regularities in the behavior of the electronic and magnetic characteristics of Mn\({}_{2}\)_Y_Al and Mn\({}_{3}\)_Z_ compounds can be used for applications in micro- and nanoelectronics and spintronics. manganese-based intermetallic compounds; antiferromagnetism; compensated ferrimagnetism; frustrated magnets; half-metallic ferromagnets; spin gapless semiconductors; topological semimetals; anomalous Hall effect; kagome lattice ## 1 Introduction The exploration and advancement of novel materials with distinct magnetic and electronic properties, along with the experimental and theoretical investigation of their electronic energy spectrum, structural configurations, and magnetic states, hold significant importance from both a fundamental and a practical standpoint. Heusler compounds, discovered over 100 years ago by German chemist F. Heusler [1], continue to be a subject of great interest in current research. These compounds exhibit a wide range of unique multifunctional properties [2], such as half-metallic ferromagnetism [3, 4, 5], spin gapless semiconductor state [6, 7, 8], topological insulator and semimetal behavior [9, 10, 11, 8], shape memory effect [12, 13], magnetocaloric effect [14, 15], and many others (see, for example, [2, 16, 17] and references therein). Among these materials, intermetallic compounds based on manganese, such as Mn\({}_{2}\)_Y_Z_ and Mn\({}_{3}\)_Z_ (_Y_ = V, Cr, Fe, Co, Ni, etc.; \(Z=\) Al, Ga, Ge, Sn, etc.), are particularly noteworthy. In addition to the functional properties mentioned above, these compounds exhibit a range of unique magnetic characteristics, including the ability to manifest states of antiferromagnetism, compensated ferrimagnetism, and frustrated magnets. They possess unconventional magnetic and electronic properties that are highly susceptible to external influences, making them promising candidates for practical applications in fields such as spintronics, microelectronics, and nanoelectronics. Manganese-based Heusler compounds, specifically Mn\({}_{2}\)_YZ_, possess several remarkable properties.
They exhibit characteristics of half-metallic ferromagnets (HMF), spin gapless semiconductors (SGS), and potentially topological semimetals (TSM). These compounds also demonstrate a significant magnetocaloric effect and shape memory effect. Moreover, they offer the opportunity to realize unique magnetic states, including ferromagnetic (FM), antiferromagnetic (AFM), compensated ferrimagnetic (CFIM) ones, etc. (see, e.g., refs. [4, 8, 18]). In previous studies, the intermetallic compounds Mn\({}_{3}\)_Z_ (_Z_ = Ge, Sn, Ga, Ir, Rh) were found to exhibit a strong anisotropic anomalous Hall effect (AHE) and spin Hall effect in their antiferromagnetic state [19]. Researchers reported [20] a zero magnetic moment in thin films of the Mn\({}_{3}\)Al alloy, which was attributed to a compensated ferrimagnetic state. This CFIM state differs from AFM due to the distinct crystallographic positions of manganese. Another study [21] observed a zero magnetic moment in the cast Mn\({}_{3}\)Al alloy and suggested that it may also be indicative of compensated ferrimagnetism. The combination of noncollinear magnetic structure and Berry curvature can result in a nonzero topological anomalous Hall effect, as demonstrated in antiferromagnets Mn\({}_{3}\)Sn and Mn\({}_{3}\)Ge [9, 22]. These compounds, along with their noncollinear magnetic structures, also exhibit topological states in real space in the form of magnetic anti-skyrmions. The ability to manipulate the Berry curvature highlights the significance of understanding both the electronic and magnetic structures of Mn\({}_{3}\)_Z_ compounds. Furthermore, among these intermetallic compounds, there are topological systems that possess unique surface states and display anomalous transport phenomena due to their unconventional bulk electronic topology. For instance, Mn\({}_{3}\)Ge and Mn\({}_{3}\)Sn compounds with a distorted \(D0_{19}\) structure form, in the \(ab\) plane, a lattice of Mn atoms resembling a highly frustrated kagome lattice [22, 23]. Further theoretical and experimental investigations of such structures and their impact on electronic and magnetic properties remain pertinent and in demand. Figures 1-3 schematically show the models of an antiferromagnet and a compensated ferrimagnet (Figure 1), a half-metallic ferromagnet and a spin gapless semiconductor (Figure 2), and topological materials (Figure 3). The magnetic structure of \(D0_{3}\) compounds in the antiferromagnetic and compensated ferrimagnetic states is presented in Figure 1. As can be seen from Figure 1, the difference between the antiferromagnetic and compensated ferrimagnetic states is that in the latter the crystallographic positions of manganese with oppositely directed magnetic moments are completely different. Figure 2 shows the states of a half-metallic ferromagnet and a spin gapless semiconductor. The electronic structure of a half-metallic ferromagnet has the following features: there is a gap at the Fermi level for spin-down electronic states, which is absent for spin-up ones (Figure 2a). In the case of a spin gapless semiconductor (Figure 2b), the situation is similar, but there is a significant difference. As in a half-metallic ferromagnet, there is a finite gap for the spin-down spin projection, but the gap is zero for spin-up ones (Figure 2b). This is similar to the case of classical gapless semiconductors [24]. In topological materials (Figure 3), an inversion of the conduction and valence bands can occur because of a strong spin-orbit interaction.
This leads to the appearance of a nontrivial topology of the electronic band structure, which is observed in topological insulators and Dirac and Weyl semimetals. Topological insulators have a characteristic energy gap in the bulk and "metallic" states on the surface (Figure 3a). The Dirac and Weyl semimetals also have a gap in the bulk resulting from strong spin-orbit coupling, except for the band intersection at Dirac (Figure 3b) and Weyl (Figure 3c) points, respectively. Figure 1: Schematic view of the magnetic structure of \(D0_{3}\) compounds. (**a**) G-type antiferromagnetic structure of V\({}_{3}\)Al, ions in Mn(\(X\)) positions being nonmagnetic. (**b**) Magnetic structure of compensated ferrimagnet Mn\({}_{3}\)Al [20]. Up-directed magnetic moments are shown by red arrows, and down-directed ones by blue. Figure 2: Simple models of half-metallic ferromagnets (**a**) and spin gapless semiconductors (**b**) [8]. The valence band for the electron states with spin up (blue arrows) is marked by green, and for ones with spin down (red arrows) by blue. Figure 3: Schematic representation of the band structure of (**a**) massive Dirac, (**b**) "massless" Dirac, and (**c**) Weyl fermions. The latter arises during the decay of the Dirac point. Two-color and one-color curves and lines represent doubly degenerate and nondegenerate zones, respectively [8]. The methods used to prepare intermetallic compounds, specifically Mn\({}_{3}\)_Z_, can have a significant impact on their structural properties (see, e.g., refs. [25, 26, 27, 28, 29]). This, in turn, influences their electronic and magnetic states. Understanding the role of the structural state in the formation and behavior of intermetallic compounds based on manganese is an important and fascinating problem. Both experimental and theoretical studies in this area hold great relevance and scientific significance (see papers [30, 31, 32] and references therein). They allow for a comprehensive description of the evolution of the electronic structure, magnetic state, and properties of Mn\({}_{2}\)_YZ_ and Mn\({}_{3}\)_Z_ compounds. These studies aim to understand the characteristics and differences in the manifestation of various states, such as antiferromagnetism, compensated ferrimagnetism, frustrated magnetism, half-metallic ferromagnetism, spin gapless semiconductors, topological semimetals, etc. This work provides a concise overview of the research on electron transport and the magnetic state in compounds based on manganese, specifically Mn\({}_{2}\)_YZ_ and Mn\({}_{3}\)_Z_ (_Y_ = Sc, Ti, V, Cr, Fe, Co, etc.; \(Z\) = Al, Ge, Sn, Si, Pt, etc.). **2. Electronic and Magnetic Properties of Mn\({}_{2}\)_YZ_ (_Y_ = V, Cr, Fe, Co, Ni, etc.; \(Z\) = Al, Ge, Sn, Si, Pt, etc.) Compounds** The HMF state was theoretically predicted in 1983 in ref. [33] using band calculations of the NiMnSb half-Heusler compound. Later, the HMF theory was significantly supplemented and extended [3, 34]. After 40 years, band calculations are still extensively used to predict the HMF state in various compounds (see, for example, refs. [4, 5] and references therein). Electronic structure calculations performed on manganese-based Heusler alloys have shown that many of them can exhibit HMF properties.
These are Mn\({}_{2}\)V\(Z\) (\(Z=\) Al, Ga, In, Si, Ge, Sn) compounds [35], Mn\({}_{2}\)Fe\(Z\) (\(Z=\) Al, Ga, Si, Ge, Sb) [36], Mn\({}_{2}\)Cr\(Z\) (\(Z=\) Al, Ga, Si, Ge, Sb) [37], Mn\({}_{2}\)Ti\(Z\) (\(Z=\) Al, As, Bi, Ga, Ge, Sb, Si, Sn) [38], Mn\({}_{2}\)Zr\(Z\) (\(Z=\) Ga, Ge, Si) [39, 40]. It is worth emphasizing that the HMF state predicted theoretically and/or as a result of band calculations is by no means always realized in real compounds. Using density functional theory, the electronic, magnetic, and structural properties of ferrimagnetic Mn\({}_{2}\)VAl and Mn\({}_{2}\)VSi Heusler alloys were studied in ref. [41]. It was shown that two states can exist in the studied compounds: one state with low magnetization and an HMF behavior with almost 100% spin polarization, and a second state with high magnetization and a metallic character. The temperature dependences of the electrical resistivity \(\rho(T)\) of the Mn\({}_{2}\)CrAl alloy were studied in [21]. It can be observed (see Figure 1 in ref. [21]) that the residual resistivity \(\rho_{0}\) reaches a large value of \(\approx\)250 \(\upmu\Omega\) cm, and the resistivity \(\rho\) decreases with increasing temperature up to room temperature. A similar behavior of \(\rho(T)\) was observed for Mn\({}_{2}\)FeAl in [42]. The large values of \(\rho_{0}\) and the "semiconductor" dependence \(\rho(T)\) were explained by structural disorder. There is a so-called Mooij rule [43], according to which in metallic systems with static disorder, i.e., with resistivity \(\rho>\) (150-200) \(\upmu\Omega\) cm, a negative temperature coefficient of resistance is usually observed. The estimates of the concentration \(n\) of current carriers from measurements of the Hall effect give large values: \(n\approx\) 1.6 \(\times\) 10\({}^{22}\) cm\({}^{-3}\) for Mn\({}_{2}\)CrAl [21], and \(n\approx\) 2 \(\times\) 10\({}^{22}\) cm\({}^{-3}\) for Mn\({}_{2}\)FeAl [42]. Studies of the magnetic properties of Mn\({}_{2}\)CrAl [21] and Mn\({}_{2}\)FeAl [42, 44] alloys showed that the behavior of the magnetization \(M(H)\) indicates a total moment close to zero (see, e.g., Figure 4). Such a state can be characterized as (1) a compensated ferrimagnet, which retains the nature of a half-metallic ferromagnet state [20, 45], or as (2) an antiferromagnet. The distinction between them can be explained as follows. In the case of antiferromagnetism, the opposite magnetic moments of manganese are situated in positions that are equivalent from a crystallographic perspective. Conversely, in the case of compensated ferrimagnetism, these opposite moments of manganese occupy crystallographically distinct positions. In CFIM, the compensation of magnetic moments is not solely reliant on crystal symmetry but is also influenced by a unique band structure. The utilization of CFIM and/or AFM states in such materials holds promise for applications in spintronics, as they can exhibit high spin polarization of current carriers. Figure 4: Field dependence of magnetization for Mn\({}_{2}\)FeAl at different temperatures [42]. It is shown in ref. [44] that a frustrated antiferromagnetic state is observed in the Mn\({}_{2}\)FeAl alloy with a Neel temperature \(T_{\rm N}=48\) K and a Curie-Weiss temperature \(\theta_{\rm CW}\approx-230\) K. In this case, large antiferromagnetic spin fluctuations, caused by geometric frustration, lead to the appearance of an unusually large electronic heat capacity. According to ref. [44], the corresponding value of the \(T\)-linear heat capacity coefficient is \(\gamma=210\) mJ/molK\({}^{2}\).
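For orientation, the quoted carrier concentrations can be translated into ordinary Hall coefficients within a simple single-band model, \(n=1/(eR_{\rm H})\) (our illustrative estimate, not a result of the cited works):

```python
# Illustrative single-band estimate n = 1/(e * R_H): the Hall coefficients
# implied by the carrier concentrations quoted above for Mn2CrAl and Mn2FeAl.
e = 1.602176634e-19            # elementary charge, C

for name, n_cm3 in [("Mn2CrAl", 1.6e22), ("Mn2FeAl", 2.0e22)]:
    n = n_cm3 * 1e6            # convert cm^-3 to m^-3
    R_H = 1.0 / (e * n)        # single-band Hall coefficient, m^3/C
    print(f"{name}: n = {n_cm3:.1e} cm^-3  ->  R_H = {R_H:.2e} m^3/C")
# Output: R_H ~ 3.9e-10 and 3.1e-10 m^3/C, i.e. values typical
# of a metal with a moderately high carrier density.
```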
The Mn\({}_{2}\)CoAl compound exhibits even higher residual resistivity values [46, 47] (Figure 5); it belongs to a novel class of quantum materials known as spin gapless semiconductors. The theoretical prediction of SGS materials was made in 2008 [6]. SGS materials possess an unusual band structure, wherein there exists an energy gap for the spin-down electron subsystem near the Fermi level, while for the spin-up electronic states the top of the valence band touches the bottom of the conduction band. Under these circumstances, the kinetic properties of SGS materials are expected to resemble those of "classical" gapless semiconductors [24], including high residual resistivity with weak temperature dependence, relatively low current carrier concentration, and low thermoelectric power. Due to the prevalence of spin-up current carriers, a high degree of spin polarization, a significant magnetization, and an anomalous Hall effect are anticipated. Experimental evidence of this spin gapless semiconductor state with unusual magnetic and magnetotransport properties has been observed in Mn\({}_{2}\)CoAl [46, 47] and Ti\({}_{2}\)MnAl Heusler alloys [48]. Figure 5 shows the temperature dependences of the electrical resistivity, the Seebeck coefficient, and the concentration of current carriers of the Mn\({}_{2}\)CoAl compound [47]. It can be seen that the residual resistivity is very large and amounts to \(\sim\)445 \(\upmu\Omega\) cm, and the resistivity itself slightly decreases with temperature, reaching a value of \(\sim\)410 \(\upmu\Omega\) cm at room temperature. In this case, the thermoelectric power is small and changes weakly with temperature. The concentration of charge carriers is relatively low, monotonically increasing with temperature: \(n\approx 1.65\times 10^{20}\) cm\({}^{-3}\) at 2 K and \(\sim\)3 \(\times\) 10\({}^{20}\) cm\({}^{-3}\) at room temperature. Finally, measurements of the Hall effect in Mn\({}_{2}\)CoAl [47] show that the value of the anomalous Hall conductivity \(\sigma_{xy}\) is relatively small, \(\sigma_{xy}=21.8\) S cm\({}^{-1}\) at 2 K, which can be explained by the symmetry features of the Berry curvature (Figure 5). These facts confirm the realization of the state of a spin gapless semiconductor in the Mn\({}_{2}\)CoAl compound. Figure 5: Mn\({}_{2}\)CoAl compound [47]. (**a**) Temperature dependence of the electrical resistivity \(\rho\), Seebeck coefficient \(S\), and charge carrier concentration \(n\). (**b**) Field dependence of Hall conductivity at \(T=2\) K. It is important to highlight that the characteristics of a spin gapless semiconductor (SGS) are not limited to electron transport but also extend to other properties, including optical characteristics. In a study referenced as [49], the optical properties of the Mn\({}_{2}\)CoAl compound, which exhibits SGS behavior, were investigated. The study also involved the calculation of its electronic structure. The findings revealed distinct optical properties of Mn\({}_{2}\)CoAl, including positive values of the real part of the dielectric constant (\(\varepsilon_{1}\)) and the absence of the Drude contribution to the optical conductivity in the infrared (IR) region of the spectrum within the examined range. These observations suggest a slight deviation or "deterioration" of the metallic properties of Mn\({}_{2}\)CoAl.
Besides, the authors of the study observed [49] intense interband absorption in the IR range and concluded that the band spectrum had a complex structure with a high density of \(d\)-states near the Fermi level \(E_{\rm F}\). The observed features of the optical properties allowed the authors of [49] to explain the features of the band spectrum characteristic of spin gapless semiconductors. The electronic structure and magnetic properties of Heusler alloys, specifically Mn\({}_{2}\)YZ, can be significantly altered by modifying their composition through the substitution of atoms and by applying external influences, such as hydrostatic pressure and mechanical processing. In ref. [50], the authors investigated the structural, mechanical, electronic, and magnetic properties of Mn\({}_{2-x}\)Fe\({}_{1+x}\)Si Heusler alloys (where \(x=0\), 0.25, 0.5, 0.75, 1) using density functional theory. By replacing Mn atoms with Fe atoms, it was observed that all the studied alloys maintain mechanical and dynamic stability, adopting the Hg\({}_{2}\)CuTi structure for \(x=0\), 0.25 and 0.5 and the Cu\({}_{2}\)MnAl structure for \(x=0.75\) and 1. Furthermore, as the iron content increases, the alloys exhibit improved ductility. All the studied alloys display half-metallic ferromagnetic behavior, with the Fermi levels gradually shifting toward the middle of the bandgap in the spin-down direction with increasing iron content. In the study [51], researchers successfully synthesized the Mn\({}_{2}\)FeSi Heusler alloy using a high-energy planetary ball mill. The resulting compound was found to be an inverse Heusler alloy with an X\({}_{\rm A}\) structure. Through the use of Mossbauer spectroscopy and magnetization measurements, they discovered that the synthesized material exhibits a heterogeneous magnetic structure at room temperature. This structure is composed primarily of the paramagnetic phase, with small contributions from ferro- and ferrimagnetic phases. The Neel temperature \(T_{\rm N}=67\) K was determined from the temperature dependences of the magnetization. Using ab initio calculations and the Monte Carlo simulation method in [52], the structural, magnetic, and electronic properties of Mn\({}_{2}\)YSn (\(Y=\) Sc, Ti, and V) Heusler alloys were studied under applied hydrostatic pressure. The coexistence of two magnetic states with a low and a high magnetic moment was found for a small and a large volume of the crystal lattice, respectively. These states coexist due to their almost equal energies at applied pressures of 3.4, -2.9, and -3.25 GPa for Mn\({}_{2}\)ScSn, Mn\({}_{2}\)TiSn, and Mn\({}_{2}\)VSn, respectively. A positive pressure corresponds to a uniform compression of the lattice, and a negative one corresponds to a uniform expansion of the lattice. It was demonstrated that for the studied compounds, the low-magnetic state (LMS) was characterized by an almost half-metallic behavior, while the high-magnetic state (HMS) acquired a metallic character. It was shown that the parameters of magnetic exchange and Curie temperatures were significantly higher for the HMS than for the LMS. The authors proposed a mechanism for switching between the half-metallic state with the LMS and the metallic state with the HMS using applied pressure. In addition to the electronic and magnetic states discussed above, Heusler alloys based on manganese can exhibit other thermodynamic phenomena, such as the shape memory effect (SME) and the magnetocaloric effect (MCE).
Moreover, unlike the well-known Heusler-like alloys based on Ni-Mn-\(Z\) (\(Z=\) Ga, In, Sn, etc.) with strong deviations from stoichiometry (of the type Ni\({}_{2-x}\)Mn\({}_{1+x}\)\(Z\)), SME and MCE can also be observed in full Heusler alloys Mn\({}_{2}\)\(Y\)Z, where \(Y=\) Sc, Ti, V, Co, Ni, etc.; \(Z=\) Ga, Ge, Sn, etc. (see, e.g., [53, 54, 55] and references therein). Similar magnetic and electronic states can be realized in binary intermetallic compounds based on manganese Mn\({}_{3}\)\(Z\) (\(Z=\) Al, Ge, Ga, etc.) as well. **3. Features of the Electronic Transport and Magnetic State of Mn\({}_{3}\)\(Z\) (\(Z=\) Al, Ge, Si, Sn, Pt, etc.) Compounds** Compensated ferrimagnets possess a magnetic moment that totals zero, but the difference in their densities of electronic states for opposite spin directions allows for a significant spin polarization of current carriers. A study [20] indicates that the compensated ferrimagnetic state is present in Mn\({}_{3}\)Al, and this half-metallic state can persist at room temperature. In the same study, the magnetic properties of thin films of Mn\({}_{3}\)Al were examined. The results reveal that the films demonstrate the state of a compensated ferrimagnet, with a total magnetization value of \(M=0.11\)\(\pm\) 0.04 \(\mu_{\rm B}\)/f.u. The results of [20] are in good agreement with refs. [28, 29], where the electronic transport and magnetic characteristics of cast and rapid melt-quenched (RMQ) Mn\({}_{3}\)Al alloys were measured and compared. In Figure 6, the field dependence of the magnetization (\(M=f(H)\)) is shown for both the cast and RMQ alloy Mn\({}_{3}\)Al at a temperature of 4.2 K, as described in reference [29]. It can be observed from Figure 6a that the cast alloy exhibits a small magnitude of magnetization, which increases linearly with the applied field. At a field strength of 70 kOe, the magnetization reaches approximately 1.1 emu/g. In contrast, the Mn\({}_{3}\)Al RMQ alloy (Figure 6b) displays a significantly different shape of the _M(H)_ dependence. In this case, even in weak magnetic fields, there is an increase in magnetization, followed by a tendency toward saturation. In ref. [29], the difference in \(M\)(\(H\)) behavior for Mn\({}_{3}\)Al is explained as follows. In the case of a cast compound having the \(\beta\)-Mn structure, a frustrated antiferromagnetic state can arise, which manifests itself in the \(M\)(\(H\)) dependences (Figure 6a) and in the temperature dependences of the susceptibility (Figure 7). The results obtained in [29] are in good agreement with the conclusions of ref. [42] obtained on Mn\({}_{2}\)FeAl with the same \(\beta\)-Mn structure. According to ref. [29], a decrease in grain size and an increase in "disorder" are observed in Mn\({}_{3}\)Al after RMQ treatment. As shown in [56], the magnetic state of the Mn\({}_{3}\)Al alloy is very sensitive to the filling of interstices and the degree of ordering. According to first-principles calculations [18] in Mn\({}_{3}\)Al with the \(D0_{3}\) structure, a compensated ferrimagnetic state with a zero moment can arise, which is observed in the RMQ compound Mn\({}_{3}\)Al (Figure 6b) according to [29]. Figure 6: Field dependence of magnetization for Mn\({}_{3}\)Al at \(T=4.2\) K: (**a**) cast and (**b**) RMQ samples [29]. Figure 7: The magnetic susceptibility of the Mn\({}_{3}\)Al alloy measured at a magnetic field strength of 100 Oe during the cooling process.
Figure 7 displays the temperature dependence of the susceptibility for the Mn\({}_{3}\)Al alloy [29]; the inset illustrates the temperature dependence of the inverse susceptibility at a magnetic field strength of 100 Oe. At high temperatures, the data approximately follow the Curie–Weiss law with a temperature \(\theta_{\rm CW}\) of around \(-700\) K. This behavior suggests the possible presence of an antiferromagnetic state. The Néel temperature \(T_{\rm N}\) can be estimated from the point where the magnetic susceptibility curve breaks; in our case, \(T_{\rm N}\) is determined to be 35 K (as shown in Figure 7). The large ratio \(|\theta_{\rm CW}|/T_{\rm N}\) indicates frustrated antiferromagnetism in the cast Mn\({}_{3}\)Al compound [29].

The cast and RMQ alloys investigated in [28, 29] have the \(\beta\)-Mn structure, which consists of two distinct sublattices; one of them is made up of triangles arranged perpendicular to the [111] axes, forming a three-dimensional kagome-like lattice. Recent experimental studies have shown that in systems with strong frustration caused by competing exchange interactions, not only can a quantum spin-liquid state arise, but antiferromagnetism with a significantly reduced, yet still finite, Néel temperature can also occur. These systems are characterized by a frustration parameter, the ratio \(|\theta_{\rm CW}|/T_{\rm N}\); in the intermediate temperature range \(T_{\rm N}<T<|\theta_{\rm CW}|\), the system can exhibit unusual spin-liquid properties. High values of the frustration parameter have been observed in the PdCrO\({}_{2}\) compound, where \(T_{\rm N}=37\) K and \(\theta_{\rm CW}\approx-500\) K [57]. This behavior is not explained by the standard Heisenberg model and is thought to be due to correlation effects in the subsystem of itinerant electrons [58]. A similar behavior of the magnetic susceptibility was recently discovered in the Mn\({}_{2}\)FeAl compound with the \(\beta\)-Mn structure in [42, 44] (\(T_{\rm N}=42\) K, \(\theta_{\rm CW}\approx-230\) K according to [44]).
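For concreteness, the frustration parameters implied by the values quoted above are

\[f=\frac{|\theta_{\rm CW}|}{T_{\rm N}}\approx\frac{700}{35}\approx 20\ \ (\text{cast Mn}_{3}\text{Al [29]}),\qquad\frac{500}{37}\approx 14\ \ (\text{PdCrO}_{2}\ \text{[57]}),\qquad\frac{230}{42}\approx 5.5\ \ (\text{Mn}_{2}\text{FeAl [44]}),\]

all well above the value \(f\sim 1\) expected for a conventional, unfrustrated antiferromagnet.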
According to calculations [56], the Mn\({}_{3}\)Al compound in the \(\beta\)-Mn structure exhibits a ferrimagnetic state in which the magnetic moments of the sublattices are not significantly compensated. However, ab initio calculations [20, 59] for the \(D0_{3}\) structure suggest that a compensated ferrimagnetic state with a zero moment and a half-metallic structure arises. The results from [60] are not fully supported by the experimental data [28, 29], which suggest that the cast and RMQ Mn\({}_{3}\)Al alloys exhibit a frustrated antiferromagnetic state and an almost compensated ferrimagnetic state, respectively. In the case of a cast Mn\({}_{3}\)Al alloy with the \(\beta\)-Mn structure, the absence of long-range magnetic order should manifest itself in the Hall effect as a vanishing anomalous contribution. On the contrary, in the case of the RMQ alloy with the \(\beta\)-Mn structure in the state of a compensated ferrimagnet, an anomalous Hall effect should be observed.

Figure 8 illustrates the temperature dependence of the electrical resistivity \(\rho(T)\) for cast and rapidly melt-quenched Mn\({}_{3}\)Al alloys [29]. In the cast alloy (Figure 8a), the residual resistivity \(\rho_{0}\) is relatively high, reaching a value of approximately 307 \(\upmu\Omega\) cm, and the resistivity follows a semiconductor-like trend, decreasing with increasing temperature. However, after rapid melt quenching (Figure 8b), there is a significant reduction in the residual electrical resistivity \(\rho_{0}\) by more than an order of magnitude, down to 12 \(\upmu\Omega\) cm. Additionally, a minimum appears in the temperature dependence \(\rho(T)\) at around 60 K. Typically, rapid melt quenching results in the formation of fine-grained structures, which can lead to increased resistivity due to the scattering of current carriers at grain boundaries. However, this phenomenon was not observed in the Mn\({}_{3}\)Al alloy studied [28, 29]. Instead, manganese sulfide (MnS) precipitates were present at the grain boundaries, explaining the high electrical resistivity values in the cast alloy. The rapid melt-quenching process caused the dissolution of manganese sulfide within the volume of the grains, resulting in grain boundaries free of MnS. As a result, the resistivity of the rapidly melt-quenched alloy decreased compared with the cast alloy.

Figure 8: Temperature dependence of the resistivity for cast (**a**) and RMQ (**b**) Mn\({}_{3}\)Al [29].

Similar behavior was also observed in the Hall effect measurements [28, 29]. Figure 9 shows the dependence of the Hall resistivity \(\rho_{xy}=f(H)\) at \(T=4.2\) K for the cast and RMQ Mn\({}_{3}\)Al alloy. In the first case, a linear increase in \(\rho_{xy}\) is observed, and in the second case, the behavior of \(\rho_{xy}(H)\) is typical of alloys with an anomalous Hall effect [28, 29]. These data also indicate the absence of spontaneous magnetization in the case of cast Mn\({}_{3}\)Al and the realization of a compensated ferrimagnet state in the RMQ Mn\({}_{3}\)Al alloy [28, 29].

Traditionally, it was believed that the anomalous Hall effect (AHE) was exclusive to ferromagnetic materials [61]. However, it has been discovered that the AHE can manifest in various magnetic materials as a result of broken time-reversal symmetry. In reference [62], ab initio calculations were conducted, taking into consideration the symmetry properties of the nontrivial AHE in the Mn\({}_{3}\)Al compensated ferrimagnet. The nonzero elements of the anomalous Hall conductivity were determined based on the magnetic space group of Mn\({}_{3}\)Al. The calculations revealed an anomalous Hall conductivity of \(\sigma_{xy}=-320\) (\(\Omega\) cm)\({}^{-1}\). The study also delved into the nature of the Berry curvature, which is responsible for the intrinsic origin of the AHE, using group theory. Furthermore, the overall behavior of the Berry curvature across the entire Brillouin zone was illustrated. The observation of a nonzero topological anomalous Hall effect in the antiferromagnets Mn\({}_{3}\)Sn and Mn\({}_{3}\)Ge [9] can be attributed to their noncollinear magnetic structure and Berry curvature. A study conducted in [63] focused on the magnetization and anomalous Hall effect of a single-crystal hexagonal chiral antiferromagnet Mn\({}_{3}\)Ge.
Remarkably, it was discovered that this material exhibits a significant anomalous Hall conductivity of approximately 60 (\(\Omega\) cm)\({}^{-1}\) at room temperature and \(\sim\)380 (\(\Omega\) cm)\({}^{-1}\) at 5 K in zero field, approaching half the value anticipated for a quantum Hall effect on an atomic layer with a Chern number of one. Furthermore, the sign of the anomalous Hall effect could be switched by reversing or slightly rotating a small magnetic field of less than 0.1 T. This intriguing behavior could have implications for the development of switching and storage devices based on antiferromagnets. It is worth noting that recent studies have reported new antiferromagnetic materials with rapid response times, low power consumption, and high immunity to external magnetic fields; these materials show promise in the field of spintronics [64, 65, 66, 67, 68, 69].

Figure 9: Field dependence of Hall resistivity for Mn\({}_{3}\)Al at \(T=4.2\) K [29]: (**a**) cast and (**b**) RMQ samples.

In materials exhibiting noncollinear antiferromagnetism, the magnetic moments of atoms are not strictly aligned along a single axis. One such material is Mn\({}_{3}\)Ge, which falls into the category of noncollinear antiferromagnetic materials. In Mn\({}_{3}\)Ge, the manganese atoms form a hexagonal sublattice, and their magnetic moments arrange themselves in a kagome lattice structure (Figure 10). This unique arrangement gives rise to a noncollinear antiferromagnetic state and several other unusual effects, including a large anomalous Hall effect, a new spin Hall effect, and a significant spin Nernst effect [70, 71, 72].

Figure 10: \(D0_{19}\) structure for Mn\({}_{3}\)Ge. Red arrows show the Mn magnetic moments [23].

A study conducted in [73] focused on the magnetic structure of antiferromagnetic Mn\({}_{3}\)Ge crystals with a kagome lattice, using polarized neutron diffraction. The study revealed that in Mn\({}_{3}\)Ge the magnetic order is characterized by a coplanar state with \(\mathbf{k}=0\), belonging to the two-dimensional irreducible representation \(\Gamma_{9}\) of the double group, the only irreducible representation consistent with the observed diffraction pattern of Mn\({}_{3}\)Ge [73]. This coplanar state exhibits a perfect 120\({}^{\circ}\) antichiral structure with a magnetic moment of 2.2 \(\upmu_{\text{B}}\)/Mn. Additionally, a weak collinear ferromagnetism is observed. A phenomenological spin Hamiltonian is proposed to describe the manganese-based magnetism, incorporating exchange interactions, Dzyaloshinskii–Moriya interactions, and single-ion crystal field terms. These interactions contribute to spin-wave damping, and the extended range of magnetic interactions indicates an itinerant magnetism compatible with the observed anomalies of the transport properties.

In [74], a study was conducted on the electrical, magnetic, galvanomagnetic, and thermal properties of a single crystal of the noncollinear antiferromagnet Mn\({}_{3}\)Ge. The researchers discovered that at very low temperatures the Wiedemann–Franz law, which establishes a connection between the electronic and thermal Hall effects, holds true; however, deviations from this law were observed at temperatures above 100 K. The investigation revealed that the carriers in Mn\({}_{3}\)Ge have a short mean free path, similar to the distance between Mn antisite defects. As a result, the elastic scattering of current carriers becomes the dominant factor.
The researchers proposed that the deviation from the Wiedemann–Franz law is attributed to a disparity between the thermal and electrical contributions of the Berry curvature along the Fermi surface. Theoretical calculations supported this interpretation, demonstrating that the Berry spectra governing the two responses are not identical. Importantly, the study also confirmed that the Bridgman relation, which links the anomalous Nernst and Ettingshausen coefficients, is valid across the entire temperature range investigated.

External pressure and doping can alter the Berry curvature, as well as the corresponding physical properties. In a study [75], the anomalous Hall effect in a single crystal of antiferromagnetic Mn\({}_{3}\)Ge was investigated under hydrostatic pressure up to 2.85 GPa. It was observed that the Hall signal becomes zero at a pressure of 1.53 GPa and changes sign at higher pressures. Although the sign of the Hall conductivity changes with increasing pressure, its saturation value at room temperature remains relatively high, around 23 (\(\Omega\) cm)\({}^{-1}\) at 2.85 GPa, comparable to the value of 40 (\(\Omega\) cm)\({}^{-1}\) at atmospheric pressure. The authors suggest that this change in the Hall conductivity may be attributed to gradual modifications in the in-plane components of the Mn moments within a noncollinear triangular magnetic lattice. These findings provide insight into the possibility of manipulating and controlling the anomalous Hall effect in chiral antiferromagnetic Mn\({}_{3}\)Ge through pressure-induced changes.

The authors of [76] investigated the magnetic and electronic structure of the (Mn\({}_{0.78}\)Fe\({}_{0.22}\))\({}_{3}\)Ge hexagonal single crystal, which is known as a Weyl semimetal, using electron transport, magnetic property, and neutron diffraction experiments. Temperature measurements of the magnetization revealed two magnetic transitions, at \(T_{\rm N1}=242\) K and \(T_{\rm N2}=120\) K. The anomalous Hall effect is observed in the intermediate range at temperatures between \(T_{\rm N1}\) and \(T_{\rm N2}\), disappearing at \(T<T_{\rm N2}\). Neutron diffraction experiments were carried out to determine the magnetic structures of the (Mn\({}_{0.78}\)Fe\({}_{0.22}\))\({}_{3}\)Ge single crystal: below \(T_{\rm N2}\) the sample has a collinear antiferromagnetic structure, while in the intermediate temperature range the magnetic structure is noncollinear antiferromagnetic. The authors conclude that the observation of an anomalous Hall effect and a noncollinear magnetic structure in (Mn\({}_{0.78}\)Fe\({}_{0.22}\))\({}_{3}\)Ge in this temperature range indicates the existence of Weyl points. At temperatures below \(T_{\rm N2}\), there is no anomalous Hall effect, and the magnetic structure changes to a collinear antiferromagnetic one. This indicates a strong coupling between the magnetic and electronic structures of the (Mn\({}_{0.78}\)Fe\({}_{0.22}\))\({}_{3}\)Ge compound.

The influence of pressure up to 2.2 GPa on the electrical resistivity \(\rho\) and thermoelectric power \(S\) of the Mn\({}_{3}\)Si single-crystal compound was investigated in [77]. The Néel temperature \(T_{\rm N}\) was determined by analyzing the temperature dependences of \(\rho\) and \(S\), revealing an increase in \(T_{\rm N}\) with rising pressure.
Notably, the resistivity and thermoelectric power displayed significant changes at around \(P\approx 1\) GPa at a temperature of 2 K, indicating the occurrence of a phase transition.

Another study [78] presents findings on the structural and magnetic properties of noncollinear antiferromagnetic Mn\({}_{3}\)Sn films that adopt the \(D0_{19}\) hexagonal structure. These films were observed to exhibit weak ferromagnetism, characterized by an uncompensated in-plane magnetization of 34 kA/m and a coercive force \(\mu_{0}H_{\rm c}\) of 4.0 mT at room temperature. Additionally, the study investigated the phenomenon of exchange bias in Mn\({}_{3}\)Sn/Py bilayers, revealing the potential to achieve exchange bias fields of up to \(\mu_{0}H_{\rm EB}=12.6\) mT at 5 K. These results highlight the attractiveness of Mn\({}_{3}\)Sn films for applications in antiferromagnetic spintronics.

A study in [79] reports on the unusual behavior of the anomalous Hall effect in the compound Mn\({}_{3}\)Sn. The authors discovered a linear dependence on the magnetic field, which they attributed to the Berry curvature of the wave function. Interestingly, the magnitude of the Hall signal in this case exceeds what would be expected in the semiclassical model. The authors propose that the magnetic field induces a nonplanar spin canting, resulting in a nontrivial chirality of the spins on the kagome lattice. This leads to changes in the band structure, specifically gapping out previously unknown Weyl nodal lines, which explains the observed behavior of the anomalous Hall effect. The findings suggest a connection between the real-space Berry phase, arising from spin chirality, and the momentum-space Berry curvature in the kagome lattice.

The paper [80] investigates the electronic transport and magnetic properties of Mn\({}_{3}\)Sn films. The findings reveal that these films exhibit a weak uncompensated magnetic moment of approximately 0.12 \(\upmu_{\rm B}\)/f.u. and an electrical resistivity of about 3.8 \(\upmu\Omega\) m at room temperature. These results closely match the data obtained for bulk Mn\({}_{3}\)Sn, indicating the high purity and perfection of the synthesized films, with a residual resistivity ratio (RRR) of 3.92. The Mn atoms in the Mn\({}_{3}\)Sn compound are arranged in a kagome-type lattice. The study demonstrates that at room temperature, when the external magnetic field is perpendicular to the kagome planes, a weak anomalous Hall effect is observed, along with a Hall resistance that varies linearly with the field. The researchers identify three distinct magnetic phases in the investigated films of the chiral antiferromagnet Mn\({}_{3}\)Sn: an inverse triangular spin state occurring above 250 K, a helical phase stabilized between 100 and 250 K, and a spin glass phase formed below 100 K. Based on their findings, the authors suggest that these films may support topologically protected skyrmions, with the fictitious effective magnetic field estimated to be around 4.4 T. This indicates the potential for unique magnetic phenomena in the Mn\({}_{3}\)Sn films.

Currently, there is active research on the physics of Berry curvature and the related nontrivial magnetotransport in two-dimensional kagome spin lattices of noncollinear antiferromagnets. The focus has mainly been on hexagonal chiral antiferromagnets like Mn\({}_{3}\)Sn, while studies of face-centered cubic (fcc) noncollinear antiferromagnets are limited. One such example of fcc noncollinear antiferromagnets is Mn\({}_{3}\)Pt.
In reference [81], the researchers examined single crystals of Mn\({}_{3}\)Pt. They discovered that applying uniaxial stress to Mn\({}_{3}\)Pt crystals decreased their coercive force, which allowed them to observe the anomalous Hall effect in bulk materials of the Mn\({}_{3}\)\(Z\) family with a cubic structure. Interestingly, the anomalous Hall effect remained even after the stress was removed, suggesting that stress-induced ferromagnetic moments had minimal impact on the effect. The study found that longitudinal stress reduced the coercive force more than transverse stress, indicating that the dominant response to applied stress was a rotation of spins within the plane. However, further investigation is needed to verify this assumption, particularly through direct observation of piezomagnetism.

In reference [82], the authors systematically studied the effect of deformation on the magnetic properties and the anomalous Hall effect of fcc noncollinear antiferromagnetic Mn\({}_{3}\)Pt films. They found that the ferromagnetic characteristics of Mn\({}_{3}\)Pt films showed a similar response to strain, in contrast to hexagonal chiral antiferromagnets. The ferromagnetic signal in these films was attributed to the canting of the magnetic moments of the Mn atoms in the kagome structures, while the anomalous Hall resistivity was linked to the nonzero Berry curvature. By adjusting the thickness of the Mn\({}_{3}\)Pt film, they achieved an anomalous Hall conductivity exceeding 100 (\(\Omega\) cm)\({}^{-1}\). Additionally, the relationship between the Berry phase, the magnetic properties, and the AHE in Mn\({}_{3}\)Pt was confirmed through a study of crystal growth orientation. The researchers concluded that this compound is a promising candidate for room-temperature antiferromagnetic spintronics.

## 4 Physical Mechanisms of the Magnetic States Formation in Mn-Based Compounds: Half-Metallic Ferromagnets and Spin Gapless Semiconductors

The existence of distinct spin-up and spin-down states in half-metallic ferromagnets presents a complex challenge to the general theory of itinerant magnetism [34]. The formation of a half-metallic state in Heusler alloys like \(X\)Mn\(Z\) and \(X_{2}\)Mn\(Z\), which have the \(C1_{b}\) and \(L2_{1}\) structures, can be explained as follows [33, 34, 83, 84]. If we disregard the hybridization of the atomic states of \(X\) and \(Z\), the \(d\) band of manganese displays a significant energy gap between bonding and antibonding states. In a ferromagnetic state, the strong intra-atomic exchange (Hund's exchange) among manganese ions results in a notable separation of the spin subbands for up and down spins. One of these spin subbands closely interacts with the ligand's \(p\) band, leading to a partial or complete blurring of the corresponding energy gap due to \(p\)-\(d\) hybridization. Meanwhile, in the other spin subband, the energy gap remains intact and can align with the Fermi level, thus giving rise to the HMF state. The \(C1_{b}\) structure exhibits a real energy gap, whereas the \(L2_{1}\) structure has a pronounced pseudogap. This difference is attributed to significant alterations in \(p\)-\(d\) hybridization, particularly between the \(p\) and \(t_{2g}\) states, in the absence of an inversion center, which is a characteristic feature of the \(C1_{b}\) structure. As a result, the \(C1_{b}\) structure is more favorable for the formation of the HMF state.
The stability of the ferromagnetic state is a result of variations in \(p\)-\(d\) hybridization between states with opposite spin orientations, as elaborated in [84]. In a study [85], researchers conducted calculations for 54 ternary Heusler compounds of composition \(X_{2}YZ\), where \(X\) is a \(3d\) transition metal (Mn, Fe, or Co), \(Y\) is one of Y, Zr, Nb, Mo, Tc, Ru, Rh, Pd, and Ag, and \(Z\) is Al or Si. The findings revealed that seven of these compounds (Mn\({}_{2}\)NbAl, Mn\({}_{2}\)ZrSi, Mn\({}_{2}\)RhSi, Co\({}_{2}\)ZrAl, Co\({}_{2}\)NbAl, Co\({}_{2}\)YSi, and Co\({}_{2}\)ZrSi) displayed 100% spin polarization, which classifies them as half-metallic ferromagnets (HMFs). Furthermore, five other alloys, specifically Mn\({}_{2}\)TcAl, Mn\({}_{2}\)RuAl, Mn\({}_{2}\)NbSi, Mn\({}_{2}\)RuSi, and Fe\({}_{2}\)NbSi, exhibited high spin polarization (above 90%), with a gap present for one of the spin directions near the Fermi level; these alloys were categorized as "almost HMF" in [85]. Importantly, their Fermi levels were found to shift under pressure, resulting in the alignment of the Fermi level with the gap and the emergence of the HMF state.

An intriguing development in the field of half-metallic magnetism is represented by electron-deficient full Heusler alloys. Reducing the number of valence electrons to 24 per formula unit leads to either a nonmagnetic semiconductor or a half-metallic antiferromagnet. Remarkably, the number of valence electrons can be reduced further, entering a range of half-metals but with a bandgap for the majority spin direction. This is best exemplified by the case of Mn\({}_{2}\)VAl, which is a half-metallic ferrimagnet, as calculated using the generalized gradient approximation for the exchange-correlation potential [86]. In addition to the mentioned study, other works [35, 86, 87] explored the exchange interactions in Ni\({}_{2}\)Mn\(Z\) (\(Z\) = Ga, In, Sn, Sb) and Mn\({}_{2}\)V\(Z\) (\(Z\) = Al, Ge) alloys. Ni\({}_{2}\)Mn\(Z\) was found to be non-half-metallic, while Mn\({}_{2}\)V\(Z\) was identified as half-metallic. These studies emphasized the significance of intersublattice exchange interactions: in the case of Mn\({}_{2}\)V\(Z\) (\(Z\) = Al, Ge), it was observed that the ferrimagnetic coupling between the V and Mn moments stabilizes the ferromagnetic alignment of the Mn moments. The V-based Heusler alloys Mn\({}_{2}\)V\(Z\) (\(Z\) = Al, Ga, In, Si, Ge, Sn) were predicted to exhibit half-metallic ferrimagnetism in these studies [35, 86, 88].

Half-metallicity in the Mn\({}_{2}\)VAl ferrimagnet was detected using resonant inelastic soft X-ray scattering (SX-RIXS) in the presence of a magnetic field [89]. For V \(L\)-edge excitation, the findings revealed that the partial density of states of the V 3\(d\) states around the Fermi energy is minimal and that these states exhibit a relatively localized character. Conversely, for Mn \(L\)-edge excitation, the spectra were dominated by fluorescence and displayed clear magnetic circular dichroism, which depended significantly on the excitation photon energy. These experimental results were compared with theoretical predictions based on density-functional-theory band structure calculations, confirming the itinerant, spin-dependent nature of the Mn 3\(d\) states and the decay of the Mn 2\(p\) core states.
This consistency between the experimental data and the calculations supports the half-metallic behavior of the Mn 3\(d\) states.

The electronic structure and magnetic state of SGS materials are in many ways quite similar to those of HMF compounds. A "transition" from the HMF to the SGS state can occur upon doping. Using first-principles calculations within density functional theory, the authors of ref. [90] described such a transition by studying the magnetism and electronic structure of Co\({}_{2-x}\)Mn\({}_{1+x}\)Al (\(0\leq x\leq 1\)) alloys, i.e., during the transition from the HMF Co\({}_{2}\)MnAl to the SGS Mn\({}_{2}\)CoAl. As reported in [90], the electronic spectrum of the Co\({}_{2}\)MnAl compound (\(x=0\)) exhibits an intersection of its spin-up and spin-down bands with the Fermi level \(E_{\text{F}}\), indicating metallic properties for both spin directions. However, Co\({}_{2}\)MnAl displays a low density of states for spin-down electrons, resulting in a pseudogap rather than a true gap; the calculated spectra show a spin polarization of approximately 75%. As the manganese content increases, the spin-down valence band of Co\({}_{2}\)MnAl shifts downward, causing the density of states to approach zero, and consequently the degree of spin polarization increases. In the case of the Co\({}_{1.5}\)Mn\({}_{1.5}\)Al compound, as the Mn content further increases, the top of the spin-down band aligns with the Fermi level, leading to the emergence of a genuine gap and a maximum spin polarization of 100%. Despite that, Co\({}_{1.5}\)Mn\({}_{1.5}\)Al remains vulnerable, as its Fermi level is situated at the edge of the spin-down bandgap, making it susceptible to external influences, especially structural defects. As the Mn content continues to rise in Co\({}_{2-x}\)Mn\({}_{1+x}\)Al, the spin-down valence band moves further away from the Fermi level, resulting in an enlarged bandgap. At \(x=0.875\) (Co\({}_{1.125}\)Mn\({}_{1.875}\)Al), the width of the spin-down gap reaches approximately 0.4 eV, with the Fermi level positioned nearly at the midpoint of the gap; this leads the authors of [90] to classify this compound as an "ideal" half-metallic ferromagnet. With a further increase in Mn content to \(x=1\), the compound transforms into the inverse Heusler alloy Mn\({}_{2}\)CoAl, in which a genuine energy gap is preserved for the spin-down states. The electronic structure of Mn\({}_{2}\)CoAl exhibits a unique characteristic: for the spin-up electronic states, the conduction and valence band edges closely approach the Fermi level, resulting in a minimal energy gap. This distinctive feature categorizes the inverse Mn\({}_{2}\)CoAl compound as a spin gapless semiconductor.

Table 1 presents information on Mn\({}_{2}\)\(Y\)\(Z\) and Mn\({}_{3}\)\(Z\) (\(Y\) = Cr, Fe, Co; \(Z\) = Al, Ge, Si, Sn, Pt) compounds, for which the electronic state, magnetic state, and anomalous Hall conductivity are given.

## 5 Conclusions

Intermetallic compounds based on manganese, specifically Mn\({}_{2}\)\(Y\)\(Z\) and Mn\({}_{3}\)\(Z\) (\(Y\) = Sc, Ti, V, Cr, Fe, Co, etc.; \(Z\) = Al, Ge, Sn, Si, Pt, etc.), exhibit unique characteristics of the electronic structure. These compounds can exhibit various electronic and magnetic states, such as antiferromagnet, compensated ferrimagnet, topological semimetal, and frustrated antiferromagnet. Furthermore, their magnetic and electronic properties are highly sensitive to external influences.
In the electronic subsystem of Mn\({}_{2}\)\(Y\)Al compounds, different magnetic states can lead to the emergence of half-metallic ferromagnet and spin gapless semiconductor states. For instance, Mn\({}_{2}\)CoAl can exhibit a ferromagnetic spin gapless semiconductor state, while Mn\({}_{2}\)FeAl can display a frustrated antiferromagnetic state. Compounds of Mn\({}_{3}\)\(Z\) manifest as a half-metallic ferromagnet or a topological semimetal, displaying notable phenomena such as a large anomalous Hall effect, spin Hall effect, spin Nernst effect, and thermal Hall effect. The magnetic subsystem of these compounds can exhibit states such as ferrimagnetism, antiferromagnetism, compensated ferrimagnetism, frustrated antiferromagnetism, and antiferromagnetism with a kagome-type lattice. Overall, Mn\({}_{3}\)\(Z\) compounds offer a rich spectrum of electronic and magnetic properties.

Table 1: Manganese-based compounds Mn\({}_{2}\)\(Y\)\(Z\) and Mn\({}_{3}\)\(Z\) (\(Y\) = V, Cr, Fe, Co, Ni; \(Z\) = Al, Ge, Si, Sn, Pt, Ir): electronic and magnetic states and anomalous Hall conductivity.

| **Compound** | **Electronic State** | **Magnetic State** | **Anomalous Hall Conductivity \(\sigma^{\text{AHE}}\), (\(\Omega\) cm)\({}^{-1}\)** |
|---|---|---|---|
| Mn\({}_{2}\)VAl | HMF \({}^{1}\) [41] | FIM \({}^{2}\) [41] | |
| Mn\({}_{2}\)VSi | HMF [41] | FIM [41] | |
| Mn\({}_{2}\)CrAl | HMF [91, 92] | CFIM \({}^{3}\) [21] | |
| Mn\({}_{2}\)FeAl | HMF [36] | CFIM [42]; frustrated AFM \({}^{4}\) [44] | |
| Mn\({}_{2}\)NiAl | | AFM [93] | |
| Mn\({}_{2}\)CoAl | SGS \({}^{5}\) [47] | FM \({}^{6}\) [47] | 21.8 [47] |
| Mn\({}_{3}\)Al | HM-AFM \({}^{7}\) [94] | frustrated AFM (cast); CFIM (RMQ \({}^{8}\)) [28, 29] | 320 [62] |
| Mn\({}_{3}\)Ge | HM-FIM \({}^{9}\) [94]; TSM \({}^{10}\) [9] | FIM [94]; noncollinear AFM [9, 63] | 60 at RT; \(\sim\)380 at 5 K [63] |
| Mn\({}_{3}\)Si | HM-FIM [94] | FIM [94] | |
| Mn\({}_{3}\)Sn | TSM [79] | noncollinear AFM [78, 81]; kagome AFM [79] | [95] |
| Mn\({}_{3}\)Pt | TSM [82] | kagome AFM [82] | 100 [82] |
| Mn\({}_{3}\)Ir | | AFM [96] | 40 at RT [96] |

\({}^{1}\) HMF is a half-metallic ferromagnet. \({}^{2}\) FIM is a ferrimagnet. \({}^{3}\) CFIM is a compensated ferrimagnet. \({}^{4}\) AFM is an antiferromagnet. \({}^{5}\) SGS is a spin gapless semiconductor. \({}^{6}\) FM is a ferromagnet. \({}^{7}\) HM-AFM is a half-metallic antiferromagnet. \({}^{8}\) RMQ is rapid melt quenched. \({}^{9}\) HM-FIM is a half-metallic ferrimagnet. \({}^{10}\) TSM is a topological semimetal.

When comparing the properties of the Mn compounds Mn\({}_{2}\)\(Y\)Al and Mn\({}_{3}\)\(Z\), it becomes evident that there are both similarities and differences between them, as shown in Table 1. In the electronic structure of Mn\({}_{2}\)\(Y\)Al alloys, the predominant state is that of a half-metallic ferromagnet, whereas the Mn\({}_{2}\)CoAl compound exhibits the characteristics of a spin gapless semiconductor. It is worth noting that Mn\({}_{2}\)CoAl is among the first Heusler alloys in which the SGS state is prominently observed.
In terms of magnetism, Mn\({}_{2}\)\(Y\)Al alloys display a range of behaviors, including ferromagnetism, compensated ferrimagnetism, and a frustrated AFM state. The Mn\({}_{3}\)\(Z\) compounds, on the other hand, exhibit more diverse electronic and magnetic states. Many of them demonstrate the characteristics of a topological semimetal with significant anomalous Hall conductivity, even at room temperature. Similar to Mn\({}_{2}\)\(Y\)Al alloys, they also exhibit a variety of magnetic states; moreover, owing to their richer electronic structure, they can realize noncollinear AFM states and AFM states typical of a kagome-type lattice.

It should be emphasized that in actual materials, the presence and behavior of electronic and magnetic states, as well as the transitions between them, depend on many factors. These include the composition of the sample; external parameters such as temperature, magnetic field, and external pressure; the dimensionality of the material (bulk, film, nanostructure); and the processing method used (cast, rapidly quenched, nanostructured samples, etc.). The electronic and magnetic states discussed, as well as the transitions between them, can thus be realized and exploited in practice. Beyond practical applications, all the states discussed, especially frustrated antiferromagnetism and half-metallic ferromagnetism, are of general scientific interest. Some of the systems considered demonstrate anomalous electronic properties (in particular, a large \(T\)-linear heat capacity) and are highly intriguing from a physical standpoint, posing an exciting challenge for future theoretical work.

It should be noted that a significant number of studies on manganese-based Heusler compounds rely on band calculations, which may not always be reliable. This is especially true when treating the size of the energy gap and the degree of spin polarization in possible HMF and SGS compounds, which may lead to an overestimation of the stability of these states. However, electronic structure calculation methods are continually improving, and new modern techniques are emerging. It is essential to constantly compare these calculations with experimental data. For example, in the case of the HMF, this could involve comparing the results of band calculations with direct experiments that determine the gap and spin polarization in situ by ultraviolet photoemission spectroscopy, taking advantage of a multichannel spin filter [97]. NMR experiments could also be conducted to search for the vanishing of the linear contribution to the temperature dependence of the nuclear spin-lattice relaxation rate \(1/T_{1}\) (violation of Korringa's law), which is theoretically predicted for the HMF [34].

Finally, a promising direction for exploring new electronic effects and features of magnetic states is a comprehensive study of quaternary Heusler alloys containing manganese atoms. Although the present review did not cover such compounds, there are many studies in this area, which can be found in related reviews [31, 98, 99] and references therein. This opens up further opportunities for fine control of the magnetic and electronic characteristics of such compounds for their possible practical use in spintronics and micro- and nanoelectronics.

**Author Contributions:** Conceptualization, V.V.M. and V.Y.I.; methodology, V.V.M. and V.Y.I.; formal analysis, V.V.M. and V.Y.I.; investigation, V.V.M.; resources, V.V.M. and V.Y.I.; data curation, V.V.M.
and V.Y.I.; writing--original draft preparation, V.V.M. and V.Y.I.; writing--review and editing, V.V.M. and V.Y.I.; visualization, V.V.M. and V.Y.I.; supervision, V.V.M. and V.Y.I.; funding acquisition, V.V.M. All authors have read and agreed to the published version of the manuscript.

**Funding:** The research was carried out within the state assignment of the Ministry of Science and Higher Education of the Russian Federation (themes «Spin», No. 122021000036-3, and «Quantum», No. 122021000038-7). Section 3, "Features of the electronic transport and magnetic state of Mn\({}_{3}\)\(Z\) (\(Z\) = Al, Ge, Si, Sn, Pt etc.) compounds", was prepared with the financial support of the Russian Science Foundation within the framework of research project No. 22-22-00935.

**Acknowledgments:** The authors consider it their pleasant duty to thank their colleagues and co-authors Lukoyanov, A.V., Skryabin, Y.N., and Marchenkova, E.B. for valuable discussions, and Perevozchikova, Y.A., Perevalova, A.N., Fominykh, B.M., Semiannikova, A.A., and Emelyanova, S.M. for help with the design of the review.

**Conflicts of Interest:** The authors declare no conflict of interest.
2301.00717
Robust Consensus Clustering and its Applications for Advertising Forecasting
Consensus clustering aggregates partitions in order to find a better fit by reconciling clustering results from different sources/executions. In practice, there exist noise and outliers in clustering task, which, however, may significantly degrade the performance. To address this issue, we propose a novel algorithm -- robust consensus clustering that can find common ground truth among experts' opinions, which tends to be minimally affected by the bias caused by the outliers. In particular, we formalize the robust consensus clustering problem as a constraint optimization problem, and then derive an effective algorithm upon alternating direction method of multipliers (ADMM) with rigorous convergence guarantee. Our method outperforms the baselines on benchmarks. We apply the proposed method to the real-world advertising campaign segmentation and forecasting tasks using the proposed consensus clustering results based on the similarity computed via Kolmogorov-Smirnov Statistics. The accurate clustering result is helpful for building the advertiser profiles so as to perform the forecasting.
Deguang Kong, Miao Lu, Konstantin Shmakov, Jian Yang
2022-12-27T21:49:04Z
http://arxiv.org/abs/2301.00717v1
# Robust Consensus Clustering and its Applications for Advertising Forecasting

###### Abstract

Consensus clustering aggregates partitions in order to find a better fit by reconciling clustering results from different sources/executions. In practice, there exist noise and outliers in the clustering task, which may significantly degrade the performance. To address this issue, we propose a novel algorithm -- robust consensus clustering -- that can find the common ground truth among experts' opinions and tends to be minimally affected by the bias caused by outliers. In particular, we formalize the robust consensus clustering problem as a constrained optimization problem, and then derive an effective algorithm based on the alternating direction method of multipliers (ADMM) with a rigorous convergence guarantee. Our method outperforms the baselines on benchmarks. We apply the proposed method to real-world advertising campaign segmentation and forecasting tasks using the proposed consensus clustering results based on the similarity computed via the Kolmogorov-Smirnov statistic. The accurate clustering result is helpful for building advertiser profiles so as to perform the forecasting.

## Introduction

Consensus clustering reconciles clustering information from different sources (or different executions) to obtain a better fit than the existing clustering results. The consensus clustering results, however, can be biased due to different types of features, different executions of algorithms, different definitions of distance metrics, and even different parameter settings, as is sharply observed by existing studies [12]. All these factors may lead to disparity in clustering results. The major consensus clustering algorithms include hypergraph partitioning [23], [3], the voting approach [1], [17], mutual information [26], the co-association approach [24], the mixture model [2], correlation consensus [25], ensemble clustering [27], [10], [11], etc.

One key observation is that in consensus clustering [23], if there exist noise and outliers in one source of features or in any execution of the algorithm, the clustering result might be significantly affected due to the least squares loss function used in most clustering methods (such as k-means and the Gaussian mixture model), because the errors are squared. Even worse, most of the time the users have little prior knowledge of the noise, which makes the clustering result more unstable and much harder to interpret under different initializations and parameter settings. For example, if one information source that we use for consensus clustering is not accurate, then when we align the consensus clustering results against this "inaccurate" source, we will suffer from these inaccurate annotations: the common characteristics extracted from the samples under the biased clustering results are, in fact, less generalizable to unseen samples.

To address these issues, this paper proposes a robust consensus clustering schema that is minimally affected by outliers/noise. In particular, we combine multiple experts' opinions on data clustering using the robust \(\ell_{1}\)-loss function, which aims to find the maximum consistency among the experts' opinions with minimum conflict. Our work can be viewed as an effective method for data clustering from heterogeneous information sources. The proposed method is practically feasible because it is independent of the parameter settings of each clustering algorithm used before aggregating the experts' opinions.
Driven by advertising applications, we apply the consensus clustering algorithm to group advertiser profiles [10] into clusters so as to accurately forecast performance metrics (e.g., clicks, conversions). The main contribution of this paper is summarized as follows.

* To address the issue of consensus clustering performance degradation in the presence of noise and outliers, we rigorously formulate the problem of robust consensus clustering as an optimization problem using the robust loss function.
* To find the best solution for robust consensus clustering, we develop an effective algorithm based on ADMM to derive the optimal solution, whose convergence can be rigorously proved.
* We experimentally evaluate our proposed method on both benchmarks and real-world advertiser segmentation and forecasting tasks, and show that our method is effective and efficient in producing better clustering results. As an application, the proposed algorithm is applied to advertising forecasting tasks, yielding more stable and trustworthy solutions.

## Robust Consensus Clustering Model

**Problem setting** Assume we have \(n\) data points, each of which can generate features from different views (with \(V\) views in total). For example, an image can generate features using different descriptors, such as SIFT1, HOG2, or CNN features [1].

Footnote 1: [https://en.wikipedia.org/wiki/Scale-invariant_feature_transform](https://en.wikipedia.org/wiki/Scale-invariant_feature_transform)

Footnote 2: [https://en.wikipedia.org/wiki/Histogram_of_oriented_gradients](https://en.wikipedia.org/wiki/Histogram_of_oriented_gradients)

More formally, let \(\mathbf{x}_{i}^{v}\in\Re^{p_{v}}\) be the \(v\)-th (\(1\leq v\leq V\)) view/modal feature of data point \(i\), where \(p_{v}\) is the dimensionality of the feature extracted from the \(v\)-th view. Considering all data points,

\[\mathbf{X}^{v}=[\mathbf{x}_{1}^{v},\mathbf{x}_{2}^{v},\cdots,\mathbf{x}_{n}^{v}],\]

where each data column vector is \(\mathbf{x}_{i}^{v}\in\Re^{p_{v}\times 1}\). Each data point has a ground-truth label \(y_{i}\in\Re^{K}\). Simple concatenation of all data views gives

\[\mathbf{X}=\left[\begin{array}{c}\mathbf{X}^{1}\\ \mathbf{X}^{2}\\ \vdots\\ \mathbf{X}^{V}\end{array}\right]=[\mathbf{x}_{1},\mathbf{x}_{2},\cdots,\mathbf{x}_{n}],\quad\mathbf{x}_{i}=\left(\begin{array}{c}\mathbf{x}_{i}^{1}\\ \mathbf{x}_{i}^{2}\\ \vdots\\ \mathbf{x}_{i}^{V}\end{array}\right).\]

Suppose we are given the clustering/partition results (\(\mathcal{P}\)) for the data points \(\{\mathbf{X}^{1},\mathbf{X}^{2},\cdots,\mathbf{X}^{V}\}\) from \(V\) different views, _i.e._, \(\mathcal{P}=\{\mathbf{G}^{1},\mathbf{G}^{2},\cdots,\mathbf{G}^{V}\}\), where \(\mathbf{G}^{v}\in\{0,1\}^{n\times K}\) (for \(K\) clusters) is the clustering result from view \(v\). The cluster/partition assignments might differ across views.
Let \(\mathbf{M}\in\{0,1\}^{n\times n}\) denote the connectivity matrix (a.k.a. the co-association matrix), where the entry \(M_{ij}(\mathbf{G}^{v})\) for view \(v\) between data points \(i,j\) is:

\[M_{ij}(\mathbf{G}^{v})=\left\{\begin{array}{cl}1,&(i,j)\text{ belong to the same cluster};\\ 0,&\text{otherwise}.\end{array}\right.\]

**Co-association consensus clustering** Standard consensus clustering looks for a consensus clustering (based on majority agreement) \(\mathbf{G}^{*}\in\{0,1\}^{n\times K}\) such that \(\mathbf{G}^{*}\) is closest to all the given partitions, _i.e._,

\[\min_{\mathbf{G}^{*}}J(\mathbf{G}^{*})=\frac{1}{V}\sum_{v=1}^{V}\sum_{i=1}^{n}\sum_{j=1}^{n}D(M_{ij}(\mathbf{G}^{v})-M_{ij}(\mathbf{G}^{*})), \tag{1}\]

where \(D(.)\) is a distance function between the optimal solution and the solution from each view. Generally, the least squares loss is used for \(D(.)\):

\[\min_{\mathbf{G}^{*}}J(\mathbf{G}^{*})=\frac{1}{V}\sum_{v=1}^{V}\sum_{i=1}^{n}\sum_{j=1}^{n}\Big{(}M_{ij}(\mathbf{G}^{v})-M_{ij}(\mathbf{G}^{*})\Big{)}^{2}. \tag{2}\]

Notice that \(M_{ij}(\mathbf{G}^{v})\in\{0,1\}\); therefore, Eq. (2) can be equivalently written as:

\[\min_{\mathbf{G}^{*}}J_{2}(\mathbf{G}^{*})=\frac{1}{V}\sum_{v=1}^{V}\sum_{i=1}^{n}\sum_{j=1}^{n}|M_{ij}(\mathbf{G}^{v})-M_{ij}(\mathbf{G}^{*})|=\frac{1}{V}\sum_{v=1}^{V}\|M(\mathbf{G}^{v})-M(\mathbf{G}^{*})\|_{1}, \tag{3}\]

with \(\|X\|_{1}:=\sum_{i,j}|X_{ij}|\). Let

\[\mathbf{G}_{\ell_{2}}=\operatorname*{arg\,min}_{\mathbf{G}^{*}}J(\mathbf{G}^{*}) \tag{4}\]

\[\mathbf{G}_{\ell_{1}}=\operatorname*{arg\,min}_{\mathbf{G}^{*}}J_{2}(\mathbf{G}^{*}). \tag{5}\]

Clearly,

\[\mathbf{G}_{\ell_{2}}=\mathbf{G}_{\ell_{1}}. \tag{6}\]

**Analysis** Given little (or no) prior information about the clustering result from each view, one natural way to compute the association between data points \(i\) and \(j\) is to take the expected value of the average association \(\tilde{M}=[\tilde{M}_{ij}]\), _i.e._,

\[\tilde{M}_{ij}=\frac{1}{V}\sum_{v=1}^{V}M_{ij}(\mathbf{G}^{v}). \tag{7}\]

Returning to the model of Eq. (3), an upper bound on \(J_{2}\) is given by:

\[J_{2}\leq\frac{1}{V}\sum_{v=1}^{V}\|\mathbf{M}(\mathbf{G}^{v})-\tilde{\mathbf{M}}\|_{1}+\|\tilde{\mathbf{M}}-\mathbf{M}(\mathbf{G}^{*})\|_{1}=C+J_{3}(\mathbf{G}^{*}), \tag{8}\]

where

\[J_{3}(\mathbf{G}^{*})=\|\tilde{\mathbf{M}}-\mathbf{M}(\mathbf{G}^{*})\|_{1}, \tag{9}\]

\[C=\frac{1}{V}\sum_{v=1}^{V}\|\mathbf{M}(\mathbf{G}^{v})-\tilde{\mathbf{M}}\|_{1}. \tag{10}\]

Note that the first term is a constant \(C\): it measures the average difference between the individual partitions and the average consensus association3\(\tilde{\mathbf{M}}\) and does not depend on \(\mathbf{G}^{*}\). The smaller the second term, the closer the final clustering assignment is to the average consensus.

Footnote 3: A weighted average using attention can be adopted as well, with learned weights.

On the other hand,

\[J_{3}\leq\frac{1}{V}\sum_{v=1}^{V}\Big{(}\|\mathbf{M}(\mathbf{G}^{v})-\mathbf{M}(\mathbf{G}^{*})\|_{1}+\|\tilde{\mathbf{M}}-\mathbf{M}(\mathbf{G}^{v})\|_{1}\Big{)}=J_{2}(\mathbf{G}^{*})+C. \tag{11}\]

This analysis indicates that \(J_{2}\) and \(J_{3}\) differ by at most the constant \(C\), so the optimization of \(J_{2}\) can be achieved through \(J_{3}\).
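As a concrete illustration of this construction, the per-view partitions \(\mathbf{G}^{v}\) and the average co-association matrix \(\tilde{\mathbf{M}}\) of Eq. (7) could be computed as in the following sketch (a minimal Python/numpy example on synthetic data; the use of scikit-learn's k-means and all variable names are our assumptions, not prescribed by the paper):

```python
import numpy as np
from sklearn.cluster import KMeans

def connectivity_matrix(labels):
    # M_ij = 1 iff points i and j fall in the same cluster of this partition.
    labels = np.asarray(labels)
    return (labels[:, None] == labels[None, :]).astype(float)

# Toy multi-view setup: V views of the same n points (features are synthetic).
rng = np.random.default_rng(0)
n, K, V = 100, 3, 4
views = [rng.normal(size=(n, 5 + v)) for v in range(V)]

# One k-means partition per view plays the role of G^v; averaging the
# connectivity matrices gives the consensus co-association M_tilde (Eq. (7)).
partitions = [KMeans(n_clusters=K, n_init=10, random_state=v).fit_predict(X)
              for v, X in enumerate(views)]
M_tilde = np.mean([connectivity_matrix(p) for p in partitions], axis=0)
```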
**Clustering indicator** One way to optimize \(J_{3}=\|\tilde{\bf M}-{\bf M}({\bf G}^{*})\|_{1}\) of Eq. (9) is to directly assign the optimal clustering indicator \({\bf G}\in\{0,1\}^{n\times K}\), with the constraint that \(\sum_{k}G_{ik}=1\), i.e., there is only one '1' in each row of \({\bf G}\) and the rest are zeros. The connection between the connectivity matrix \({\bf M}({\bf G}^{*})\) and \({\bf G}\) is:

\[{\bf M}({\bf G}^{*})={\bf G}^{*}{\bf G}^{*\intercal}. \tag{12}\]

Therefore, the objective of Eq. (9) becomes \(J_{4}\), _i.e._,

\[\min_{{\bf G}}J_{4}({\bf G})=\|\tilde{\bf M}-{\bf G}{\bf G}^{\intercal}\|_{1}\quad s.t.,\ \ {\bf G}\in\{0,1\}^{n\times K},\ \ \sum_{k}G_{ik}=1. \tag{13}\]

**Relaxation in the continuous domain** The major difficulty in solving Eq. (13) is that it involves the discrete clustering indicator \({\bf G}\), and the problem is NP-complete [10]. The objective also involves the non-smooth \(\ell_{1}\) norm, which is generally hard to handle. Thus, as in most spectral clustering solvers, we normalize the cluster indicators. In particular, we use \({\bf H}\in\Re^{n\times K}\) to denote the new clustering indicator, _i.e._,

\[{\bf H}={\bf G}({\bf G}^{\intercal}{\bf G})^{-\frac{1}{2}}.\]

Notice that

\[{\bf G}^{\intercal}{\bf G}=\operatorname{diag}(n_{1},n_{2},\cdots,n_{k}), \tag{14}\]

where \(n_{k}\) is the number of data points that fall in category \(k\). Let the diagonal matrix \({\bf D}\in\Re^{k\times k}\) be \({\bf D}:={\bf G}^{\intercal}{\bf G}\); then

\[{\bf G}{\bf G}^{\intercal}={\bf H}{\bf D}{\bf H}^{\intercal}; \tag{15}\]

\[{\bf H}^{\intercal}{\bf H}=({\bf G}^{\intercal}{\bf G})^{-\frac{1}{2}}{\bf G}^{\intercal}{\bf G}({\bf G}^{\intercal}{\bf G})^{-\frac{1}{2}}={\bf I}_{k}, \tag{16}\]

where \({\bf I}_{k}\) is the identity matrix of size \(k\times k\). Therefore, the objective function to be solved becomes:

\[\min_{{\bf H},{\bf D}}J_{5}({\bf H},{\bf D})=\|\tilde{\bf M}-{\bf H}{\bf D}{\bf H}^{\intercal}\|_{1}\quad s.t.,\ \ {\bf H}^{\intercal}{\bf H}={\bf I}_{K},\ \ {\bf D}\geq 0,\ \ {\bf D}\ \ \mbox{is diagonal}, \tag{17}\]

where \({\bf G}={\bf H}{\bf D}^{\frac{1}{2}}\) can then be obtained in the continuous domain.

In practice, the solution obtained from \(\tilde{\bf M}\) may not be accurate due to noise or outliers. We term our method "robust" because we use the \(\ell_{1}\) distance to measure the differences between the consensus optimal solution and the solution computed from each view, so the errors are not squared. Therefore, our model is capable of handling noise/outliers in any view/execution of data clustering. Further, we have:

**Lemma 1**: _Set \(\tilde{\bf M}\) to be the normalized pairwise similarity matrix \(\hat{\bf S}={\bf D}^{-\frac{1}{2}}{\bf S}{\bf D}^{-\frac{1}{2}}\) (\({\bf S}\in\Re^{n\times n}\), where \(S_{ij}\) is the similarity between data points \(i\) and \(j\)), and use the least squares loss function, i.e.,_

\[\min_{{\bf H}}J_{0}({\bf H})=\|\hat{\bf S}-{\bf H}{\bf H}^{\intercal}\|_{F}^{2}\quad s.t.,\ \ \ {\bf H}^{\intercal}{\bf H}={\bf I}_{k}. \tag{18}\]

_This is identical to normalized cut spectral clustering (Shi and Malik 2000)._

**Proof** Recall that in standard normalized cut spectral clustering, the objective is:

\[\min_{{\bf H}}{\rm Tr}({\bf H}^{\intercal}({\bf I}-\hat{\bf S}){\bf H})\quad s.t.,\ \ \ {\bf H}^{\intercal}{\bf H}={\bf I}_{k}, \tag{19}\]

where \(\hat{\bf S}={\bf D}^{-\frac{1}{2}}{\bf S}{\bf D}^{-\frac{1}{2}}\).
This is equivalent to optimizing:

\[\max_{{\bf H}}{\rm Tr}({\bf H}^{\intercal}\hat{\bf S}{\bf H})\quad s.t.,\ \ \ {\bf H}^{\intercal}{\bf H}={\bf I}_{k}. \tag{20}\]

Notice that

\[\|\hat{\bf S}-{\bf H}{\bf H}^{\intercal}\|_{F}^{2}={\rm Tr}(\hat{\bf S}^{\intercal}\hat{\bf S}+{\bf H}{\bf H}^{\intercal}{\bf H}{\bf H}^{\intercal}-2\hat{\bf S}^{\intercal}{\bf H}{\bf H}^{\intercal}),\]

where \({\rm Tr}({\bf H}{\bf H}^{\intercal}{\bf H}{\bf H}^{\intercal})={\rm Tr}(({\bf H}^{\intercal}{\bf H})^{2})={\rm const}\); therefore, \(\min_{{\bf H}}\|\hat{\bf S}-{\bf H}{\bf H}^{\intercal}\|_{F}^{2}\) is equivalent to \(\max_{{\bf H}}{\rm Tr}({\bf H}^{\intercal}\hat{\bf S}{\bf H})\). This completes the proof.

Next, we discuss how to solve the robust consensus clustering model of Eq. (17).

## Optimization Algorithm

Eq. (17) seems difficult to solve since it involves a non-smooth loss and high-order matrix optimization. We show how to apply the alternating direction method of multipliers (ADMM) to solve it. ADMM decomposes a large optimization problem into smaller pieces, each of which is easier to handle. It combines the benefits of dual decomposition and the augmented Lagrangian method for constrained optimization [1] and has been applied in many applications. The problem solved by the ADMM method usually has the following general form4:

\[\min_{{\bf X},{\bf Z}}f({\bf X})+g({\bf Z}),\ \ s.t.\ \ \ \ \ {\bf A}{\bf X}+{\bf B}{\bf Z}={\bf C}. \tag{21}\]

Footnote 4: Here \({\bf X},{\bf Z}\) can be vectors or matrices.

After introducing the Lagrange multiplier and auxiliary variables, the problem can be solved alternately, i.e.,

\[\ell({\bf X},{\bf Z},\mu)=f({\bf X})+g({\bf Z})+\Omega^{\top}({\bf A}{\bf X}+{\bf B}{\bf Z}-{\bf C})+\frac{\mu}{2}||{\bf A}{\bf X}+{\bf B}{\bf Z}-{\bf C}||_{F}^{2}, \tag{22}\]

\[{\bf X}^{t+1}:=\mathop{\arg\min}_{{\bf X}}\ell({\bf X},{\bf Z}^{t},\mu^{t}),\]

\[{\bf Z}^{t+1}:=\mathop{\arg\min}_{{\bf Z}}\ell({\bf X}^{t+1},{\bf Z},\mu^{t}),\]

\[\Omega^{t+1}:=\Omega^{t}+\mu({\bf A}{\bf X}^{t+1}+{\bf B}{\bf Z}^{t+1}-{\bf C}),\]

where \(\Omega\) is the augmented Lagrangian multiplier and \(\mu\) is the non-negative step size.

**Input:** clustering results from different views \(\tilde{\mathbf{M}}\), parameter \(\rho>1\). **Output:** final clustering indicator \(\mathbf{H}\).
**Procedure:**

```
1: Initialize \(\mathbf{E}^{0}\), \(\mathbf{H}^{0}\), \(\mathbf{D}^{0}\), \(\Omega^{0}\), \(\mu^{0}>0\), \(t=0\)
2: while Not converged do
3:   Update \(\mathbf{E}\) via Eq.(26)
4:   Update \(\mathbf{H}\) via Eq.(28)
5:   Update \(\mathbf{D}\) via Eq.(29)
6:   \(\mu^{t+1}:=\rho\mu^{t}\)
7:   \(\Omega:=\Omega+\mu(\mathbf{E}-(\tilde{\mathbf{M}}-\mathbf{HDH}^{\top}))\)
8:   \(t:=t+1\)
9: end while
```

**Algorithm 1** Solving the robust consensus clustering model of Eq. (17)

### Optimization Algorithm to Solve Eq. (17)

Following the ADMM framework, by introducing the auxiliary variable \(\mathbf{E}=\tilde{\mathbf{M}}-\mathbf{HDH}^{\top}\), the problem of Eq. (17) is equivalent to solving

\[\min_{\mathbf{E},\;\mathbf{H},\;\mathbf{D}}\|\mathbf{E}\|_{1},\quad s.t.\quad\mathbf{E}-(\tilde{\mathbf{M}}-\mathbf{HDH}^{\top})=0,\ \ \mathbf{H}^{\top}\mathbf{H}=\mathbf{I}_{K},\ \ \mathbf{D}\geq 0,\ \ \mathbf{D}\ \text{is diagonal}. \tag{23}\]

To solve \(f(\mathbf{E})=\|\mathbf{E}\|_{1}\) under the constraint \(h(\mathbf{E})=\mathbf{E}-(\tilde{\mathbf{M}}-\mathbf{HDH}^{T})=0\), the augmented Lagrangian can be formulated as

\[\ell(\mathbf{E},\mathbf{H},\mathbf{D},\Omega,\mu)=f(\mathbf{E})+\Omega^{\top}h(\mathbf{E})+\frac{\mu}{2}||h(\mathbf{E})||_{F}^{2},\]

where \(\Omega\) is the Lagrange multiplier and \(\mu\) is the penalty constant. We solve a sequence of subproblems

\[\min_{\mathbf{E},\mathbf{H},\mathbf{D}}\|\mathbf{E}\|_{1}+\Omega^{\top}(\mathbf{E}-(\tilde{\mathbf{M}}-\mathbf{HDH}^{\top}))+\frac{\mu}{2}\|\mathbf{E}-(\tilde{\mathbf{M}}-\mathbf{HDH}^{\top})\|_{F}^{2}, \tag{24}\]

with \(\Omega\) and \(\mu\) updated in a specified pattern:

\[\Omega\leftarrow\Omega+\mu(\mathbf{E}-(\tilde{\mathbf{M}}-\mathbf{HDH}^{\top})),\qquad\mu\leftarrow\rho\mu.\]

To solve Eq. (24), we search for the optimal \(\mathbf{E}\), \(\mathbf{H}\), \(\mathbf{D}\) iteratively until the algorithm converges. Next, we discuss how to solve for \(\mathbf{E}\), \(\mathbf{H}\), \(\mathbf{D}\) in each step; Alg. 1 summarizes the complete algorithm.
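A compact sketch of Algorithm 1 in Python might look as follows. It assumes the soft-thresholding \(\mathbf{E}\)-update of Eq. (26) and the eigendecomposition \((\mathbf{H},\mathbf{D})\)-update of Eqs. (28)-(29) derived below; the initialization, stopping criterion, and final cluster read-out are simplifications of ours, not prescribed by the paper:

```python
import numpy as np

def soft_threshold(A, tau):
    # Elementwise soft-thresholding: the proximal operator of tau * ||.||_1.
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def robust_consensus_admm(M_tilde, K, rho=1.05, mu=1.0, n_iter=100, seed=0):
    """ADMM sketch for min ||M_tilde - H D H^T||_1 of Eq. (17)."""
    n = M_tilde.shape[0]
    rng = np.random.default_rng(seed)
    E = np.zeros((n, n))
    Omega = np.zeros((n, n))
    H = np.linalg.qr(rng.normal(size=(n, K)))[0]  # orthonormal init, H^T H = I_K
    D = np.eye(K)
    for _ in range(n_iter):
        # E-update (Eq. (26)): prox of the l1 norm.
        E = soft_threshold(M_tilde - H @ D @ H.T - Omega / mu, 1.0 / mu)
        # (H, D)-update (Eqs. (28)-(29)): top-K eigenpairs of B.
        B = M_tilde - E - Omega / mu
        B = (B + B.T) / 2.0                    # symmetrize for numerical safety
        w, V = np.linalg.eigh(B)               # ascending eigenvalues
        idx = np.argsort(w)[::-1][:K]          # K largest eigenvalues
        H = V[:, idx]
        D = np.diag(np.maximum(w[idx], 0.0))   # the paper constrains D >= 0
        # Dual and penalty updates.
        Omega = Omega + mu * (E - (M_tilde - H @ D @ H.T))
        mu *= rho
    # One possible read-out: cluster the rows of H as in spectral clustering;
    # a crude argmax over |H| keeps this sketch dependency-free.
    return H, D, np.argmax(np.abs(H), axis=1)
```

In a fuller implementation, one would monitor the primal residual \(\|\mathbf{E}-(\tilde{\mathbf{M}}-\mathbf{HDH}^{\top})\|_{F}\) as the convergence check in place of the fixed iteration count.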
**Update \(\mathbf{E}\)** To update \(\mathbf{E}\) while fixing \(\mathbf{H},\mathbf{D}\), Eq. (24) reduces (up to a constant) to \(\min_{\mathbf{E}}\|\mathbf{E}\|_{1}+\frac{\mu}{2}\|\mathbf{E}-\mathbf{A}\|_{F}^{2}\) with \(\mathbf{A}=\tilde{\mathbf{M}}-\mathbf{HDH}^{\top}-\frac{\Omega}{\mu}\), whose closed-form solution is the elementwise soft-thresholding

\[E_{ij}=\mathrm{sign}(A_{ij})\,\max\big{(}|A_{ij}|-\tfrac{1}{\mu},\,0\big{)}. \tag{26}\]

**Update \(\mathbf{H},\mathbf{D}\)** To update \(\mathbf{H},\mathbf{D}\) while fixing \(\mathbf{E}\), we minimize the relevant part of Eq. (24), which is

\[\min_{\mathbf{H},\mathbf{D}}\frac{\mu}{2}\|\mathbf{B}-\mathbf{HDH}^{T}\|_{F}^{2},\quad s.t.,\ \mathbf{H}^{T}\mathbf{H}=\mathbf{I}_{K},\ \ \mathbf{D}\geq 0,\ \ \mathbf{D}\ \text{is diagonal}, \tag{27}\]

where

\[\mathbf{B}=\tilde{\mathbf{M}}-\mathbf{E}-\frac{\Omega}{\mu}.\]

It is easy to see that the optimal solution \(\mathbf{H}\) is given by the \(k\) largest eigenvectors of \(\mathbf{B}\), i.e., \(\mathbf{H}=[\mathbf{h}_{1},\mathbf{h}_{2},\cdots,\mathbf{h}_{k}]\),

\[\mathbf{B}\mathbf{h}_{k}=\lambda_{k}\mathbf{h}_{k}, \tag{28}\]

where \(\lambda_{k}\) is the eigenvalue associated with eigenvector \(\mathbf{h}_{k}\), and \(\Lambda=diag([\lambda_{1},\lambda_{2},\cdots,\lambda_{k}])\) is a diagonal matrix. Then the optimal solution for \(\mathbf{D}\) is given by

\[\mathbf{D}=\Lambda. \tag{29}\]

This completes the algorithm for solving Eq. (17).

**Time Complexity Analysis** Since Eq. (17) is not a convex problem, in each iteration, given \(\mu\), \(\Omega\), \(\rho\), the algorithm finds a local solution. The convergence of the ADMM algorithm has been proved and widely discussed in [1]. The overall time cost of the algorithm depends on the number of iterations and the cost of each variable update. The computation of \(\mathbf{E}\) takes \(\mathcal{O}(nk^{2})\) time. The major burden comes from the updates of \(\mathbf{D}\) and \(\mathbf{H}\), which require computing the top \(k\) eigenvectors of \(\mathbf{B}\), i.e., \(\mathcal{O}(n^{3})\) time. Overall, the cost of Algorithm 1 is \(\mathcal{O}(T(n^{3}+nk^{2}))\), where \(T\) is the number of iterations before convergence. To make the solution scalable to large-scale datasets, we can accelerate this via Hessenberg reduction and QR iteration with a multi-threaded solver using GPU acceleration [1], and also use divide-and-conquer [11] to accelerate the execution.

## Experimental Results on Benchmarks

In order to validate the effectiveness of our method, we perform experiments on benchmark datasets. In particular, we use six datasets downloaded from the UCI machine learning repository5, including Glass, Ionosphere, Iris, Soybean, Wine, and Zoo. We also adopted two widely used text datasets for document clustering, CSTR6 and Reuters. The features we adopt are represented using the vector space model after removing the stop words and unnecessary tags and headers. In the Reuters dataset, we use the ten most frequent categories.

Table 1: Dataset descriptions.

| datasets | #data points | #features | #classes |
|---|---|---|---|
| CSTR | 475 | 1000 | 4 |
| Glass | 214 | 9 | 7 |
| Ionosphere | 351 | 34 | 2 |
| Iris | 150 | 4 | 3 |
| Reuters | 2900 | 1000 | 10 |
| Soybean | 47 | 35 | 4 |
| Wine | 178 | 13 | 3 |
| Zoo | 101 | 18 | 7 |
## 5 Experimental Results on Benchmarks

In order to validate the effectiveness of our method, we perform experiments on benchmark datasets. In particular, we use six datasets downloaded from the UCI machine learning repository: Glass, Ionosphere, Iris, Soybean, Wine, and Zoo. We also adopt two widely used text datasets for document clustering, CSTR and Reuters. The features are represented using the vector space model after removing stop words and unnecessary tags and headers. In the Reuters dataset, we use the ten most frequent categories. Table 1 summarizes the dataset statistics.

| dataset | #data points | #features | #classes |
| --- | --- | --- | --- |
| CSTR | 475 | 1000 | 4 |
| Glass | 214 | 9 | 7 |
| Ionosphere | 351 | 34 | 2 |
| Iris | 150 | 4 | 3 |
| Reuters | 2900 | 1000 | 10 |
| Soybean | 47 | 35 | 4 |
| Wine | 178 | 13 | 3 |
| Zoo | 101 | 18 | 7 |

Table 1: Dataset descriptions

In our experiment, the consensus co-association matrix is obtained by running the k-means algorithm 40 times with different initializations and averaging the results. All datasets have ground-truth clustering labels. We compare against the following baseline methods:

* k-means on the original feature space;
* KC: k-means on the consensus similarity matrix;
* NMFCC [11]: NMF-based consensus clustering;
* CSPA [1]: cluster-based similarity partition algorithm;
* HPGA [15]: hypergraph partitioning algorithm;
* WCC [11]: weighted consensus clustering;
* L2CC: standard \(\ell_{2}\) loss for solving consensus clustering on the consensus matrix;
* RCC: the proposed robust consensus clustering algorithm;
* Correlation Consensus (CorC) [16]: correlation-based consensus clustering that exploits the rankings of different views;
* Ensemble Consensus (ES) [17]: ensemble consensus that exploits the Kullback-Leibler divergence of different aspects.

**Evaluation metrics** As in other clustering tasks, we use the _clustering accuracy_ as the metric to evaluate the performance of our algorithms. **Clustering accuracy (ACC)** is defined as

\[ACC=\frac{\sum_{i=1}^{n}\delta(l_{i},map(c_{i}))}{n}, \tag{30}\]

where \(l_{i}\) is the true class label and \(c_{i}\) is the obtained cluster label of \(x_{i}\), \(map(\cdot)\) is the best mapping function, and \(\delta(x,y)\) is the delta function with \(\delta(x,y)=1\) if \(x=y\) and \(\delta(x,y)=0\) otherwise. The mapping function \(map(\cdot)\) matches the true class labels and the obtained cluster labels, where the best mapping is solved by the Hungarian algorithm (a sketch of this computation is given at the end of this subsection). The larger the value, the better.

**Experiment result analysis** We report the clustering results in Table 2. Clearly, our method is very robust and outperforms the other methods. The accuracy gain is especially significant on the Soybean, Wine, and Zoo datasets. In some cases, the clustering result from a single execution might be severely biased; this is addressed by running clustering multiple times to reduce the variance. The Correlation Consensus algorithm [16], which is designed for multi-modal data, did not show very promising performance across the multiple executions of the datasets with different initializations.

| dataset | k-means | KC | CSPA | HPGA | NMFC | WC | L2CC | RCC (our method) | ES | CorC |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CSTR | 0.45 | 0.38 | 0.50 | 0.62 | 0.56 | 0.64 | 0.61 | **0.65** | 0.64 | 0.61 |
| Glass | 0.38 | 0.45 | 0.43 | 0.40 | 0.49 | 0.49 | 0.49 | **0.52** | 0.52 | 0.50 |
| Ionosphere | 0.70 | 0.71 | 0.68 | 0.52 | 0.71 | 0.71 | 0.71 | **0.72** | 0.71 | 0.70 |
| Iris | 0.83 | 0.72 | 0.86 | 0.69 | **0.89** | **0.89** | 0.86 | **0.89** | 0.88 | 0.86 |
| Reuters | **0.45** | 0.44 | 0.43 | 0.44 | 0.43 | 0.44 | 0.43 | **0.45** | 0.44 | 0.43 |
| Soybean | 0.72 | 0.82 | 0.70 | 0.81 | 0.89 | 0.91 | 0.76 | **1.00** | 0.98 | 0.95 |
| Wine | 0.68 | 0.68 | 0.69 | 0.52 | 0.70 | 0.72 | 0.65 | **0.96** | 0.84 | 0.89 |
| Zoo | 0.61 | 0.59 | 0.56 | 0.58 | 0.62 | 0.70 | 0.80 | **0.84** | 0.79 | 0.82 |

Table 2: Accuracy on 8 benchmark datasets
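As referenced above, here is a minimal sketch of how the ACC of Eq.(30) can be computed; the function name is ours, and SciPy's `linear_sum_assignment` serves as the Hungarian solver.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(labels_true, labels_pred):
    """ACC of Eq.(30): best one-to-one mapping between predicted clusters
    and true classes, found by the Hungarian algorithm."""
    labels_true = np.asarray(labels_true)
    labels_pred = np.asarray(labels_pred)
    classes = np.unique(labels_true)
    clusters = np.unique(labels_pred)
    # contingency matrix: count[i, j] = |{x : pred == clusters[i] and true == classes[j]}|
    count = np.zeros((len(clusters), len(classes)), dtype=int)
    for i, c in enumerate(clusters):
        for j, t in enumerate(classes):
            count[i, j] = np.sum((labels_pred == c) & (labels_true == t))
    # maximize the matched mass by minimizing negative counts
    row, col = linear_sum_assignment(-count)
    return count[row, col].sum() / labels_true.size
```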
### Experiment results on multi-view datasets

**Dataset** Our algorithm can be easily extended to multi-view data when \(\tilde{\mathbf{M}}\) of Eq.(17) is computed using multi-view features. We evaluate this on several public datasets: Caltech101 [18] and Microsoft Research Cambridge Volume 1 (MSRCV1) [15]. MSRCV1 has images from 7 classes, i.e., airplane, bicycle, building, car, cow, face, and tree. Each class has 30 images. Caltech-101 is an image dataset of 8677 images and 101 object classes. We follow the same experimental procedure as [10] and extract 7 and 20 classes for each experimental setup. We extract 1984-dimensional LBP features, 4000-dimensional GIST [14] features, 768-dimensional HOG [1] features, and 2659-dimensional classemes [13] features from each image.

**Comparison results** We compare our method against clustering using each single view of features, i.e., LBP, HOG, GIST, and Classemes, respectively. We also compare several multi-view clustering methods: spectral clustering using simple multi-view feature concatenation (denoted as FC), multi-view k-means (denoted as MVKmean) [10], affinity propagation using multi-view features (denoted as AP) [14], and low-rank consensus clustering (denoted as LCC) [13]. Table 3 presents the clustering accuracy results, which demonstrate the superiority of our method for solving multi-view clustering problems. Moreover, our method is flexible and can incorporate any view of features after proper feature extraction. Essentially, our method learns the low-rank subspace via eigendecomposition of the co-association matrix built from multi-view data, which plays a similar role as those in [13], [10].

| dataset | LBP | HOG | GIST | Classemes | FC | MVKmean | AP | LCC | RCC (our method) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MSRC-v1 | 0.4731 | 0.6367 | 0.6283 | 0.5431 | 0.7423 | 0.7871 | 0.5369 | 0.7542 | **0.8017** |
| Caltech-7 | 0.5236 | 0.5561 | 0.5473 | 0.4983 | 0.6123 | 0.6640 | 0.5359 | 0.6643 | **0.6819** |
| Caltech-20 | 0.3378 | 0.3679 | 0.3925 | 0.3660 | 0.5489 | 0.5619 | 0.3421 | 0.6287 | **0.6374** |

Table 3: Clustering accuracy on three multi-view datasets

Choosing \(C_{\alpha}\) such that

\[\Pr(dist(i,j|m,n)>C_{\alpha})=\alpha,\]

a band of width \(C_{\alpha}\) around the distribution of \(dist(i,j|m,n)\) will entirely contain the estimated value with probability \((1-\alpha)\). The eCPM cost similarity is computed in the same way.

### Experimental Results

The ad campaign data are collected from a major web portal. In this study, we consider 1,268 advertisers over a week. Following the Kolmogorov-Smirnov statistic introduced above, we set the \(p\)-value to \(\{0.01,0.05,0.1\}\) (\(p=0.01\) as the lower bound and \(p=0.1\) as the upper bound). Table 4 shows the number of clusters obtained using the win-rate c.d.f. and the eCPM cost c.d.f. features, respectively. Clearly, as the \(p\)-value increases, the critical value \(C_{\alpha}\) for determining similar campaigns decreases, because there is a smaller chance of advertisers falling in the range \(P(dist(i,j|m,n))>C_{\alpha}\) of Eq.(34).

**Consistency of clustering** For any pair of advertisers, if both the win-rate c.d.f. and the eCPM cost c.d.f. cluster them into the same group (or into different groups), the pair is viewed as truly consistent (TPN); if one method clusters them in the same group while the other clusters them in different groups, the pair is viewed as inconsistent (FPN).
More mathematically, let \(C_{i},C_{j}\) be the cluster labels obtained using the eCPM cost for advertisers \(i\) and \(j\), and let \(\ell_{i},\ell_{j}\) be the cluster labels obtained using the win rate for \(i\) and \(j\). Then

\[TPN=\sum_{i<j}\left[\left(I_{C_{i}=C_{j}}\right)\bigcap\left(I_{\ell_{i}=\ell_{j}}\right)\right]\bigcup\left[\left(I_{C_{i}\neq C_{j}}\right)\bigcap\left(I_{\ell_{i}\neq\ell_{j}}\right)\right],\]
\[FPN=\sum_{i<j}\left[\left(I_{C_{i}=C_{j}}\right)\bigcap\left(I_{\ell_{i}\neq\ell_{j}}\right)\right]\bigcup\left[\left(I_{C_{i}\neq C_{j}}\right)\bigcap\left(I_{\ell_{i}=\ell_{j}}\right)\right].\]

Table 5 shows the consensus clustering results using these two views of features. Clearly, around three fourths of the advertisers share very similar results even when using different features.

**Clustering performance** Given the fact that we do not have ground truth for the advertiser clustering results, we cannot directly evaluate the performance of our method. Instead, we view the robust consensus clustering result as the advertiser segmentation result, and use it to re-generate the eCPM cost [10] and win-rate c.d.f.s using all the information from the same advertiser clusters. With the updated eCPM cost and win-rate c.d.f.s, we forecast the clicks and spend, and compute the relative percentage errors to validate the performance of the updated eCPM cost and win-rate distributions. In particular, if the clustering result is better, then the re-generated eCPM and win-rate distributions are more accurate.
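A minimal sketch of the TPN/FPN pair counting defined above (variable and function names are ours):

```python
import numpy as np
from itertools import combinations

def consistency_counts(C, L):
    """Pair-counting consistency between two clusterings.
    TPN: pairs grouped identically by both views (together in both or apart in both);
    FPN: pairs on which the two views disagree.
    C: labels from the eCPM cost c.d.f.; L: labels from the win-rate c.d.f."""
    C, L = np.asarray(C), np.asarray(L)
    tpn = fpn = 0
    for i, j in combinations(range(len(C)), 2):
        if (C[i] == C[j]) == (L[i] == L[j]):
            tpn += 1
        else:
            fpn += 1
    return tpn, fpn
```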
2310.20386
Quantifying Hierarchical Selection
At what level does selective pressure effectively act? When considering the reproductive dynamics of interacting and mutating agents, it has long been debated whether selection is better understood by focusing on the individual or if hierarchical selection emerges as a consequence of joint adaptation. Despite longstanding efforts in theoretical ecology there is still no consensus on this fundamental issue, most likely due to the difficulty in obtaining adequate data spanning a sufficient number of generations and the lack of adequate tools to quantify the effect of hierarchical selection. Here we capitalise on recent advances in information-theoretic data analysis to advance this state of affairs by investigating the emergence of high-order structures -- such as groups of species -- in the collective dynamics of the Tangled Nature model of evolutionary ecology. Our results show that evolutionary dynamics can lead to clusters of species that act as a selective group and acquire information-theoretic agency. Overall, our findings provide quantitative evidence supporting the relevance of high-order structures in evolutionary ecology, which can emerge even from relatively simple processes of adaptation and selection.
Hardik Rajpal, Clem von Stengel, Pedro A. M. Mediano, Fernando E. Rosas, Eduardo Viegas, Pablo A. Marquet, Henrik J. Jensen
2023-10-31T11:53:05Z
http://arxiv.org/abs/2310.20386v2
# Quantifying Hierarchical Selection

###### Abstract

At what level does selective pressure effectively act? When considering the reproductive dynamics of interacting and mutating agents, it has long been debated whether selection is better understood by focusing on the individual or if hierarchical selection emerges as a consequence of joint adaptation. Despite longstanding efforts in theoretical ecology there is still no consensus on this fundamental issue, most likely due to the difficulty in obtaining adequate data spanning a sufficient number of generations and the lack of adequate tools to quantify the effect of hierarchical selection. Here we capitalise on recent advances in information-theoretic data analysis to advance this state of affairs by investigating the emergence of high-order structures -- such as groups of species -- in the collective dynamics of the _Tangled Nature_ model of evolutionary ecology. Our results show that evolutionary dynamics can lead to clusters of species that act as a selective group and acquire information-theoretic agency. Overall, our results provide quantitative evidence supporting the relevance of high-order structures in evolutionary ecology.

## 1 Introduction

The dynamics of life around us is characterised by a plethora of complex interdependent relationships. These relations span across different scales, from cells to organisms, and even the biotic and abiotic environment in which they exist. Adaptation through selection is widely agreed to be the motor behind both macro-evolution, as documented in the fossil record [1], and micro-evolution, as e.g. studied in microbial experiments such as [2]. However, the collective and mutually interdependent nature of evolutionary dynamics raises a critical question: what is the level at which selection _effectively_ operates on a set of entangled co-adapting entities?

Since Darwin, the standard view has been to regard individuals -- and later genes -- as the drivers of evolutionary change. An extreme version of this view is to regard individual genes as the selective unit, as is often advocated by the so-called 'selfish gene' position [3; 4] (for an insightful discussion, see Ref. [5]). Alternative perspectives, where selection also acts at the level of higher-order entities (such as groups of species, ecosystems, or even the whole biosphere) in a hierarchical fashion, have been the subject of long debates [1; 5; 6; 7]. Crucially, the possibility that non-reproducing higher-order systems may be subjected to a different kind of selection (e.g. persistence selection [8; 9; 10; 11]) amounts to a drastic shift in the way we conceive selection and evolution more broadly. Such views raise important questions regarding how group-level dynamics may make the fate of individuals depend not only on their genetic information, but also on how this is integrated into a larger system of interacting entities. Put simply, these views suggest shifting the focus from the singer (i.e. species) to the song [9] (i.e. relationships among the species). A way to advance these questions is to avoid reducing them to a dichotomous choice between selection at the basic level of reproduction versus at some higher collective level, and instead consider that different types of selection pressure may be working in tandem at different levels of ecosystem organisation.
However, in order to pursue such a view, it is crucial to have quantitative tools that are capable of identifying and differentiating degrees of (possibly time- and scale-dependent) cooperative selective structure, which could be used to investigate these ideas on empirical or simulated data. In this paper we address these questions by developing information-theoretic tools to investigate hierarchical selection in data of co-evolving species. Recent theoretical advancements have introduced promising methods based on information storage and predictive information to quantify individuality [12]. The basic hypothesis behind these approaches is that if a group of individuals can enhance the prediction of their joint future, they can better adapt and thus survive. Building on these ideas, here we present analyses of simulations of co-evolution dynamics based on the well-studied Tangled Nature model [13], and investigate whether there are conditions under which selection effectively acts at the level of groups of species instead of single species. We explore this question from various complementary angles, including analyses of information-theoretic 'individuality' [12], integrated information [14, 15], and other measures of information dynamics [16, 17]. Overall, our results provide quantitative evidence suggesting that groups of species act as a unit of selection for biologically plausible mutation rates. Crucially, these higher-order phenomena are observed in evolutionary dynamics arising from a simple underlying mechanism, which does not include group-level interactions [18] or interaction delays [19]. Thus, these results highlight the spontaneous emergence of groups of cooperating species as a natural consequence of relatively simple processes of adaptation and selection pressure.

## 2 Results

Our analyses are based on evolutionary trajectories generated by the Tangled Nature (TaNa) model [13], a well-known computational model that has been extensively used to investigate multiple aspects of evolutionary dynamics, including the observed species abundance curves [20], the entropy of species distributions [21], the hierarchical organization of ecosystems [22], and the statistics of mass extinctions [23]. The TaNa model establishes the dynamics of the populations of multiple species co-evolving over time. In the model, the fitness of each species depends upon the population of the species it interacts with as well as the total population of the ecosystem. The results presented in this section are obtained from 10,000 simulations for different values of the mutation rate, calculating ensemble averages over the results (see _Methods_).

### Error threshold and population diversity

As a first step in our analyses, we investigated how the total population and the diversity of species are affected by the mutation rate of the evolving agents. Special attention is paid to the dynamics observed in the vicinity of the 'error threshold,' which is the limit on the mutation rate beyond which any biological information needed for continual survival is destroyed in subsequent generations [24, 25]. As expected, our simulations show that the total population progressively decreases with mutation rate, with a sudden drop after mutation rate 0.04 (see Figure 1). In contrast, the diversity of species increases with mutation rate and peaks at 0.04. This helps us identify the error threshold associated with the model, beyond which no stable species are observed.
To better understand the effect of the mutation rate, we explored its effect on the average fitness and the distance between extant species in the genome space. Results show that low mutation rates support the existence of a handful of very fit species with a cloud of mutants around them (see Figure 1). In contrast, higher mutation rates allow more non-trivial combinations of species to exist near the error threshold, which is confirmed by the increasing Hamming distance among the species and the decreasing fitness gap between the top few species and the others.

### Information Individuality

After identifying the error threshold, we investigated the potential presence of hierarchical selection by estimating the organismal 'individuality scores' for different group sizes of species via the framework introduced in Ref. [12]. Briefly, this approach proposes an organismal 'individuality score' that represents the degree to which a group behaves as a single entity, in the sense that it is maximally self-predictive (i.e. the group's future evolution is maximally predicted from knowledge of its own past). According to this framework, if a group of species achieves a greater individuality score than a single species, then the group is able to reduce its collective future uncertainty. This enables the group to adapt better and persist for longer as an evolutionary unit, as compared to single species alone. This notion of self-prediction is closely related to predictive information [27] or information storage [28]; all of these definitions refer to the amount of information in the past of a system that can be used to predict its future.

Organismal individuality scores were calculated for over 10,000 different combinations for each group size (which we refer to as _scale_) -- singlets (scale = 1), dyads (scale = 2), triplets (scale = 3), and so on. These scores were then normalised based on the size of the group. Results reveal distinct behaviour as the range of mutation rates changes from below the error-transition region to the transition region itself (see Figure 2). At low mutation rates, organisation into cliques of higher scales becomes apparent and individuality scores peak for scales between 5 and 9. Though the peak starts to flatten out for mutation rates 0.03 and 0.04, higher-order organisation still persists. For mutation rates in the transition range, the higher-order organisation is lost and the single-species level becomes the most optimally self-predicting scale.

Figure 2: Average organismal individuality scores observed at different scales and mutation rates. A peak of information individuality is observed in the intermediate mutation range for higher scales (between 5 and 9). Conversely, in the transition region (mutation rates of 0.042 and 0.045) single species (scale = 1) have the highest individuality scores.

Having identified scale 6 as the optimal scale of organismal individuality, we can compare how its individuality changes with mutation rate as compared to single species, i.e. scale 1 (see Figure 3). It can be seen that the individuality score of species at scale 6 peaks for low mutation rates and then decreases monotonically through the transition region before going to zero near the error threshold. However, at the scale of single species, peak individuality is observed in the transition region.

Figure 3: Mean AIS for the two scales of organization -- Individual (Scale 1) and Higher Order (Scale 6) -- varying with mutation rates.

A clear crossover can then be observed in the transition region where the higher-order organisation loses individuality while single species gain organismal individuality.
In the following section we explore the species-environment information transfer and integration, to further highlight the role of the error-transition region.

### Interaction between species and their environment

As a last step of our analysis, we characterise the species-environment interactions using measures of information transfer and integration. As discussed before, we compare these interdependencies at the level of single species (i.e. scale 1) against scale 6, which our previous analysis highlighted as the optimal scale of individuality for most intermediate mutation rates. We estimate these measures for various mutation rates in order to identify how a species (or a group of species) interacts with the environment near the error threshold. For the purposes of this analysis, we use the population of a single species (for scale 1), a vector of populations of species (for scale 6), or the total population of the environment (all remaining species) as the random variables from which we estimate the information measures discussed below. The population of the environment is calculated as the difference between the total population of the ecosystem and the total population of the species (or the group of species). Further details about these estimations are provided in Section 4.2.

First, we focus on the information transfer, as measured by Transfer Entropy (TE) [29], between species and environment. This directed measure estimates the information that a _source_ variable carries about the future state of a _target_ variable, over and above the information contained in the past state of the target itself. The information transfer to and from the environment also varies differently at the two levels of organization (see Figure 4). For scale 1, information flow is predominantly from environment to the single species, but it decreases with increasing mutation rate until the transition region. In the transition region, however, the information flow peaks in both directions, but at either end of the region. The trends of information flow look different for the higher-order organisation of species (Figure 4). For scale 6, a significant amount of information flows in both directions, though the environment-to-species information flow is larger up to the transition region. In this case, the information flow peaks in both directions together and stays almost equal throughout the transition region, implying a near-symmetric information flow during the error transition.

The transfer entropy from the environment to the group of species is, in the case of the Tangled Nature model, equivalent to the _environment-determined_ individuality [12]. This measure quantifies how much of the persistence of the group is predicted by the environment beyond what the group can predict. In essence, the information stored in the environment and its interactions with the group are able to predict the future of the group. When we compare this quantity across the two scales, we can see that, in contrast to organismal individuality, the higher-order organisation still possesses environment-determined individuality in the error-transition region, whereas single species peak in both individuality scores exclusively in the transition region.

Finally, we look at the integrated information as measured using \(\Phi^{\text{R}}\), which captures the average overall coherence among the species and the environment at the single-species level (scale 1).
Here we also recover a peak of integrated information during the transition region (see Figure 5) for scale 1. These peaks in different modes of information processing during an order-disorder transition are consistent with the literature on criticality in complex systems [15, 30, 31, 32].

Overall, these results highlight two major findings. First, the species-environment interactions at the level of single species differ from those at higher levels of organisation, particularly near the error threshold, where higher-order groups of species lose organismal individuality while still being environmentally determined, whereas at the level of single species both organismal and environment-determined individuality peak during the error-transition region. This points to the second finding of this analysis: the peaking of information measures during the error transition is in line with the literature on criticality in complex systems [16, 30]. It suggests that systems operating close to an order-disorder transition can access many different modes of information processing and thus afford more adaptability and dynamic range. We find that information storage, transfer, and integration peak during this transition at the level of single species, while the higher-order organisation dissolves. In summary, as fluctuations increase, the robustness and resource sharing afforded by the higher-order organisation decline. Therefore, in order to maintain persistence, more information is processed at the level of individual species. For instance, RNA viruses are known to operate near the error transition [33] for optimal survival.

## 3 Discussion

This paper investigated hierarchical selection in the Tangled Nature (TaNa) model of ecological evolution [13]. In contrast to prior work, our work leverages recent advances in information theory to provide analyses that are quantitative and mathematically rigorous, which allows us to objectively estimate the degree of high-order individuality in this self-organising evolutionary system [12]. Crucially, these tools provided evidence of how relatively simple processes of adaptation and selection pressure can result in the emergence of groups of cooperating species that act as effective units within the evolutionary process. Specifically, our results identified signatures of hierarchical selection in the simulated ecosystems, with groups of 5 to 9 species acting as individual evolutionary units. Interestingly, the dominance of multi-species evolutionary units breaks down when mutation rates are close to the error threshold, suggesting evolutionarily relevant interactions between hierarchical selection and mutation rates.

Figure 4: Information flow between species and environment as measured using TE for various mutation rates. The top panel shows the variation at the scale of individual species (scale 1) and the bottom panel for the optimal higher-order grouping (scale 6).

The peak in each of the information-theoretic measures for groups of 5-8 species at intermediate mutation rates is highly suggestive of hierarchically organised units of selection. In different biological contexts, this could be interpreted as selection of ecological communities [34], holobionts [35], or viral quasispecies [24]. The results in this paper suggest that it is highly plausible that emergent units of selection such as these could arise, and the information-theoretic measures developed here provide a quantitative toolkit for demonstrating this.
That said, more work is needed to translate these methods to real-world datasets. Clarifying how the proposed measures of hierarchical selection are affected by mutation rates is useful to deepen our understanding of the mechanisms driving ecosystems from an ecological point of view. Increasing mutation rates are generally related, among other things, to harsher environmental conditions or adversity [36]. Mutating during times of adversity is a strategy to adapt and survive in a changing environment. Therefore, we can understand the mutation rates in the TaNa model as a proxy for varying environmental selection pressure. The error transition, which marks the limit of the mutation rate for the existence of stable species, shows some interesting information-processing properties. Firstly, as discussed above, the emergent macroscopic organisation breaks down in this region. Simultaneously, individuality scores as well as other information-processing measures peak for individual species, suggesting a trade-off that gives up collective resource sharing (at the macro level) for increased information-processing abilities (at the micro level) for continued survival.

Overall, this work provides an important first step towards enabling quantitative investigations of hierarchical selection, establishing formal methods of analysis that can be readily applied to real data or other models in the future. It is worth emphasising that any ecosystem has a variety of different species interacting with each other and the environment in unique ways. While the present study focused on average species-environment properties, future investigations could consider more dedicated species-level analyses. Finally, another interesting extension of this work could be to apply similar methods to applications of the TaNa model to social scenarios, to investigate whether high-order phenomena also take a central role within the dynamics of cultural [37], organisational [38], and opinion [39] changes.

## 4 Methods

Here we discuss the details of the model and the information-theoretic measures used for the simulation and subsequent analysis of the model. We provide a brief overview of the Tangled Nature model [13], followed by the information individuality framework [12] and the other information dynamics measures presented above.

### The model

As an agent-based model, species, each represented by a binary genome, form the dynamical units of the Tangled Nature model. These individual species are subject to three stochastic processes: replication, mutation, and annihilation. No groups or hierarchies are defined _a priori_. The evolutionary dynamics takes place in a space of genomes in which random interactions connect different species. The probability that an agent reproduces is determined by a sum over influences from the other co-existing species the agent is interacting with. Despite the model being extremely simple, it captures a very broad range of evolutionary phenomena. Starting with a few existing species, the model dynamics evolves to a quasi-stable configuration (also known as an _evolutionary stable strategy_) where only a select group of species exists. These stable species are disrupted by mutations in the system that drive them to extinction, and a new group of species emerges as a result of the reorganization.

Figure 5: Average integrated information between the species and the environment at the single-species level.
Over time, the system evolves to more and more stable configurations, thus avoiding these mass extinction events and developing resilience. Although the model has evolved into different variations over the years, for the purposes of this study we consider the model as defined in the original paper [40]. Each species is defined using a unique binary genome of length \(L\), comprising an ecosystem of a total of \(M=2^{L}\) possible species. Interactions between species are encoded in an interaction matrix \(J_{M\times M}\). All entries in the interaction matrix are sampled at random from a uniform distribution, \(J_{i,j}\sim\mathscr{U}(-1,1)\), thus allowing potentially symbiotic, competitive, or predator-prey relationships between any pair of species. However, of all possible interactions, the number of permitted interactions is controlled by the coupling probability \(\Theta\). The population of a given species at a given time is represented as \(n_{i}(t)\) and the total population of the ecosystem is represented as \(N(t)\). Starting from a random set of populations of existent species, each timestep begins with an annihilation step where a member of a species, selected uniformly at random, is killed with probability \(p_{\text{kill}}\). This is followed by asexual reproduction, where a member of a species, selected uniformly at random, creates an offspring with probability \(p_{\text{off}}\). Each gene of the created offspring's genome undergoes a mutation with probability \(p_{\text{mut}}\). These mutations introduce new species into the ecosystem, which then compete with the existing species. The state of the system is recorded after each generation of \(N(t)/p_{\text{kill}}\) timesteps, which is the average number of timesteps required to kill all currently existing individuals.

The reproduction probability \(p_{\text{off}}\) depends upon the fitness of the species at the given timestep. The fitness function, which is a weighted sum of interactions with all other species, is defined as

\[\mathscr{H}(n_{i},t)=\frac{k}{N(t)}\sum_{j=1}^{M}J_{i,j}n_{j}(t)-\mu N(t)\, \tag{1}\]

where \(\mu\) represents the carrying capacity, or the resource constraint driven by an increasing population, which makes a negative contribution to the fitness, and \(k\) is a scaling parameter for the strength of the couplings. Thus, the fitness of a given species depends not only on how it interacts with other neighbouring species but on the rest of the environment as well. The fitness function is related to the reproduction probability \(p_{\text{off}}\) for a given species \(i\) at timestep \(t\) as

\[p_{\text{off}}(n_{i},t)=\frac{1}{1+e^{-\mathscr{H}(n_{i},t)}}. \tag{2}\]

Note that the probability of reproduction is non-linearly related to the fitness function. Although the probability of reproduction is higher for species with positive fitness, some non-zero probability of reproduction exists for negative fitness values, which enables poorly performing species to reproduce and mutate towards fitter species. For the purposes of our study, the fixed parameters used for the model are \(L=10\), \(\Theta=0.25\), \(p_{\text{kill}}=0.2\), \(w=33\), and \(\mu=1/143\). These parameters are chosen based on the standard parameter ranges used in previous studies [40].
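As an illustration of the dynamics defined by Eqs. (1) and (2), the following is a minimal Python sketch of the TaNa update loop (the study itself used a Rust implementation). All names are ours, and identifying the quoted \(w=33\) with the scaling parameter \(k\) in Eq. (1) is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
L, THETA, P_KILL, P_MUT, K_SCALE, MU = 10, 0.25, 0.2, 0.01, 33.0, 1.0 / 143
M = 2 ** L
# interaction matrix: entries ~ U(-1, 1), each kept with probability THETA
J = rng.uniform(-1, 1, (M, M)) * (rng.random((M, M)) < THETA)

def fitness(i, pop):
    """Eq. (1): weighted sum of interactions minus the carrying-capacity term."""
    N = pop.sum()
    return (K_SCALE / N) * (J[i] @ pop) - MU * N

def timestep(pop):
    """One annihilation + reproduction (+ mutation) step."""
    # choosing an individual uniformly = choosing a species weighted by abundance
    i = rng.choice(M, p=pop / pop.sum())
    if rng.random() < P_KILL:
        pop[i] -= 1
    if pop.sum() == 0:
        return pop
    i = rng.choice(M, p=pop / pop.sum())
    p_off = 1.0 / (1.0 + np.exp(-fitness(i, pop)))   # Eq. (2)
    if rng.random() < p_off:
        child = i
        for bit in range(L):                          # flip each gene with prob P_MUT
            if rng.random() < P_MUT:
                child ^= 1 << bit
        pop[child] += 1
    return pop

pop = np.zeros(M)
pop[rng.choice(M, 10)] = 50.0                         # a few initial species
for _ in range(10_000):
    pop = timestep(pop)
```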
We study the changes observed in the dynamics of the model when a key parameter, \(p_{\text{mut}}\), is varied. This parameter represents the selection pressure introduced by the ecosystem. Since changing environmental conditions lead to more mutations, \(p_{\text{mut}}\) can be considered a proxy for controlling environmental selection pressures [36]. This parameter has a significant impact on the dynamics of the system: visually, it can be observed (see Figure 6) that the dynamics is more selective and stable, with fewer transitions, at very low mutation rates (\(p_{\text{mut}}=0.001\)). In an intermediate range (\(p_{\text{mut}}=0.01\)), more species are observed during the intermittent stable states, or q-ESSs (quasi evolutionary stable states), along with more transitions. Finally, for very high mutation rates (\(p_{\text{mut}}\geq 0.05\)), new species emerge and old species die every generation and q-ESSs are non-existent (i.e. no stable species emerge). Thus, varying the mutation rate provides two interesting transition points: a first one where more interesting combinations of species start to emerge as we move from very low rates to the intermediate range; and a second transition where the system moves from order to disorder in the range \(p_{\text{mut}}\in(0.04,0.05)\). Computationally efficient Rust code was used to simulate the Tangled Nature model. The code is available with documentation on GitHub.

### Information-theoretic measures

In this paper we use tools from information theory to estimate the various measures presented in the Results. Primarily, we use multivariate mutual information (MI) to quantify interdependencies between time series of species populations obtained from the simulations. Typically, numerical estimation of MI requires stationary probability distributions of the random variables. Since the Tangled Nature model exhibits non-stationary evolution, we use ensembles of simulations to estimate mutual information. Details of this ensemble method of estimating mutual information are provided in Appendix B. Below, we briefly discuss the measure of information individuality, as well as the other measures used to quantify species-environment interactions.

#### 4.2.1 Information individuality

In recent work, Krakauer and others [12] put forward an information-theoretic solution to identifying the boundary between an individual and its environment. This definition is based on principles of optimal self-prediction -- i.e. if a subsystem can predict its future better than any of its parts, and any addition to the subsystem hinders its predictability, then that subsystem is deemed an information individual. This optimal self-predictability enables biological entities to control and navigate their environment [41], implying that selection for persistence [10, 11] could be a putative explanation for their emergence. For the analyses above, let \(\mathbf{S}(t)\) be a joint vector representing the populations of a subset of \(K\) species, \(\mathbf{S}(t)=(n_{1}(t),n_{2}(t),\ldots,n_{K}(t))\), at a given time \(t\). If \(N(t)\) is the total population of the ecosystem, the corresponding environment \(E(t)\) can then be written as

\[E(t)=N(t)-\sum_{i=1}^{K}n_{i}(t). \tag{3}\]
Then, based on the properties laid out in the original paper [12], Krakauer _et al._ define three different individuality measures as follows:

\[\text{Organismal individuality:}\quad A^{*}=I(\mathbf{S}(t);\mathbf{S}(t+1)),\]
\[\text{Colonial individuality:}\quad A=I(\mathbf{S}(t);\mathbf{S}(t+1)\mid E(t)),\]
\[\text{Environment-determined individuality:}\quad nC=I(E(t);\mathbf{S}(t+1)\mid\mathbf{S}(t)).\]

Figure 6: TaNa evolution recorded for 5000 generations, for standard parameters \(L=10\), \(\Theta=0.25\), \(p_{\text{kill}}=0.2\), \(w=33\) and \(R=143\), for three different mutation rates. The bright dots indicate the existing species out of the \(2^{10}\) available species. Stable q-ESSs are present for \(p_{\text{mut}}<0.05\).

Here we focus on the organismal individuality, which links closely in definition with persistence selection [10, 11]. This individuality measure includes both the collective predictive information of the group of species and the redundancy they share with the environment (see [12] for more details). Such information is useful for the species to share resources as well as to respond to the environment they live in. For our analyses, we first compute an ensemble of Tangled Nature simulations as described in Appendix B. We then randomly sample many different subsets of \(K\) species, and take the average organismal individuality of each subset by estimating the mutual information between the current and future populations of the species. This process is repeated for values of \(K\) ranging from 1 to 15. To account for the bias that is introduced by increasing dimensions as we calculate multivariate mutual information, we normalise by the size of the group. Thus, the normalised individuality score shown in Figure 2 can be written as

\[\text{Individuality score}=\frac{I(\mathbf{S}(t);\mathbf{S}(t+1))}{K}. \tag{4}\]
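A minimal sketch of how the score of Eq. (4) can be estimated from an ensemble of simulations. A Gaussian mutual-information estimator is assumed here for concreteness; the ensemble estimator actually used (Appendix B) may differ in detail.

```python
import numpy as np

def gaussian_mi(X, Y):
    """I(X;Y) = 0.5*(log det Cx + log det Cy - log det Cxy) for jointly
    Gaussian variables; X and Y have shape (ensemble_size, dims)."""
    def logdet(A):
        C = np.cov(A, rowvar=False) + 1e-9 * np.eye(A.shape[1])  # regularized covariance
        return np.linalg.slogdet(C)[1]
    return 0.5 * (logdet(X) + logdet(Y) - logdet(np.hstack([X, Y])))

def individuality_score(S_t, S_t1):
    """Normalised organismal individuality of Eq. (4): I(S(t); S(t+1)) / K,
    where S_t and S_t1 are (ensemble_size, K) arrays of populations."""
    return gaussian_mi(S_t, S_t1) / S_t.shape[1]
```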
#### 4.2.2 Information transfer and integration

Finally, we briefly describe the measures used in Section 2.3 to quantify species-environment interactions. We keep the same notation as above, using \(\mathbf{S}(t)\) to denote the vector representing the populations of \(K\) species at time \(t\), and \(E(t)\) to denote the state of the environment at time \(t\). Transfer Entropy (TE) is a conditional mutual information (CMI) based measure of Granger causality. TE quantifies the information transfer from a source variable to a target as the CMI between the past of the source and the future of the target, conditioned on the past of the target. For instance, the TE from a group of \(K\) species \(\mathbf{S}\) to their environment \(E\) can be written under the Markov condition as

\[\text{TE}(\mathbf{S}\to E)=I(\mathbf{S}(t);E(t+1)\mid E(t)). \tag{5}\]

Measures of integrated information (generally denoted by \(\Phi\)) were first introduced by Tononi _et al._ [42] to measure integration among different regions in the brain. Since then, multiple related measures have been proposed, with some adapted to more practical scenarios [14, 43] and applied to quantify interactions across a broad range of complex systems [15]. In essence, these measures quantify the extent to which the interactions between parts of a system drive the joint temporal evolution of the system as a whole -- a system has high integrated information if its dynamics strongly depend on the interactions between its parts.

Here, we estimate two measures of integrated information (whole-minus-sum integrated information, \(\Phi^{\text{WMS}}\) [14], and its revised version, \(\Phi^{\text{R}}\) [44]) between a single species and its environment jointly evolving over time. Denoting the population of species \(i\) and its environment \(E\) by the joint random variable \(\mathbf{X}=(n_{i},E)\), \(\Phi^{\text{WMS}}\) is given by

\[\Phi^{\text{WMS}}=I(\mathbf{X}(t);\mathbf{X}(t+1))-\sum_{i=1}^{2}I(X_{i}(t);X_{i}(t+1)), \tag{6}\]

where \(X_{i}\) denotes the \(i^{\text{th}}\) element of \(\mathbf{X}\). Despite its intuitive formulation, \(\Phi^{\text{WMS}}\) as defined above has one important disadvantage: it can become negative in systems where the parts are highly correlated [44, 45]. To address this problem, Mediano _et al._ proposed a revised measure of integrated information, \(\Phi^{\text{R}}\), based on the mathematical framework of integrated information decomposition (\(\Phi\)ID) [44]. This revised measure simply adds a new term to \(\Phi^{\text{WMS}}\), correcting for the correlation, or redundancy [46], between the parts of the system:

\[\Phi^{\text{R}}=\Phi^{\text{WMS}}+\min_{i,j}I(X_{i}(t);X_{j}(t+1)). \tag{7}\]

In the main text we report results using \(\Phi^{\text{R}}\), due to its better interpretability. For completeness, we provide a comparison between the two measures of integrated information in Appendix C.1.

## Acknowledgments

H.R. and H.J.J. are supported by the Statistical Physics of Cognition project funded by the EPSRC (Grant No. EP/W024020/1). H.R. is also supported by the Ecosystem Leadership Award under the EPSRC Grant EP/X03870X/1; the Economic and Social Research Council under Grant ES/T005319/2; and The Alan Turing Institute. F.R. was supported by the Fellowship Programme of the Institute of Cultural and Creative Industries of the University of Kent, and the DIEP visitors programme at the University of Amsterdam. P.A.M. acknowledges support from grants Fondecyt 1200925, Proyecto Exploracion 13220184, and through Centro de Modelamiento Matematico (CMM), Grant FB210005, BASAL funds for Centers of Excellence from ANID-Chile. H.J.J. also thanks the EPSRC for supporting this work as part of the Quantifying Agency in time evolving complex systems project (Grant No. EP/W007142/1).
2302.14232
Quantum phases in spin-orbit-coupled Floquet spinor Bose gases
We propose a spin-orbit-coupled Floquet spinor Bose-Einstein condensate (BEC) which can be implemented by Floquet engineering of a quadratic Zeeman field. The Floquet spinor BEC has a Bessel-function-modulated Rabi frequency and a Floquet-induced spin-exchange interaction. The quantum phase diagram of the spin-orbit-coupled Floquet spinor BEC is investigated by considering antiferromagnetic or ferromagnetic spin-spin interactions. In comparison with the usual spin-orbit-coupled spin-1 BEC, we find that a stripe phase for antiferromagnetic interactions can exist in a large quadratic Zeeman field regime, and a different stripe phase with an experimentally favorable contrast for ferromagnetic interactions is uncovered.
Yani Zhang, Yuanyuan Chen, Hao Lyu, Yongping Zhang
2023-02-28T01:30:44Z
http://arxiv.org/abs/2302.14232v2
# Quantum phases in spin-orbit-coupled Floquet spinor Bose gases

###### Abstract

We propose a spin-orbit-coupled Floquet spinor Bose-Einstein condensate (BEC) which can be implemented by Floquet engineering of a quadratic Zeeman field. The Floquet spinor BEC has a Bessel-function-modulated Rabi frequency and a Floquet-induced spin-exchange interaction. The quantum phase diagram of the spin-orbit-coupled Floquet spinor BEC is investigated by considering antiferromagnetic or ferromagnetic spin-spin interactions. In comparison with a usual spin-orbit-coupled spin-1 BEC, we find that a stripe phase for antiferromagnetic interactions can exist in a large quadratic Zeeman field regime, and a new stripe phase with an experimentally favorable contrast for ferromagnetic interactions is uncovered.

## I Introduction

Ultracold neutral atoms provide a fertile playground for engineering artificial gauge fields [1; 2; 3; 4]. Synthetic spin-orbit coupling, utilizing atomic hyperfine levels as pseudo-spins, can be realized by coupling these states via Raman lasers [5; 6; 7]. Spin-orbit-coupled Bose-Einstein condensates (BECs) open a new route for exploring exotic superfluid states and simulating topological matter [8; 9; 10; 11; 12; 13]. One of the interesting features is that the spin-orbit coupling modifies the dispersion relation of a BEC. The spin-orbit-coupled dispersion may have multiple energy minima. Condensation into these energy minima presents exotic quantum phases, such as the plane-wave (PW) phase and the stripe phase [14; 15; 16; 17; 18; 19]. The PW phase occupies one of the minima and possesses a nonzero quasimomentum, which breaks time-reversal symmetry [15]. The phase transition and excitations of PW states have been experimentally observed [8; 20]. The stripe phase, condensing into at least two minima, represents a combination of spatial density modulation and superfluidity, and has been identified to have supersolid properties [21]. Realization of the stripe phase requires miscibility of the two spin components and a low Rabi frequency of the Raman lasers [19; 22]. This is quite a challenge in \({}^{87}\)Rb experiments, since the atomic interactions are insensitive to the hyperfine states [23; 24; 25]. Recently, the spin-orbit-coupling-induced stripe phase has been observed in atoms loaded into superlattices [26], in which the sub-lattice sites are treated as pseudo-spins.

A spinor BEC has more degrees of freedom and intriguing interactions, which lead to a rich ground-state phase diagram [27]. A spin-orbit-coupled spin-1 BEC has been experimentally realized [28]. Quantum phases in spin-orbit-coupled spin-1 BECs depend on antiferromagnetic and ferromagnetic spin-spin interactions and show salient features [29; 30; 31; 32; 33]. Three different kinds of stripe phases have been revealed to exist, and the phase transitions between different phases are so abundant that tricritical points emerge [31; 32]. One of the outstanding features is that the quadratic Zeeman field plays an important role in the existence of stripe phases. In particular, in a ferromagnetic spinor BEC, stripes appear only in a limited regime of low Rabi frequency and quadratic Zeeman field [28].

On the other hand, Floquet engineering is a powerful tool in quantum physics for controlling system parameters and manipulating quantum states [34; 35; 36]. In a periodically driven system, an effective static Hamiltonian can be tailored which depends on the driving parameters. The driving can lead to dramatic changes in the system properties.
Ultracold atoms provide an ideal platform for Floquet engineering due to the tunability and purity of the system, and it has been used to explore artificial gauge fields, topological insulators, and soliton dynamics [37; 38; 39; 40; 41; 42; 43; 44; 45]. In spin-orbit-coupled ultracold atoms, a coherent periodic modulation of the Raman laser intensities can produce a tunable spin-orbit coupling strength [46; 47; 48], which provides a practical way for dynamical control. A periodic modulation of the Raman laser frequencies has been employed to manipulate the emergence of the Dirac point in Floquet bands, and thus to change the band topology [49]. A shaking Raman lattice that generates high-dimensional spin-orbit coupling has been implemented to tune Floquet topological bands [50]. Recently, a Floquet spinor BEC has been proposed based on a periodically driven quadratic Zeeman field [51]. Compared with a usual spinor BEC, the Floquet spinor BEC has an additional spin-exchange interaction which is induced by the high-frequency driving. Such an induced spin-exchange interaction can have a profound effect in ferromagnetic condensates and generate unconventional quantum phases [51].

In this paper, we study a Floquet spin-1 BEC with spin-orbit coupling. In spin-1 spin-orbit-coupling experiments, three external Raman lasers are used to couple three hyperfine states, and the quadratic Zeeman effect is proportional to the two-photon detunings between the Raman lasers and the hyperfine states [28]. We propose to drive the quadratic Zeeman effect periodically around a constant value via a periodic modulation of the Raman laser frequencies. Under high-frequency driving, the spin-orbit-coupled spinor BEC is effectively described by a static Floquet spinor BEC, in which the Rabi coupling is modulated by a Bessel function and a unique spin-exchange interaction emerges. Quantum ground-state phases are investigated in such a spin-orbit-coupled Floquet spinor BEC with antiferromagnetic or ferromagnetic spin-spin interactions. Our main results are the following. (i) Due to the Bessel-function-modulated Rabi frequency, each quantum phase can exist in a broad region of the Rabi frequency. Previous studies show that the stripe phases in antiferromagnetic and ferromagnetic spinor BECs exist in a small regime of the Rabi frequency, say \(\Omega_{c1}<\Omega<\Omega_{c2}\), where \(\Omega\) is the Rabi frequency and \(\Omega_{c1,c2}\) are two critical values with \(\Omega_{c2}-\Omega_{c1}\) being a small quantity [28; 31; 32]. In the Floquet spinor BEC, the Rabi frequency is modulated as \(\Omega J_{0}\), with \(J_{0}\) the zero-order Bessel function of the first kind. We find that the corresponding phases appear in \(\Omega_{c1}/J_{0}<\Omega<\Omega_{c2}/J_{0}\). Since \(J_{0}\) is tunable and less than 1, the \(\Omega\) region for the existence of the stripe phase is enlarged significantly. Such an extension of the Rabi-frequency window for the stripe phases is beneficial for their experimental observation. (ii) For antiferromagnetic interactions, the appearance of the Floquet-induced spin-exchange interaction extends the second stripe phase to a broader quadratic Zeeman field domain; this phase exists only in an extremely narrow region of the quadratic Zeeman field in a usual spin-orbit-coupled spinor BEC. (iii) For ferromagnetic interactions, a new stripe phase is induced by the combined effects of the Floquet-induced spin-exchange interaction and the Rabi coupling. These stripes have a very high density contrast.
Their Bogoliubov excitations are identified to have two gapless Nambu-Goldstone modes.

This paper is organized as follows. In Sec. II, we present the theoretical model for a spin-orbit-coupled Floquet spinor BEC. It features the Floquet-induced spin-exchange interaction and the Bessel-function-modulated Rabi frequency. In Sec. III, the phase diagram of a noninteracting spin-orbit-coupled Floquet spinor BEC is analyzed. In Sec. IV, the phase diagrams for antiferromagnetic and ferromagnetic spin-spin interactions are studied separately. Finally, the conclusion follows in Sec. V.

## II Model

We consider an experimentally realizable spin-orbit-coupled spin-1 BEC. The spin-orbit coupling is implemented by coupling three hyperfine states with total angular momentum \(F=1\) (\(m_{F}=0,\pm 1\)) via three Raman lasers propagating along the \(x\)-axis [28]. Adjusting the two-photon detunings between the Raman lasers and the hyperfine states to be equal can mimic an effective quadratic Zeeman field. We propose to periodically drive it by a periodic oscillation of the Raman laser frequencies. The mean-field energy functional of the oscillating system is

\[E[\Phi]=\int d\mathbf{r}\Phi^{\dagger}\left[H_{\text{SOC}}+\xi(t)F_{z}^{2}\right]\Phi+\int d\mathbf{r}\Phi^{\dagger}\left[\frac{c_{0}}{2}\Phi^{\dagger}\Phi+\frac{c_{2}}{2}\Phi^{\dagger}\mathbf{F}\Phi\cdot\mathbf{F}\right]\Phi, \tag{1}\]

with \(\Phi=(\Phi_{1},\Phi_{2},\Phi_{3})\) being the spin-1 spinor describing the three-component wave functions. \(\mathbf{F}=(F_{x},F_{y},F_{z})\) are the spin-1 Pauli matrices. \(H_{\text{SOC}}\) is the single-particle spin-orbit-coupled Hamiltonian,

\[H_{\text{SOC}}=\left(-i\frac{\partial}{\partial x}+2F_{z}\right)^{2}+\varepsilon F_{z}^{2}+\frac{\Omega}{\sqrt{2}}F_{x}, \tag{2}\]

where \(\Omega\) is the Rabi frequency, which depends on the laser intensities, and \(\varepsilon\) is a constant quadratic Zeeman shift induced by the detunings of the Raman lasers [28]. The spin-1 spin-orbit coupling is contained in the first term of Eq. (2), with a fixed coupling strength due to the experimentally chosen gauge. In our dimensionless equations, the units of momentum, length, and energy are \(\hbar k_{\text{Ram}}\), \(1/k_{\text{Ram}}\), and \(\hbar^{2}k_{\text{Ram}}^{2}/2m\), respectively. Here, \(m\) is the atom mass, and \(k_{\text{Ram}}=2\pi/\lambda_{\text{Ram}}\) is the wave number of the Raman lasers, with \(\lambda_{\text{Ram}}\) being the wavelength. The quadratic Zeeman field is periodically driven,

\[\xi(t)=\alpha\cos(\omega t), \tag{3}\]

with \(\omega\) and \(\alpha\) being the frequency and amplitude of the driving, respectively. \(c_{0}\) and \(c_{2}\) in Eq. (1) denote the density-density and spin-spin interactions, respectively, which depend on the \(s\)-wave scattering lengths in the total spin channels. In this work, we assume a repulsive density-density interaction (\(c_{0}>0\)), while the spin-spin interaction \(c_{2}\) can be either positive (antiferromagnetic) or negative (ferromagnetic).

For high-frequency driving, we can derive an effective static Hamiltonian by averaging the time-dependent Hamiltonian over one modulation period [35]. We transform the system into an oscillating frame by using the transformation

\[U(t)=\exp\left(-i\frac{\alpha}{\omega}\sin(\omega t)F_{z}^{2}\right). \tag{4}\]

After applying the transformation \(\Phi=U(t)\Psi\), the resultant time-oscillating terms are dropped owing to the averaging over one period.
Finally, we end up with a time-independent energy functional,

\[E[\Psi]=\int d\mathbf{r}\Psi^{\dagger}\left[H_{\text{SOC}}^{\prime}+\frac{c_{0}}{2}\Psi^{\dagger}\Psi+\frac{c_{2}}{2}\Psi^{\dagger}\mathbf{F}\Psi\cdot\mathbf{F}\right]\Psi+c_{f}\int d\mathbf{r}\left(\Psi_{1}^{\dagger}\Psi_{3}^{\dagger}\Psi_{2}^{2}+\Psi_{1}\Psi_{3}\Psi_{2}^{\dagger 2}\right). \tag{5}\]

The energy functional describes a spin-orbit-coupled Floquet spinor BEC with the spinor \(\Psi=(\Psi_{1},\Psi_{2},\Psi_{3})\). The modulated single-particle Hamiltonian is

\[H_{\text{SOC}}^{\prime}=\left(-i\frac{\partial}{\partial x}+2F_{z}\right)^{2}+\varepsilon F_{z}^{2}+\frac{\Omega}{\sqrt{2}}J_{0}\left(\frac{\alpha}{\omega}\right)F_{x}. \tag{6}\]

Note that the only difference between the Floquet spin-orbit-coupled Hamiltonian \(H_{\rm SOC}^{\prime}\) and the original one \(H_{\rm SOC}\) is that the Rabi frequency is modulated by the zero-order Bessel function of the first kind, \(J_{0}(\alpha/\omega)\). The density-density and spin-spin interactions in Eq. (5) are the same as in a usual spinor BEC. Nevertheless, there is a new spin-exchange interaction with the coefficient \(c_{f}\), which is a pure effect of the Floquet modulation [51],

\[c_{f}=c_{2}\left[1-J_{0}\left(2\alpha/\omega\right)\right]. \tag{7}\]

The spin-orbit-coupled Floquet spinor BEC is reduced back to a usual spin-orbit-coupled spinor BEC if the driving disappears, i.e., \(\alpha/\omega=0\).

## III Phase diagram of the noninteracting spin-orbit-coupled Floquet spinor BEC

We study quantum phases in a spin-orbit-coupled Floquet spinor BEC. First, we analyze the single-particle phase diagram, which has been addressed in Refs. [29; 30; 31]. The analysis of the single-particle phase diagram can provide insight into ground states in the interacting system. The dispersion of \(H_{\rm SOC}^{\prime}\) can be calculated by a direct diagonalization. Depending on the spin-orbit-coupling parameters, the lowest band in the dispersion may have three, two, or one local minima. Ground states choose one of the minima to occupy. Therefore, a general ground-state wave function should be

\[\Psi=\sqrt{\bar{n}}e^{ikx}\left(\begin{array}{c}\cos\theta\cos\varphi\\ -\sin\theta\\ \cos\theta\sin\varphi\end{array}\right), \tag{8}\]

where \(\bar{n}=N/V\), with \(N\) being the total atom number and \(V\) the volume of the system, \(k\) is the quasimomentum, and \(\theta\) and \(\varphi\) are two parameters. By substituting Eq. (8) into Eq. (5) (with \(c_{0}=c_{2}=0\)), we obtain the energy per particle,

\[E_{k}=k^{2}-\left(\frac{A_{k}^{\prime}}{54}\right)^{\frac{3}{2}}-A_{k}\left(\frac{2}{27A_{k}^{\prime}}\right)^{\frac{3}{2}}+\frac{2}{3}A_{0}, \tag{9}\]

with

\[A_{k}=48k^{2}+(\varepsilon+4)^{2}+\frac{3}{2}J_{0}^{2}\left(\frac{\alpha}{\omega}\right)\Omega^{2},\]
\[A_{k}^{\prime}=(\varepsilon+4)A_{k}^{\prime\prime}+\sqrt{(\varepsilon+4)^{2}A_{k}^{\prime\prime 2}-4A_{k}^{3}},\]
\[A_{k}^{\prime\prime}=-288k^{2}+2(\varepsilon+4)^{2}+\frac{9}{2}J_{0}^{2}\left(\frac{\alpha}{\omega}\right)\Omega^{2}.\]

Then the quasimomentum can be determined by solving \(\partial E_{k}/\partial k=0\). The occupation of \(k=0\) is the zero-momentum (ZM) state, and the occupation of a nonzero quasimomentum is the plane-wave (PW) state.
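A minimal numerical sketch of this diagonalization: the \(3\times 3\) matrix of Eq. (6) is built at each plane-wave quasimomentum \(k\), and the local minima of the lowest band are located on a grid. Parameter values below are illustrative only.

```python
import numpy as np
from scipy.special import j0

Fz = np.diag([1.0, 0.0, -1.0])
Fx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)  # spin-1 F_x matrix

def lowest_band(k, Omega, eps, a_over_w):
    """Lowest eigenvalue of H'_SOC (Eq. 6) at quasimomentum k, with the
    Bessel-modulated Rabi frequency Omega * J0(alpha/omega)."""
    kin = (k * np.eye(3) + 2 * Fz) @ (k * np.eye(3) + 2 * Fz)
    H = kin + eps * Fz @ Fz + (Omega / np.sqrt(2)) * j0(a_over_w) * Fx
    return np.linalg.eigvalsh(H)[0]

# locate the local minima of the lowest band on a quasimomentum grid
ks = np.linspace(-4, 4, 2001)
E = np.array([lowest_band(k, Omega=5.0, eps=-1.0, a_over_w=2.0) for k in ks])
minima = ks[1:-1][(E[1:-1] < E[:-2]) & (E[1:-1] < E[2:])]
print(minima)   # e.g. a symmetric pair near +/- k_m, or a single k = 0
```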
The dotted lines mark the transition between the PW and ZM phases; above them is the ZM phase and below them is the PW phase. We also show the lowest band of \(H_{\rm SOC}^{\prime}\) in Fig. 1. The dashed line in the ZM regime is a separation, above which the lowest band has only one minimum at \(k=0\) and below which it has three local minima, with the lowest at \(k=0\). In the PW regime, the lowest band may have three local minima or two. The separation between these two cases is demonstrated by the black dashed lines. The two dashed lines merge with the phase-transition line at the so-called tricritical point, which is labeled by the red star in Fig. 1. The location of the tricritical point can be analytically calculated from \(\partial^{2}E_{k}/\partial k^{2}=0\) and the equal energy between the PW and ZM states [30; 31]. The calculated value for the tricritical point is \((\Omega^{*},\varepsilon^{*})=(30.14,-1.66)\). When \(\Omega<\Omega^{*}\) the PW-ZM transition is first order, and when \(\Omega>\Omega^{*}\) the phase transition is second order.

Figure 1: Quantum ground-state phase diagram of a noninteracting spin-orbit-coupled Floquet spinor BEC in the space of the Rabi frequency \(\Omega\) and the quadratic Zeeman field \(\varepsilon\). The driving is \(\alpha/\omega=2\) (\(J_{0}(\alpha/\omega)=0.224\)). The background corresponds to values of the tensor magnetization \(\langle F_{z}^{2}\rangle\). The black and white dotted lines represent first-order and second-order phase transitions, respectively. Below the dotted lines is the plane-wave phase, and above is the zero-momentum phase. The red star denotes a tricritical point. Insets show the lowest bands of the single-particle dispersion. The black dashed lines separate regions where the lowest band of the dispersion has three, two, or one local energy minima.

## IV Phase diagram of the interacting spin-orbit-coupled Floquet spinor BEC For a spin-orbit-coupled spin-1 BEC, previous works revealed ground states including the PW, ZM, and stripe phases, with rich phase transitions between them [30; 31; 28; 23]. The single-particle spin-orbit-coupled dispersion provides diverse arrangements of energy minima, and interactions determine how atoms condense into these minima. Since the dispersion of \(H_{\mathrm{SOC}}^{\prime}\) has at most three energy minima, we construct ground-state wave functions as superpositions of the spinors at these minima, which can be written as \[\Psi =\sqrt{\bar{n}}C_{+}e^{ikx}\left(\begin{array}{c}\cos\theta_{1} \cos\varphi\\ -\sin\theta_{1}\\ \cos\theta_{1}\sin\varphi\end{array}\right)+\sqrt{\bar{n}}C_{0}\left(\begin{array} []{c}\sin\theta_{2}/\sqrt{2}\\ -\cos\theta_{2}\\ \sin\theta_{2}/\sqrt{2}\end{array}\right)\] \[\quad+\sqrt{\bar{n}}C_{-}e^{-ikx}\left(\begin{array}{c}\cos \theta_{1}\sin\varphi\\ -\sin\theta_{1}\\ \cos\theta_{1}\cos\varphi\end{array}\right). \tag{10}\] The superposition coefficients satisfy the normalization condition, \(|C_{+}|^{2}+|C_{0}|^{2}+|C_{-}|^{2}=1\). The spinors are the eigenstates of \(H_{\mathrm{SOC}}^{\prime}\) with the concrete parameters \(\theta_{1,2}\) and \(\varphi\) to be specified. The second state in Eq. (10) is the spinor at \(k=0\), and the first and third ones are spinors modulated by the plane waves at \(\pm k\). The symmetry of \(H_{\mathrm{SOC}}^{\prime}\) requires that the first and third states have the same \(\theta_{1}\) and \(\varphi\). We substitute the above variational wave functions Eq. (10) into the energy functional in Eq. (5). 
The minimization of the resultant energy functional gives the values of the parameters \(k\), \(C_{0,\pm}\), \(\theta_{1,2}\) and \(\varphi\). From these parameters, we can classify ground states: the ZM phase has \(C_{\pm}=0\); the PW phase has a nonzero \(k\) and \(C_{0}=0\), with one of \(C_{\pm}\) being nonzero; the stripe phase requires \(k\neq 0\) and at least two of \(C_{\pm,0}\) nonzero. The stripe phases can be further classified according to the relative values of \(C_{\pm,0}\) [31; 32]. Considering that the classification of ground states depends strongly on \(C_{\pm,0}\), we use the tensor magnetization \(\left\langle F_{z}^{2}\right\rangle\) as the order parameter to identify different phases, \[\left\langle F_{z}^{2}\right\rangle=\left(\left|C_{+}\right|^{2}+\left|C_{-} \right|^{2}\right)\cos^{2}\theta_{1}+\left|C_{0}\right|^{2}\sin^{2}\theta_{2}. \tag{11}\] We find that antiferromagnetic and ferromagnetic spin-spin interactions have very different ground-state phase diagrams, which are studied separately. ### Antiferromagnetic interactions The antiferromagnetic interaction is \(c_{2}>0\), which is typical for the \({}^{23}\)Na BEC. Fig. 2 demonstrates the phase diagram for antiferromagnetic interactions with a driving \(\alpha/\omega=2\) in the space of the quadratic Zeeman field \(\varepsilon\) and the Rabi frequency \(\Omega\). When \(\varepsilon\) is negative, the single-particle dispersion has two lowest minima located at \(\pm k_{m}\) [see the inset in Fig. 1]; for a low \(\Omega\), the antiferromagnetic interaction allows atoms to occupy these two minima simultaneously and form a stripe. This stripe phase, labeled S1 in Fig. 2, has \(|C_{+}|=|C_{-}|=1/\sqrt{2}\) and \(C_{0}=0\). Using the wave functions in Eq. (10) with \(C_{0}=0\) and considering that the single-particle spinors at \(\pm k_{m}\) have \(\varphi=\pi/2\), we get the energy of the antiferromagnetic interaction \(\left\langle E\right\rangle_{c_{2}}\) and of the Floquet-induced spin-exchange interaction \(\left\langle E\right\rangle_{c_{f}}\), \[\left\langle E\right\rangle_{c_{2}}+\left\langle E\right\rangle_{c_{ f}} =\frac{c_{2}\bar{n}^{2}}{2}\cos^{4}\theta_{1}+c_{2}\bar{n}^{2}|C_{-}|^{2}|C_{+}|^{2}\] \[\quad\cdot\left[(1+\frac{c_{f}}{c_{2}})\sin^{2}(2\theta_{1})-2 \cos^{4}\theta_{1}\right]. \tag{12}\] For a low \(\Omega\), we have \(\theta_{1}\approx 0\), and the minimization of \(\left\langle E\right\rangle_{c_{2}}+\left\langle E\right\rangle_{c_{f}}\) leads to \(|C_{+}|=|C_{-}|=1/\sqrt{2}\), corresponding to the S1 phase, whose tensor magnetization is \(\left\langle F_{z}^{2}\right\rangle\approx 1\), as shown in Fig. 2. \(\theta_{1}\) prefers to be nonzero for a large \(\Omega\). Meanwhile, the first term \(c_{2}\bar{n}^{2}/2\cos^{4}\theta_{1}\) in \(\left\langle E\right\rangle_{c_{2}}+\left\langle E\right\rangle_{c_{f}}\) allows \(\theta_{1}\) to approach \(\pi/2\), at which it is minimized, so that \(\theta_{1}\) grows from zero toward \(\pi/2\) as \(\Omega\) increases. Consequently, for a high \(\Omega\), we may have \((1+c_{f}/c_{2})\sin^{2}(2\theta_{1})-2\cos^{4}\theta_{1}>0\). Then the minimization of \(\left\langle E\right\rangle_{c_{2}}+\left\langle E\right\rangle_{c_{f}}\) requires one of \(C_{\pm}\) to be zero. Even though the single-particle dispersion has two minima, the antiferromagnetic interaction chooses one of them to occupy, generating the PW phase shown in Fig. 2. The phase transition between the S1 and PW phases is first order. 
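The S1-PW competition encoded in Eq. (12) can be stated compactly (a restatement of the argument above). With \(|C_{+}|^{2}+|C_{-}|^{2}=1\) fixed, Eq. (12) depends on the superposition only through \(|C_{+}|^{2}|C_{-}|^{2}\in[0,1/4]\), multiplied by the bracket \[X(\theta_{1})=\left(1+\frac{c_{f}}{c_{2}}\right)\sin^{2}(2\theta_{1})-2\cos^{4}\theta_{1},\] so for \(c_{2}>0\) the minimizer is \[|C_{+}|^{2}=|C_{-}|^{2}=\frac{1}{2}\ (\text{S1})\ \text{if}\ X(\theta_{1})<0,\qquad|C_{+}||C_{-}|=0\ (\text{PW})\ \text{if}\ X(\theta_{1})>0,\] and the jump between these two endpoint minimizers is why the S1-PW transition is first order. 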
Physically, \(\langle E\rangle_{c_{2}}+\langle E\rangle_{c_{f}}\) is proportional to \(c_{2}\bar{n}^{2}|C_{+}|^{2}|C_{-}|^{2}[(1+c_{f}/c_{2})\langle F_{x}\rangle_{+} \langle F_{x}\rangle_{-}+\langle F_{z}\rangle_{+}\langle F_{z}\rangle_{-}]\), where \(\langle F_{x}\rangle_{\pm}\) and \(\langle F_{z}\rangle_{\pm}\) are the \(x\) and \(z\) polarizations of the spinors at \(\pm k_{m}\). The antiferromagnetic interaction generates \(\langle F_{z}\rangle_{+}\langle F_{z}\rangle_{-}<0\), and the Rabi coupling is in favor of \(\langle F_{x}\rangle_{+}\langle F_{x}\rangle_{-}>0\). The competition between these two effects gives rise to the S1-PW transition, and we have \(\langle F_{z}^{2}\rangle<1\) in the PW phase [see Fig. 2].

Figure 2: Quantum ground-state phase diagram of a spin-orbit-coupled Floquet spinor BEC with an antiferromagnetic spin-spin interaction (\(\bar{n}c_{0}=1\) and \(\bar{n}c_{2}=0.1\)). The background corresponds to values of the tensor magnetization \(\langle F_{z}^{2}\rangle\) defined in Eq. (11). The black and white dotted lines represent the first-order and second-order phase transitions. The different tricritical points are denoted by the red and purple stars. The driving is \(\alpha/\omega=2\) (\(J_{0}(\alpha/\omega)=0.224\) and \(J_{0}(2\alpha/\omega)=-0.397\)).

The emergence of the ZM phase in Fig. 2 is due to the fact that the lowest minimum of the single-particle dispersion lies at \(k=0\). There is a second stripe phase, labeled S2, which is unique to antiferromagnetic interactions. The S2 phase is featured with \(|C_{-}|=|C_{+}|\neq 0,|C_{0}|\neq 0\) and \(\Theta\equiv\arg(C_{-})+\arg(C_{+})-2\arg(C_{0})=\pi\). At first glance, the phase diagram shown in Fig. 2 is similar to that of a usual spin-orbit-coupled BEC demonstrated in Refs. [31; 32] (i.e., Fig. 1(a) in [31] and Fig. 1 in [32]). There are two tricritical points represented by stars in Fig. 2. The first-order (second-order) phase transitions between different phases are shown by black-dotted (white-dotted) lines. However, there are two new features in our system. (i) All phases exist in a broadened region of the Rabi frequency. This is a straightforward consequence of the Bessel-function modulation \(\Omega J_{0}\). (ii) The existence region of the S2 phase is also extended in the \(\varepsilon\) domain. In the usual spin-orbit-coupled antiferromagnetic BEC, the S2 phase exists in an extremely narrow region of \(\varepsilon\) (see Fig. 1(a) in [31] and Fig. 1 in [32]). In our Floquet system this region is greatly extended, which benefits from the Floquet-induced interaction. To reveal the extension of the stripe regions more clearly, we study the phase diagram as a function of the driving \(\alpha/\omega\). The results are shown in Fig. 3 for \(\Omega=2\). It is clear from the figure that without the driving (\(\alpha/\omega=0\)) the S2 phase exists in an extremely narrow region of \(\varepsilon\). This poses a challenge for its experimental realization. The upper boundary of the S2 phase corresponds to the degeneracy of the three minima of the single-particle dispersion, i.e., \(E_{-k_{m}}=E_{k=0}=E_{k_{m}}\); beyond this boundary, \(E_{k=0}\) becomes the lowest, such that the ground state is the ZM phase. Below the boundary, we have \(E_{-k_{m}}=E_{k_{m}}<E_{k=0}\). For a low \(\Omega\), the spinors at \(\pm k_{m}\) have \(\theta_{1}=0\), \(\varphi=\pi/2\), and the spinor at \(k=0\) has \(\theta_{2}=0\). 
The general wave function becomes \[\Psi=\sqrt{\bar{n}}\begin{pmatrix}C_{-}e^{-ikx}\\ -C_{0}\\ C_{+}e^{ikx}\end{pmatrix}, \tag{13}\] which is a good approximation for a low \(\Omega\). By using Eq. (13), we find that the antiferromagnetic energy can be minimized as \(\langle E\rangle_{c_{2}}=0\) in both the S1 phase (\(|C_{\pm}|=1/\sqrt{2},C_{0}=0\)) and the S2 phase (\(|C_{-}|=|C_{+}|<1/\sqrt{2}\), \(C_{0}\neq 0\), \(\Theta=\pi\)). However, the S2 phase does not minimize the quadratic Zeeman energy \(\langle E\rangle_{\varepsilon}=\varepsilon\bar{n}(|C_{-}|^{2}+|C_{+}|^{2})\) for \(\varepsilon<0\), so the ground state is the S1 phase. A finite Rabi frequency \(\Omega\) causes a small deviation of \(\theta_{1}\) from 0, i.e., \(\theta_{1}=\delta\theta\), where \(\delta\theta>0\) is a very small quantity. This deviation leads to \(\langle E\rangle_{c_{2}}=8c_{2}\bar{n}^{2}|C_{+}|^{4}(\delta\theta)^{2}\) for both the S1 and S2 phases. Considering the S1 phase with \(|C_{+}|^{2}=1/2\) and the S2 phase with \(|C_{+}|^{2}<1/2\), this extra antiferromagnetic energy prefers the S2 phase as the ground state if the quadratic Zeeman energy is weak. If the quadratic Zeeman energy exceeds this extra energy, the S1 phase returns as the ground state. Since the extra energy is a small quantity of second order, the S2 ground state exists in a very small \(\varepsilon\) domain. In the presence of the driving, the region of the S2 phase is dramatically extended around \(\varepsilon=0\) [see Fig. 3]. The upward shift of the region is due to the Bessel-function-modulated Rabi frequency. As the driving \(\alpha/\omega\) increases from 0, \(\Omega J_{0}(\alpha/\omega)\) decreases towards zero. As shown in Fig. 2, for a small \(\Omega\), the S2 phase is located around \(\varepsilon=0\). The dramatic expansion of the existence region is a consequence of the Floquet-induced spin-exchange interaction. The S2 phase can greatly minimize the spin-exchange-interaction energy, which can be easily seen from the approximate wave function for a low \(\Omega\) in Eq. (13). With this wave function, the spin-exchange-interaction energy becomes \(\langle E\rangle_{c_{f}}=2c_{f}\bar{n}^{2}|C_{-}||C_{+}||C_{0}|^{2}\cos(\Theta)\). The S2 phase, having \(0<|C_{-}|=|C_{+}|<1/\sqrt{2}\) and \(\Theta=\pi\), minimizes the spin-exchange energy. Other phases, such as the ZM phase (\(C_{0}=1\)), the PW phase (\(C_{0}=0\), \(|C_{+}|+|C_{-}|=1\)), and the S1 phase (\(C_{0}=0\), \(|C_{\pm}|=1/\sqrt{2}\)), lead to \(\langle E\rangle_{c_{f}}=0\), so that the Floquet-induced spin-exchange energy cannot be minimized. Meanwhile, the S2 phase also minimizes the antiferromagnetic interaction energy, \(\langle E\rangle_{c_{2}}=c_{2}\bar{n}^{2}/2(|C_{-}|^{2}-|C_{+}|^{2})^{2}+c_{ 2}\bar{n}^{2}[|C_{-}|^{2}|C_{0}|^{2}+|C_{+}|^{2}|C_{0}|^{2}+2|C_{-}||C_{+}||C_ {0}|^{2}\cos(\Theta)]=0\). The only obstacle to the existence of the S2 phase is the quadratic Zeeman energy \(\langle E\rangle_{\varepsilon}=\varepsilon\bar{n}(|C_{-}|^{2}+|C_{+}|^{2})\). If \(\varepsilon>0\), the quadratic Zeeman energy prefers the ZM phase, and when \(\varepsilon<0\) it prefers the S1 phase. Therefore, the competition between the Floquet-induced spin-exchange interaction and the quadratic Zeeman field leads to an existence region for the S2 phase that is dramatically extended in comparison with the usual case of \(\alpha/\omega=0\). The S2-ZM (white-dotted) and S2-S1 (black-dotted) transition lines oscillate as a function of \(\alpha/\omega\). 
Figure 3: Phase diagram for an antiferromagnetic interaction (\(\bar{n}c_{0}=1\) and \(\bar{n}c_{2}=0.1\)) as a function of the driving \(\alpha/\omega\). The Rabi frequency is \(\Omega=2\). The background corresponds to values of the tensor magnetization \(\langle F_{z}^{2}\rangle\). The black and white dotted lines represent the first-order and second-order phase transitions.

It is noted that the maxima of the transition lines correspond to the zeros of \(J_{0}(\alpha/\omega)\); therefore, the oscillations come from \(\Omega J_{0}(\alpha/\omega)\). It is also interesting that without the driving the S2 phase always exists in the negative-\(\varepsilon\) region, but with the driving it can exist even at positive \(\varepsilon\). ### Ferromagnetic interactions The ferromagnetic interaction is \(c_{2}<0\). We consider \(c_{2}/c_{0}=-0.5\), which is typical of \({}^{7}\)Li atoms [32]. Fig. 4 demonstrates the phase diagram for ferromagnetic interactions with a driving \(\alpha/\omega=1.6\). In the low-\(\Omega\) region, there is a third stripe phase, which is labeled S3 in Fig. 4. It has \(|C_{-}|=|C_{+}|\neq 0,|C_{0}|\neq 0\), and \(\Theta=0\). Using the approximate wave function in Eq. (13), we know that the S3 phase only minimizes the second term in the ferromagnetic interaction energy \(\langle E\rangle_{c_{2}}=c_{2}\bar{n}^{2}/2(|C_{-}|^{2}-|C_{+}|^{2})^{2}+c_{2} \bar{n}^{2}[|C_{-}|^{2}|C_{0}|^{2}+|C_{+}|^{2}|C_{0}|^{2}+2|C_{-}||C_{+}||C_{0}| ^{2}\cos(\Theta)]\) (\(c_{2}<0\)); it cannot minimize the first term \(c_{2}\bar{n}^{2}/2(|C_{-}|^{2}-|C_{+}|^{2})^{2}\), which is minimized by the PW phase. With the effect of the quadratic Zeeman field, the S3, PW and ZM phases are distributed in the way shown in Fig. 4. These three phases are similar to the previous studies [31; 32] (i.e., Fig. 1(b) in [31] and Fig. 2 in [32]), but with the outstanding feature that every phase exists in a broadened region of \(\Omega\) due to the Bessel-function modulation. Different from the case of \(\alpha/\omega=0\) in Refs. [28; 31; 32], we find in the Floquet spinor BEC that there exists a new stripe phase, which is labeled S4. The S4 phase lies inside the region where the single-particle dispersion has two energy minima at \(\pm k_{m}\), and they are equally occupied by the S4 phase with \(|C_{\pm}|=1/\sqrt{2}\) and \(C_{0}=0\). This condition is exactly the same as that of the S1 phase with antiferromagnetic interactions. Nevertheless, the S1 phase exists in the low-\(\Omega\) region [see Fig. 2] while the S4 phase is in the high-\(\Omega\) region [see Fig. 4]. This difference in the existence region with respect to \(\Omega\) brings new features to the S4 phase. With \(C_{0}=0\), the minimization of the ferromagnetic energy and the Floquet-induced energy demonstrated in Eq. (12) leads to \(|C_{-}|=0\) or \(|C_{+}|=0\) for low \(\Omega\) (\(\theta_{1}\approx 0\)). In this case, the ground state is the PW phase with \(\left<F_{z}^{2}\right>\approx 1\), as shown in Fig. 4. For a high \(\Omega\), one may have \(\theta_{1}\neq 0\) and \((1+c_{f}/c_{2})\sin^{2}(2\theta_{1})-2\cos^{4}\theta_{1}>0\). The minimization of \(\left<E\right>_{c_{2}}+\left<E\right>_{c_{f}}\) requires \(|C_{\pm}|=1/\sqrt{2}\), so that the ground state is the S4 phase. Due to the existence of the S4 phase, there are two tricritical points labeled by stars in Fig. 4. We want to emphasize that without the driving (\(c_{f}=0\)) the S4 phase cannot exist [28; 31; 32]. In the absence of the driving, Eq. 
(12) becomes \(\langle E\rangle_{c_{2}}=c_{2}\bar{n}^{2}/2\cos^{4}\theta_{1}+c_{2}\bar{n}^{2} |C_{-}|^{2}|C_{+}|^{2}\left[\sin^{2}(2\theta_{1})-2\cos^{4}\theta_{1}\right]\). For \(c_{2}<0\), the first term \(c_{2}\bar{n}^{2}/2\cos^{4}\theta_{1}\) prefers \(\theta_{1}=0\). According to the second term, the realization of the S4 phase needs a nonzero \(\theta_{1}\) satisfying \(\sin^{2}(2\theta_{1})-2\cos^{4}\theta_{1}>0\), which can be achieved by increasing the Rabi frequency. Besides, the negative Rabi coupling energy also helps to lower the total energy. However, the single-particle dispersion with a large \(\Omega\) will have the energy minimum at \(k=0\) lower than the energy minima at \(k=\pm k_{m}\), and the ground state prefers the ZM phase. Thus, there is no way for the S4 phase to exist. The Floquet-induced interaction has the nature of spin exchange. It has two effects: the spin-exchange interaction competes directly with the first term, since it prefers \(\theta_{1}=\pi/4\), at which the three components have equal populations in each spinor; and, according to Eq. (12), the S4 phase requires \((1+c_{f}/c_{2})\sin^{2}(2\theta_{1})-2\cos^{4}\theta_{1}>0\), and the positive prefactor \(c_{f}/c_{2}\) also makes it easier for \(\theta_{1}\) to satisfy this requirement. Therefore, the combined effects of the Rabi coupling and the Floquet-induced interaction make the existence of the S4 phase possible.

Figure 4: Quantum ground-state phase diagram of a spin-orbit-coupled Floquet spinor BEC with a ferromagnetic spin-spin interaction (\(\bar{n}c_{0}=1\) and \(\bar{n}c_{2}=-0.5\)). The background corresponds to values of the tensor magnetization \(\langle F_{z}^{2}\rangle\). The black and white dotted lines represent the first-order and second-order phase transitions. The different tricritical points are denoted by red and yellow stars. The driving is \(\alpha/\omega=1.6\) (\(J_{0}(\alpha/\omega)=0.455\) and \(J_{0}(2\alpha/\omega)=-0.320\)).

Figure 5: Phase diagram for a ferromagnetic interaction (\(\bar{n}c_{0}=1\) and \(\bar{n}c_{2}=-0.5\)) as a function of the driving \(\alpha/\omega\). The Rabi frequency is \(\Omega=8\). The background corresponds to values of the tensor magnetization \(\langle F_{z}^{2}\rangle\). The black and white dotted lines represent the first-order and second-order phase transitions. The different tricritical points are denoted by red and yellow stars.

To understand how the S4 phase emerges in the presence of the driving, we analyze the phase diagram as a function of the driving \(\alpha/\omega\), which is demonstrated in Fig. 5. The Rabi frequency is fixed at \(\Omega=8\). For \(\alpha/\omega=0\), the ground state is the ZM phase over the \(\varepsilon\) range shown in Fig. 5, which is consistent with the results in Refs. [31; 32; 28]. As \(\alpha/\omega\) increases, the S3, S4 and PW phases appear and have the interesting distribution shown in Fig. 5. The transition lines (white-dotted and black-dotted) oscillate, with maxima matching the zeros of \(J_{0}(\alpha/\omega)\). The S3 and S4 phases lie between the two transition lines. Furthermore, the S4 phase exists only in limited regions. Changing \(\alpha/\omega\) is equivalent to scanning \(\Omega\). A high \(\alpha/\omega\) confines \(\Omega J_{0}(\alpha/\omega)\) near 0. According to Fig. 4, the ground state around \(\Omega=0\) is the S3 phase. Therefore, for a high \(\alpha/\omega\) there is no S4 phase anymore [see Fig. 5]. In Fig. 
6(a), we show the density distributions \(n_{i}=|\Psi_{i}|^{2}\) of a typical S4 state. The outstanding feature is that the second component \(n_{2}\) is comparable with the other components \(n_{1}=n_{3}\). This is completely different from the S1 phase with antiferromagnetic interactions, where \(n_{2}\ll n_{1}=n_{3}\), a consequence of the S1 phase existing only in the low-\(\Omega\) region. For a low \(\Omega\), the spinors at \(\pm k_{m}\) can be physically approximated as \(e^{ik_{m}x}(\delta^{2},\delta,1)^{T}\) and \(e^{-ik_{m}x}(1,\delta,\delta^{2})^{T}\), respectively, where \(\delta\) is a small quantity. The S1 phase is an equal superposition of the two spinors, and we have \(n_{1}=n_{3}=1+2\delta^{2}\cos(2k_{m}x)\) and \(n_{2}=4\delta^{2}\cos^{2}(k_{m}x)\). Therefore, the S1 phase has \(n_{2}\ll n_{1}=n_{3}\) and a very low contrast for \(n_{1}\) and \(n_{3}\), which is proportional to a small quantity of second order. The contrast is defined as \((n_{\rm max}-n_{\rm min})/(n_{\rm max}+n_{\rm min})\), with \(n_{\rm max}\) (\(n_{\rm min}\)) the density maximum (minimum). The low contrast of the S1 phase is unfavorable for experimental observations. However, the S4 phase with ferromagnetic interactions exists in the high-\(\Omega\) region, and with further help from the Floquet-induced spin exchange, \(\delta\) is no longer a small quantity. Therefore, the contrast of \(n_{1}\) and \(n_{3}\) is obviously high for the S4 phase. A notable advantage of the second component is that its contrast is always maximal (equal to one). The dominant occupation of the second component makes it ideal for direct experimental observation. In Fig. 6(b), we show the contrast in the full \(\Omega\) region. The contrast of \(n_{1}\) and \(n_{3}\) increases with \(\Omega\), while that of the second component \(n_{2}\) is always one, as expected. A closely related study of ground states is their elementary excitations. The excitation spectrum of each phase in usual spin-orbit-coupled spin-1 BECs has been investigated [30; 31; 33]. The new S4 phase only exists in Floquet spinor BECs, and we study its Bogoliubov excitations. The stripe wave-function ansatz in Eq. (10) only includes low-order plane waves. It is known that such an ansatz cannot precisely capture the Bogoliubov excitations, and higher-order plane waves should be included [21; 23; 52; 53; 54]. Therefore, we use the ansatz with high-order modes [33], \[\Psi=\sqrt{\bar{n}}\sum_{j=-L}^{L}e^{ijKx}\begin{pmatrix}\varphi_{1}^{(j)}\\ \varphi_{2}^{(j)}\\ \varphi_{3}^{(j)}\end{pmatrix}, \tag{14}\] with the normalization condition \(\sum_{\sigma,j}|\varphi_{\sigma}^{(j)}|^{2}=1\). Here, \(L\) is the cutoff of the plane waves, and \(K\) relates to the period of the stripes. The spinors \((\varphi_{1}^{(j)},\varphi_{2}^{(j)},\varphi_{3}^{(j)})^{T}\) and \(K\) are determined by minimizing the energy functional in Eq. (5) using Eq. (14). In the S4-phase parameter region, we first obtain the stripe wave function by this minimization procedure, and then use the ground state to solve the Bogoliubov-de Gennes equation to get the elementary excitation energy \(\zeta\) [33].

Figure 7: Bogoliubov excitation spectrum \(\zeta(q_{x})\) of a typical S4 state. The parameters are \(\bar{n}c_{0}=1\), \(\bar{n}c_{2}=-0.5\), \(\Omega=7\) and \(\varepsilon=-1.1\). The lowest two bands are gapless, corresponding to two Nambu-Goldstone modes. 
A typical excitation spectrum \(\zeta(q_{x})\), i.e., the relation between the excitation energy \(\zeta\) and the excitation quasimomentum \(q_{x}\), is demonstrated in Fig. 7, in which only the three lowest bands are shown. The size of the Brillouin zone is \(2K\), which means that the period of the stripes is \(\pi/K\). The lowest two bands are gapless, corresponding to two Nambu-Goldstone modes. The physical origin of these two gapless modes is that the stripes spontaneously break the continuous translational symmetry and the gauge symmetry [21]. ## V Conclusion Spin-orbit-coupled spin-1 BECs have been realized in experiments. Based on this experimental platform, we propose a spin-orbit-coupled Floquet spinor BEC by periodically driving the quadratic Zeeman field at a high frequency. In the Floquet spinor BEC, the Rabi frequency is modulated by a Bessel function, and a Floquet-induced spin-exchange interaction emerges. We study the quantum ground-state phase diagram of a spin-orbit-coupled Floquet spinor BEC, considering antiferromagnetic and ferromagnetic spin-spin interactions separately. A general result is that, due to the Bessel-function modulation, every phase in the diagram can exist in a broadened Rabi-frequency region. For antiferromagnetic interactions, we find that the existence region of a stripe phase can be dramatically extended in the \(\varepsilon\) domain due to the Floquet-induced spin-exchange interaction. For ferromagnetic interactions, a new stripe phase is revealed, and its features, including high contrast and Bogoliubov excitations, are identified. In all previous studies of spin-1/2 and spin-1 spin-orbit-coupled BECs, stripes have a very low contrast, since they exist in the low-\(\Omega\) regime and the contrast is proportional to the Rabi frequency \(\Omega\) [23]. The new stripe phase in the Floquet spinor BEC exists in the high-\(\Omega\) region, and its high contrast favors experimental observations. ###### Acknowledgements. We thank Prof. Peter Engels for stimulating discussions. This work is supported by the National Natural Science Foundation of China with Grants No. 11974235 and No. 11774219. H.L. acknowledges support from the Okinawa Institute of Science and Technology Graduate University.
2309.07882
Scalable Model-Based Gaussian Process Clustering
Gaussian process is an indispensable tool in clustering functional data, owing to its flexibility and inherent uncertainty quantification. However, when the functional data is observed over a large grid (say, of length $p$), Gaussian process clustering quickly renders itself infeasible, incurring $O(p^2)$ space complexity and $O(p^3)$ time complexity per iteration; and thus prohibiting its natural adaptation to large environmental applications. To ensure scalability of Gaussian process clustering in such applications, we propose to embed the popular Vecchia approximation for Gaussian processes at the heart of the clustering task, provide crucial theoretical insights towards algorithmic design, and finally develop a computationally efficient expectation maximization (EM) algorithm. Empirical evidence of the utility of our proposal is provided via simulations and analysis of polar temperature anomaly (\href{https://www.ncei.noaa.gov/access/monitoring/climate-at-a-glance/global/time-series}{noaa.gov}) data-sets.
Anirban Chakraborty, Abhisek Chakraborty
2023-09-14T17:28:49Z
http://arxiv.org/abs/2309.07882v1
# Scalable Model-Based Gaussian Process Clustering ###### Abstract Gaussian process is an indispensable tool in clustering functional data, owing to its flexibility and inherent uncertainty quantification. However, when the functional data is observed over a large grid (say, of length \(p\)), Gaussian process clustering quickly renders itself infeasible, incurring \(O(p^{2})\) space complexity and \(O(p^{3})\) time complexity per iteration; and thus prohibiting its natural adaptation to large environmental applications (Paton and McNicholas, 2020; Vanhatalo et al., 2021). To ensure scalability of Gaussian process clustering in such applications, we propose to embed the popular Vecchia approximation (Vecchia, 1988) for Gaussian processes at the heart of the clustering task, provide crucial theoretical insights towards algorithmic design, and finally develop a computationally efficient expectation maximization (EM) algorithm. Empirical evidence of the utility of our proposal is provided via simulations and analysis of polar temperature anomaly (noaa.gov) data-sets. Anirban Chakraborty\({}^{*}\), Abhisek Chakraborty\({}^{*}\) Department of Statistics, Texas A\(\&\)M University 3143, College Station, TX 77843. Index terms: Functional Data, Gaussian Process Mixtures, Kullback-Leibler Projection, Vecchia Approximation, Temperature Anomaly. Footnote \({}^{*}\): Both authors contributed equally. There was no external or internal funding for this work. ## 1 Introduction Functional data clustering (Ramsay and Silverman, 2002) aims to discern distinct patterns in underlying continuous functions based on observed discrete measurements over a grid. Model-based methods for clustering functional data (Bouveyron and Jacques, 2011; Jacques and Preda, 2013) are popular in applications in engineering, environmental sciences, social sciences, etc., since these approaches allow for the assumption of complex yet parsimonious covariance structures, and perform simultaneous dimensionality reduction and clustering. Such methods crucially involve modeling the functional principal component scores (Bouveyron and Jacques, 2011; Jacques and Preda, 2013) or the coefficients of basis functions (Ray and Mallick, 2006). Recent literature on functional data clustering has gravitated towards adopting Gaussian processes (Rasmussen et al., 2006), especially in the context of environmental applications (Paton and McNicholas, 2020; Vanhatalo et al., 2021), owing to their flexibility, interpretability, and natural uncertainty quantification. However, naive implementation of Gaussian process clustering (Paton and McNicholas, 2020) incurs \(O(p^{2})\) space complexity and \(O(p^{3})\) time complexity, and becomes infeasible for data sets observed over large grids. To circumnavigate this issue, we appeal to the recent literature on scalable computation involving Gaussian processes. In spatial statistics, the Vecchia approximation (Vecchia, 1988) and its extensions (Katzfuss and Guinness, 2021; Katzfuss et al., 2020) are popular ordered conditional approximations of Gaussian processes, which imply a valid joint distribution for the data and result in straightforward likelihood-based parameter inference. This allows for proper uncertainty quantification in downstream applications. On the computational front, Vecchia approximation of Gaussian processes leads to inherently parallelizable operations and results in massive computational gains; refer to Section 2.3 for further details. 
Consequently, Vecchia approximation of Gaussian processes has been adopted in a plethora of applications in recent times, e.g., Bayesian optimization (Jimenez and Katzfuss, 2023), computer model emulation (Katzfuss et al., 2022), and Gaussian process regression, to name a few. In this article, to ensure scalability of Gaussian process clustering in large-scale applications, we propose to adopt the popular Vecchia approximation for Gaussian processes at every iteration of the clustering, provide a crucial theoretical framework for a potential algorithmic design, and finally develop a computationally efficient expectation maximization algorithm. Clustering accuracy and computational gains of the proposed algorithm are delineated through several numerical experiments, and a publicly available data set on the temperature anomaly at the Earth's geographic north pole. ## 2 Proposed Methodology ### Gaussian Process (GP) Mixture Models Let \(y(\mathbf{x})\) be the observed output at a \(d\)-dimensional input vector \(\mathbf{x}\in\mathcal{X}\), and \(y(\cdot)\sim\mathcal{GP}(\boldsymbol{\mu},\mathbf{K})\) is assumed to be a Gaussian process (\(\mathcal{GP}\)) (Rasmussen et al., 2006) with mean function \(\boldsymbol{\mu}:\mathcal{X}\rightarrow\mathbb{R}\) and a positive-definite covariance function \(\mathbf{K}:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}\). Throughout most of this article, for the sake of brevity, we focus on the squared-exponential covariance function of the form \(\mathbf{K}(\mathbf{x}_{i},\mathbf{x}_{j})=\sigma^{2}\text{exp}\left(-\frac{( x_{i}-x_{j})^{2}}{l^{2}}\right)\) with range parameter \(l\) and scale parameter \(\sigma\); however, the proposed methodology generalizes beyond this specific choice. By definition of a \(\mathcal{GP}\), the vector \(\mathbf{y}=\left(y(\mathbf{x}_{1}),\ldots,y(\mathbf{x}_{p})\right)^{\top}\) of responses at \(p\) input values \(\{\mathbf{x}_{1},\ldots,\mathbf{x}_{p}\}\) follows a \(p\)-variate Gaussian distribution with covariance matrix \(\mathbf{K}=\left(K(\mathbf{x}_{i},\mathbf{x}_{j})\right)_{i,j=1,\ldots,p}\), whose \((i,j)^{\text{th}}\) entry describes the covariance between the responses corresponding to the input values \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\). Finite mixtures of such Gaussian processes (Paton and McNicholas, 2020; Vanhatalo et al., 2021) provide a flexible tool to model functional data, and take the form \(y(\cdot)\sim\sum_{g=1}^{G}\pi_{g}\mathcal{GP}(\boldsymbol{\mu}_{g},\mathbf{K }_{g})\), where \(\pi_{g}\geq 0,g=1,\ldots,G\); \(\sum_{g=1}^{G}\pi_{g}=1\). Again, by definition of a \(\mathcal{GP}\), a vector \(\mathbf{y}=\left(y(\mathbf{x}_{1}),\ldots,y(\mathbf{x}_{p})\right)^{\top}\) of responses at \(p\) input values \(\{\mathbf{x}_{1},\ldots,\mathbf{x}_{p}\}\) follows a \(G\)-mixture of \(p\)-variate Gaussian distributions, \(\mathbf{y}\sim\sum_{g=1}^{G}\pi_{g}\mathcal{N}_{p}(\boldsymbol{\mu}_{g},\mathbf{ K}_{g})\). Estimation of the model parameters \(\mu_{g},\pi_{g},l_{g},\sigma_{g},\forall g=1,\ldots,G\) is routinely carried out via the expectation-maximization (EM) algorithm (Dempster et al., 1977). ### Naive EM Algorithm Suppose we observe \(N\) independent realizations \(\mathbf{Y}=\left(\mathbf{y}_{1},\mathbf{y}_{2},\ldots,\mathbf{y}_{N}\right)\) of the GP mixture, where \(\mathbf{y}_{i}\) is the output corresponding to a \(p\)-dimensional input grid of a covariate \(x\in\mathcal{R}\), i.e., \(\mathbf{y}_{i}(x)=\left(\mathbf{y}_{i}(x_{1}),\mathbf{y}_{i}(x_{2}),\ldots, \mathbf{y}_{i}(x_{p})\right)\). 
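For concreteness, a minimal simulation sketch (Python) of this zero-mean data-generating model is given below; the grid length, mixture weights, and kernel parameters are illustrative placeholders, and the kernel is the squared-exponential form written above.

```
import numpy as np

def kernel(x, sigma, l):
    # K(x_i, x_j) = sigma^2 * exp(-(x_i - x_j)^2 / l^2)
    d = x[:, None] - x[None, :]
    return sigma**2 * np.exp(-(d / l) ** 2)

def simulate_gp_mixture(N, x, pi, params, rng):
    """Draw N curves from sum_g pi_g N_p(0, K_g) on the grid x."""
    p, G = len(x), len(pi)
    z = rng.choice(G, size=N, p=pi)                   # latent cluster labels
    Y = np.empty((N, p))
    for g in range(G):
        K = kernel(x, *params[g]) + 1e-8 * np.eye(p)  # jitter for stability
        L = np.linalg.cholesky(K)
        idx = np.flatnonzero(z == g)
        Y[idx] = rng.standard_normal((idx.size, p)) @ L.T
    return Y, z

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 300)
# params[g] = (sigma_g, l_g); the values below are illustrative
Y, z = simulate_gp_mixture(20, x, pi=[0.5, 0.5], params=[(0.2, 0.2), (0.3, 0.5)], rng=rng)
```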
To classify the realizations into \(G\) (known) separate clusters, we first present a naive EM algorithm (Dempster et al., 1977) and delineate the associated computational bottlenecks. Since a \(\mathcal{GP}\) realization on a grid follows a multivariate Gaussian distribution, the complete log-likelihood takes the form, \[\mathcal{L}(\theta|\mathbf{Y})=\sum_{i=1}^{N}\sum_{g=1}^{G}1_{\{z_{i}=g\}} \log\left(\pi_{g}\mathcal{N}_{p}(y_{i};\boldsymbol{\mu}_{g},\mathbf{K}_{g})\right) \tag{1}\] where \(\theta=\left(\pi_{1:G},\mu_{1:G},l_{1:G},\sigma_{1:G}\right)\) denotes the model parameters of interest, and \(\mathbf{Z}=\left(z_{1},\ldots,z_{N}\right)\) are the latent clustering indices. Without loss of generality, we assume \(\boldsymbol{\mu}_{g}=0\ \forall g\), and present the EM updates for \(\theta=\left(\pi_{1:G},l_{1:G},\sigma_{1:G}\right)\). **(1) E-step.** The conditional expectation at the \(s^{\text{th}}\) iteration is \(Q(\theta|\theta^{(s)})=\mathbf{E}_{z_{1},\ldots z_{N}\sim p(\cdot|\mathbf{Y}, \theta^{(s)})}\mathcal{L}(\mathbf{Z},\theta|\mathbf{Y})=\sum_{i=1}^{N}\sum_{g= 1}^{G}\left(W_{ig}^{(s)}\text{log}(\pi_{g}\mathcal{N}_{p}(\mathbf{y}_{i};0,\mathbf{ K}_{g}))\right)\), where \(W_{ig}^{(s)}=\mathbf{P}(z_{i}=g|\mathbf{Y},\theta^{(s)})\propto\pi_{g} \mathcal{N}_{p}(y_{i};0,\mathbf{K}_{g}^{(s)})\). **(2) M-step.** Next, we calculate \(\theta^{(s+1)}=\text{argmax }Q(\theta|\theta^{(s)})\), via gradient ascent updates \(\pi_{g}^{(s+1)}\propto\sum_{i=1}^{N}W_{ig}^{(s)},\ l_{g}^{(s+1)}=l_{g}^{(s)}+ \lambda\frac{d}{dl_{g}}\epsilon^{(s)},\ \sigma_{g}^{(s+1)}=\sigma_{g}^{(s)}+ \lambda\frac{d}{d\sigma_{g}}\epsilon^{(s)},\) where \(\epsilon^{(s)}=\sum_{i=1}^{N}\sum_{g=1}^{G}\frac{W_{ig}^{(s)}}{2}\left(\text{ log}|\mathbf{K}_{g}^{-1}|-\mathbf{y}_{i}^{\top}\mathbf{K}_{g}^{-1}\mathbf{y}_{i}\right)\), and \(\lambda\) denotes the learning rate. We run the two steps iteratively until convergence. Although the above algorithm is straightforward to implement, both the \(\mathbf{E}\) and \(\mathbf{M}\) steps require inversion of covariance kernels in the normal log-likelihood calculation, which incurs an \(\mathcal{O}(p^{3})\) time complexity. While spectral decompositions (Denman and Leyva-Ramos, 1981) can speed up matrix inversion, these methods often fail to work in practice even for matrices of moderate dimensions. Alternatively, to develop our modified EM algorithm, we utilize the popular Vecchia approximation in evaluating the log-likelihood in (1), which enables us to carry out matrix inversions in quasi-linear time. ### Vecchia Approximation for GP Motivated by the exact decomposition of the joint density \(p(\mathbf{y})=\prod_{i=1}^{p}p(y_{i}|\mathbf{y}_{1:i-1})\) as a product of univariate conditional densities, Vecchia (1988) proposed the approximation \[\widehat{p}(\mathbf{y})=\prod_{i=1}^{p}p(y_{i}|\mathbf{y}_{c(i)}), \tag{2}\] where \(c(i)\subset\{1,\ldots,i-1\}\) is a conditioning index set of size \(|c(i)|=\min(m,i-1)\) for all \(i=2,\ldots,p\) (and \(c(1)=\varnothing\)). Even with a relatively small conditioning-set size \(m\ll p\), the approximation (2) with an appropriate choice of the \(c(i)\) is very accurate due to the screening effect (Stein, 2011). We get back the exact likelihood for \(m=p-1\). The approximation accuracy of the Vecchia approach depends on the choice of ordering of the variables \(y_{1},\ldots,y_{p}\) and on the choice of the conditioning sets \(c(i)\). 
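As an illustration, a didactic sketch (Python) of evaluating the Vecchia log-density (2) is given below, taking \(c(i)\) to be the \(m\) nearest previous one-dimensional inputs under a given ordering; the conditionals are computed with the standard Gaussian conditioning formulas, each at \(\mathcal{O}(m^{3})\) cost, and the terms are independent across \(i\), which is the source of the parallelism noted below. The function and variable names are ours, not from any package.

```
import numpy as np

def vecchia_loglik(y, x, order, m, cov):
    """log p-hat(y) = sum_i log p(y_i | y_c(i)), with c(i) the m nearest
    previous inputs under `order`; `cov(a, b)` returns the kernel matrix."""
    y, x = y[order], x[order]
    ll = 0.0
    for i in range(len(y)):
        prev = np.arange(i)
        c = prev[np.argsort(np.abs(x[prev] - x[i]))[:m]]   # conditioning set c(i)
        Kcc = cov(x[c], x[c]) + 1e-8 * np.eye(c.size)
        kic = cov(x[i:i + 1], x[c]).ravel()
        kii = cov(x[i:i + 1], x[i:i + 1]).item()
        w = np.linalg.solve(Kcc, kic) if c.size else np.zeros(0)
        mu = w @ y[c]                                      # conditional mean
        var = kii - w @ kic                                # conditional variance
        ll += -0.5 * (np.log(2 * np.pi * var) + (y[i] - mu) ** 2 / var)
    return ll
```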
A general Vecchia framework (Katzfuss and Guinness, 2021), obtained by varying the ordering and the conditioning sets, unifies many popular GP approximations (Quiñonero-Candela and Rasmussen, 2005; Snelson and Ghahramani, 2007). In practice, high accuracy can be achieved using a maximum-minimum-distance (maximin) ordering (Guinness, 2018), which picks the first variable arbitrarily, and then chooses each subsequent variable in the ordering as the one that maximizes the minimum distance to the previous variables in the ordering, where the distance between two variables \(y_{i}\) and \(y_{j}\) is defined as the Euclidean distance \(\|\mathbf{x}_{i}-\mathbf{x}_{j}\|\) between their corresponding inputs. The choice of the conditioning sets \(c(i)\) implies a sparsity constraint \(\mathcal{S}\) on the covariance structure of \(\widehat{p}(\mathbf{y})\). Schafer et al. (2021) showed that, under \(\mathcal{S}\), the Vecchia approximation \(\widehat{p}(\mathbf{y})\) is the optimal Kullback-Leibler (KL) projection of the true \(\mathcal{GP}\) density \(p(\mathbf{y})\) within the class of \(\mathcal{GP}\)s that satisfy \(\mathcal{S}\), i.e., \(\widehat{p}(\mathbf{y})=\operatorname*{arg\,min}_{L\in\mathcal{S}}\text{KL}[p (\mathbf{y})\parallel\mathcal{N}_{p}(\boldsymbol{\mu},(LL^{T})^{-1})]\), where \(\mathcal{S}=\{L\in\mathcal{R}^{p\times p}:L_{ij}\neq 0\implies(i,j)\in S\}\). The implied approximation on a grid, \(\widehat{p}(\mathbf{y})=\mathcal{N}_{p}(\boldsymbol{\mu},\widetilde{\mathbf{ K}})\), is also multivariate Gaussian, and the Cholesky factor of \(\widetilde{\mathbf{K}}^{-1}\) is highly sparse, with fewer than \(pm\) off-diagonal nonzero entries (Datta et al., 2016; Katzfuss and Guinness, 2021). Consequently, the \(p(y_{i}|\mathbf{y}_{c(i)})\) in (2) are all Gaussian distributions that can be computed in parallel using standard formulas, each using \(\mathcal{O}(m^{3})\) operations. ### Vecchia Approximation for GP Mixtures To improve upon the EM algorithm in sub-section 2.2, we develop a Vecchia approximation framework for Gaussian process mixture models of the form \(p^{(\text{SPpm})}(\mathbf{y})=\sum_{g=1}^{G}\pi_{g}p_{g}(\mathbf{y})\). Motivated by (2), for a fixed conditioning set size \(m\), we express \(p^{(\text{SPpm})}(\mathbf{y})\) as a mixture of products of univariate conditional densities, \(\widehat{p}^{(\text{SPpm})}(\mathbf{y})=\sum_{g=1}^{G}\pi_{g}[\prod_{i=1}^{p} p_{g}(y_{i}|\mathbf{y}_{c(i)})]\), where \(c(i)\subset\{1,\ldots,i-1\}\) is a conditioning index set of size \(|c(i)|=\min(m,i-1)\) for all \(i=2,\ldots,p\) (and \(c(1)=\varnothing\)). Next, we discuss a theoretical limitation of the proposed approximation for \(\mathcal{GP}\) mixtures. **Proposition 1** [No Free Lunch Theorem].: Suppose \(\widehat{p}^{(\text{SPpm, mle})}(\mathbf{y})\) denotes the maximum likelihood estimator of \(p^{(\text{SPpm})}(\mathbf{y})\) under the sparsity constraint \(\mathcal{S}_{m}\). Then, \[\text{KL}[p^{(\text{SPpm})}(\mathbf{y})\parallel\widehat{p}^{(\text{SPpm, mle})}(\mathbf{y})]\leq\min_{L_{1:G}\in\mathcal{S}_{m}}\Bigg{[}\sum_{g=1}^{G}\pi_{g}\text{KL}[p(\mathbf{y})\parallel\mathcal{N}_{p}(\boldsymbol{\mu},(L_{g}L_{g}^{T})^{-1})]\Bigg{]}\] \[=:\text{KL}_{\text{relaxed}}\Bigg{[}p^{(\text{SPpm})}(\mathbf{y})\parallel \widehat{p}^{(\text{SPpm, vecchia(m))}}(\mathbf{y})\Bigg{]}\] for any choice of conditioning set size \(m\). 
Proof.: Using the characterization of the maximum likelihood estimator under model misspecification (White, 1982) and then the convexity of the KL divergence, we have \[\text{KL}[p^{(\text{SPpm})}(\mathbf{y})\parallel\widehat{p}^{( \text{SPpm, mle})}(\mathbf{y})]\] \[=\min_{L_{1:G}\in\mathcal{S}_{m}}\Bigg{\{}\text{KL}[p(\mathbf{y}) \parallel\sum_{g=1}^{G}\pi_{g}\mathcal{N}_{p}(\boldsymbol{\mu},(L_{g}L_{g}^{T} )^{-1})]\Bigg{\}}\] \[\leq\min_{L_{1:G}\in\mathcal{S}_{m}}\Bigg{[}\sum_{g=1}^{G}\pi_{g} \text{KL}[p(\mathbf{y})\parallel\mathcal{N}_{p}(\boldsymbol{\mu},(L_{g}L_{g}^ {T})^{-1})]\Bigg{]}.\] This completes the proof. Next, we discuss a theoretical result that provides crucial insights into the choice of the conditioning set size \(m\). **Proposition 2** [Choice of Conditioning Set Size].: For choices of conditioning set sizes \(m_{1}\geq m_{2}\), \[\text{KL}_{\text{relaxed}}\Bigg{[}p^{(\text{SPpm})}(\mathbf{y}) \parallel\widehat{p}^{(\text{SPpm, vecchia($m_{1}$)})}(\mathbf{y})\Bigg{]}\] \[\leq\text{KL}_{\text{relaxed}}\Bigg{[}p^{(\text{SPpm})}(\mathbf{y })\parallel\widehat{p}^{(\text{SPpm, vecchia($m_{2}$)})}(\mathbf{y})\Bigg{]}.\] That is, enlarging the conditioning sets never increases the relaxed KL divergence. Proof.: By definition, for \(m_{1}\geq m_{2}\), we have \(\mathcal{S}_{m_{2}}\subset\mathcal{S}_{m_{1}}\), which in turn implies \[\min_{L_{1:G}\in\mathcal{S}_{m_{2}}}\Bigg{[}\sum_{g=1}^{G}\pi_{g} \text{KL}[p(\mathbf{y})\parallel\mathcal{N}_{p}(\boldsymbol{\mu},(L_{g}L_{g}^{ T})^{-1})]\Bigg{]}\] \[\geq\min_{L_{1:G}\in\mathcal{S}_{m_{1}}}\Bigg{[}\sum_{g=1}^{G}\pi_ {g}\text{KL}[p(\mathbf{y})\parallel\mathcal{N}_{p}(\boldsymbol{\mu},(L_{g}L_{g}^ {T})^{-1})]\Bigg{]}.\] This completes the proof. ### Vecchia-assisted EM (VEM) Algorithm We now have the groundwork in place to propose a computationally efficient modified EM algorithm that first exploits a systematic ordering (Guinness, 2018) of the points in the input grid of the \(\mathcal{GP}\)s, followed by a Vecchia-approximation-assisted fast Cholesky decomposition of large matrices (Jurek and Katzfuss, 2022), to ensure scalability of clustering of \(\mathcal{GP}\)s observed over large grids. We first outline these data preprocessing steps below. **(i) Ordering.** First, following the popular choice in the literature, we order the points in the input grid via a maximin ordering (Guinness, 2018). Then, we fix the maximum number of nearest neighbors in (2), i.e., \(|c(i)|\leq m\ \forall i=1,\ldots,p\). We recall from Section 2.3 that this induces a sparsity structure in the corresponding covariance matrices. The sparsity pattern, denoted by \(\mathcal{S}\), remains fixed throughout the iterations of the algorithm and consequently results in substantial computational gains. **(ii) Matrix inversion.** We adopt a sparse Cholesky factorization scheme for the covariance matrices \(\mathbf{K}_{g}\) imposing the sparsity structure \(\mathcal{S}\) (Jurek and Katzfuss, 2022), described in Algorithm 1. Given \(\mathcal{S}\), these sparse Cholesky approximations are calculated in \(\mathcal{O}(pm^{2})\) time. 
```
1: Input: covariance matrix \(\boldsymbol{\Sigma}\in\mathbb{R}^{p\times p}\), sparsity matrix \(\mathcal{S}\in\{0,1\}^{p\times p}\)
2: Output: lower-triangular \(p\times p\) matrix \(\mathbf{L}=VeccInv(\boldsymbol{\Sigma},\mathcal{S})\)
3: for \(i=1\) to \(p\) do
4:   for \(j=1\) to \(i-1\) do
5:     if \(\mathcal{S}_{i,j}=1\) then
6:       \(\mathbf{L}_{i,j}=(\boldsymbol{\Sigma}_{i,j}-\sum_{u=1}^{j-1}\mathbf{L}_{i,u}\mathbf{L}_{j,u})/\mathbf{L}_{j,j}\)
7:     end if
8:   end for
9:   \(\mathbf{L}_{i,i}=(\boldsymbol{\Sigma}_{i,i}-\sum_{u=1}^{i-1}\mathbf{L}_{i,u}^{2})^{1/2}\)
10: end for
```
**Algorithm 1** Sparse Cholesky decomposition

We use this sparse Cholesky factorization algorithm throughout our EM algorithm, as stated below. **(1) E-step.** From the **E** step in Section 2.2, it is evident that the calculation of the \(W_{ig}\)'s requires \(\mathcal{O}(p^{3})\) time for matrix computations, due to the inversion and determinant calculation of the \(\mathbf{K}_{g}\)'s. Instead, we first calculate a sparse inverse Cholesky factor \(\hat{\mathbf{U}}_{g}=VeccInv(\mathbf{K}_{g},\mathcal{S})\) using Algorithm 1 in \(\mathcal{O}(pm^{2})\) time. Consequently, we approximate \(|\mathbf{K}_{g}|\) via the diagonal elements of \(\hat{\mathbf{U}}_{g}\). The modified log-likelihood can be expressed as \(\hat{Q}\big{(}\theta|\theta^{(s)}\big{)}=\sum_{i=1}^{N}\sum_{g=1}^{G}\Big{(} \hat{W}_{ig}^{(s)}\text{log}\big{(}\pi_{g}\mathcal{N}_{p}(\mathbf{y}_{i};0, \hat{K}_{g})\big{)}\Big{)}\), where \(\hat{K}_{g}=[\hat{\mathbf{U}}_{g}\hat{\mathbf{U}}_{g}^{\top}]^{-1}\), and the \(\hat{W}_{ig}^{(s)}\)'s denote the conditional expectations calculated using the Vecchia approximation. 
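Below is a direct transcription (Python) of the recursion in Algorithm 1, together with the determinant shortcut used in this E-step; the dense inner products are kept for readability, and a production implementation would restrict them to the nonzeros of \(\mathcal{S}\) to attain the \(\mathcal{O}(pm^{2})\) cost. Function names are ours.

```
import numpy as np

def vecc_inv(Sigma, S):
    """Lower-triangular factor L with L L^T ~= Sigma and L[i, j] = 0 unless S[i, j] = 1."""
    p = Sigma.shape[0]
    L = np.zeros_like(Sigma, dtype=float)
    for i in range(p):
        for j in range(i):
            if S[i, j]:
                L[i, j] = (Sigma[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
        L[i, i] = np.sqrt(Sigma[i, i] - L[i, :i] @ L[i, :i])
    return L

def log_det_from_factor(L):
    # log|L L^T| = 2 * sum_i log L[i, i], replacing the O(p^3) determinant
    return 2.0 * np.sum(np.log(np.diag(L)))
```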
**(2) M-step.** Here, we modify Section 2.2 as \(\pi_{g}^{(s+1)}\propto\sum_{i=1}^{N}\hat{W}_{ig}^{(s)}\), \(l_{g}^{(s+1)}=l_{g}^{(s)}+\lambda\frac{d}{dl_{g}}\hat{\epsilon}^{(s)}\), \(\sigma_{g}^{(s+1)}=\sigma_{g}^{(s)}+\lambda\frac{d}{d\sigma_{g}}\hat{\epsilon }^{(s)}\), where \(\hat{\epsilon}^{(s)}=\sum_{i,g}\frac{\hat{W}_{ig}^{(s)}}{2}\left(\text{log}| \hat{\mathbf{K}}_{g}^{-1}|-\mathbf{y}_{i}^{\top}\hat{\mathbf{K}}_{g}^{-1} \mathbf{y}_{i}\right)\). ## 3 Performance Evaluation **Experiments.** From each of the two Gaussian processes \(\mathbf{y}(\cdot)\sim\mathcal{GP}(0,\;\mathbf{K}_{l_{i},\sigma_{i}}),\;i=1,2\), we simulate \(10\) realizations observed over a uniform grid of length \(p=300\), and consider the task of classifying the \(20\) realizations into \(2\) clusters. We compare the accuracy and speed of the naive EM algorithm (Section 2.2) and the proposed VEM algorithm (Section 2.5) with varying values of the conditioning set size \(m\in\{15,25,30,60\}\). We utilize the Normalized Mutual Information (NMI) (Danon et al., 2005) as a measure of agreement between the true and the estimated clusters. We consider \(25\) repeated trials under two separate scenarios corresponding to two sets of choices of the covariance kernel parameters: **(1)** a difficult case with \((l_{1},\sigma_{1})=(0.2,0.2)\) and \((l_{2},\sigma_{2})=(0.5,0.3)\), where the two clusters are rather indistinguishable, as shown in Fig. 1a; **(2)** an easy case with \((l_{1},\sigma_{1})=(0.2,0.5)\) and \((l_{2},\sigma_{2})=(0.5,0.2)\), where the clusters are distinguishable, as shown in Fig. 1b. In the first scenario, the VEM algorithm often performs similarly to the computationally intensive naive EM algorithm as \(m\) increases. For lower values of \(m\), the Vecchia approximation of \(\mathcal{GP}\) mixtures renders itself inaccurate and the clustering accuracy of the VEM algorithm deteriorates significantly, as expected; refer to Propositions 1 and 2 in Section 2.4 for further discussion. In the second scenario, the computationally efficient VEM algorithm almost always performs similarly to the naive EM algorithm even with a very small \(m\), while enjoying remarkably improved time complexity. For instance, with \(m/p=0.1\) (i.e., \(m=30\)), the VEM algorithm took on average only 40\(\%\) of the time taken by the naive EM algorithm. The computational advantage becomes even more pronounced for moderate to large data sets. For example, with \(p=700\), the VEM algorithm with the same \(m/p\) ratio took only 10\(\%\) of the time of the naive EM algorithm. **Temperature Anomalies at the North Pole.** We consider data on the monthly temperature anomalies in the Earth's geographic north pole between the years \(1901-2022\), publicly available at the National Oceanic and Atmospheric Administration website (noaa.gov). Due to numerous anthropogenic factors, including rapid industrialization, agriculture, and the burning of fossil fuels, temperatures have fluctuated increasingly since 1901. The study of the temperature anomaly at the Earth's north pole is especially critical, since, among other impacts, the weakened polar jet stream often brings the polar vortex further south, which results in extreme weather events in North America, Europe and Asia. The goal of this study is to group the \(12\) months of the year with respect to the extent of temperature anomaly, based on the perception that the weather is impacted differently in the winter and summer months at the north pole. 
To that end, we consider \(12\) time series of length \(122\), one corresponding to each month of the year, of monthly temperature anomalies over the years \(1901\)-\(2022\). We take \(5\)-year moving averages for each of the \(12\) time series to account for the cyclical variations, and consider \(\mathcal{GP}\)s with the Matern covariance kernel with \(\nu=0.5\) (Porcu et al., 2023) to describe the resulting time series. Considering \(3\) classes, we carry out the clustering task via the naive EM algorithm (Section 2.2) and the proposed VEM algorithm (Section 2.5) with conditioning set size \(m=10\). The result from the computationally efficient VEM algorithm matches that obtained via the computationally intensive naive EM algorithm, and is displayed in Figure 3. Three distinct clusters are identified: summer months (June to August), winter months (October to February), and transitioning months (March, April, and September).

Figure 1: Simulated Gaussian process realizations.

Figure 2: Boxplot of NMI values for multiple simulations of the two scenarios in Figure 1. Higher NMI values are desired.

Figure 3: Estimated mean temperature anomalies in each cluster.

## 4 Conclusion Taking advantage of the popular Vecchia approximation of \(\mathcal{GP}\)s, we proposed an efficient EM algorithm for \(\mathcal{GP}\) clustering, and delineated the computational gains. Theoretical limitations of the proposed methodology are presented via a no-free-lunch theorem. In the future, we shall develop a Vecchia-assisted nonparametric Bayes \(\mathcal{GP}\) clustering algorithm that simultaneously learns the number of clusters and the clustering indices from the data.
2309.16757
A SPectroscopic survey of biased halos In the Reionization Era (ASPIRE): JWST Discovers an Overdensity around a Metal Absorption-selected Galaxy at $z\sim5.5$
The launch of ${\it JWST}$ opens a new window for studying the connection between metal-line absorbers and galaxies at the end of the Epoch of Reionization (EoR). Previous studies have detected absorber-galaxy pairs in limited quantities through ground-based observations. To enhance our understanding of the relationship between absorbers and their host galaxies at $z>5$, we utilized the NIRCam Wide Field Slitless Spectroscopy (WFSS) to search for absorber-associated galaxies by detecting their rest-frame optical emission lines (e.g., [OIII] + H$\beta$). We report the discovery of a MgII-associated galaxy at $z=5.428$ using data from the ${\it JWST}$ ASPIRE program. The MgII absorber is detected on the spectrum of quasar J0305--3150 with a rest-frame equivalent width of 0.74$\mathring{A}$. The associated galaxy has an [OIII] luminosity of $10^{42.5}\ {\rm erg\ s^{-1}}$ with an impact parameter of 24.9 proper kiloparsecs (pkpc). The joint ${\it HST}$-${\it JWST}$ spectral energy distribution (SED) implies a stellar mass and star-formation rate of ${\rm M_* \approx 10^{8.8}}$ ${\rm M_{\odot}}$, ${\rm SFR}\approx 10\ {\rm M_{\odot}\ yr^{-1}}$. Its [OIII] equivalent width and stellar mass are typical of [OIII] emitters at this redshift. Furthermore, connecting the outflow starting time to the SED-derived stellar age, the outflow velocity of this galaxy is $\sim300\ {\rm km\ s^{-1}}$, consistent with theoretical expectations. We identified six additional [OIII] emitters with impact parameters of up to $\sim300$ pkpc at similar redshifts ($|dv|<1000\ {\rm km\ s^{-1}}$). The observed number is consistent with that in cosmological simulations. This pilot study suggests that systematically investigating the absorber-galaxy connection within the ASPIRE program will provide insights into the metal-enrichment history in the early universe.
Yunjing Wu, Feige Wang, Zheng Cai, Xiaohui Fan, Kristian Finlator, Jinyi Yang, Joseph F. Hennawi, Fengwu Sun, Jaclyn B. Champagne, Xiaojing Lin, Zihao Li, Zuyi Chen, Eduardo Bañados, George D. Becker, Sarah E. I. Bosman, Gustavo Bruzual, Stephane Charlot, Hsiao-Wen Chen, Jacopo Chevallard, Anna-Christina Eilers, Emanuele Paolo Farina, Xiangyu Jin, Hyunsung D. Jun, Koki Kakiichi, Mingyu Li, Weizhe Liu, Maria A. Pudoka, Wei Leong Tee, Zhang-Liang Xie, Siwei Zou
2023-09-28T18:00:04Z
http://arxiv.org/abs/2309.16757v2
###### Abstract The launch of JWST opens a new window for studying the connection between metal-line absorbers and galaxies at the end of the Epoch of Reionization. Previous studies have detected absorber-galaxy pairs in limited quantities through ground-based observations. To enhance our understanding of the relationship between absorbers and their host galaxies at \(z>5\), we utilized NIRCam wide-field slitless spectroscopy to search for absorber-associated galaxies by detecting their rest-frame optical emission lines (e.g., [O iii] + H\(\beta\)). We report the discovery of a Mg ii-associated galaxy at \(z=5.428\) using data from the JWST ASPIRE program. The Mg ii absorber is detected in the spectrum of quasar J0305-3150 with a rest-frame equivalent width of 0.74 Å. The associated galaxy has an [O iii] luminosity of \(10^{42.5}\) erg s\({}^{-1}\) with an impact parameter of 24.9 pkpc. The joint Hubble Space Telescope-JWST spectral energy distribution (SED) implies a stellar mass of \(M_{\rm*}\approx 10^{8.8}\)\(M_{\odot}\) and a star-formation rate of \(\approx 10\) M\({}_{\odot}\) yr\({}^{-1}\). Its [O iii] equivalent width and stellar mass are typical of [O iii] emitters at this redshift. Furthermore, connecting the outflow starting time to the SED-derived stellar age, the outflow velocity of this galaxy is \(\sim\)300 km s\({}^{-1}\), consistent with theoretical expectations. We identified six additional [O iii] emitters with impact parameters of up to \(\sim\)300 pkpc at similar redshifts (\(|dv|<1000\) km s\({}^{-1}\)). The observed number is consistent with that in cosmological simulations. This pilot study suggests that systematically investigating the absorber-galaxy connection within the ASPIRE program will provide insights into the metal-enrichment history in the early Universe. Department of Astronomy, University of Arizona, 933 N Cherry Avenue, Tucson, AZ 85721, USA \({}^{\rm 3}\) New Mexico State University, Las Cruces, 88003 NM, USA \({}^{\rm 4}\) Cosmic Dawn Center (DAWN), Niels Bohr Institute, University of Copenhagen, and DTU-Space, Technical University of Denmark, Denmark \({}^{\rm 5}\) Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93106-9530, USA \({}^{\rm 6}\) Leiden Observatory, Leiden University, P.O. 
Box 9513, NL-2300 RA Leiden, The Netherlands \({}^{\rm 7}\) Max Planck Institut fur Astronomie, Konigstuhl 17, D-69117, Heidelberg, Germany \({}^{\rm 8}\) The Observatories of the Carnegie Institution for Science, 813 Santa Barbara Street, Pasadena, CA 91101, USA \({}^{\rm 9}\) Department of Physics & Astronomy, University of California, Riverside, CA 92521, USA \({}^{\rm 10}\) Institut fur Theoretische Physik, Universitat Heidelberg, Philosophenweg 16, D-69120 Heidelberg, Germany \({}^{\rm 11}\) Institute of Radio Astronomy and Astrophysics, National Autonomous University of Mexico, San Jose de la Huerta, 58089 Morelia, Michoacan, Mexico \({}^{\rm 12}\) Sorbonne Universite, CNRS, UMR7095, Institut d'Astrophysique de Paris, F-75014, Paris, France \({}^{\rm 13}\) Department of Astronomy & Astrophysics, The University of Chicago, 5640 S Ellis Avenue, Chicago, IL 60637, USA \({}^{\rm 14}\) Department of Physics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH, UK \({}^{\rm 15}\) MIT Kavli Institute for Astrophysics and Space Research, 77 Massachusetts Avenue, Cambridge, MA 02139, USA \({}^{\rm 16}\) Gemini Observatory, NSF's NOIRLab, 670 N A'ohoku Place, Hilo, HI 96720, USA \({}^{\rm 17}\) SNU Astronomy Research Center, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Republic of Korea \({}^{\rm 18}\) Cosmic Dawn Center (DAWN), Niels Bohr Institute, University of Copenhagen, Jagtvej 128, Kobenhavn N, DK-2200, Denmark _Received 2023 July 14; revised 2023 September 5; accepted 2023 September 23; published 2023 October 19_ ## 1 Introduction The circumgalactic medium (CGM), diffuse gas surrounding galaxies and inside their virial radii, plays an important role in our understanding of galaxy formation and evolution (Tumlinson et al., 2017). However, the CGM is nearly invisible and difficult to detect directly. Intervening metal-line absorbers, detected along the lines of sight of quasars, are a unique tracer of CGM gas in the early Universe. For example, the MUSE Analysis of Gas around Galaxies (MAGG) team has identified 220 C iv absorbers from 28 quasar sight lines at \(3<z<4.5\) (Galbiati et al., 2023). Similarly, at higher redshifts (\(z\gtrsim 5\)), Chen et al. (2017), Becker et al. (2019), Cooper et al. (2019), Zou et al. (2021), and D'Odorico et al. (2022) have conducted statistically meaningful surveys of O i, Mg ii, C iv, and Si iv absorbers. Through comprehensive searches, the XQR-30 group has also provided a catalog of 778 absorbers at \(2<z<6.5\) (Davies et al., 2023). Detecting metal absorbers will help us to understand the details of the chemical enrichment of gaseous reservoirs surrounding galaxies. Theoretical models show that early star formation and associated feedback processes are responsible for enriching the early Universe with metals (Greif et al., 2010; Wise et al., 2012; Sorini et al., 2020). The absorber-galaxy correlation, directly linking galaxies and metal-enriched gas, is thus necessary to study the physical conditions of galaxies dominating the metal enrichment in the early Universe. Some cosmological simulations suggest that the typical stellar mass (\(M_{\rm*}\)) of absorber host galaxies ranges from \(\sim\)10\({}^{7}\) to 10\({}^{9.5}\)\(M_{\odot}\), with impact parameters (projected distance) ranging from a few to \(\sim\)150 pkpc (Keating et al., 2016).
At higher redshift, more recent simulations, such as the Technicolor Dawn simulations, suggest that less massive galaxies (with \(M_{\rm*}\)\(\lesssim\) 10\({}^{8}\)\(M_{\odot}\)) could be responsible for metal absorbers at \(z>5\), with impact parameters \(\leqslant\) 300 pkpc (Finlator et al., 2020). Measurements of the impact parameters, star-formation rates (SFRs), stellar masses (\(M_{\rm*}\)), and halo masses (\(M_{h}\)) are proposed to be useful for constraining the detailed physical process of metal enrichment (e.g., Oppenheimer et al., 2009; Finlator et al., 2013; Hirschmann et al., 2013). Therefore, identifying and characterizing galaxies hosting the metal-absorbing gas at high redshifts and studying their properties is crucial for understanding the early chemical enrichment process. Observationally, identifying the host galaxies of metal absorbers at high redshifts has proven to be challenging with ground-based observations. To date, only \(\sim\)10 absorber-galaxy pairs have been identified at \(z>5\) (Cai et al., 2017; Diaz et al., 2021; Kashino et al., 2023). Diaz et al. (2014, 2015) investigated the projected distribution of galaxies around two high column density C iv absorbers at \(z\sim 5.7\) by searching for Ly\(\alpha\) emitters (LAEs) around them. The closest galaxy lies at \(\sim\)200 pkpc from the C iv absorber along the line of sight of the \(z=6.3\) quasar J1030 \(+\) 0524. This galaxy has an SFR\({}_{\rm{UV}}\)\(\approx\) 10 \(M_{\odot}\) yr\({}^{-1}\) and is at the high-mass end of the LAEs at these redshifts. These studies suggest that C iv absorbers with high column density are either produced by large-scale outflows from relatively massive galaxies or by outflows from undetected dwarf galaxies at closer distances. To explore galaxies at the faint end of the luminosity function, Diaz et al. (2021) conducted deep Very Large Telescope (VLT)-MUSE observations of this sight line and identified a 4 times fainter LAE at a projected distance of \(\sim\)11 pkpc from the C iv absorber. Cai et al. (2017) used deep Hubble Space Telescope (HST) narrowband observations to probe C iv-associated LAE candidates at \(z\sim 5.5\). They identified a candidate LAE with SFR\({}_{\rm{UV}}\)\(\simeq\) 4 \(M_{\odot}\) yr\({}^{-1}\) with a projected impact parameter of 42 pkpc from the absorber. However, they cannot rule out that this LAE candidate could be an [O ii] emitter at a much lower redshift because of the lack of spectroscopic observations. ALMA can also be used to search for UV-faint galaxies around metal absorbers and is not affected by dust obscuration. Wu et al. (2021) found that one [C ii] emitter candidate could be responsible for an O i absorber at \(z\sim 6\). For highly ionized absorbers, Kashino et al. (2023) reported that two ALMA-detected [C ii] emitters are associated with a C iv absorber at \(z\approx 5.7\). NIRCam/wide-field slitless spectroscopy (WFSS) on JWST has proven highly effective in enabling galaxy surveys at \(z\gtrsim 5.5\) by detecting strong rest-frame optical emission lines (e.g., [O iii], H\(\beta\); e.g., Sun et al., 2022, 2023; Kashino et al., 2023; Oesch et al., 2023). Because of the large survey area and high sensitivity provided by WFSS observations, it is now possible to crossmatch galaxies and absorbers in the early Universe.
For example, the Emission-line Galaxies and Intergalactic Gas in the Epoch of Reionization (EIGER) project presented early JWST observations linking [O iii], H\(\alpha\), and He i emitters with metal absorbers at \(z\gtrsim 5.5\) (Kashino et al., 2023; Bordoloi et al., 2023). In this Letter, we report the discovery of an absorber-galaxy pair at \(z=5.428\) along the line of sight of quasar J0305-3150 identified with the A Spectroscopic Survey of Biased Halos In the Reionization Era (ASPIRE) program (Wang et al., 2023; Yang et al., 2023). Remarkably, the absorber host galaxy also traces a galaxy overdensity. We name the host galaxy ASPIRE-J0305M31-O3-038 (O3-038) in this Letter. The Letter is structured as follows. In Section 2, we summarize the data used in this Letter and present the details of data reduction. We present the absorber searching, host galaxy identification, and the spatial distribution of the galaxies associated with the absorber host galaxy in Section 3. We compare our measurements with both the observational and theoretical studies in Section 4. Finally, we summarize our findings in Section 5. Throughout this Letter, we assume a flat \(\Lambda\)CDM cosmological model with \(\Omega_{M}=0.3\), \(\Omega_{\Lambda}=0.7\), and H\({}_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\). In this cosmological model, an angular size of 1\({}^{\prime\prime}\) corresponds to a physical scale of 6.024 pkpc at \(z=5.4\). ## 2 Target Selection, Observations, and Data Reduction The quasar, J0305-3150 at \(z=6.61\), was discovered by Venemans et al. (2013) and covered by extensive multi-wavelength observations. To investigate the environment around this quasar, Farina et al. (2017) conducted deep MUSE observations to detect LAEs. Champagne et al. (2023) used deep HST imaging to find Lyman break galaxies surrounding this quasar. In this work, we select J0305-3150 as our pilot field to demonstrate the application of JWST observations in conjunction with existing data for investigating early metal enrichment. ### JWST Observations The JWST/NIRCam (Rieke et al., 2023) WFSS data were obtained by Program #2078 (PI: F. Wang), with grism spectroscopic observations in F356W and imaging observations in F115W, F200W, and F356W. The on-source grism exposure time is 2834 s. The direct-imaging exposure times are 472, 2800, and 472 s in the F115W, F200W, and F356W filters, respectively. The data were reduced using the combination of the standard JWST pipeline (Bushouse et al., 2022)21 v1.8.3 and some custom scripts as detailed in Wang et al. (2023) and Yang et al. (2023). We use the calibration reference file version "jwst_1015.pmap". We refer readers to Wang et al. (2023) and Yang et al. (2023) for a more detailed description of the process. Footnote 21: [https://github.com/spacetelescope/jwst](https://github.com/spacetelescope/jwst) ### X-SHOOTER Observations The VLT/X-SHOOTER NIR spectroscopy was obtained through Program ID: 098.B-0537(A) (PI: Farina) with a resolving power of \(R\sim 8100\) and an exposure time of 16,800 s. The data are reduced with PypeIt22 (Prochaska et al., 2020) and presented in Schindler et al. (2020). We briefly recap the reduction procedures below. Sky subtraction was performed based on a standard ABBA method. We obtained the wavelength solutions by fitting the observed sky OH lines. The 1D spectrum was extracted following the optimal-extraction method (Horne, 1986). After that, the flux calibrations and telluric-absorption corrections were performed.
For more detailed information, we refer readers to Schindler et al. (2020). We normalized the quasar continuum using linetools23 by manually adding knots for a spline fitting. Footnote 22: [https://pypeit.readthedocs.io/en/release/](https://pypeit.readthedocs.io/en/release/) Footnote 23: [https://linetools.readthedocs.io/en/latest/index.html](https://linetools.readthedocs.io/en/latest/index.html) ### HST Observations HST imaging observations of the field around quasar J0305 were obtained from the program GO \(\#\)15064 (PI: Casey) for investigating the environment surrounding quasars at \(z\) \(\approx\) 6 (Champagne et al., 2023). Five bands were observed: F606W, F814W, F105W, F125W, and F160W. The exposure times are 2115.0, 2029.7, 2611.8, 2611.8, and 2611.8 s, respectively. All HST data were reduced using the standard drizzlepac24 pipeline and with the astrometry registered to the JWST images. All the HST and JWST images are point-spread function (PSF)-matched to the F160W filter. Photometry is performed with SourceXtractor++25 (Bertin et al., 2020; Kummel et al., 2020) on the PSF-matched images. More details of the reduction procedures and the photometry measurements are presented in Champagne et al. (2023). Footnote 24: [https://github.com/spacetelescope/drizzlepac](https://github.com/spacetelescope/drizzlepac) Footnote 25: [https://github.com/astrorama/SourceXtractorPlusPlus](https://github.com/astrorama/SourceXtractorPlusPlus) ## 3 Analysis and Results ### Mg ii-absorber Searching and Voigt-profile Fitting We briefly summarize our Mg ii-absorber searching method in what follows; more details on the method will be presented in a forthcoming paper (Y. Wu et al. 2023, in preparation). Following Matejek and Simcoe (2012) and Chen et al. (2017), we first generated a normalized kernel using two Gaussian functions with the separation of the intrinsic Mg ii-doublet interval (\(\sim\)751 km s\({}^{-1}\)). The FWHM of these two Gaussian profiles ranges from 37.5 (the minimal resolution element) to 150 km s\({}^{-1}\), corresponding to reasonable line widths (Chen et al., 2017). The X-Shooter spectrum was convolved with the matched filter. We identify Mg ii absorbers from the convolved spectrum by identifying peaks with a signal-to-noise ratio (\(\rm{S/N}\)) threshold of 2.5. We then visually inspected all identified Mg ii absorber candidates to remove artifacts caused by sky-line residuals. Two Mg ii absorbers passed the visual inspection and are identified along this sight line. The absorber at \(z=5.428\) has a rest-frame equivalent width (REW) typical of Mg ii absorbers at \(z\) = 5-6 (Chen et al., 2017). To obtain the detailed physical parameters from observed Mg ii-absorption systems, we used VoigtFit to fit the absorption lines26 (Krogager, 2018). We used initial guesses for the column density (\(\log(N_{\rm Mg\,II}/{\rm cm}^{-2})=13\)) and Doppler parameter (\(b=10\) km s\({}^{-1}\)) for the fitting. The absorption-line fitting results for our target are shown in the bottom left panel of Figure 1. The derived \(\log N\), \(b\), and the REWs are listed in Table 1. We also use the "apparent optical depth method" (AODM; Savage and Sembach, 1991) developed in linetools to check the fitting results. The column density measured using AODM is \(\log(N_{\rm Mg\,II}/{\rm cm}^{-2})\) = 13.43 \(\pm\) 0.05, which is consistent with our Voigt fitting results.
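The matched-filter search described above reduces to a short convolution routine. The following is a minimal sketch in Python, not the authors' code: it assumes a continuum-normalized, uniformly log-binned spectrum with a 1σ error array; the doublet separation and S/N threshold follow the values quoted above, and the single trial FWHM stands in for the 37.5-150 km s\({}^{-1}\) range that is scanned in practice.

```python
import numpy as np

C_KMS = 2.998e5        # speed of light [km/s]
SEP_KMS = 751.0        # Mg II doublet velocity separation adopted above [km/s]

def mgii_matched_filter(wave, norm_flux, err, fwhm_kms=75.0, snr_cut=2.5):
    """Return candidate pixels for Mg II doublets in a continuum-normalized
    spectrum via a double-Gaussian matched filter."""
    # velocity width of one pixel (assumes uniform log-lambda binning)
    dv_pix = C_KMS * np.median(np.diff(np.log(wave)))
    sigma = (fwhm_kms / 2.355) / dv_pix      # Gaussian sigma in pixels
    sep = SEP_KMS / dv_pix                   # doublet separation in pixels

    # unit-norm kernel: two Gaussians centered at +/- sep/2
    x = np.arange(-int(2 * sep), int(2 * sep) + 1, dtype=float)
    kernel = (np.exp(-0.5 * ((x - sep / 2) / sigma) ** 2) +
              np.exp(-0.5 * ((x + sep / 2) / sigma) ** 2))
    kernel /= np.linalg.norm(kernel)

    # matched-filter S/N: correlate absorption depth (1 - flux) with the kernel
    signal = np.convolve(1.0 - norm_flux, kernel, mode="same")
    noise = np.sqrt(np.convolve(err ** 2, kernel ** 2, mode="same"))
    return np.where(signal / noise > snr_cut)[0]
```

Candidate pixels returned this way would still require the visual-inspection step described above to reject sky-line residuals.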
Footnote 26: [https://voigtfit.readthedocs.io/en/latest/](https://voigtfit.readthedocs.io/en/latest/) ### The Host Galaxy and Environment of a Mg ii-absorber Metal-line absorbers are thought to be from metal-polluted gas blown out by their host galaxies. In this scenario, the velocity offset between the absorbers and their host galaxies is typically within \(\Delta v\) \(\lesssim\) \(\pm\)200 km s\({}^{-1}\) (e.g., Steidel et al., 2010; Keating et al., 2016; Diaz et al., 2021). The velocity offsets between Mg ii absorbers and [O iii] emitters are calculated as \(\Delta v({\rm[O\,III]}-{\rm Mg\,II})=c\,(z_{\rm[O\,III]}-z_{\rm Mg\,II})/(1+z_{\rm Mg\,II})\), where \(z_{\rm Mg\,II}\) and \(z_{\rm[O\,III]}\) denote the redshifts of the Mg ii absorber and [O iii] emitters, respectively, and \(c\) is the speed of light. To identify the host galaxies of the Mg ii absorbers, we matched redshifts of the Mg ii absorbers with those of [O iii] emitters reported in Wang et al. (2023). Along the line of sight of J0305-3150, we identified an [O iii] emitter, ASPIRE-J0305M31-O3-038 (hereafter O3-038), located at the exact redshift of the Mg ii absorber at \(z\) = 5.428 \(\pm\) 0.003 (\(dv=-2.9\pm 140\) km s\({}^{-1}\)). The JWST spectrum of the [O iii] emitter and the absorption spectrum of the Mg ii absorber are shown in the left panel of Figure 1. As shown in the right panel of Figure 1, the observed impact parameter between the Mg ii absorber and the [O iii] emitter is 4\({}^{\prime\prime}\!.1\), corresponding to 24.9 pkpc at \(z\) = 5.428. We compare the observed impact parameter with the expected halo virial radius to determine if this Mg ii absorber belongs to the CGM gas of O3-038. To obtain the virial radius and physical properties of the galaxy, we performed SED fitting using photometry from HST and JWST with the Bayesian Analysis of Galaxy SEDs code, BEAGLE27 (Chevallard and Charlot, 2016), following the procedures described in Lin et al. (2023), Whitler et al. (2023), and Chen et al. (2023). We note that, to get a precise SED estimation, it is necessary to subtract the emission line contributions from the photometry. We measured the rest-frame [O iii] equivalent widths from the grism observations. The measured [O iii] flux is \(f_{\rm[O\,\,III]}=(1.013\pm 0.084)\times 10^{-17}\) erg s\({}^{-1}\) cm\({}^{-2}\). We then estimated the continuum level by subtracting the line flux from the broad-band photometry and obtained \(\rm{EW_{[O\,\,III]}}=490\pm 80\) Å. From the SED fitting, we derive the stellar mass, SFR, and stellar age of this galaxy to be \(M_{*}\simeq 10^{8.8^{+0.2}_{-0.4}}\,M_{\odot}\), \(\rm{SFR}\simeq 9.7^{+4.3}_{-2.7}\,M_{\odot}\) yr\({}^{-1}\), and age \(=79^{+79}_{-57}\) Myr, respectively (also see Appendix). We note that we do not have an H\(\beta\) detection for O3-038 due to a low transmission at the expected wavelength (Figure 2 in Wang et al., 2023). Thus, we cannot compare the H\(\beta\)-derived and SED-derived SFRs. Further, by utilizing the stellar mass to halo mass relation employed in Ma et al. (2018), we estimate a halo mass of \(\sim\)1.2 \(\times\) 10\({}^{11}\,M_{\odot}\). At \(z\) \(\sim\) 5.5, the corresponding virial radius is \(\sim\)25 pkpc. We also obtain a consistent estimation using the empirical relation reported in Sun and Furlanetto (2016). We note that the estimated virial radius is consistent with the measured impact parameter.
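As a rough numerical cross-check of this virial-radius estimate, the radius implied by the quoted halo mass can be computed directly. A minimal sketch with astropy follows; the \(\Delta=200\) overdensity convention relative to the critical density is our assumption for illustration, since the Letter does not state the definition used.

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)     # cosmology adopted in this Letter

def r_virial_kpc(m_halo_msun, z, delta=200.0):
    """Proper radius enclosing `delta` times the critical density at z."""
    rho_crit = cosmo.critical_density(z).to(u.Msun / u.kpc**3)
    r_cubed = 3.0 * m_halo_msun * u.Msun / (4.0 * np.pi * delta * rho_crit)
    return (r_cubed ** (1.0 / 3.0)).to(u.kpc)

# halo mass from the Ma et al. (2018) stellar-to-halo-mass relation:
print(r_virial_kpc(1.2e11, 5.428))        # ~24 kpc, matching the ~25 pkpc quoted
```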
This simple estimation suggests that O3-038 is the host galaxy of the metal absorber at \(z\) = 5.428. We further estimate the one-dimensional velocity dispersion (\(\sigma_{\rm 1d}\)) and the mass scale (\(M_{\rm h,overdensity}\)) for this overdensity, following the procedure described in Ferragamo et al. (2020) and Evrard et al. (2008). We estimate \(\sigma_{\rm 1d}\) by calculating the variance of the line-of-sight velocities (Evrard et al., 2008), i.e., \(\sigma_{\rm 1d}^{2}={1\over N_{p}}\sum_{i=1}^{N_{p}}(v_{i}-\bar{v})^{2}\), where \(v_{i}\) is the velocity of halo member \(i\), \(N_{p}\) is the number of halo members, and \(\bar{v}\) is the mean velocity of these member galaxies. The calculated \(\sigma_{\rm 1d}\) = 346.4 km s\({}^{-1}\). For the mass scale, we used the scaling relation between \(\sigma_{\rm 1d}\) and the halo mass (Ferragamo et al., 2020; Munari et al., 2013), i.e., \({\sigma_{\rm 1d}\over{\rm km\ s}^{-1}}=A\left[{h(z)\,M_{\rm h}\over 10^{15}\,M_{\odot}}\right]^{\alpha}\), where \(A=1177.0\) km s\({}^{-1}\) and \(\alpha=1/3\) (Munari et al., 2013). Therefore, the estimated halo mass of this overdensity is \(M_{\rm h,overdensity}=4.1\times 10^{12}\,M_{\odot}\). Interestingly, we found that O3-038 resides in an overdense environment. In the field of quasar J0305, six additional [O iii] emitters were detected with WFSS within a velocity offset of \(\pm 1000\) km s\({}^{-1}\) of \(z=5.428\). Our detection limit is \(L_{\rm[O\,{\sc iii}]}\)\(>\)10\({}^{42.3}\) erg s\({}^{-1}\). To determine the significance of this overdensity, we compared our observations with the expected number of [O iii] emitters in a random field. We calculated the mean number density using the [O iii] luminosity function reported by Sun et al. (2023) and Matthee et al. (2023). The mean number density is \(n(L_{\rm[O\,{\sc iii}]}>10^{42.3}\) erg s\({}^{-1})\approx 1.873\times 10^{-3}\) cMpc\({}^{-3}\). In a survey volume comparable to our case (a cube with an area of \(\approx\)5.5 arcmin\({}^{2}\) and height of \(\pm\)1000 km s\({}^{-1}\), corresponding to a survey volume of \(\approx\)607 cMpc\({}^{3}\)), the number of randomly detected [O iii] emitters brighter than our detection limit is \(\sim\)1.1. Therefore, \begin{table} \begin{tabular}{l c} \hline \hline \multicolumn{1}{c}{ Galaxy} & ASPIRE-J0305M31-O3-038 \\ \hline R.A. (deg) & 46.32023 \\ Decl. (deg) & \(-31.84999\) \\ \(z_{\rm spec}\) & \(5.428\pm 0.003\) \\ impact parameter (pkpc) & 25 \\ \hline \multicolumn{2}{c}{ Absorption Properties} \\ \(z_{\rm abs}\) & 5.4284 \(\pm\) 0.0001 \\ \(\log(N_{\rm Mg\,{\sc ii}}/{\rm cm}^{-2})\) & 13.41 \(\pm\) 0.06 \\ \(b\) (km s\({}^{-1}\)) & 56.5 \(\pm\) 9.3 \\ REW(Mg ii \(\lambda\)2796) (Å) & 0.74 \(\pm\) 0.10 \\ REW(Mg ii \(\lambda\)2803) (Å) & 0.53 \(\pm\) 0.19 \\ \hline \multicolumn{2}{c}{ Photometric Properties} \\ F356W (AB mag) & 25.71 \(\pm\) 0.04 \\ \hline \multicolumn{2}{c}{ Physical Parameters} \\ \(f\)([O iii]) (10\({}^{-17}\) erg s\({}^{-1}\) cm\({}^{-2}\)) & 1.013 \(\pm\) 0.084 \\ \(L\)([O iii]) (10\({}^{42}\) erg s\({}^{-1}\)) & 3.30 \(\pm\) 0.51 \\ EW([O iii]) (Å) & 490 \(\pm\) 80 \\ SFR (\(M_{\odot}\) yr\({}^{-1}\)) & 9.7\({}^{+4.3}_{-2.7}\) \\ \(\log[M_{*}/M_{\odot}]\) & 8.8\({}^{+0.2}_{-0.4}\) \\ \(t_{\rm age}\) (Myr) & 79\({}^{+79}_{-57}\) \\ \hline \end{tabular} \end{table} Table 1: Physical Properties of ASPIRE-J0305M31-O3-038 Figure 1: Top left: the JWST/NIRCam WFSS spectrum of ASPIRE-J0305M31-O3-038.
The error spectrum is shown in pink. Bottom left: normalized X-SHOOTER spectrum of the quasar J0305 with Mg ii \(\lambda\lambda\)2796, 2803 and Fe ii \(\lambda\lambda\)2382, 2600 absorptions at \(z=5.4284\). Red shaded regions denote the best-fit Voigt profile. Dashed blue lines indicate the redshift. Right: JWST/NIRCam composite red, green, and blue (RGB) map of the quasar field with a pixel scale of 0\({}^{\prime\prime}\!.03\) (blue: F115W, green: F200W, red: F356W). White circles denote the locations of ASPIRE-J0305M31-O3-038 and the quasar J0305. We note that the Mg ii absorber is in front of the quasar at a redshift of \(z=5.428\). The impact parameter between the Mg ii absorber and ASPIRE-J0305M31-O3-038 is 4\({}^{\prime\prime}\!.1\), corresponding to 24.9 pkpc at \(z=5.428\). we observe a number density 6 times higher than expected in a blank field. This suggests that the Mg ii absorber is associated with an overdense environment. The spatial and redshift distributions of these [O iii] emitters are shown in Figure 2. ## 4 Discussion ### Properties of the Absorption-selected Galaxy The primary goal of this work is to investigate the nature of galaxies that host metal-line absorbers. The left panel of Figure 3 shows the measurements of [O iii] EWs and \(M_{*}\) for galaxies in both the high-redshift (Matthee et al., 2023) and local Universe (Kauffmann et al., 2003; Brinchmann et al., 2004). Generally, lower-mass galaxies exhibit higher [O iii] EWs, as expected if they have relatively higher specific SFRs (Mannucci et al., 2010; Tang et al., 2019; Endsley et al., 2021). We find that the [O iii] EW and stellar mass of O3-038 are consistent with the general population of [O iii] emitters at \(z\) \(\sim\) 6. Similar to other [O iii] emitters, O3-038 exhibits a larger EW than local star-forming galaxies (SFGs) at matched stellar mass. The observed large [O iii] equivalent width (EW\({}_{\rm[O\,III]}=490\pm 80\) Å) suggests the presence of a stellar population with an age of \(\sim\)50 Myr (Tang et al., 2019). Such young stellar populations could generate hard ionizing photons. Consequently, the observed [O iii] EW will be relatively higher than that of local SFGs at a given stellar mass. We also examine the similarity between this Mg ii-selected [O iii] emitter and other metal-absorber-selected galaxies. Figure 3: Left: [O iii] EWs and derived stellar masses at \(z\) \(\sim\) 6, compared to those of local galaxies (shaded region; MPA-JHU catalog (Kauffmann et al., 2003; Brinchmann et al., 2004)). The measurement of the absorber-associated [O iii] emitter is shown as the red star. Gray dots indicate results obtained from the EIGER sample (Matthee et al., 2023). Right: SFR distribution of metal-absorber-selected galaxies. Galaxies at \(z\) \(>\) 5 are marked as hatched bars. At \(z\) \(<\) 5, the C iv-associated LAEs selected from the MAGG survey (Galbiati et al., 2023) are shown in gray. The blue bars show the C iv-associated LAEs at \(z\) \(>\) 5 (Cai et al., 2017; Díaz et al., 2021). At \(z\) \(\approx\) 6, one ALMA-detected O i-associated emitter is shown in pink (Wu et al., 2021). The [O iii] emitter detected with JWST is shown in red. Figure 2: Left: spatial distribution of [O iii] emitters relative to the quasar. The location of the quasar is shown with a blue star.
Dots represent the locations of [O iii] emitters, while the shaded circles correspond to the virial radii of galaxies estimated from their best-fit SED-derived stellar masses. Right: the redshift distribution of [O iii] emitters at \(z\) \(<\) 6 in this quasar field (J0305). The hatched bar indicates [O iii] emitters clustering at \(z\) \(\simeq\) 5.4. Targets at other redshifts (\(|dv|>1000\) km s\({}^{-1}\)) are shown as blank bars, where \(\Delta v({\rm[O\,III]}-{\rm Mg\,II})=c\,(z_{\rm[O\,III]}-z_{\rm Mg\,II})/(1+z_{\rm Mg\,II})\). The velocity offsets and impact parameters are shown in the top right panel. The Mg ii-associated galaxy is highlighted in red. Because different galaxies were identified by different tracers, e.g., Ly\(\alpha\) or [C ii] emission, we compare their SFRs for consistency. For example, for metal-related LAEs, we convert their measured Ly\(\alpha\) luminosity to SFRs according to the relation in Ouchi et al. (2020). For metal-selected [C ii] emitters, we followed the \(L_{\rm[C\,II]}\)-SFR relation reported by Schaerer et al. (2020). The right panel of Figure 3 shows our results. We find that ASPIRE-J0305M31-O3-038 has a higher SFR than the majority of the absorber-selected galaxy sample. We note that there may be internal systematic offsets between different SFR tracers. ### Absorber-Galaxy Cross-correlation Function Determining the correlation function between galaxies and metal-line absorbers is critical for our understanding of which galaxies are responsible for the metal enrichment in the early Universe (Keating et al., 2016; Meyer et al., 2019; Finlator et al., 2020). We calculate the galaxy number excess relative to the blank field around absorbers as a function of impact parameter (\(r\)): \[\xi_{\rm gal-abs}(r)=\frac{1}{n_{0}}\frac{N(r)}{\Delta V(r)}\ -\ 1, \tag{1}\] where \(n_{0}\) is the comoving number density of [O iii] emitters with \(L_{\rm[O\,III]}\) brighter than our detection limit, \(N(r)\) indicates the number of [O iii] emitters around an absorber, and \(\Delta V(r)\) is the survey volume physically associated with the absorber as a function of impact parameter. We note that, to compare observations and simulations fairly, we obtained \(n_{0}\) by integrating the observed [O iii] luminosity function from our detection limit to the bright end. Thus, we obtain \(n_{0}=n(L_{\rm[O\,III]}>10^{42.3}\,\rm erg\,s^{-1})\approx 1.873\times 10^{-3}\,\rm cMpc^{-3}\). Here, we define the survey volume as different cylinders with radii ranging from 20 to 300 pkpc and heights of \(|\Delta v|<1000\) km s\({}^{-1}\). In Figure 4, we show the observed number abundance of galaxies surrounding this Mg ii absorber. The errors are estimated by assuming Poissonian noise (84% confidence level) on the number of observed galaxies (Gehrels, 1986). In order to understand the relationship between metal absorbers and galaxies, as well as the environments in which they exist, we compare the observed correlation function with predictions obtained from cosmological simulations. We focus on Technicolor Dawn (Finlator et al., 2020), a hydrodynamic cosmological simulation that generates metal absorbers and galaxies from \(z=5\) to 7 while taking radiative transfer effects into account. In the Technicolor Dawn simulations, the correlation function is measured by linking simulated Mg ii absorbers with EWs similar to our target (\(\rm REW_{2796}>0.74\) Å) to simulated galaxies with \(M_{\rm UV}<-18\) (comparable to our [O iii] detection limit).
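Given \(n_0\) and the cylinder geometry defined above, Equation (1) reduces to straightforward counting. A minimal sketch follows; the emitter count per radial bin is a placeholder input, and the single 300 pkpc bin shown uses the seven emitters reported in this field (the host plus six neighbors), so it illustrates the calculation rather than reproducing the binned Figure 4 measurements.

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
N0 = 1.873e-3               # cMpc^-3, n(L_[OIII] > 10^42.3 erg/s) quoted above
Z_ABS, DV = 5.428, 1000.0   # absorber redshift; line-of-sight half-height [km/s]

def cylinder_volume_cmpc3(r_pkpc):
    """Comoving volume of a cylinder with proper radius r_pkpc and
    depth of +/- DV km/s around the absorber."""
    r_cmpc = (r_pkpc / 1000.0) * (1.0 + Z_ABS)       # proper kpc -> comoving Mpc
    dz = 2.0 * DV / 2.998e5 * (1.0 + Z_ABS)          # velocity -> redshift interval
    depth = (cosmo.comoving_distance(Z_ABS + dz / 2) -
             cosmo.comoving_distance(Z_ABS - dz / 2)).to(u.Mpc).value
    return np.pi * r_cmpc**2 * depth

def xi_gal_abs(n_gal, r_pkpc):
    """Equation (1): galaxy number excess around the absorber."""
    return n_gal / (N0 * cylinder_volume_cmpc3(r_pkpc)) - 1.0

print(xi_gal_abs(7, 300.0))   # placeholder binning: all 7 emitters within 300 pkpc
```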
We show the comparison in Figure 4. The measured clustering between the Mg ii absorber and bright [O iii] emitters is consistent with the simulation predictions. Our pilot study highlights that JWST NIRCam\(/\)WFSS observations offer an efficient method to understand the metal-enrichment process in the early Universe by detecting galaxies with \(M_{\rm*}\sim 10^{9}\,M_{\odot}\) and linking them with metal absorbers. ### Constraints on Outflow Velocities The metal-enrichment history of the CGM is related to galactic outflows (Simcoe et al., 2004; Oppenheimer & Dave, 2006). One key parameter that describes this process is the "outflow velocity" (\(v_{\rm out}\)). Nelson et al. (2019) calculated \(v_{\rm out}\) from the TNG50 cosmological simulation. They found that, at \(z\sim 5\), a galaxy with a stellar mass of \(\sim\)10\({}^{9}\,M_{\odot}\) will have a typical outflow velocity of \(v_{\rm out}\sim 250\) km s\({}^{-1}\) at an impact parameter of \(\sim\)20 pkpc. Observationally, Diaz et al. (2021) estimated the mean outflow velocity to be \(\approx\)200 km s\({}^{-1}\) based on the measured impact parameter between LAEs and C iv absorbers. In their measurement, they assumed that the metal-enriched gas starts being ejected at \(z=10\) since they do not know the star formation history of the galaxy (Figure 19 in their work). Similarly, Galbiati et al. (2023) also assumed a gas launching time and then estimated a mean outflow velocity of \(\approx\)200 km s\({}^{-1}\). From the SED fitting, we derived a stellar age of \(79^{+79}_{-57}\) Myr by assuming a constant star formation history for O3-038. Together with the observed impact parameter (24.9 pkpc), we estimate an outflow velocity of \(v_{\rm out}=\frac{\rm impact\,\,parameter}{\rm stellar\,\,age}\approx 300\) km s\({}^{-1}\). This value is consistent with the simulated predictions (Muratov et al., 2015; Nelson et al., 2019). ## 5 Summary In this Letter, we report the discovery of the host galaxy of a Mg ii absorber at \(z=5.428\) using NIRCam\(/\)WFSS data from the JWST ASPIRE program (Wang et al., 2023). This galaxy is bright in [O iii], with an [O iii] luminosity of \(L_{\rm[O\,III]}=(3.30\pm 0.51)\times 10^{42}\) erg s\({}^{-1}\) and an [O iii] REW of \(490\pm 80\) Å. Based on SED fitting to its HST\(+\)JWST photometry, we estimate the stellar mass, SFR, and stellar age of Figure 4: Number excess of galaxies around Mg ii absorbers at \(z\simeq 5\), \(\xi_{\rm gal-abs}(r)=\frac{1}{n_{0}}\frac{N(r)}{\Delta V(r)}-1\). The red dots are the measurements obtained from our observations. Error bars are estimated by assuming Poissonian uncertainties at the 84% confidence level (Gehrels, 1986). The blue solid line indicates simulation-predicted values obtained from Doughty and Finlator (2023). The observed values are consistent with those predicted by cosmological simulations. this absorber-selected galaxy as \(M_{*}\simeq 10^{8.8^{+0.2}_{-0.4}}M_{\odot}\), SFR\(\simeq 9.7^{+4.3}_{-2.7}\,M_{\odot}\) yr\({}^{-1}\), and age \(=79^{+79}_{-57}\) Myr, respectively. The derived stellar mass and measured [O iii] EW suggest that ASPIRE-J0305M31-O3-038 shares the same properties as other typical [O iii] emitters at \(z\sim 6\). Meanwhile, the SFR of this Mg ii-selected galaxy is higher than that of the majority of metal-absorber-selected galaxies at \(z\sim 5\) (Cai et al., 2017; Diaz et al., 2021; Wu et al., 2021).
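As a quick unit-conversion check on the outflow velocity quoted in Section 4.3 (impact parameter divided by the SED-derived stellar age), a two-line sketch with astropy:

```python
import astropy.units as u

impact = 24.9 * u.kpc      # proper impact parameter
age = 79.0 * u.Myr         # SED-derived stellar age (constant SFH assumed)

v_out = (impact / age).to(u.km / u.s)
print(v_out)               # ~308 km/s, i.e., ~300 km/s as quoted
```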
Furthermore, we find that the galaxy resides in a galaxy overdensity at \(z\simeq 5.4\) with six additional galaxies located at \(|\Delta v|<1000\) km s\({}^{-1}\). We measure the number excess of galaxies around the Mg ii absorber in the radius range of 20-300 pkpc and find that it is consistent with that obtained from cosmological simulations (Doughty & Finlator, 2023). This pilot experiment demonstrates the capability of NIRCam/WFSS in detecting the host galaxies of high-redshift metal absorbers. With the full data set of 25 quasar sight lines in the ASPIRE program, we expect to build up a statistical sample of absorber-galaxy pairs at \(z>5\) and constrain the metal enrichment in the early Universe. ###### Acknowledgements. We thank Fuyan Bian, Mengtao Tang, and Jakob Helton for very helpful discussions. We thank Jorryt Matthee and the EIGER team for sharing their data and acknowledge Marta Galbiati and Michele Fumagalli for sharing their MUSE catalog. This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. These observations are associated with program \(\#2078\). Support for program \(\#2078\) was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127. Z.C., Y.W., X.L., Z.L., M.L., and S.Z. are supported by the National Key R&D Program of China (grant no. 2018YFA0404503), the National Science Foundation of China (grant no. 12073014), the science research grants from the China Manned Space Project with No. CMS-CSST2021-A05, and the Tsinghua University Initiative Scientific Research Program (No. 2023080023). K.F. gratefully acknowledges support from STScI Program \(\#\)HST-AR-16125.001-A, provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. K.F.'s simulations utilized resources from the New Mexico State University High Performance Computing Group, which is directly supported by the National Science Foundation (OAC-2019000), the Student Technology Advisory Committee, and New Mexico State University and benefits from inclusion in various grants (DoD ARO-W911NF1810454; NSF EPSCoR OIA-1757207; Partnership for the Advancement of Cancer Research, supported in part by NCI grants U54 CA132383 (NMSU)). F.S. acknowledges support from the NRAO Student Observing Support (SOS) award SOSPA7-022. H.D.J. was supported by the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (MSIT) of Korea (No. 2020R1A2C3011091, 2021M3F7A1084525, 2022R1C1C2013543). G.B. was supported by the U.S. National Science Foundation through grant AST-1751404. S.E.I.B. acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement no. 740246 "Cosmic Gas"). E.P.F.
is supported by the international Gemini Observatory, a program of NSF's NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation, on behalf of the Gemini partnership of Argentina, Brazil, Canada, Chile, the Republic of Korea, and the United States of America. JWST data used in this Letter were obtained from the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute. The specific observations analyzed can be accessed via doi:10.17909/v47-kd84. _Facilities:_ JWST, VLT:Kueyen, HST. _Software:_ astropy (Astropy Collaboration et al., 2013, 2018, 2022), PypeIt (Prochaska et al., 2020), VoigtFit (Krogager, 2018), BEAGLE (Chevallard & Charlot, 2016), drizzlepac (STScI Development Team, 2012). ## Appendix A SED-fitting Result To obtain the stellar properties, we fit the HST \(+\) JWST photometry with the Bayesian Analysis of Galaxy SEDs code (BEAGLE; Chevallard & Charlot, 2016). We assume a constant star formation history and apply a flat prior in the ranges of metallicity (\(0.01<Z/Z_{\odot}<0.1\)) and stellar mass (\(5\leq\log(M_{*}/M_{\odot})\leq 12\)). For more details, we refer interested readers to Section 3.1 in Lin et al. (2023). The best-fit SED results are shown in Figure 5 and listed in Table 1. We note that the absolute UV magnitude (\(M_{\rm UV}\)) is derived from the posterior SED by integrating the model spectra over a 100 Å wide window at rest frame 1500 Å. Following Tacchella et al. (2023), we also include a noise floor of 5% in our SED fitting. ## ORCID IDs * Yunjing Wu [https://orcid.org/0000-0003-0111-8249](https://orcid.org/0000-0003-0111-8249) * Feige Wang [https://orcid.org/0000-0002-7633-431X](https://orcid.org/0000-0002-7633-431X) * Zheng Cai [https://orcid.org/0000-0001-8467-6478](https://orcid.org/0000-0001-8467-6478) * Xiaohui Fan [https://orcid.org/0000-0003-3310-0131](https://orcid.org/0000-0003-3310-0131) * Kristian Finlator [https://orcid.org/0000-0002-0496-1656](https://orcid.org/0000-0002-0496-1656) * Jinyi Yang [https://orcid.org/0000-0001-5287-4242](https://orcid.org/0000-0001-5287-4242) * Joseph F. Hennawi [https://orcid.org/0000-0002-7054-4332](https://orcid.org/0000-0002-7054-4332) * Fengwu Sun [https://orcid.org/0000-0002-4622-6617](https://orcid.org/0000-0002-4622-6617) * Jaclyn B. Champagne [https://orcid.org/0000-0002-6184-9097](https://orcid.org/0000-0002-6184-9097) * Xiaojing Lin [https://orcid.org/0000-0001-6052-4234](https://orcid.org/0000-0001-6052-4234) * Zihao Li [https://orcid.org/0000-0001-5951-459X](https://orcid.org/0000-0001-5951-459X) * Zuyi Chen [https://orcid.org/0000-0002-2178-5471](https://orcid.org/0000-0002-2178-5471) * Eduardo Bañados [https://orcid.org/0000-0002-2931-7824](https://orcid.org/0000-0002-2931-7824) * George D. Becker [https://orcid.org/0000-0003-2344-263X](https://orcid.org/0000-0003-2344-263X) * Sarah E. I.
Bosman [https://orcid.org/0000-0001-8582-7012](https://orcid.org/0000-0001-8582-7012) * Gustavo Bruzual [https://orcid.org/0000-0002-6971-5755](https://orcid.org/0000-0002-6971-5755) * Stephane Charlot [https://orcid.org/0000-0003-3458-2275](https://orcid.org/0000-0003-3458-2275) * Hsiao-Wen Chen [https://orcid.org/0000-0001-8813-4182](https://orcid.org/0000-0001-8813-4182) * Jacopo Chevallard [https://orcid.org/0000-0002-7636-0534](https://orcid.org/0000-0002-7636-0534) * Anna-Christina Eilers [https://orcid.org/0000-0003-2895-6218](https://orcid.org/0000-0003-2895-6218) * Emanuele Paolo Farina [https://orcid.org/0000-0002-6822-2254](https://orcid.org/0000-0002-6822-2254) * Xiangyu Jin [https://orcid.org/0000-0002-5768-738X](https://orcid.org/0000-0002-5768-738X) * Hyunsung D. Jun [https://orcid.org/0000-0003-1470-5901](https://orcid.org/0000-0003-1470-5901) * Koki Kakiichi [https://orcid.org/0000-0001-6874-1321](https://orcid.org/0000-0001-6874-1321) * Mingyu Li [https://orcid.org/0000-0001-6251-649X](https://orcid.org/0000-0001-6251-649X) * Weizhe Liu [https://orcid.org/0000-0003-3762-7344](https://orcid.org/0000-0003-3762-7344) * Maria A. Pudoka [https://orcid.org/0000-0003-4924-5941](https://orcid.org/0000-0003-4924-5941) * Wei Leong Tee [https://orcid.org/0000-0003-0747-1780](https://orcid.org/0000-0003-0747-1780) * Zhang-Liang Xie [https://orcid.org/0000-0002-0125-6679](https://orcid.org/0000-0002-0125-6679) * Siwei Zou [https://orcid.org/0000-0002-3983-6484](https://orcid.org/0000-0002-3983-6484)
2309.11038
CaveSeg: Deep Semantic Segmentation and Scene Parsing for Autonomous Underwater Cave Exploration
In this paper, we present CaveSeg - the first visual learning pipeline for semantic segmentation and scene parsing for AUV navigation inside underwater caves. We address the problem of scarce annotated training data by preparing a comprehensive dataset for semantic segmentation of underwater cave scenes. It contains pixel annotations for important navigation markers (e.g. caveline, arrows), obstacles (e.g. ground plane and overhead layers), scuba divers, and open areas for servoing. Through comprehensive benchmark analyses on cave systems in the USA, Mexico, and Spain, we demonstrate that robust deep visual models can be developed based on CaveSeg for fast semantic scene parsing of underwater cave environments. In particular, we formulate a novel transformer-based model that is computationally light and offers near real-time execution in addition to achieving state-of-the-art performance. Finally, we explore the design choices and implications of semantic segmentation for visual servoing by AUVs inside underwater caves. The proposed model and benchmark dataset open up promising opportunities for future research in autonomous underwater cave exploration and mapping.
A. Abdullah, T. Barua, R. Tibbetts, Z. Chen, M. J. Islam, I. Rekleitis
2023-09-20T03:36:22Z
http://arxiv.org/abs/2309.11038v6
# CaveSeg: Deep Semantic Segmentation and Scene Parsing for Autonomous Underwater Cave Exploration ###### Abstract In this paper, we present CaveSeg - the first visual learning pipeline for semantic segmentation and scene parsing for AUV navigation inside underwater caves. We address the problem of scarce annotated training data by preparing a comprehensive dataset for semantic segmentation of underwater cave scenes. It contains pixel annotations for important navigation markers (_e.g._, _caveline_, _arrows_), obstacles (_e.g._, ground plane and overhead layers), scuba divers, and open areas for servoing. Through comprehensive benchmark analyses on cave systems in the USA, Mexico, and Spain, we demonstrate that robust deep visual models can be developed based on CaveSeg for fast semantic scene parsing of underwater cave environments. In particular, we formulate a novel transformer-based model that is computationally light and offers near real-time execution in addition to achieving state-of-the-art performance. Finally, we explore the design choices and implications of semantic segmentation for visual servoing by AUVs inside underwater caves. The proposed model and benchmark dataset open up promising opportunities for future research in autonomous underwater cave exploration and mapping. ## 1 Introduction and Background Underwater cave formations, sediment properties, and water chemistry provide insights into the world's past climate conditions and geological processes [1], [2]. Underwater caves also play a crucial role in monitoring and tracking groundwater flows in Karst topographies, while almost \(25\%\) of the world's population relies on Karst freshwater resources [3]. Despite the importance, underwater cave exploration and mapping by humans is a tedious, labor-intensive, and extremely dangerous operation, even for highly skilled divers [4]. When a new section of a cave is discovered, a single and continuous line termed _caveline_[5] is laid out identifying the skeleton of the main passages. The caveline is attached to other navigation guides such as _arrows_ and _cookies_[6], marking the orientation of the cave, distance to the entrance, and presence of other divers. Such surveys by the explorers produce a one-dimensional retraction of the 3D environment. Recording all this information together with additional observations [7] is a challenging, time-consuming, and error-prone process. Figure 1: (a) A tethered BlueROV2 is operating inside an underwater cave system in FL, USA; it is teleoperated by a surface operator following the _caveline_ as a navigation guide; (b) the corresponding POV from the robot's camera; (c) the proposed semantic parsing concept is shown; the envisioned capabilities are: first-layer & second-layer obstacle avoidance, ground plane estimation, and caveline detection, following, and 3D estimation – to enable autonomous robot navigation inside underwater caves. Enabling Autonomous Underwater Vehicles (AUVs) and Remotely Operated Vehicles (ROVs) to navigate, explore, and map underwater caves safely and effectively is of significant importance [2, 8]; Fig. 1(a) shows an ROV deployment scenario inside the Orange Grove Cave System in Florida. Our earlier work [9] developed a high-precision camera trajectory estimation method using a Visual-Inertial Odometry (VIO) algorithm [10], which can generate 3D caveline trajectory estimates that are comparable to manually surveyed measurements.
More recently in [6], we addressed the lack of annotated samples in visual learning for caveline detection and tracking across different scenes for AUV navigation. In this work, we focus on developing a deep visual learning pipeline to extract dense semantic information for autonomous underwater cave exploration by mobile robots. Considering the limited onboard resources available on embedded platforms, our objective is to design a computationally light model that can (learn to) identify the navigation markers of underwater caves (_e.g._, caveline, arrows), obstacles to avoid (_e.g._, ground plane and overhead layers), scuba divers (for cooperative missions), and safe open areas for visual servoing in real-time. We identify two major difficulties to achieve these: (**i**) no large-scale datasets are available for underwater cave environments; (**ii**) the state-of-the-art (SOTA) models for semantic scene parsing are computationally too demanding for robotic platforms. We address these challenges by proposing **CaveSeg**, the first large-scale semantic segmentation dataset and learning pipeline for underwater cave exploration. We collect comprehensive training data by ROVs and scuba divers through robotics trials in three major locations [6]: the Devil's system in Florida, USA; Dos Ojos Cenote in QR, Mexico; and Cueva del Agua in Murcia, Spain. Our processed data contain \(2300\) pixel-annotated samples with \(10\) object categories that include first and second layer obstacles, human scuba divers, and navigation aids (see Sec. 3; Fig. 2). We also compile a **CaveSeg-Challenge** test set that contains \(250\) samples from unseen waterbodies and cave systems such as the Blue Grotto and Orange Grove cave systems in FL, USA. We conduct extensive benchmark evaluations of SOTA models across the Convolutional Neural Network (CNN) [11, 12], Conditional Random Fields (CRF) [13, 14], and Vision Transformer (ViT) [15, 16, 17] literature, which validate that robust semantic learning is feasible on CaveSeg. Moreover, we develop a novel **CaveSeg model** through rigorous design choices to balance the robustness-efficiency trade-off. The proposed model consists of a transformer-based backbone, a multi-scale pyramid pooling head, and a hierarchical feature aggregation module for robust semantic learning (see Sec. 4). Experimental evaluations and comparisons with SOTA models suggest that the CaveSeg model is over \(3\times\) more memory efficient and offers \(1.8\times\) faster inference than SOTA models, while providing comparable benchmark performance. A series of experiments on unseen challenging scenes proves the robustness of our model across different waterbody conditions and optical artifacts; see more details in Sec. 5. Furthermore, we highlight several challenging scenarios and practical use cases of CaveSeg for real-time AUV navigation inside underwater caves. Those scenarios include safe cave exploration by caveline following and obstacle avoidance, planning towards caveline rediscovery, finding safe open passages and exit directions inside caves, giving uninterrupted right-of-way to scuba divers exiting the cave, and 3D semantic mapping and state estimation. We demonstrate that CaveSeg-generated semantic labels can be utilized effectively for vision-based cave exploration and semantic mapping by AUVs (see more in Sec. 6).
## 2 Related Work ### Underwater Cave Exploration and Mapping Exploration and mapping of underwater caves by human divers has traditionally employed photogrammetry [18] methods in order to generate informative photorealistic representations, especially for sites with archaeological interest [1, 19]. Autonomous underwater cave mapping by AUVs remains an open problem due to many challenges. Mallios _et al_. [20] manually moved an AUV, collecting acoustic data for offline mapping, and Weidner _et al_. [21, 22] used images from a stereo camera to create a 3D reconstruction of the cave walls, floor, and ceiling. Major challenges to vision-based state estimation in an underwater environment include lighting variations, light absorption, and blurriness [23]. Rahman _et al_. [10, 24, 25] proposed a framework where visual, acoustic, inertial, and water depth data are fused together to estimate the trajectory of an AUV or a sensor while in parallel generating a sparse representation of the underwater cave. Denser models of the cave boundaries can be obtained by mapping the moving shadows [26], the contours [27], or dense online stereo reconstruction [28]. More recently, Richmond _et al_. [2] describe a man-portable AUV named _Sunfish_, which can work safely inside underwater caves and bring back chemical profiles, detailed imagery, and sonar maps of the cave. Our recent work [6] develops a ViT-based caveline detection model for autonomous caveline following by underwater robots. However, beyond detecting cavelines, full-form scene parsing is essential for safe and effective AUV navigation inside underwater caves. ### Semantic Segmentation of Underwater Scenes Deep visual learning algorithms have revolutionized semantic segmentation and scene parsing benchmarks on standard datasets, which are mostly developed for terrestrial applications. The existing learning pipelines trained on terrestrial imagery are not directly applicable because the object categories and image statistics are entirely different. A unique set of underwater image distortion artifacts and the unavailability of large-scale annotated datasets have resulted in a significant lack of research attempts on semantic segmentation of underwater imagery. The contemporary research literature offers application-specific image datasets for coral reef segmentation and coverage estimation [29, 30], visual attention modeling [31], human-robot cooperative missions [32], image restoration and foreground enhancement [33], and fish detection [34, 35]. More recent work by Modasshir _et al_. used a deep learning-based classifier model to identify and track the locations of different types of corals to generate semantic maps [36] as well as volumetric models [37]. Islam _et al_. formulated the SUIM dataset [32] for semantic segmentation of underwater imagery with eight object categories: fish, coral reefs, aquatic plants, wrecks/ruins, human divers, robots/instruments, sea-floor, and waterbody background. Other datasets consider even fewer object categories such as marine debris or ship hull defects [38]. With limited training samples per object category over only a few waterbody types, it is extremely challenging to achieve good generalization performance by SOTA deep learning-based models for image segmentation and scene parsing. More importantly, these object categories are not useful for underwater cave exploration and mapping applications, which we address in this paper.
## 3 CaveSeg Dataset: Data Preparation and Problem Formulation We prepared learning pipelines with data collected from **three cave systems** in different geographical locations [6]: the Devil's system in Florida, USA; Dos Ojos Cenote in QR, Mexico; and Cueva del Agua in Murcia, Spain. For the semantic labels, we considered the following object categories: caveline, first layer (immediate avoidance areas), second layer (areas to be avoided subsequently), open area (obstacle-free regions), scuba divers, navigation aids (_e.g_., arrows, reels, and cookies), and caveline-attached rocks. Moreover, stalactites, stalagmites, and columns are also considered if they are present in the scene, which is the case for the Mexico caves. With these \(10\) object categories in consideration, a total of \(2300\) images are labeled in the **CaveSeg dataset**. A few annotated samples are shown in Fig. 2; the dataset and relevant information are available online at [https://robopi.ece.ufl.edu/caveseg.html](https://robopi.ece.ufl.edu/caveseg.html). As mentioned earlier, the role of the caveline is crucial for any underwater cave operation. It represents the direction of exploration, the main area where the cave extends, and, equally important, the path to safely exit the cave. Caveline pixels are marked in yellow in order to generate maximum contrast. The rock formations where the caveline is attached are often called _placements_ or _attachment points_. These rocks most often signal a change in the direction of the caveline, and they are marked in blue. Navigational markers such as _arrows_ (dark red) and _cookies_ (red) provide important information about the direction to the nearest cave exit and the presence of other divers in the cave, respectively. Figure 3: Frequencies and distributions of important object categories are shown in the _train_, _validation_, and _test_ sets. Figure 2: A few sample images from the proposed **CaveSeg dataset**, corresponding ground truth labels, and their overlaid visualizations are shown; color codes for each object category are listed on the right. A special category in our semantic mapping scheme is the scuba diver (magenta). An AUV should always give the right of way to divers, especially those exiting the cave. In case of an emergency, there should be nothing impeding the divers from reaching the surface, which is a norm practiced by cave divers. As such, we have established a human diver category to incorporate emergency responses when a diver is detected, for example, lowering the light intensity, moving away from the main passage, avoiding abrupt motions, etc. Fig. 3 shows the frequency and distribution of various object categories in the CaveSeg dataset. A human diver is present in \(15\%\) of the images. Over \(90\%\) of the samples contain the caveline, obstacle-free open areas, and first- and second-layer obstacles. Other navigation markers (_e.g_., cookies, arrows, reels) typically occupy smaller pixel areas compared to other objects, and they are found in \(10\%\) of the data. Overall, these \(10\) object categories embed useful information for vision-based planning and navigation by AUVs inside underwater caves. ## 4 CaveSeg Model: Semantic Scene Segmentation of Underwater Caves ### Network Design and Learning Pipeline We search for a computationally light architecture that provides real-time underwater cave scene segmentation in addition to achieving SOTA performance.
To this end, we explore both CNN-based and transformer-based backbone architectures [15, 39, 40]. The windowed multi-head self-attention (W-MSA) module proposed in Swin Transformer [17] is a powerful tool for maintaining efficient computation. Additionally, the window-shifting technique connects features across spatially overlapped windows. We find this connection to be particularly useful since cave scenes hold some spatial relation among categories. For instance, navigation markers and attachment rocks are almost always found on the caveline, while the second layer obstacles are usually found around the first layer or ground plane. Although computationally heavy, we found such cross-category attention extraction to be effective in cave scene segmentation tasks. Inspired by this, we design a novel architecture that incorporates these capabilities with a light Swin Transformer backbone for feature extraction. With an efficient Pyramid Pooling Module (PPM) and hierarchical feature aggregation, our proposed CaveSeg model is over \(3.3\times\) more memory efficient and offers \(60\%\) faster inference rates than the Swin Transformer base model. #### 4.1.1 CaveSeg Model Architecture The detailed network architecture is shown in Fig. 4. First, input RGB images are partitioned into \(4\times 4\) non-overlapping patches or tokens; a linear embedding layer of dimension \(48\) is then applied to each token. These feature tokens are passed through a four-stage backbone, each stage containing a windowed and a shifted-window module of multi-head self-attention [41]. In each stage, patches are merged over \(2\times 2\) neighboring regions to reduce the number of tokens while, at the same time, the embedding dimension is doubled. Subsequently, bottom-up and top-down feature aggregation [42] is performed in two separate branches. A pyramid pooling module (PPM) [43] is attached to the backbone, which further improves global feature extraction at deep layers of the network [44]. Features from each stage of the backbone as well as from the PPM head are then fused and compiled into a hierarchical feature map. Finally, a fully convolutional layer performs pixel-wise category estimation on that feature space. Figure 4: The network architecture of our proposed **CaveSeg model** is shown. Input images are partitioned into \(4\times 4\) patches and fed into a four-stage transformer backbone for coarse-to-fine feature extraction. The extracted multi-scale features are then pooled and combined by the PPM head for bottom-up and top-down feature aggregation. A hierarchical feature map is then compiled by merging several multi-level feature representations. On this feature space, a classifier performs pixel-level semantic segmentation to generate the final outputs. #### 4.1.2 Supervised Learning Pipeline The end-to-end training is driven by the standard cross-entropy loss [45], which quantifies the dissimilarity between the predicted pixel labels and the ground truth for each category. For multi-class segmentation of an image with \(N\) pixels and \(M\) classes, it is calculated as \[\mathcal{L}_{CE}=-\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{M}y_{i,c}\log(p_{i,c}), \tag{1}\] where \(p_{i,c}\) denotes the predicted probability of pixel \(i\) belonging to class \(c\), and \(y_{i,c}\) is \(1\) if the ground-truth class of pixel \(i\) is \(c\) and \(0\) otherwise. The training is optimized by the Stochastic Gradient Descent (SGD) algorithm [46] with an initial learning rate of \(1e^{-4}\) and a momentum of \(0.9\).
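The supervised objective in Equation (1) together with the SGD setup maps directly onto standard PyTorch primitives. A minimal sketch of one training step is shown below; the tiny stand-in network and dummy tensors are placeholders rather than the CaveSeg architecture, while the loss, optimizer, and hyperparameters follow the values stated above.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 10                      # object categories in the CaveSeg dataset

# placeholder stand-in for the segmentation network (not the CaveSeg model)
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, NUM_CLASSES, kernel_size=1),  # per-pixel class logits
)

# nn.CrossEntropyLoss applies log-softmax + NLL, i.e., Eq. (1) averaged over pixels
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)

# one training step on a dummy batch (B x 3 x H x W image, B x H x W int labels)
images = torch.randn(2, 3, 540, 960)
labels = torch.randint(0, NUM_CLASSES, (2, 540, 960))

logits = model(images)                # B x NUM_CLASSES x H x W
loss = criterion(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```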
### Training Setups for CaveSeg & Baseline Models We configured a unified training pipeline for CaveSeg and several SOTA benchmark models for training and evaluation. Specifically, we used the MMSegmentation libraries [47] in PyTorch for the large-scale training on the CaveSeg dataset with a \(75\):\(15\):\(10\) split ratio for train, validation, and test, respectively. For baseline comparison, we considered the following SOTA models across the CNN, CRF, and ViT literature: FastFCN [40], DeepLabV3+ [39], Segmenter [16], Segformer [15], and Swin Transformer [17]. The proposed CaveSeg model is pre-trained on ADE20k [48], followed by the unified training. The input-output resolution is set to \(960\times 540\) for all models; other SOTA model-specific parameters are chosen based on their recommended configurations. ## 5 Performance Analyses of CaveSeg ### Quantitative Evaluation We use three standard metrics [49, 50] for quantitative assessments: mean Intersection Over Union (**mIoU**), mean class-wise Accuracy (**mAcc**), and Average pixel Accuracy (**aAcc**). The IoU measures localization performance using the area of overlap between the predicted and ground truth labels. It is defined as \(IoU=\frac{Area\,of\,overlap}{Area\,of\,union}\). On the other hand, mAcc and aAcc represent the mean accuracy of each class category and the accuracy over all pixels, respectively. The quantitative results are presented in Table 1; all evaluations are performed on the **CaveSeg-Challenge** set, which we curated with \(250\) test samples. These samples include low-light noisy scenarios compiled in the CL-Challenge set (released earlier in [6]), as well as data from cave explorations in other locations such as the Blue Grotto and Orange Grove cave systems in FL, USA. As Table 1 shows, our proposed dataset and learning pipelines can achieve up to \(68\%\) accuracy for pixel-wise segmentation. The proposed CaveSeg model offers comparable performance margins despite having a significantly lighter architecture. The comparisons for computational efficiency are provided in Table 2. The CaveSeg model has fewer than \(50\%\) of the parameters of other competitive models and offers up to \(1.8\times\) faster inference rates. When we analyze the class-wise performance, we find that small and rare object categories such as arrows, cookies, and reels are challenging in general. ### Qualitative Evaluation Figure 5 compares the qualitative performance of CaveSeg with other SOTA models. A unique feature of CaveSeg and other transformer-based models is that they perform better in finding large and connected areas, _e.g_., categories such as first/second layer obstacles and open areas. The continuous regions segmented by the CaveSeg model facilitate a better understanding of the surroundings compared to DeepLabV3+ and FastFCN. This validates our intuition that the window-shifting technique can indeed extract global features more accurately. While the shifted window method focuses on global feature extraction, the deeper layers and multi-scale pooling of the PPM help to preserve the details of local features. The precise detection of the caveline, which is only a few pixels wide, further supports this design choice.
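For completeness, the three metrics defined above can all be derived from a single confusion matrix; the sketch below follows their standard definitions and is not tied to the MMSegmentation implementation.

```python
# mIoU, mAcc, and aAcc from a confusion matrix (standard definitions).
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> dict:
    """pred, gt: flat integer label arrays of equal length."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(cm, (gt, pred), 1)                       # rows: ground truth
    tp = np.diag(cm).astype(float)                     # area of overlap
    union = cm.sum(0) + cm.sum(1) - tp                 # area of union
    gt_total = cm.sum(1).astype(float)
    iou = np.divide(tp, union, out=np.zeros_like(tp), where=union > 0)
    acc = np.divide(tp, gt_total, out=np.zeros_like(tp), where=gt_total > 0)
    return {"mIoU": iou.mean(),           # mean intersection-over-union
            "mAcc": acc.mean(),           # mean class-wise accuracy
            "aAcc": tp.sum() / cm.sum()}  # average pixel accuracy
```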
\begin{table} \begin{tabular}{l|c|c|c} \hline **Method** & **mIoU**\(\uparrow\) & **mAcc**\(\uparrow\) & **aAcc**\(\uparrow\) \\ \hline FastFCN [40] & \(27.82\) & \(39.30\) & \(68.28\) \\ DeepLabV3+ [39] & \(30.22\) & \(42.02\) & \(68.61\) \\ Segmenter [16] & \(17.92\) & \(24.14\) & \(66.42\) \\ Segformer [15] & \(25.62\) & \(34.09\) & \(64.47\) \\ Swin Transformer [17] & \(32.06\) & \(32.55\) & \(66.53\) \\ **CaveSeg** (proposed) & \(32.64\) & \(34.82\) & \(68.01\) \\ \hline \end{tabular} \end{table} Table 1: Semantic segmentation performances of all models are compared on \(250\) test images from the CaveSeg-Challenge set; here, higher scores (\(\uparrow\)) are better for all metrics in consideration. \begin{table} \begin{tabular}{l|c|c|c} \hline **Method** & **\# Params**\(\downarrow\) & **Memory**\(\downarrow\) & **Speed**\(\uparrow\) \\ \hline FastFCN & \(64\) M & \(548.32\) MB & \(16.89\) FPS \\ DeepLabV3+ & \(58\) M & \(489.38\) MB & \(15.04\) FPS \\ Segmenter & \(86\) M & \(798.04\) MB & \(13.73\) FPS \\ Segformer & \(84\) M & \(956.62\) MB & \(10.92\) FPS \\ Swin Tx & \(121\) M & \(1380.16\) MB & \(12.39\) FPS \\ **CaveSeg** & \(38\) M & \(406.40\) MB & \(19.78\) FPS \\ \hline \end{tabular} \end{table} Table 2: Computational complexities of all models are compared based on the number of parameters, memory requirements in MegaBytes (MB), and inference speeds in FPS. All experiments are performed on an Nvidia\({}^{\rm TM}\) A\(100\) GPU server with \(16\) GB RAM. ## 6 Use Cases: Vision-based Cave Exploration and Semantic Mapping by AUVs ### Safe AUV Navigation Inside Underwater Caves The confined nature of the underwater cave environment makes obstacle avoidance a particularly important task. Safe navigation approaches, such as AquaNav proposed by Xanthidis _et al._ [51], and especially AquaVis [52], can generate smooth paths by avoiding obstacles. To this end, having a holistic semantic map of the scene can ensure that the AUV can safely follow the caveline and keep it in the FOV during cave exploration. Our previous work [6] demonstrates the utility of caveline detection and following for autonomous underwater cave exploration. While caveline detection is paramount, having semantic knowledge about the surrounding objects in the scene is essential to ensure safe AUV navigation. As shown in Fig. 6, cavelines can be obscured in particular areas due to occlusions and blending inside the cave passages. Hence, the dense semantic information provided by CaveSeg is important to ensure continuous tracking and re-discovery of the caveline and other navigation markers. A few more challenging scenarios are shown in Fig. 7, where simply following the caveline is not sufficient due to the cluttered scene geometry. With semantic knowledge of the first and second layer obstacles, the caveline, and the open areas, an AUV can plan its trajectory safely and efficiently. ### Working With and/or Alongside Human Scuba Divers The underwater cave environment is extremely hostile due to the lack of direct access to the surface. Cave divers enforce strict protocols on light configurations (always a primary and two backup lights), use of breathing gas (use only \(30\%\) on the way in, leaving \(30\%\) for the return and \(30\%\) for emergencies), and staying near the caveline, ensuring there is an uninterrupted line to open water [5]. Of particular importance is the right of way for an exiting team. As such, CaveSeg maintains a label for divers (see Fig. 7) to ensure appropriate actions by the AUV.
Specifically, in the presence of a diver, the lights of the vehicle will be dimmed so that the approaching diver can see the caveline. The AUV will reduce its speed and refrain from using the downward-facing propellers in order to avoid stirring up the sediment; see Fig. 8, where the teleoperated ROV disturbed the sediment on the floor. Finally, the vehicle will move away from the caveline towards the walls of the cave. Figure 5: A few qualitative performance comparisons of all models on the CaveSeg-Challenge test set are shown (results for only the four top-performing models are shown for clarity). Note that the object detection and localization accuracy for categories such as caveline, open area, and navigation markers are particularly important for AUV navigation. Figure 6: A BlueROV2 is being teleoperated through a cave passage where the caveline is almost invisible; note that the yellow wire on the top image is the ROV's tether, not a caveline. In such scenarios, an AUV can leverage the segmented obstacles and open areas in the scene (overlayed on the right column) toward planning a trajectory to rediscover and follow the caveline. ### 3D Semantic Mapping and State Estimation Another prominent use case of CaveSeg is the 3D estimation and reconstruction of the caveline for AUV navigation. Specifically, 2D caveline estimation can be combined with camera pose estimation from Visual-Inertial Odometry (VIO) to generate 3D estimates. This could potentially be used to compare and improve manual surveys of existing caves. Moreover, 3D caveline estimations can also be utilized to reduce uncertainty in VIO and SLAM systems, since the line directions in 3D point clouds can provide additional spatial constraints [53]. We demonstrate a sample result in Fig. 9, where _ray-plane triangulation_ of cavelines is achieved using Visual-Inertial pose estimation to enable 3D perception capabilities for underwater cave exploration. This experiment is performed on field data collected by ROVs in the Orange Grove cave system, FL, USA; we are currently exploring these capabilities for more comprehensive 3D semantic mapping of underwater caves.
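To illustrate the ray-plane triangulation used for Fig. 9, the sketch below back-projects a detected caveline pixel as a ray and intersects it with a plane hypothesized from a second, VIO-posed view; the pinhole model and all names are generic assumptions rather than the exact field pipeline.

```python
# A generic ray-plane triangulation sketch (assumed pinhole camera model).
import numpy as np

def backproject_ray(px, K, R, t):
    """World-frame ray (origin, unit direction) for pixel px = (u, v);
    K: 3x3 intrinsics, R/t: camera-to-world rotation and translation."""
    d_cam = np.linalg.inv(K) @ np.array([px[0], px[1], 1.0])
    d_world = R @ d_cam
    return t, d_world / np.linalg.norm(d_world)

def ray_plane_intersect(origin, direction, n, d):
    """Intersect the ray with the plane n.x + d = 0; None if parallel/behind."""
    denom = n @ direction
    if abs(denom) < 1e-9:
        return None
    s = -(n @ origin + d) / denom
    return origin + s * direction if s > 0 else None
```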
## 7 Conclusion and Future Work This paper presents CaveSeg, the first comprehensive dataset and deep visual learning pipeline for underwater cave scene segmentation. With the primary focus on vision-guided cave exploration and mapping by AUVs, we include semantic labels for object categories such as caveline, layered obstacles, arrows, cookies, attachment points, reels, as well as human scuba divers. We perform extensive benchmark evaluations of several SOTA models that demonstrate the utility of such a dataset for robust semantic segmentation of underwater cave scenes. We further propose a computationally light model that offers up to \(1.8\times\) faster inference in addition to providing SOTA performance. More importantly, we demonstrate that the predicted scene parsing labels can be utilized for safe AUV navigation inside underwater caves. We are currently investigating the scope of augmenting semantic maps with geometric information from a Visual-Inertial SLAM system such as SVIn2 [10]. This will potentially reveal the 3D position of the caveline as well as the relative location of obstacles and helpful markers. The synergy between geometry and semantics will provide additional visual features to further disambiguate weakly labeled areas during cave exploration. For future CaveSeg releases, we will augment semantic labels for objects such as vertical columns, stalagmites, and stalactites that are found in decorated caves. Figure 7: A few examples of semantic scene parsing by CaveSeg for important navigation information are shown. The first scene shows a traceable caveline towards the open area and surrounding obstacles. When no open areas are visible (second image), arrows on the caveline are helpful to know the exit direction of the cave. Lastly, the accurate detection of human divers is a crucial step for giving the right-of-way as well as for diver-robot cooperation. Figure 8: A BlueROV2 is being teleoperated inside the Blue Grotto cave, FL: (left) entering the cave; (right) stirring the sediments. Figure 9: Ray-plane triangulation of cavelines using VIO pose estimation and 2D line segments from the deep segmentation pipeline is shown. The reconstructed 3D lines are colored based on the average re-projection error within their neighborhood in a local connectivity graph. ## 8 Acknowledgements This research has been supported in part by the NSF grants \(1943205\) and \(2024741\). The authors would also like to acknowledge the help of the Woodville Karst Plain Project (WKPP), El Centro Investigador del Sistema Acuifero de Quintana Roo A.C. (CINDAQ), Global Underwater Explorers (GUE), and Ricardo Constantino, Project Baseline, in collecting data, providing access to challenging underwater caves, and mentoring us in underwater cave exploration. We also appreciate the help from Evan Kornacki for coordinating our field experimental setups. Lastly, we thank the RoboPI lab members and former students (Ailani Morales and Boxiao Yu) who helped us with image labeling and annotation tasks for CL-ViT and in the early stages of this project.
2309.06544
Forecasts on interacting dark energy with standard sirens
We present the predictions with standard sirens at Gravitational Waves detectors, such as the Laser Interferometer Space Antenna (LISA) and the Einstein Telescope (ET), for interacting dark energy theories. We focus on four models characterised by couplings between the dark energy field and the dark matter fluid arising from conformal or disformal transformations of the metric, along with an exponential self-interacting potential. To this purpose we construct mock catalogues and perform a Markov Chain Monte Carlo analysis by considering ET and LISA standard sirens, and also their combination with Baryon Acoustic Oscillations (BAO) and Supernovae Ia (SNIa) data. We find that in all the four models considered, the accuracy on the $H_0$ parameter increases by one order of magnitude at 1$\sigma$ when compared to the SNIa+BAO data set, possibly shedding light in the future on the origin of the $H_0$-tension. The combination of standard sirens with SNIa+BAO allows to improve the accuracy on some coupling and exponential parameters, hinting at future prospects for constraining interactions in the dark sector.
Elsa M. Teixeira, Richard Daniel, Noemi Frusciante, Carsten van de Bruck
2023-09-12T19:35:19Z
http://arxiv.org/abs/2309.06544v2
# Forecasts on interacting dark energy with standard sirens ###### Abstract We present the predictions with standard sirens at Gravitational Waves detectors, such as the Laser Interferometer Space Antenna (LISA) and the Einstein Telescope (ET), for interacting dark energy theories. We focus on four models characterised by couplings between the dark energy field and the dark matter fluid arising from conformal or disformal transformations of the metric, along with an exponential self-interacting potential. To this purpose we construct mock catalogues and perform a Markov Chain Monte Carlo analysis by considering ET and LISA standard sirens, and also their combination with Baryon Acoustic Oscillations (BAO) and Supernovae Ia (SNIa) data. We find that in all the four models considered, the accuracy on the \(H_{0}\) parameter increases by one order of magnitude at \(1\sigma\) when compared to the SNIa+BAO data set, possibly shedding light in the future on the origin of the \(H_{0}\)-tension. The combination of standard sirens with SNIa+BAO allows to improve the accuracy on some coupling and exponential parameters, hinting at future prospects for constraining interactions in the dark sector. ###### Contents * I Introduction * II Gravitational Waves as Standard Sirens * III Methodology and Data Sets * III.1 Simulated Cosmology * III.2 Distribution of Simulated Merger Events * III.3 Simulation of Measurements and Errors * III.4 Data Sets and Likelihoods * IV Forecast Results * IV.1 Conformal Coupling * IV.2 Kinetic Conformal Coupling * IV.3 Disformal Coupling * IV.4 Mixed Conformal-Disformal Coupling * V Conclusions ## I Introduction Understanding the nature of the dark sector of the Universe is one of the greatest endeavours of Cosmology at the present. This comprises the weakly interacting dark matter (DM) - responsible for the formation and dynamics of structures in the Universe - and dark energy (DE) - the driver of the late time cosmic acceleration. Together these components dominate about 95% of the energy budget of the Universe. In the \(\Lambda\)-cold-dark-matter (\(\Lambda\)CDM) scenario, i.e. the standard model of cosmology, DE is portrayed simply as a cosmological constant, \(\Lambda\). This model comes with some theoretical issues [1; 2; 3], among which is the fact that fundamental theories do not properly account for the currently measured small value of the cosmological constant. \(\Lambda\)CDM also requires a primordial inflationary period to explain the geometrical flatness, Cosmic Microwave Background (CMB) smoothness, and initial conditions for large-scale structures. More recently, observational tensions on the value of the cosmological parameters \(H_{0}\)[4; 5; 6; 7; 8; 9; 10] and \(\sigma_{8}\)[11; 12; 13; 14; 15; 16], measured by early- and late-Universe probes, increased the motivation to investigate alternative models of gravity [17]. Consequently, alternative theories are explored by cosmologists in which \(\Lambda\) is promoted to a dynamical DE scalar field, \(\phi\), namely _quintessence_[18; 19] (see [20] for a review), which evolves in time according to its self-interaction potential. While in the standard \(\Lambda\)CDM scenario the two dark components do not directly couple with each other, in a dynamical DE model one can instead consider that they experience some _non-minimal_ interaction. Such constructed models are referred to as _coupled quintessence models_[21; 22]. 
The dynamics of the field, along with the dark interaction, could provide a more natural explanation of the accelerated expansion, while also addressing the observational tensions [23]. Nevertheless, the coupling can be formalised at the Lagrangian level through what is known as a conformal/disformal transformation of the metric tensor [24; 25; 26; 27; 28; 29; 30]. If this transformation depends directly on the quintessence field, then this is physically equivalent to considering that the DM particles propagate on the geodesics of the transformed metric, \(\bar{g}_{\mu\nu}\). In the conformal case, this is achieved from a re-scaling of the metric and, consequently, of time- and space-like norms and intervals alike, while preserving the light cones: \[\bar{g}_{\mu\nu}=C(\phi)g_{\mu\nu}\,, \tag{1}\] where \(C\) is the conformal function. These find important applications in modified gravity theories as they preserve the structure of Scalar-Tensor theories of the Brans-Dicke form [31]. Alternatively, one can consider that the metric transformation should depend on the first-order partial derivatives of the scalar field as well. This results in a disformal transformation: \[\bar{g}_{\mu\nu}=C(\phi,X)g_{\mu\nu}+D(\phi)\partial_{\mu}\phi\partial_{\nu}\phi\,, \tag{2}\] where \(C\) and \(D\) are the conformal and disformal functions, respectively and \(X=-g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi/2\). This gives rise to a more intricate scenario, with a distortion of the metric defined directionally according to the gradient of \(\phi\). First introduced by Bekenstein [25], the disformal transformations re-surged in the cosmological literature [27] when, in analogy to the conformal case, it was shown that they preserve a more general class of Scalar-Tensor theories categorised in the Horndeski Lagrangian [28]. Disformal transformations in cosmology arise naturally in brane-world models [32; 33] and have been the focus of many theoretical proposals for the nature of the dark sector and their interactions [34; 35; 36; 37; 38; 39; 40; 41; 42]. The stability conditions for the functions \(C\) and \(D\) have been discussed in Refs. [26; 27; 28; 40] and the case in which \(D\equiv D(\phi,X)\) has been discussed in Refs. [43; 44]. The coupling in the dark sector gives rise to an additional gravitational fifth force in the Universe between DM particles, mediated by the DE field. This new force leaves distinct features in the background equations, as well as signatures in the cosmological density perturbations that describe the formation of structures [45]. Although these deviations from the benchmark model are constrained to be small (especially at the background level), they are still expected to leave detectable, characteristic observational imprints that the data can probe. These are essential to test the viability of such alternative models by identifying the range of validity of the parameter space and the robustness of its predictions. In the past few years, we have witnessed the rise of gravitational wave (GW) astronomy as a new independent probe of gravitational effects [46]. An accurate redshift-luminosity relation can be constructed when GW events are combined with an electromagnetic (EM) counterpart multi-messenger signal. These observations become _standard sirens_[47], analogous to the standard candles used in local EM measurements. 
So far, only one GW event, GW170817, with a corresponding EM counterpart, GRB170817A, has been detected by the Laser Interferometer Gravitational-Wave Observatory (LIGO)-Virgo and the International Gamma-ray Astrophysics Laboratory (INTEGRAL)-Fermi collaborations, respectively, and which originated from the merger of a binary pair of neutron stars [48; 49]. This single combined detection had a strong impact on the allowed modifications to the gravitational interaction by ruling out many proposals [50; 51; 52; 53; 54] with many other models further constrained [55; 56; 57; 58; 59]. Current GW detectors, (advanced) Virgo [60], (advanced) LIGO [61] and the Kamioka Gravitational Wave Detector (KAGRA) [62], are second-generation (2G) ground-based detectors, with another one under planning (2030), the Indian Initiative in Gravitational-wave Observations (IndIGO) [63]. The increasing number of detectors will boost the capabilities of GW astronomy both in the number of confirmed events (a larger volume of the Universe is covered) and sky localisation (a better triangulation of the source), which will also aid in the search for a counterpart. However, 2G detectors are limited in their sensitivity and future third-generation (3G) ground-based detectors are designed to become more sensitive, precise and capable of probing a larger range of frequencies. Special emphasis should be given to the Einstein Telescope (ET), which is expected to improve the current sensitivity by a factor of 10 [64]. ET will also extend the redshift range, _e.g._\(z\sim 5\) for binary black-holes (BBHs) compared to \(z\sim 0.5\) for 2G detectors [65]. The number of detectable multi-messenger events is expected to reach tens of thousands of standard sirens [66]. While these ground-based detectors will cover a frequency band in the range \(1\lesssim f\lesssim 10^{3}\) Hz [67], the upcoming space-based 3G detectors, such as the Laser Interferometer Space Antenna (LISA) [68] will have a peak sensitivity near \(10^{-3}\) Hz and will be able to detect GW events beyond \(z=20\), probing a wide range of targets. There are many proposals of 3G GW observatories, such as the DECI-hertz Interferometer Gravitational wave Observatory (DECIGO) [69]. However, we have opted to focus our analysis on ET and LISA covering ground- and space-based experiments. In this paper, we aim at forecasting the constraining power of future 3G detectors; in particular we will focus on ET and LISA. Given the potential of such missions we are interested in assessing their ability to constrain modifications to General Relativity (GR) as well as to provide complementary constraints on \(H_{0}\) using standard sirens. We investigate four models characterised by coupling functions between the DM and DE fields: a conformally coupled quintessence field [21; 22], characterised by a conformal coupling in the form of an exponential function of the scalar field; a kinetic model [70] with a conformal function given by a power law of the kinetic term of the scalar field; a purely disformally coupled quintessence [29; 38] with a constant disformal coupling and a mixed disformally coupled quintessence [42; 43] which combines the previous model with an exponential conformal coupling. All the scenarios considered are characterised by the same simple exponential potential which introduces one more free parameter. The models we consider differ considerably in the way the (effective) coupling between DM and DE evolves and, in particular, they differ in their background evolution. 
As such, they represent a well studied sample of models of interacting dark energy suitable for our analysis. We construct our pipeline following the methodology presented in [71; 72; 73; 74], providing for the first time GW forecasts on the free parameters of the four models in question. This paper is organised as follows. We start by giving a brief introduction to the physics of standard sirens in Sec. II. Sec. III provides an overview of the methodology used and the details on the simulation of the standard siren events developed for this study, as well as a brief account of the data set combinations considered. We outline the criteria for particular catalogue choices, and discuss the sampling method employed for the forecasts. In Sec. IV we introduce each of the four models under study and present the results of our analysis, emphasising their significant implications. Lastly, in Sec. V, we summarise our results and outline our concluding thoughts and future prospects. ## II Gravitational Waves as Standard Sirens Interferometers are sensitive to the strain \(h(t)\) from a GW event, which in the transverse-traceless gauge is described as [71] \[h(t)=F_{\times}(\theta_{0},\phi_{0},\psi)h_{\times}(t)+F_{+}(\theta_{0},\phi_{0},\psi)h_{+}(t)\,, \tag{3}\] where \(\theta_{0}\) and \(\phi_{0}\) define the initial location of the event relative to the detector in polar coordinates, \(\psi\) is the polarisation of the GW event, and \(t\) is cosmic time. We adopt a random sampling method in the range \([0,2\pi]\) for \(\phi_{0}\) and \([0,\pi]\) for both \(\theta_{0}\) and \(\psi\). The factors \(F_{\times,+}\) describe the antenna beam pattern functions, \[\begin{split} F_{\times}^{(1)}&=\frac{\sqrt{3}}{2}\left[\frac{1}{2}(1+\cos^{2}(\theta))\cos(2\phi)\sin(2\psi)\right.\\ &\qquad\qquad\left.+\cos(\theta)\sin(2\phi)\cos(2\psi)\right]\,,\\ F_{+}^{(1)}&=\frac{\sqrt{3}}{2}\left[\frac{1}{2}(1+\cos^{2}(\theta))\cos(2\phi)\cos(2\psi)\right.\\ &\qquad\qquad\left.-\cos(\theta)\sin(2\phi)\sin(2\psi)\right]\,.\end{split} \tag{4}\] The superscript number indicates which interferometer is being considered; _e.g_., LISA only has two separate interferometers and therefore \(F^{(3)}=0\). Since the detectors are spatially distributed in an equilateral triangle formation, the other two antenna pattern functions relate to \(F_{\times,+}^{(1)}\) as \[\begin{split} F_{\times,+}^{(1)}(\theta,\phi,\psi)&=F_{\times,+}^{(2)}(\theta,\phi+\frac{2\pi}{3},\psi)\\ &=F_{\times,+}^{(3)}(\theta,\phi+\frac{4\pi}{3},\psi)\,.\end{split} \tag{5}\] As LISA is sensitive to lower frequencies, and equivalently larger masses, it can detect GW events of inspiral mergers lasting over several months, during which the interferometer's position will change relative to the event. This change in position is accounted for following the method described in [72]. The timescale of the event is described as \[t=t_{c}-5(8\pi f)^{-8/3}M_{c}^{-5/3}\,. \tag{6}\] Here \(t_{c}\) is the time of the merger, \(t\) indicates the time at which LISA detects the merger, \(f\) is the frequency of the GW, and \(M_{c}\) is the chirp mass. The location angles are updated accordingly: \[\theta=\cos^{-1}\left[\frac{1}{2}\cos(\theta_{0})-\frac{\sqrt{3}}{2}\sin(\theta_{0})\cos\left(\frac{2\pi t}{T}-\phi_{0}\right)\right]\,, \tag{7}\] \[\phi=\frac{2\pi t}{T}-\tan^{-1}\left[\frac{\sqrt{3}\cos(\theta_{0})+\sin(\theta_{0})\cos\left(\frac{2\pi t}{T}-\phi_{0}\right)}{2\sin(\theta_{0})\cos\left(\frac{2\pi t}{T}-\phi_{0}\right)}\right]\,, \tag{8}\] which, in turn, are used to update the beam pattern functions. Here we have specified the period, \(T\), as the orbit around the Sun.
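For concreteness, Eqs. (4) and (5) translate directly into code: the sketch below evaluates the beam pattern factors for the three interferometers of a triangular detector (function and variable names are our own).

```python
# Antenna beam patterns of Eqs. (4)-(5) for a triangular detector.
import numpy as np

def antenna_patterns(theta, phi, psi):
    """Return ([F_plus], [F_cross]) for the three interferometers."""
    F_plus, F_cross = [], []
    for k in range(3):                       # Eq. (5): arms rotated by 2*pi/3
        ph = phi + k * 2.0 * np.pi / 3.0
        a = 0.5 * (1.0 + np.cos(theta) ** 2) * np.cos(2.0 * ph)
        b = np.cos(theta) * np.sin(2.0 * ph)
        F_plus.append(np.sqrt(3.0) / 2.0 * (a * np.cos(2 * psi) - b * np.sin(2 * psi)))
        F_cross.append(np.sqrt(3.0) / 2.0 * (a * np.sin(2 * psi) + b * np.cos(2 * psi)))
    return F_plus, F_cross
```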
While the individual masses of the objects are not directly discernible, GW detectors are sensitive to the chirp mass, a collective mass quantity related to the frequency evolution of the signal emitted before the merger, during the inspiral phase of the binary [75], defined as \[M_{c}=(1+z)\left(\frac{(m_{1}\,m_{2})^{3}}{m_{1}+m_{2}}\right)^{1/5}\,, \tag{9}\] where \((1+z)\) is a conversion redshift factor from the physical to the observational chirp mass. The Fourier transform of the strain using the stationary phase approximation [73] reads \[\mathcal{H}=\mathcal{A}f^{-7/6}e^{i\Psi(f)}\,, \tag{10}\] where \(\Psi(f)\) is the phase of the waveform. Notice that when \(\mathcal{H}\) is inserted into Eq. (14), the exponential term disappears, meaning that the \(\Psi(f)\) factor can be discarded for this analysis. \(\mathcal{A}\) is the Fourier amplitude of the waveform, \[\mathcal{A}=\frac{M_{c}^{5/6}}{d_{L}}\pi^{-2/3}\sqrt{\frac{5}{96}}\times\sqrt{[F_{+}(1+\cos^{2}(l))]^{2}+(2F_{\times}\cos(l))^{2}}\,, \tag{11}\] where \(d_{L}\) is the luminosity distance from the merger and \(l\) is the inclination angle, which we sample randomly between \([0^{\circ},20^{\circ}]\), as that is the maximum detection inclination range. LISA has been designed to effectively measure frequencies as low as \(f_{min}=1\times 10^{-4}\,\mathrm{Hz}\), which is why it stands as a promising probe of extreme mass ratio inspiral (EMRI) and binary massive black hole (BMBH) mergers. For the purpose of the simulations, the upper bound frequency of LISA is determined by two quantities: the structure of LISA itself and the last stable orbit of the merging system. LISA can detect frequencies up to \(f_{max}=c\,(2\pi L)^{-1}\), where \(L\) is the length of LISA's interferometer arm, taken to be \(2.5\,\mathrm{Gm}\), and \(c\) is the speed of light. Moreover, the total mass of an orbiting system is inversely proportional to its measured frequency, implying that even though massive mergers give rise to large detection amplitudes, the frequency of the most massive systems can fall below \(f_{min}\). Therefore, if the last stable orbit frequency, \(f_{LSO}=(6^{3/2}2\pi M_{obs})^{-1}\), with \(M_{obs}\) being the observed total mass, is found to be lower than \(f_{min}\), we disregard that simulated event. If it instead lies between \(f_{min}\) and \(f_{max}\), then \(f_{LSO}\) becomes the new maximum frequency for that event.
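The quantities entering Eqs. (9)-(11) and the frequency cuts above can be assembled as in the following sketch, written in geometric units with masses converted to seconds; all function names are illustrative.

```python
# Chirp mass, f_LSO cut, and Fourier amplitude of Eqs. (9)-(11); a sketch.
import numpy as np

MSUN_S = 4.925491e-6                 # solar mass in seconds (G = c = 1)

def chirp_mass_obs(m1, m2, z):
    """Observed (redshifted) chirp mass of Eq. (9), in solar masses."""
    return (1.0 + z) * (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def f_lso(m1, m2, z):
    """Last-stable-orbit frequency in Hz for M_obs = (1+z)(m1+m2)."""
    M_obs = (1.0 + z) * (m1 + m2) * MSUN_S
    return 1.0 / (6.0 ** 1.5 * 2.0 * np.pi * M_obs)

def fourier_amplitude(Mc_obs_s, dL_s, F_plus, F_cross, incl):
    """Eq. (11); Mc_obs_s and dL_s in seconds, incl in radians."""
    geom = np.sqrt((F_plus * (1.0 + np.cos(incl) ** 2)) ** 2
                   + (2.0 * F_cross * np.cos(incl)) ** 2)
    return np.sqrt(5.0 / 96.0) * np.pi ** (-2.0 / 3.0) \
        * Mc_obs_s ** (5.0 / 6.0) / dL_s * geom
```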
## III Methodology and Data Sets Given the main objective of this study, we create simulated data that forecasts the potential future observations of standard siren events. Specifically, we focus on those that could be detected by ET and LISA. Below, we provide a concise overview of the samples we have generated along with the methodology and the data combinations used. ### Simulated Cosmology To simulate GW catalogues from future probes of black hole mergers, the following cosmological quantities are required: the redshift of the merger, \(z\), the value of the Hubble rate at merger, \(H(z)\), its comoving and luminosity distance, \(d_{c}(z)\) and \(d_{L}(z)\) respectively, and the cosmic time between the merger and measurement, \(t\). For this purpose, we resort to the public Einstein-Boltzmann code CLASS1 [76, 77, 78], which we extend to accommodate general models of interacting dark energy. This new patch is then used to provide a _mock Universe_ adopting a flat \(\Lambda\)CDM as the fiducial model to simulate the GW data, according to the best-fit cosmological parameters of the _Planck_ 2018 data release [5]. These are: the Hubble parameter at present time, \(H_{0}=67.32\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\), the density of baryons, \(\Omega_{b}h^{2}=0.022383\) (with \(h=H_{0}/100\)) and the density of cold dark matter, \(\Omega_{c}h^{2}=0.12011\). Furthermore, we are also interested in the derived quantity \(\Omega_{m}^{0}=\Omega_{b}+\Omega_{c}\), which for the fiducial _Planck_ case is \(\Omega_{m}^{0}=0.3144\). Footnote 1: [https://github.com/lesgourg/class_public](https://github.com/lesgourg/class_public) Provided with the background cosmology, we simulate the merger events to determine the redshift-luminosity relation. First, we generate a redshift distribution of events weighted by a probability distribution. The characteristics of these events, such as the chirp mass, are simulated using a uniform distribution. Although each instance of running the script will yield a different set of simulated data, the resulting conclusions will be unaltered, as the fiducial parameters constrain the mock data. Once the mergers have been simulated, we emulate the measurement process from the inspiral, yielding the errors associated with each event. As such, simulated data points are removed if they produce a signal-to-noise ratio below the threshold. ### Distribution of Simulated Merger Events ET is designed to probe a range of frequencies, \(f\), similar to that of LIGO, thereby probing merger events of nearby compact objects such as binary neutron stars (BNS) in the mass range of \([1,2],[1,2]\,\mathrm{M_{\odot}}\), and black hole-neutron star binaries (BHNS) in the mass range \([3,10],[1,2]\,\mathrm{M_{\odot}}\), with the \([\cdot,\cdot]\) notation indicating the uniformly distributed mass ranges considered. Advanced LIGO claims a ratio of BHNS to BNS merger events of \(\sim 0.03\) [79]. The redshift probability distribution of these events is proportional to \[P\propto\frac{4\pi d_{c}^{2}(z)R(z)}{(1+z)H(z)}\,, \tag{12}\] where the comoving distance and the Hubble parameter are taken at various redshifts determined by CLASS. \(R(z)\) stands for the merger rate, which, at a linear approximation level, is [74] \[R=\begin{cases}1+2z&\text{if }z<1\,,\\ \frac{3}{4}(5-z)&\text{if }1\leq z<5\,,\\ 0&\text{otherwise}\,.\end{cases} \tag{13}\] On the other hand, LISA will target lower frequencies when compared with other proposed 3G detectors, implying sensitivity to events from larger mass binary systems since \(f\propto M^{-1}\). Therefore we focus on simulating the detection of events from EMRIs and BMBHs in the ranges \([1-30],[10^{4}-10^{8}]\,\mathrm{M_{\odot}}\) [80] and \([10^{4}-10^{8}],[10^{4}-10^{8}]\,\mathrm{M_{\odot}}\) [81], respectively.
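Referring back to Eqs. (12) and (13), merger redshifts can be drawn by inverse-CDF sampling on a tabulated background, as sketched below; in the actual pipeline \(d_c(z)\) and \(H(z)\) come from the modified CLASS code, and the grid resolution is an assumption.

```python
# Drawing merger redshifts from the unnormalised probability of Eq. (12).
import numpy as np

def merger_rate(z):
    """Piecewise-linear R(z) of Eq. (13)."""
    return np.where(z < 1.0, 1.0 + 2.0 * z,
                    np.where(z < 5.0, 0.75 * (5.0 - z), 0.0))

def sample_redshifts(n_events, z_grid, d_c, H, rng=None):
    """Inverse-CDF sampling; d_c and H are tabulated on z_grid."""
    rng = rng or np.random.default_rng()
    p = 4.0 * np.pi * d_c**2 * merger_rate(z_grid) / ((1.0 + z_grid) * H)
    cdf = np.cumsum(p)
    cdf /= cdf[-1]
    return np.interp(rng.random(n_events), cdf, z_grid)
```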
The number of detected BMBH to EMRI events is estimated to follow a \(2:1\) ratio according to the mission's proposal [82, 83]. Although in principle LISA will also be able to probe mergers of binary intermediate-mass black holes (IMBHs) and binary compact objects, we opt to discard these from the simulations. This is due to the fact that there is no definitive observational proof of IMBHs, and expected events from binary compact objects will only be observed at redshifts \(z\approx 3\) [84]. These events are insignificant for our purposes, since we are interested in the higher range of redshifts for our cosmology. Considering events involving BMBHs only, the redshift probability distributions are based on the histogram for the L6A2M5N2 mission specification [81], which considers three formation processes of BMBHs. We consider the light seed model (pop III), which attributes the formation of BMBHs to the remnants of population III stars around \(z=15-20\). In [81], two additional scenarios for massive black hole formation were investigated, namely the delay and no delay scenarios. These cases involve the collapse of gas in a galactic centre at \(z=15-20\), leading to the formation of a black hole through a heavy seed mechanism with and without a delay between galaxy merger and the merger of the central massive black hole. Further information on these scenarios can be found in Ref. [85]. In our investigation we provide mock data and obtain forecasts for both the delay and no delay cases. However, the analysis reveals that the predicted constraining power from these models shows no actual improvement compared to the pop III case. Consequently, in this paper, we focus solely on the pop III model, as it proves sufficient to forecast the constraining power of LISA. ### Simulation of Measurements and Errors To simulate the errors associated with the standard siren catalogue, we follow the methodology of [71, 72, 73, 74]. An apparent detection of a GW event is assessed by evaluating the signal-to-noise ratio (SNR), \(\rho\), and only confirmed if \(\rho>8\). The SNR is defined as \[\rho_{1,2,3}^{2}=4\int_{f_{min}}^{f_{max}}df\,\frac{|\mathcal{H}|^{2}}{S_{h}}\,, \tag{14}\] where the number labels indicate the interferometer being considered. \(\mathcal{H}\) has been defined in Eq. (10) and \(S_{h}\) is the noise power spectral density, an SNR weighting function that accounts for the particular properties of the instruments used. For ET, in particular, \(S_{h}\) is designed to follow \[S_{h}^{\rm(ET)}=S_{0}\left(x^{p_{1}}+a_{1}x^{p_{2}}+a_{2}\frac{1+\sum_{n=1}^{6}b_{n}x^{n}}{1+\sum_{m=1}^{4}c_{m}x^{m}}\right)\,, \tag{15}\] where \(x=f/(200\,\mathrm{Hz})\), \(S_{0}=1.449\times 10^{-52}\) Hz, \(p_{1}=-4.05\), \(p_{2}=-0.69\), \(a_{1}=185.62\), \(a_{2}=232.56\), \(b_{n}=\{31.18,-64.72,52.24,-42.16,10.17,11.53\}\), and \(c_{m}=\{13.58,-36.46,18.56,27.43\}\), assuming a lower cutoff at \(f=1\) Hz. On the other hand, for LISA, \(S_{h}\) depends on the instrumental (or shot) noise, \(S_{inst}\), the noise from low-level acceleration, \(S_{acc}\), and the confusion background noise, \(S_{conf}\) [85]: \[S_{h}^{\rm(LISA)}=\frac{20}{3}\frac{4S_{acc}+S_{inst}+S_{conf}}{L^{2}}\left[1+\left(\frac{fL}{0.81c}\right)\right]\,, \tag{16}\] where \(S_{acc}=9\times 10^{-30}/(2\pi f)^{4}(1+10^{-4}/f)\), \(S_{inst}=2.22\times 10^{-23}\) and \(S_{conf}=2.65\times 10^{-23}\). Therefore, the total SNR contribution for each detector is given by combining Eq. (10) with either Eq. (15) or Eq. (16) for ET and LISA, respectively: \[\rho_{tot}=\sqrt{\rho_{1}^{2}+\rho_{2}^{2}+\rho_{3}^{2}}\,. \tag{17}\]
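A sketch of the ET noise curve of Eq. (15) and the SNR integral of Eq. (14) is shown below; the log-spaced grid and trapezoidal quadrature are our own simplifications.

```python
# ET noise PSD of Eq. (15) and the single-interferometer SNR of Eq. (14).
import numpy as np

S0, p1, p2, a1, a2 = 1.449e-52, -4.05, -0.69, 185.62, 232.56
b = [31.18, -64.72, 52.24, -42.16, 10.17, 11.53]
c = [13.58, -36.46, 18.56, 27.43]

def S_h_ET(f):
    x = f / 200.0                                     # x = f / (200 Hz)
    num = 1.0 + sum(bn * x ** (n + 1) for n, bn in enumerate(b))
    den = 1.0 + sum(cm * x ** (m + 1) for m, cm in enumerate(c))
    return S0 * (x ** p1 + a1 * x ** p2 + a2 * num / den)

def snr_single(amplitude, f_min=1.0, f_max=1e3, n_grid=2000):
    """rho_k from Eq. (14), with |H(f)| = A f^{-7/6} as in Eq. (10)."""
    f = np.logspace(np.log10(f_min), np.log10(f_max), n_grid)
    integrand = (amplitude * f ** (-7.0 / 6.0)) ** 2 / S_h_ET(f)
    return np.sqrt(4.0 * np.trapz(integrand, f))
```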
The instrumental error in the luminosity distance is determined _via_ the Fisher matrix, \[\sigma_{d_{L}}^{inst}\approx\left\langle\frac{\partial\mathcal{H}}{\partial d_{L}},\frac{\partial\mathcal{H}}{\partial d_{L}}\right\rangle^{-\frac{1}{2}}\,, \tag{18}\] following [72]. Since \(\mathcal{H}\propto d_{L}^{-1}\), this results simply in \[\sigma_{d_{L}}^{inst}\approx\frac{2d_{L}}{\rho}\,, \tag{19}\] where the factor of 2 accounts for the symmetry in the inclination angle, which actually ranges from \(-20^{\circ}\) to \(20^{\circ}\). The error due to gravitational lensing is \[\sigma_{d_{L}}^{len}=\frac{d_{L}}{2}\times 0.066\left[4(1-(1+z)^{-1/4})\right]^{1.8}\,, \tag{20}\] reduced by a half to account for both the merger and ringdown of the event. Being space-based, LISA is also subject to an error associated with the peculiar velocities of GW sources [86]: \[\sigma_{d_{L}}^{pec}=d_{L}\frac{\sqrt{\langle v^{2}\rangle}}{c}\left[1+\frac{c(1+z)}{Hd_{L}}\right]\,, \tag{21}\] with an estimate of the peculiar velocity of the host galaxy with respect to the Hubble flow of \(\sqrt{\langle v^{2}\rangle}=500\,\mathrm{km\,s^{-1}}\). Bringing all the contributions together, the total error in the luminosity distance is simply a combination of the errors in Eqs. (19)-(21): \[\sigma_{d_{L}}=\sqrt{(\sigma_{d_{L}}^{inst})^{2}+(\sigma_{d_{L}}^{len})^{2}+(\sigma_{d_{L}}^{pec})^{2}}\,. \tag{22}\] The simulation allows us to interpolate any number of events over a continuous redshift distribution in the range \(0<z\lesssim 5\) for ET and \(0<z\lesssim 10\) for LISA. However, the number of mergers detected by ET will depend on factors such as running costs and the complementary detection with other experiments [71]. ET is expected to report more than \(10^{4}\) mergers yearly. However, due to the scarcity of EM counterpart signals, the predicted number of detectable mergers with an actual EM counterpart over the course of 10 years is approximately 200 [87]. According to [81], LISA's number of detected mergers, for a 10-year mission proposal, is 56 events. To incorporate uncertainty into the luminosity distance of each merger, we apply a Gaussian distribution centred around the background cosmology. The standard deviation for this distribution is set to the calculated errors, \(\sigma_{d_{L}}\). This introduces artificial randomness around each merger, leading to a larger deviation from \(\Lambda\)CDM in LISA compared to ET. The reason for this difference is that LISA probes larger redshifts, which are associated with larger errors, resulting in a broader spread of the data, as depicted in Fig. 1. Figure 1: Mock data from ET (red circle markers) and LISA (blue square markers), for the fiducial model, \(\Lambda\)CDM, shown by the grey dotted line.
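Combining Eqs. (19)-(22), the per-event error budget and the Gaussian scatter described above can be sketched as follows, assuming \(d_L\) in Mpc and \(H\) in km s\(^{-1}\) Mpc\(^{-1}\); the peculiar-velocity term is switched on for LISA only.

```python
# Total luminosity-distance error of Eq. (22) and the mock-data scatter.
import numpy as np

C_KMS = 2.99792458e5                           # speed of light [km/s]

def sigma_dL(d_L, z, rho, H, space_based=False, v_pec=500.0):
    s_inst = 2.0 * d_L / rho                                                # Eq. (19)
    s_lens = 0.5 * d_L * 0.066 * (4.0 * (1.0 - (1.0 + z) ** -0.25)) ** 1.8  # Eq. (20)
    s2 = s_inst ** 2 + s_lens ** 2
    if space_based:                                                         # Eq. (21)
        s2 += (d_L * v_pec / C_KMS * (1.0 + C_KMS * (1.0 + z) / (H * d_L))) ** 2
    return np.sqrt(s2)                                                      # Eq. (22)

def scatter_event(d_L_fid, sigma, rng=None):
    """Draw the 'observed' distance from a Gaussian around the fiducial d_L."""
    rng = rng or np.random.default_rng()
    return rng.normal(d_L_fid, sigma)
```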
### Data Sets and Likelihoods To examine the fit of the simulated data to the coupled quintessence models considered in this study, we employ the Markov Chain Monte Carlo (MCMC) method, using samples generated from our modified version of CLASS interfaced with the Monte Python sampler [88; 89]. In particular, we resort to the Nested Sampling algorithm through the MultiNest2 [90; 91; 92] and PyMultiNest3 [93] packages to estimate observational constraints on the free parameters, instead of the traditional Metropolis-Hastings algorithm. The Metropolis-Hastings algorithm struggles to explore the full extent of the degeneracies between the parameters, resulting in false peaks in the posterior distribution from which the sampler cannot move away. Nested sampling is able to explore the full extent of the degeneracies, as it is much better suited for multi-modal sampling (see Sec. IV) and other more complicated distributions. Subsequently, we analyse the MCMC chains and present the results using the GetDist4 Python package [94]. Footnote 4: [https://github.com/cmbant/getdist](https://github.com/cmbant/getdist) The likelihood function for the simulated data set of standard siren GW events is constructed according to the effective Gaussian distribution: \[\ln\mathcal{L}_{SS}=-\frac{1}{2}\sum_{i=1}^{n}\left[\frac{d_{SS}^{\rm(obs)}(z_{i})-d_{SS}(z_{i})}{\sigma_{d_{L,i}}}\right]^{2}\,, \tag{23}\] where \(d_{SS}^{\rm(obs)}(z)\) is the observed luminosity distance, which in this case corresponds to the samples generated according to the procedure outlined above; \(d_{SS}(z)\) is the model-dependent theoretical prediction for the luminosity distance of the event, computed numerically with the modified CLASS code; \(\sigma_{d_{L}}\) is the total error in the luminosity distance, as defined in Eq. (22); and \(n\) is the number of observed events. Since we want to forecast the constraining power of standard siren data probed by ET and LISA on coupled quintessence models, we assess the independent and combined constraints with _current_ background data. This allows for a direct comparison of whether GW catalogues will improve the constraints on \(\{\Omega_{m}^{0},H_{0}\}\) and on the model-specific parameters affecting the background evolution. In particular, we include baryonic acoustic oscillations (BAO) data from the Sloan Digital Sky Survey (SDSS) DR7 Main Galaxy Sample [95], the SDSS DR12 consensus release [96] and the 6dF Galaxy Survey [97], in combination with distance _moduli_ measurements of 1048 type Ia Supernovae (SNIa) from Pantheon [98]. This combined data set is referred to as "SNIa+BAO". Our analysis involves a set of free sampling parameters, including the baseline \(\Lambda\)CDM cosmological parameters \((\Omega_{m}^{0},H_{0})\) and the parameters associated with each coupled quintessence model, for which we consider as fiducial value their \(\Lambda\)CDM limit. The models discussed in Sec. IV reduce to \(\Lambda\)CDM in the following limits: \(\lambda=0\) and \(\beta=0\) for IV.1; \(\lambda=0\) and \(\alpha=0\) for IV.2; \(\lambda=0\) and \(D_{0}=0\) for IV.3; \(\lambda=0\), \(\beta=0\) and \(D_{0}=0\) for IV.4. We adopt flat priors for all parameters within the ranges specified in Table 1. \begin{table} \begin{tabular}{|c|c|c|} \hline Model & Parameter & Prior \\ \hline \multirow{4}{*}{All} & \(\Omega_{b}h^{2}\) & \([0.018,0.03]\) \\ & \(\Omega_{c}h^{2}\) & \([0.1,0.2]\) \\ & \(h\) & \([0.6,0.8]\) \\ & \(\lambda\) & \([0,2]\) \\ \hline IV.1 and IV.4 & \(\beta\) & \([0,2]\) \\ \hline IV.2 & \(\alpha\) & \([0,0.001]\) \\ \hline IV.3 and IV.4 & \(D_{0}/\text{meV}^{-1}\) & \([0,2]\) \\ \hline \end{tabular} \end{table} Table 1: Flat priors on the cosmological and model parameters sampled in Sec. IV.
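Eq. (23) is a simple Gaussian log-likelihood over the catalogue; a minimal sketch of how it would be exposed to the sampler is given below, with `d_theory` standing in for the model-dependent luminosity distance computed by the modified CLASS code.

```python
# Standard-siren log-likelihood of Eq. (23); a minimal sketch.
import numpy as np

def ln_like_standard_sirens(z_obs, dL_obs, sigma_dL, d_theory):
    """z_obs, dL_obs, sigma_dL: arrays over the n mock events;
    d_theory: callable returning the model prediction d_SS(z)."""
    residuals = (dL_obs - d_theory(z_obs)) / sigma_dL
    return -0.5 * np.sum(residuals ** 2)
```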
## IV Forecast Results In what follows, we employ the methodology and the data sets discussed in Sec. III to investigate the power that LISA and ET standard sirens have in constraining the cosmological parameters \(\{\Omega_{m}^{0},H_{0}\}\), the model-dependent conformal and disformal coupling parameters and the steepness of the self-interacting potential. In particular, we consider four interacting DE models: a standard coupled quintessence model, a kinetically coupled model, a constant disformal model and a mixed conformal-disformal model. For each of the four scenarios, we provide a brief review of the theoretical framework before presenting the forecasts obtained considering the specifications and assumptions discussed in previous sections. In each subsection that follows, we show the resulting 2D contours and 1D marginalised posterior distributions for \(\{H_{0},\Omega_{m}^{0}\}\) plus the set of model-specific parameters (see Table 1) for the cases of ET and LISA and their combination. These plots also include a combination with SNIa+BAO and, for reference, the results of SNIa+BAO alone. The results are also summarised in a table with their corresponding \(1\sigma\) accuracies, identified in the text with the notation \(\{\sigma_{\rm p}\}\), where p is an index spanning over the model parameters. We also use \(\mathcal{F}_{\rm p}^{({\rm i},{\rm j})}=\{\sigma_{\rm p}^{({\rm j})}/\sigma_{\rm p}^{({\rm i})}\}\), where i and j stand for two different data sets, to denote the change in error for the specific parameter p. ### Conformal Coupling The first model we consider is the conformal coupling model, for which \[C(\phi)=e^{2\beta\phi/M_{\rm Pl}}\quad\text{and}\quad V(\phi)=V_{0}e^{-\lambda\phi/M_{\rm Pl}}\,, \tag{24}\] where \(C(\phi)\) is defined according to Eq. (1) and \(V(\phi)\) is the DE potential energy. The exponential parameters \(\beta\) and \(\lambda\) are constant dimensionless parameters and \(V_{0}\) is a constant with dimensions of \((\text{mass})^{4}\) that sets the energy scale of the potential5. In such models, the mass of the DM particles becomes \(\phi\)-dependent and the DE field mediates a long-range force between DM particles, so that the effective gravitational coupling is given by \(G_{\rm eff}=G_{N}\left(1+2\beta^{2}\right)\) [21; 99; 100]. The free parameters we are particularly interested in are the slope of the potential \(\lambda\) and the coupling parameter \(\beta\). Constraints on this model have been obtained in Ref. [101] using background data only (\(H(z)\), BAO and supernova Union2.1). Using these data, the authors found the following upper limits: \(\beta<0.193\) and \(\lambda<1.27\). In [102] stronger constraints have been obtained using _Planck_ data, BAO and SNIa data, also in line with Ref. [103], in which the authors found \(\beta<0.0298\) and \(\lambda<0.6\) for the \(1\sigma\) upper limits. According to the results in Figs. 2, 3 and 4, summarised in Table 2, we comment on the resulting constraints for GW data sets compared with SNIa+BAO for the parameters \(\{\Omega_{m}^{0},H_{0},\beta,\lambda\}\). When ET standard sirens are considered, we find that the cosmological and model parameters can be constrained at \(1\sigma\) with an accuracy \(\{0.0080,0.37,0.070,0.32\}\) for ET alone and \(\{0.0075,0.36,0.039,0.31\}\) for ET+SNIa+BAO, resulting in a change in error of \(\mathcal{F}_{\Omega_{m}^{0},H_{0},\beta,\lambda}^{\rm(ET,ET+SNIa+BAO)}=\{0.94,0.97,0.56,0.97\}\). Thus, the forecasted constraints of ET+SNIa+BAO, compared to ET alone, have increased accuracy in all parameters, as shown by the reduction in \(\sigma\).
This trend is also present in the LISA data set, with the cosmological and model parameters constrained with an accuracy of \(\{0.0071,0.47,0.098,0.24\}\) for LISA alone and \(\{0.0051,0.37,0.031,0.22\}\) for LISA+SNIa+BAO, resulting in a reduction in \(\sigma\) by a factor of \(\mathcal{F}_{\Omega_{m}^{0},H_{0},\beta,\lambda}^{\rm(LISA,LISA+SNIa+BAO)}=\{0.72,0.79,0.32,0.92\}\). For the combination of just SNIa+BAO we obtain an accuracy of \(\{0.0074,4.1,0.049,0.28\}\). Comparing ET+SNIa+BAO and LISA+SNIa+BAO to SNIa+BAO, there is also a reduction in \(\sigma\) for all parameters except one: the ET+SNIa+BAO data set results in a nominal increase in \(\sigma_{\Omega_{m}^{0}}\) compared to SNIa+BAO. Comparing the errors of ET and LISA to SNIa+BAO we see only minor changes in the constraining power regarding \(\Omega_{m}^{0}\), with ET performing slightly worse and LISA slightly better. A similar trend occurs for the parameter \(\lambda\), with ET performing nominally worse and LISA better. However, particular attention should be given to the significant reduction in \(\sigma_{H_{0}}\) when comparing ET and LISA to SNIa+BAO. There is a reduction in the error by a factor of \(\mathcal{F}_{H_{0}}^{\rm(SNIa+BAO,ET)}=0.090\) and \(\mathcal{F}_{H_{0}}^{\rm(SNIa+BAO,LISA)}=0.11\). These forecasts indicate that GWs will improve the constraints on \(H_{0}\), suggesting that they will be critical in addressing the Hubble tension. On the other hand, we see the opposite effect with \(\beta\), with an increase in the error by a factor of \(\mathcal{F}_{\beta}^{\rm(SNIa+BAO,ET)}=1.4\) and \(\mathcal{F}_{\beta}^{\rm(SNIa+BAO,LISA)}=2.0\). Nonetheless, when the background data is combined with ET and/or LISA, the constraints improve by \(\mathcal{F}_{\beta}^{\rm(ET,ET+SNIa+BAO)}=0.56\) and \(\mathcal{F}_{\beta}^{\rm(LISA,LISA+SNIa+BAO)}=0.32\). In comparing the constraining power of ET and LISA (see Fig. 4), it is evident that they have comparable spreads for the cosmological parameters. An interesting feature we observe is that ET is more constraining in regard to \(H_{0}\). We attribute this feature to the fact that the ET catalogue has more data points than LISA at low redshifts, as illustrated in Fig. 1. By combining GW data from LISA and ET, which implies an increase of data points over a wide range of redshifts, we predict an enhanced constraining power in the cosmological parameters, \(\{H_{0},\ \Omega_{m}^{0}\}\), compared to the SNIa+BAO case, more precisely \(\mathcal{F}_{\Omega_{m}^{0},H_{0}}^{\rm(SNIa+BAO,ET+LISA)}=\{0.65,0.063\}\). However, for the model parameters \(\beta\) and \(\lambda\), we observe modifications to the constraining power with \(\mathcal{F}_{\beta,\lambda}^{\rm(SNIa+BAO,ET+LISA)}=\{1.8,0.86\}\). The combination of ET+LISA with SNIa+BAO results in a negligible change of constraining power for \(\Omega_{m}^{0},H_{0},\lambda\). Only the parameter \(\beta\) is more constrained when the data sets are combined, with the \(1\sigma\) reduced by almost a third. Compared to the current background constraints mentioned in the beginning of the section, we find in our analysis that the upper bounds at \(1\sigma\) on the model parameters are improved in the following cases: \(\beta<0.14\) and \(\lambda<0.62\) (SNIa+BAO); \(\beta<0.175\) and \(\lambda<0.76\) (ET); \(\beta<0.096\) and \(\lambda<0.75\) (ET+SNIa+BAO); \(\lambda<0.48\) (LISA and LISA+SNIa+BAO) and \(\beta<0.073\) (LISA+SNIa+BAO); \(\lambda<0.43\) (ET+LISA); \(\lambda<0.52\) and \(\beta<0.08\) (ET+LISA+SNIa+BAO).
\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline \multicolumn{9}{|c|}{Conformal Coupled Quintessence} \\ \hline \hline Data sets & \(\Omega_{m}^{0}\) & \(\sigma_{\Omega_{m}^{0}}\) & \(H_{0}\) & \(\sigma_{H_{0}}\) & \(\beta\) & \(\sigma_{\beta}\) & \(\lambda\) & \(\sigma_{\lambda}\) \\ \hline \hline SNIa+BAO & \(0.3019^{+0.0088}_{-0.0059}\) & \(0.0074\) & \(73.2^{+4.7}_{-3.5}\) & 4.1 & \(0.085^{+0.055}_{-0.043}\) & \(0.049\) & \(0.42^{+0.20}_{-0.36}\) & \(0.28\) \\ \hline \hline ET & \(0.307^{+0.011}_{-0.0050}\) & \(0.0080\) & \(67.49^{+0.39}_{-0.34}\) & 0.37 & \(0.115^{+0.060}_{-0.079}\) & \(0.070\) & \(0.50^{+0.28}_{-0.38}\) & \(0.32\) \\ ET+SNIa+BAO & \(0.3046^{+0.0099}_{-0.0051}\) & \(0.0075\) & \(67.37\pm 0.36\) & \(0.36\) & \(0.063^{+0.033}_{-0.048}\) & \(0.039\) & \(0.49^{+0.26}_{-0.35}\) & \(0.31\) \\ \hline \hline LISA & \(0.3039^{+0.0093}_{-0.0049}\) & \(0.0071\) & \(67.50^{+0.50}_{-0.44}\) & 0.47 & \(0.167^{+0.085}_{-0.11}\) & \(0.098\) & \(0.33^{+0.15}_{-0.32}\) & \(0.24\) \\ LISA+SNIa+BAO & \(0.3028^{+0.0065}_{-0.0036}\) & \(0.0051\) & \(67.52\pm 0.37\) & \(0.37\) & \(0.048^{+0.025}_{-0.037}\) & \(0.031\) & \(0.33^{+0.15}_{-0.29}\) & \(0.22\) \\ \hline \hline ET+LISA & \(0.3079^{+0.0061}_{-0.0034}\) & \(0.0048\) & \(67.56\pm 0.26\) & 0.26 & \(0.178^{+0.099}_{-0.081}\) & \(0.090\) & \(0.30^{+0.13}_{-0.27}\) & \(0.24\) \\ ET+LISA+SNIa+BAO & \(0.3044^{+0.0063}_{-0.0032}\) & \(0.0048\) & \(67.45\pm 0.28\) & 0.28 & \(0.052^{+0.028}_{-0.038}\) & \(0.033\) & \(0.35^{+0.17}_{-0.30}\) & \(0.24\) \\ \hline \end{tabular} \end{table} Table 2: Marginalised constraints on cosmological and model parameters for the Conformal Coupled Quintessence model at 68% C.L. ### Kinetic Conformal Coupling As an example of a coupled quintessence model in which the conformal function is less trivial, we focus on a pure dependence on derivatives of the scalar field through the kinetic term of \(\phi\), \(X=-\partial_{\mu}\phi\partial^{\mu}\phi/2\), to which we refer as the kinetic coupling. Such a setting has been proposed in [104] (see references therein as well), and we focus on the particular example of a power law, as studied in [70]. Even though this model is proposed based on a Lagrangian framework (\(\mathcal{L}_{\rm DM}\longrightarrow\left(X/M_{\rm Pl}^{4}\right)^{\alpha}\mathcal{L}_{\rm DM}\)), at the background level it is equivalent to the kinetic-dependent conformal transformation \(\bar{g}_{\mu\nu}=C(X)g_{\mu\nu}\), with \[C(X)=\left(M_{\rm Pl}^{-4}X\right)^{2\alpha}\quad\text{and}\quad V(\phi)=V_{0}e^{-\lambda\phi/M_{\rm Pl}}\,, \tag{25}\] where \(\alpha\) is a dimensionless constant and a simple exponential potential has been assumed, just like in the previous case; the same considerations apply for \(\lambda\) and \(V_{0}\). In summary, an analysis based on _Planck_ and the SNIa+BAO background data in Ref. [70] reveals the power of BAO data in constraining \(\Omega_{m}^{0}\), which is highly correlated with the steepness of the potential \(\lambda\). The coupling parameter \(\alpha\) is constrained to be of the order of \(10^{-4}\). The constraints on the cosmological parameters are found to be compatible with the \(\Lambda\)CDM ones within the errors. Moreover, a positive correlation between \(H_{0}\) and \(\Omega_{m}^{0}\) is identified. While this trend is attributed to the evolution of the linear perturbations for non-vanishing \(\alpha\), we find that it is still present for the background standard siren data sets. Figure 2: 68% and 95% C.L. 2D contours and 1D marginalised posterior distributions for the parameters \(\{H_{0},\Omega_{m}^{0},\lambda,\beta\}\) in the conformal coupled quintessence model with the ET mock data (charcoal filled line), SNIa+BAO data (red dotted line) and their combination (yellow dashed line). The dotted lines depict the fiducial values for the mock data \(\{\Omega_{m}^{0},H_{0}\}=\{0.3144,67.32\}\). Figure 3: 68% and 95% C.L. 2D contours and 1D marginalised posterior distributions for the parameters \(\{H_{0},\Omega_{m}^{0},\lambda,\beta\}\) in the conformal coupled quintessence model with LISA mock data (charcoal filled line), SNIa+BAO data (red dotted line) and their combination (yellow dashed line). The scale is the same as in Fig. 2 for comparison purposes, with the SNIa+BAO contours standing as the reference. The dotted lines depict the fiducial values for the mock data \(\{\Omega_{m}^{0},H_{0}\}=\{0.3144,67.32\}\).
From the results presented in Figures 5, 6 and 7, and summarised in Table 3, we analyse the constraints on the parameters \(\{\Omega_{m}^{0},H_{0},\lambda,10^{4}\alpha\}\) for the same data sets as in the previous case. When evaluating the errors from ET standard sirens and comparing them to SNIa+BAO data, we observe that for most parameters, ET's \(1\sigma\) constraints are of the same order, apart from the \(H_{0}\) parameter, which is improved by 1 order of magnitude. This reduction is quantified by the fractional change of \(\mathcal{F}_{\Omega_{m}^{0},H_{0},\alpha,\lambda}^{\rm(SNIa+BAO,ET)}=\{1.1,0.12,1.0,1.1\}\). When the data sets are combined (ET+SNIa+BAO), we find that the \(1\sigma\) region is narrower for all parameters compared to ET alone, with \(\mathcal{F}_{\Omega_{m}^{0},H_{0},\alpha,\lambda}^{\rm(ET,ET+SNIa+BAO)}=\{0.84,0.92,1.0,0.92\}\). In the case of LISA standard sirens, we observe that all cosmological and model parameters are better or equally constrained by LISA alone compared to SNIa+BAO, with \(\mathcal{F}_{\Omega_{m}^{0},H_{0},\alpha,\lambda}^{\rm(SNIa+BAO,LISA)}=\{0.91,0.13,1.0,1.0\}\). Combining LISA with SNIa+BAO, we find improved constraints with respect to the SNIa+BAO data set alone. Moreover, when comparing LISA+SNIa+BAO with LISA alone, the former shows an even better constraining power, with the most significant reduction in error observed for \(\Omega_{m}^{0}\), with \(\mathcal{F}_{\Omega_{m}^{0},H_{0},\alpha,\lambda}^{\rm(LISA,LISA+SNIa+BAO)}=\{0.78,0.92,1.0,0.87\}\). For both the ET and LISA data sets, the accuracy on \(H_{0}\) can be improved by 1 order of magnitude (0.36 for ET and 0.39 for LISA) compared to SNIa+BAO (3.1), as reported in Sec. IV.1 as well. Interestingly, for all data sets and combinations, the accuracy of the model parameters remains largely unaffected, with the \(1\sigma\) region for \(\lambda\) showing only nominal changes and remaining unchanged for \(\alpha\). Comparing the constraining power of ET and LISA with their combination, ET+LISA, we see that the latter provides better constraining power for the cosmological parameters than any of the other data sets analysed. Regarding the model parameters, there seems to be a minimal change in accuracy compared to the single ET or LISA data sets. We do note that the GW combination provides better accuracy with respect to SNIa+BAO, with \(\mathcal{F}_{\Omega_{m}^{0},H_{0},\alpha,\lambda}^{\rm(SNIa+BAO,ET+LISA)}=\{0.68,0.084,1.0,0.91\}\). The full combination of ET+LISA+SNIa+BAO has a negligible change in the constraints when compared to ET+LISA for all parameters.
Comparing the accuracy of the constraints of the Kinetic model obtained in Ref. [70] with CMB TT, TE and EE _Planck_ 2018, _Planck_ CMB lensing, BAO and SNIa data, we note that the parameter \(\alpha\) is better constrained by CMB data and its combination with BAO and SNIa by 1 order of magnitude when compared to our data combinations, given that the latter only depend on the background evolution. More precisely, we report \(\sigma_{10^{4}\alpha}=2.9\) for all the data set combinations, while in Ref. [70] this was reduced to \(\sigma_{10^{4}\alpha}=0.95\) (Plk18), 0.84 (Plk18+SNIa+BAO), and 0.7 (Plk18+SNIa+BAO+Lensing). Future ET and LISA catalogues will be able to constrain \(\lambda\) at the same level as _Planck_ CMB data (\(\sigma_{\lambda}=0.48\) with Plk18 and \(\sigma_{\lambda}=0.2\) with both Plk18+SNIa+BAO and Plk18+SNIa+BAO+Lensing). On the other hand, the standard siren data will better constrain \(H_{0}\) by 1 order of magnitude with respect to Plk18 (\(\sigma_{H_{0}}=2.5\)). CMB lensing data increase the constraint by 1 order of magnitude, namely with accuracy \(\sigma_{H_{0}}=0.6\), which is of the same order of magnitude as the ET and LISA cases, even though the standard sirens perform better in terms of the relative error with \(\sigma_{H_{0}}<0.4\) for all the combinations considered.

### Disformal Coupling

In the following we study the model with disformal coupling only, \[C=1\,,\quad D=D_{0}^{4}\quad\text{and}\quad V(\phi)=V_{0}e^{-\lambda\phi/M_{\rm Pl}}\,, \tag{26}\] in which case the conformal contribution vanishes and \(D\) is simply a constant with dimensions of \((\text{mass})^{-4}\) in Eq. (2) (and hence \(D_{0}\) has units of \((\text{mass})^{-1}\)), and \(V(\phi)\) follows the same considerations as in the previous cases. The constraints on this model have been obtained in [101] and [102]. It was found that using background data only (\(H(z)\), BAO and supernova Union2.1 data) results in the following constraints: \(D_{0}>0.07\) \(\text{meV}^{-1}\) and \(\lambda<1.56\) at 95.4% [101]. An upper bound can be obtained for \(D_{0}\) with CMB data (including lensing) and BAO, SNIa, cosmic chronometers, cluster abundance, and \(H_{0}\) priors, which is \(D_{0}<0.2500\) \(\text{meV}^{-1}\), and a stringent upper limit for \(\lambda\) is \(<0.6720\) at \(1\sigma\) [102].

Figure 5: 68% and 95% C.L. 2D contours and 1D posterior distributions for the parameters \(\{H_{0},\Omega_{m}^{0},\lambda,10^{4}\alpha\}\) in the kinetic conformal coupled quintessence model with ET (charcoal filled line), SNIa+BAO (red dotted line) data and their combination (yellow dashed line). The dotted lines depict the fiducial values for the mock data \(\{\Omega_{m}^{0},H_{0}\}=\{0.3144,67.32\}\).

From Figs. 8, 9 and 10, summarised in Table 4, we analyse the results for the parameters
\(\{\Omega_{m}^{0},H_{0},D_{0},\lambda\}\) for the same data sets as in the previous cases. In our analysis of the ET data set alone, we observe an improved accuracy for all cosmological and model parameters, \(\{\Omega_{m}^{0},H_{0},D_{0},\lambda\}\), compared to SNIa+BAO, with \(\mathcal{F}_{\Omega_{m}^{0},H_{0},D_{0},\lambda}^{(\mathrm{SNIa+BAO,ET})}=\{0.71,0.10,0.98,0.85\}\). As expected, the combination of data sets, ET+SNIa+BAO, also results in improved accuracy compared to SNIa+BAO. Compared to the ET data set alone, there are only minor changes in the parameters' accuracy, \(\mathcal{F}_{\Omega_{m}^{0},H_{0},D_{0},\lambda}^{(\mathrm{ET,\,ET+SNIa+BAO})}=\{1.1,0.97,1.1,1.0\}\). In the case of LISA standard sirens, we find that the cosmological and model parameters follow a similar accuracy trend with \(\mathcal{F}_{\Omega_{m}^{0},H_{0},D_{0},\lambda}^{(\mathrm{SNIa+BAO,LISA})}=\{0.71,0.11,0.98,0.85\}\). Moreover, the same is true for the combination LISA+SNIa+BAO, with \(\mathcal{F}_{\Omega_{m}^{0},H_{0},D_{0},\lambda}^{(\mathrm{LISA,\,LISA+SNIa+BAO})}=\{1.0,1.0,0.98,1.22\}\). There is only a nominal change in the accuracy of the parameters compared to that of LISA alone, apart from \(\lambda\), which results in a larger \(1\sigma\) region, with \(\sigma_{\lambda}=0.71\). Regardless of the data combination, \(\{\Omega_{m}^{0},D_{0},\lambda\}\) are constrained at the same level, with the parameter \(\lambda\) having a slight improvement in accuracy for both ET and LISA (both have \(\sigma_{\lambda}=0.58\) compared to \(\sigma_{\lambda}=0.7\) for SNIa+BAO). The accuracy of the \(H_{0}\) parameter is 1 order of magnitude better for both ET and LISA than SNIa+BAO. There is no change in the model parameters for ET and LISA, and thus we see no noticeable change in the constraints for ET+LISA. However, there is an increase in accuracy for the cosmological parameters. As both ET and LISA improved the constraints compared to SNIa+BAO, we observe the expected result that ET+LISA has further improved constraints with \(\mathcal{F}_{\Omega_{m}^{0},H_{0},D_{0},\lambda}^{(\mathrm{SNIa+BAO,ET+LISA})}=\{0.55,0.071,0.96,0.85\}\). Following the trend with LISA and the combination with SNIa+BAO, we note that the combined data set ET+LISA+SNIa+BAO has very little change in the accuracy compared to ET+LISA, apart from the constraint for \(\lambda\), which results in a worse accuracy than ET+LISA and SNIa+BAO with \(\mathcal{F}_{\Omega_{m}^{0},H_{0},D_{0},\lambda}^{(\mathrm{ET+LISA,\,ET+LISA+SNIa+BAO})}=\{1.0,1.1,1.0,1.3\}\). The first thing to be noted in comparison with the results reported in Ref. [101] is that we were able to derive constraints at 68% C.L. and not only at 95% C.L. for all the model parameters, therefore providing better constraints in all the cases. Moreover, the constraining power on \(H_{0}\) is largely improved from \(\sigma_{H_{0}}\approx 2.2\) for the background data to \(\sigma_{H_{0}}\approx 0.3\) in all the cases, including standard sirens. When compared with the results of CMB, CMB lensing and additional data in Ref. [102], which report only upper bounds for \(\lambda\) and \(D_{0}\), we see that both parameters are constrained at 68% C.L. with standard sirens, with lower and upper bounds, in particular with more accommodating upper bounds, as this analysis includes only background data. Accordingly, the error in \(H_{0}\) is brought to the same order of magnitude with \(\sigma_{H_{0}}\approx 0.9\), which is still about 3 times larger than the one reported in this analysis.

Figure 6: 68% and 95% C.L. 2D contours and 1D posterior distributions for the parameters \(\{H_{0},\Omega_{m}^{0},\lambda,10^{4}\alpha\}\) in the kinetic conformal coupled quintessence model with LISA mock data (charcoal filled line), SNIa+BAO (red dotted line) data and their combination (yellow dashed line). The scale is the same as in Fig. 5 for comparison purposes, with the SNIa+BAO contours standing as the reference. The dotted lines depict the fiducial values for the mock data \(\{\Omega_{m}^{0},H_{0}\}=\{0.3144,67.32\}\).
Figure 7: 68% and 95% C.L. 2D contours and 1D marginalised posterior distributions for the parameters \(\{H_{0},\Omega_{m}^{0},\lambda,10^{4}\alpha\}\) in the kinetic conformal coupled quintessence model with ET mock data (charcoal filled line), LISA mock data (red dotted line) and their combination (yellow dashed line). The dotted lines depict the fiducial values for the mock data \(\{\Omega_{m}^{0},H_{0}\}=\{0.3144,67.32\}\).

Figure 8: 68% and 95% C.L. 2D contours and 1D posterior distributions for the parameters \(\{H_{0},\Omega_{m}^{0},\lambda,D_{0}\}\) in the constant disformal coupled quintessence model with ET (charcoal filled line), SNIa+BAO (red dotted line) data and their combination (yellow dashed line). The dotted lines depict the fiducial values for the mock data \(\{\Omega_{m}^{0},H_{0}\}=\{0.3144,67.32\}\).

### Mixed Conformal-Disformal Coupling

Finally, we discuss a model with a mixed coupling consisting of a conformal and a disformal part. Specifically, we consider \[C(\phi)=e^{2\beta\phi/M_{\rm Pl}},\quad D(\phi)=D_{0}^{4}\quad\text{and}\quad V(\phi)=V_{0}e^{-\lambda\phi/M_{\rm Pl}}\,. \tag{27}\] As for the disformal case, the constraints on such a model were discussed in [101] and [102]. For the same background data it has been reported that \(D_{0}>0.102\) meV\({}^{-1}\), \(\beta<0.453\) and \(\lambda<1.59\) at 95.4% [101]. An example of the constraints including CMB data in [102] are \(\beta\lesssim 0.17\) and \(\lambda\lesssim 0.35\) at \(1\sigma\), with the details depending on the data sets used, and with the disformal coupling \(D_{0}\) not always well constrained for this case, with lower bounds of \(D_{0}\gtrsim 0.35\) meV\({}^{-1}\) for some data combinations. In Figs. 11, 12 and 13 and Table 5 we show the results for the parameters \(\{\Omega_{m}^{0},H_{0},\beta,D_{0},\lambda\}\) for the same data sets as before. In our analysis of the ET data set alone, we observe improved accuracy for the cosmological parameters, \(\Omega_{m}^{0}\) and \(H_{0}\), compared to the SNIa+BAO data set, with \(\mathcal{F}_{\Omega_{m}^{0},H_{0}}^{(\mathrm{SNIa+BAO,ET})}=\{0.61,0.088\}\). The combined data sets, ET+SNIa+BAO, show comparable results, with a slight increase in accuracy compared to ET alone. For the model parameters, \(\{\beta,D_{0},\lambda\}\), ET compared with SNIa+BAO demonstrates close constraining power, with \(\mathcal{F}_{\beta,D_{0},\lambda}^{(\mathrm{SNIa+BAO,ET})}=\{1.0,0.92,1.0\}\). However, the combined data set leads to an increase in the error in \(\beta\), with \(\sigma_{\beta}=0.71\) for ET+SNIa+BAO. For the case of LISA standard sirens, we find that the cosmological parameters follow a similar trend as ET, with increased accuracy, \(\mathcal{F}_{\Omega_{m}^{0},H_{0}}^{(\mathrm{SNIa+BAO,LISA})}=\{0.72,0.088\}\), with the combined data set showing a comparable trend. However, it is worth noting a noticeable reduction in accuracy for \(H_{0}\) between LISA and the combined data set, \(\mathcal{F}_{H_{0}}^{(\mathrm{LISA,\,LISA+SNIa+BAO})}=\{1.1\}\).
Regarding the model parameters, \(\{\beta,D_{0},\lambda\}\), we find that, unlike for ET, LISA alone exhibits increased accuracy compared to SNIa+BAO, except for \(D_{0}\), \(\mathcal{F}_{\beta,D_{0},\lambda}^{(\mathrm{SNIa+BAO,LISA})}=\{0.98,1.1,0.98\}\). The combination of the data sets results in comparable accuracy to LISA alone, with \(\mathcal{F}_{\beta,D_{0},\lambda}^{(\mathrm{LISA,\,LISA+SNIa+BAO})}=\{1.0,0.89,1.0\}\). Similar to sections IV.1 to IV.3, the combination of the GW data sets leads to a significant change in the accuracy of \(\Omega_{m}^{0}\) and \(H_{0}\) compared to the SNIa+BAO data sets, \(\mathcal{F}_{\Omega_{m}^{0},H_{0}}^{(\mathrm{SNIa+BAO,\,ET+LISA})}=\{0.49,0.061\}\). The accuracy of the cosmological parameters is also slightly enhanced compared to ET or LISA alone. Regarding the model parameters, we find only a very small change compared to SNIa+BAO, \(\mathcal{F}_{\beta,D_{0},\lambda}^{(\mathrm{SNIa+BAO,\,ET+LISA})}=\{0.96,1.2,1.1\}\), noting that both \(D_{0}\) and \(\lambda\) are slightly less constrained. Combining all of the data sets, we note that there is a similar trend as before, with the cosmological parameters exhibiting an enhanced constraint when compared to SNIa+BAO, while the model parameters remain mostly unchanged, \(\mathcal{F}_{\Omega_{m}^{0},H_{0},\beta,D_{0},\lambda}^{(\mathrm{SNIa+BAO,\,ET+LISA+SNIa+BAO})}=\{0.58,0.073,1.0,0.92,1.0\}\). In summary, regardless of the combination of data sets considered, the constraints on \(\Omega_{m}^{0},\beta,D_{0},\lambda\) are of the same order of magnitude as those obtained from SNIa+BAO. Additionally, we note that the accuracy on \(H_{0}\) is improved by 1 order of magnitude for both ET and LISA compared to SNIa+BAO. Similarly to the comparison in Sec. IV.3, the main improvement in contrast with the results reported in Ref. [101] is the fact that we can obtain constraints at 68% C.L. for all the model parameters. The potential parameter \(\lambda\) is constrained with upper bounds for the standard sirens at \(1\sigma\) of the same order as the \(2\sigma\) ones reported in the previous studies. Moreover, the constraining power on \(H_{0}\) is largely improved from \(\sigma_{H_{0}}\approx 2.1\) for the background data to \(\sigma_{H_{0}}\approx 0.3\) in all the cases, including standard sirens. The comparison with results including CMB, CMB lensing and additional data in Ref. [102], which are either unable to constrain \(D_{0}\) or find just a lower bound and report only upper bounds for \(\lambda\) and \(\beta\), shows that standard sirens successfully constrain the three model parameters at 68% C.L. for all the combinations, which is a great improvement given that only background data has been considered. Including CMB data brings the error in \(H_{0}\) to the same order of magnitude with \(\sigma_{H_{0}}\approx 0.6\), which is still around 2 times larger than the ones reported in this analysis.

Figure 10: 68% and 95% C.L. 2D contours and 1D marginalised posterior distributions for the parameters \(\{H_{0},\Omega_{m}^{0},\lambda,D_{0}\}\) in the constant disformal coupled quintessence model with ET mock data (charcoal filled line), LISA mock data (red dotted line) and their combination (yellow dashed line). The dotted lines depict the fiducial values for the mock data \(\{\Omega_{m}^{0},H_{0}\}=\{0.3144,67.32\}\).
\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|} \hline \multicolumn{11}{|c|}{Mixed Conformal-Disformal Coupled Quintessence} \\ \hline \hline Data sets & \(\Omega_{m}^{0}\) & \(\sigma_{\Omega_{m}^{0}}\) & \(H_{0}\) & \(\sigma_{H_{0}}\) & \(\beta\) & \(\sigma_{\beta}\) & \(D_{0}/\text{meV}^{-1}\) & \(\sigma_{D_{0}}\) & \(\lambda\) & \(\sigma_{\lambda}\) \\ \hline \hline SNIa+BAO & \(0.308^{+0.021}_{-0.015}\) & \(0.018\) & \(71.2\pm 3.3\) & \(3.30\) & \(1.01\pm 0.57\) & \(0.57\) & \(1.23^{+0.59}_{-0.43}\) & \(0.51\) & \(0.98\pm 0.57\) & \(0.57\) \\ \hline \hline ET & \(0.286^{+0.010}_{-0.012}\) & \(0.011\) & \(67.65\pm 0.29\) & \(0.29\) & \(0.85\pm 0.58\) & \(0.58\) & \(1.27^{+0.58}_{-0.35}\) & \(0.47\) & \(1.03\pm 0.58\) & \(0.58\) \\ ET+SNIa+BAO & \(0.294^{+0.011}_{-0.013}\) & \(0.012\) & \(67.50\pm 0.30\) & \(0.30\) & \(0.92^{+0.66}_{-0.76}\) & \(0.71\) & \(1.32^{+0.53}_{-0.35}\) & \(0.44\) & \(0.97\pm 0.58\) & \(0.58\) \\ \hline \hline LISA & \(0.310^{+0.017}_{-0.0087}\) & \(0.013\) & \(67.55^{+0.27}_{-0.31}\) & \(0.29\) & \(0.97\pm 0.56\) & \(0.56\) & \(1.15^{+0.63}_{-0.44}\) & \(0.54\) & \(1.01\pm 0.56\) & \(0.56\) \\ LISA+SNIa+BAO & \(0.310^{+0.016}_{-0.010}\) & \(0.013\) & \(67.59\pm 0.33\) & \(0.33\) & \(1.01\pm 0.58\) & \(0.58\) & \(1.25^{+0.53}_{-0.43}\) & \(0.48\) & \(0.98\pm 0.57\) & \(0.57\) \\ \hline \hline ET+LISA & \(0.302^{+0.0120}_{-0.0058}\) & \(0.0089\) & \(67.54\pm 0.20\) & \(0.20\) & \(0.92\pm 0.55\) & \(0.55\) & \(1.09^{+0.76}_{-0.42}\) & \(0.59\) & \(1.05^{+0.71}_{-0.56}\) & \(0.64\) \\ ET+LISA+SNIa+BAO & \(0.304^{+0.0120}_{-0.0089}\) & \(0.0105\) & \(67.53\pm 0.24\) & \(0.24\) & \(0.98\pm 0.57\) & \(0.57\) & \(1.27^{+0.53}_{-0.41}\) & \(0.47\) & \(0.97\pm 0.57\) & \(0.57\) \\ \hline \end{tabular} \end{table} Table 5: Marginalised constraints on cosmological and model parameters for the Mixed Conformal-Disformal Coupled Quintessence Model at 68% C.L.

Figure 11: 68% and 95% C.L. 2D contours and 1D posterior distributions for the parameters \(\{H_{0},\Omega_{m}^{0},\lambda,\beta,D_{0}\}\) in the mixed conformal-disformal coupled quintessence model with ET (charcoal filled line), SNIa+BAO (red dotted line) data and their combination (yellow dashed line). The dotted lines depict the fiducial values for the mock data \(\{\Omega_{m}^{0},H_{0}\}=\{0.3144,67.32\}\).

## V Conclusions

In this paper, we have explored the potential of future GW detectors, namely LISA and ET, to constrain conformal and disformal couplings between the dark energy and dark matter fluids. We have considered four models: conformal coupled quintessence, a kinetic model, constant disformal coupled quintessence and a mixed conformal-disformal model. All the cases considered share the same exponential potential, whose slope, \(\lambda\), we have constrained. We have generated mock catalogues of standard siren events with ET and LISA specifications, and with those we performed an MCMC analysis considering, separately, their combination with current SNIa+BAO data as well for reference. Under the assumptions used to generate the mock data for a particular cosmology, we find the following:

* The conformal coupled quintessence: the combinations of LISA+SNIa+BAO and ET+SNIa+BAO improve the constraints on both the slope parameter, \(\lambda\), and the conformal coupling parameter \(\beta\). The combination of ET+LISA with SNIa+BAO reduces the error in \(\beta\) by one third.
* The kinetic model: ET and LISA alone cannot improve the constraints on \(\lambda\) or on the conformal
exponential parameter \(\alpha\). When LISA is combined with SNIa+BAO, the accuracy slightly improves for the slope parameter and for the matter density \(\Omega_{m}^{0}\).
* The constant disformal coupled quintessence: all combinations can constrain the disformal parameter \(D_{0}\) at \(1\sigma\) with the same order of magnitude, with a small improvement for LISA+SNIa+BAO. For the slope parameter instead, ET, LISA and their combination all perform better than SNIa+BAO. Moreover, for the full catalogue combination of ET+LISA, the error in \(\Omega_{m}^{0}\) can be reduced.
* The mixed conformal-disformal coupled quintessence: for all the parameters of the model there is no significant improvement in the accuracy of their constraints when using ET or LISA data separately. A small reduction in the size of the \(1\sigma\) region is reported only for the disformal parameter, \(D_{0}\), in the full combinations. The error in \(\Omega_{m}^{0}\) is slightly reduced for ET+LISA.

Figure 12: 68% and 95% C.L. 2D contours and 1D posterior distributions for the parameters \(\{H_{0},\Omega_{m}^{0},\lambda,\beta,D_{0}\}\) in the mixed conformal-disformal coupled quintessence model with LISA (charcoal filled line), SNIa+BAO (red dotted line) data and their combination (yellow dashed line). The scale is the same as in Fig. 11 for comparison purposes, with the SNIa+BAO contours standing as the reference. The dotted lines depict the fiducial values for the mock data \(\{\Omega_{m}^{0},H_{0}\}=\{0.3144,67.32\}\).

Regardless of the model considered, we found that the accuracy on the \(H_{0}\) parameter increases by 1 order of magnitude at \(1\sigma\) when compared to the combination of BAO and SNIa data. This is promising in light of solving/understanding the \(H_{0}\) tension. This improvement is also responsible for the increased accuracy on the constraints for the model parameters when we consider the full combinations that we just reviewed. Ultimately, our results show that future 3G detectors can improve our knowledge of the DE-DM interaction and shed light on the \(H_{0}\) tension.

###### Acknowledgements.
We thank M. Califano for useful discussions. E.M.T. is supported by the grant SFRH/BD/143231/2019 from Fundacao para a Ciencia e a Tecnologia (FCT) and acknowledges the support of the European Consortium for Astroparticle Theory (EuCAPT) in the form of an Exchange Travel Grant. R.D. is supported by a STFC CDT studentship. N.F. is supported by the Italian Ministry of University and Research (MUR) through the Rita Levi Montalcini project "Tests of gravity on cosmic scales" with reference PGR19ILFGP. C.v.d.B. is supported (in part) by the Lancaster-Manchester-Sheffield Consortium for Fundamental Physics under STFC grant: ST/T001038/1. E.M.T., C.v.d.B. and N.F. also acknowledge the FCT project with ref. number PTDC/FIS-AST/0054/2021 and the COST Action CosmoVerse, CA21136, supported by COST (European Cooperation in Science and Technology).
2309.08569
Local Differential Privacy in Graph Neural Networks: a Reconstruction Approach
Graph Neural Networks have achieved tremendous success in modeling complex graph data in a variety of applications. However, there are limited studies investigating privacy protection in GNNs. In this work, we propose a learning framework that can provide node privacy at the user level, while incurring low utility loss. We focus on a decentralized notion of Differential Privacy, namely Local Differential Privacy, and apply randomization mechanisms to perturb both feature and label data at the node level before the data is collected by a central server for model training. Specifically, we investigate the application of randomization mechanisms in high-dimensional feature settings and propose an LDP protocol with strict privacy guarantees. Based on frequency estimation in statistical analysis of randomized data, we develop reconstruction methods to approximate features and labels from perturbed data. We also formulate this learning framework to utilize frequency estimates of graph clusters to supervise the training procedure at a sub-graph level. Extensive experiments on real-world and semi-synthetic datasets demonstrate the validity of our proposed model.
Karuna Bhaila, Wen Huang, Yongkai Wu, Xintao Wu
2023-09-15T17:35:51Z
http://arxiv.org/abs/2309.08569v2
# Local Differential Privacy in Graph Neural Networks: a Reconstruction Approach ###### Abstract Graph Neural Networks have achieved tremendous success in modeling complex graph data in a variety of applications. However, there are limited studies investigating privacy protection in GNNs. In this work, we propose a learning framework that can provide node privacy at the user level, while incurring low utility loss. We focus on a decentralized notion of Differential Privacy, namely Local Differential Privacy, and apply randomization mechanisms to perturb both feature and label data at the node level before the data is collected by a central server for model training. Specifically, we investigate the application of randomization mechanisms in high-dimensional feature settings and propose an LDP protocol with strict privacy guarantees. Based on frequency estimation in statistical analysis of randomized data, we develop reconstruction methods to approximate features and labels from perturbed data. We also formulate this learning framework to utilize frequency estimates of graph clusters to supervise the training procedure at a sub-graph level. Extensive experiments on real-world and semi-synthetic datasets demonstrate the validity of our proposed model. graph neural networks, local differential privacy, frequency estimation, learning from label proportions ## I Introduction Graph data are ubiquitous in the modern world, allowing graph-structured representation for complex data in the realm of social networks, finance, biology, and so on. Graph Neural Networks (GNNs) have been widely adopted in such domains to model the expressive nature of graph-structured data [1, 2]. GNNs rely on _message-passing_ mechanisms to propagate information between graph nodes and output embeddings that encode both node features and neighborhood features aggregated using graph adjacency information. These embeddings are used in predictive downstream tasks for meaningful applications such as drug discovery [3], traffic prediction [4], and recommendation [5]. This widespread prevalence of GNNs, however, raises concerns regarding the privacy of sensitive information whose leakage may lead to undesirable and even harmful consequences. GNNs have been shown to be vulnerable to several privacy risks including membership inference [6], link re-identification [7], and attribute disclosure [8]. This risk of data leakage is considerably higher in GNNs compared to traditional learning models due to the presence of additional graph structure information [6]. To ensure compliance with legal data protection guidelines [9] and for the protection of user privacy, GNNs must thus be trained and deployed in a privacy-preserving manner. In this paper, we aim to address such privacy concerns in GNNs. We focus on a specific scenario of node privacy wherein node-level features and labels are held locally by each user and the global graph structure is available to the server that hosts applications. The server could benefit from users' feature data which, paired with graph topology, can be utilized for embedding generation and/or predictive modeling via GNNs. However, collecting user feature and label data, possibly containing sensitive and identifying information, may incur serious privacy issues. To this end, Local Differential Privacy (LDP) [10] is often adopted in data collection for training machine learning models or releasing statistics in a private manner [11].
Furthermore, it has been deployed in large-scale data-gathering of user behavior and usage statistics at Apple [12] and Google [13], motivating the integration of LDP in data collection for GNNs as well. **Challenges** The main challenge in training GNNs with privately collected data arises due to the utility-privacy trade-off of differentially private mechanisms. With randomization of data at an individual level, privatized data oftentimes misrepresents the data distribution of the population. A learning model that learns feature and label correlation from this data overfits the noise and is unable to achieve good utility on predictive and analytical tasks with unseen data. Furthermore, since GNNs propagate information throughout the graph to output node embeddings, the quality of the embeddings also suffers due to additive noise present in the training data after applying LDP mechanisms. **Prior Work** A few recent works have attempted to address node privacy in GNNs [14, 15] but they enforce privacy only during training and/or model release, which puts user information at risk if the server is malicious. Most notably, Sajadmanesh and Gatica-Perez [16] propose a node-level LDP framework in the distributed setting where features and labels are held private by the user, and the graph structure is known to the server. They propose an LDP protocol called multi-bit mechanism to perturb node features by extending the 1-bit mechanism [17] to multidimensional features. The multi-bit mechanism randomly selects a subset of features for each user, transforms each selected feature value to either 1 or -1, and indiscriminately reports the value 0 for the remaining ones. To protect label privacy, node labels are perturbed using Randomized Response (RR) [18]. A GCN-based multi-hop aggregator is then prepended to the GNN model for implicit
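For concreteness, the Randomized Response label perturbation and the frequency estimation it enables can be sketched as follows. This is a minimal, generic k-ary RR example with privacy budget epsilon and the standard unbiased frequency estimator; it is not necessarily the exact protocol or parameterization of the cited works:

```python
import math
import random

def randomized_response(label: int, k: int, epsilon: float) -> int:
    """Generalized randomized response for a categorical label in {0, ..., k-1}.

    Reports the true label with probability p = e^eps / (e^eps + k - 1)
    and any other label uniformly otherwise; this satisfies eps-LDP.
    """
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p:
        return label
    other = random.randrange(k - 1)       # uniform over the k-1 remaining labels
    return other if other < label else other + 1

def estimate_frequencies(reports, k: int, epsilon: float):
    """Server-side unbiased estimate of the true label frequencies."""
    n = len(reports)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    q = (1 - p) / (k - 1)                 # probability of reporting any fixed wrong label
    counts = [sum(1 for r in reports if r == c) for c in range(k)]
    return [(counts[c] / n - q) / (p - q) for c in range(k)]
```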
2309.09663
Probe nuclear structure using the anisotropic flow at the Large Hadron Collider
Recent studies have shown that the shape and radial profile of the colliding nuclei have strong influences on the initial condition of the heavy ion collisions and the subsequent development of the anisotropic flow. Using A Multi-Phase Transport (AMPT) model, we investigated the impact of nuclear quadrupole deformation $\beta_2$ and nuclear diffuseness $a_0$ of $^{129}$Xe on various flow observables in Xe--Xe collisions at $\sqrt{s_{\rm NN}} =$ 5.44 TeV. We found that $\beta_2$ has a strong influence on central collisions while $a_0$ mostly influences the mid-central collisions. The relative change of flow observables induced by a change in $\beta_2$ and $a_0$ is also found to be insensitive to the values of parameters controlling the strength of the interaction among final state particles. Our study demonstrates the potential for constraining the initial condition of heavy ion collisions using future system scans at the LHC.
Zhiyong Lu, Mingrui Zhao, Jiangyong Jia, You Zhou
2023-09-18T11:02:07Z
http://arxiv.org/abs/2309.09663v1
# Probe nuclear structure using the anisotropic flow at the Large Hadron Collider ###### Abstract Recent studies have shown that the shape and radial profile of the colliding nuclei have strong influences on the initial condition of the heavy ion collisions and the subsequent development of the anisotropic flow. Using A Multi-Phase Transport (AMPT) model, we investigated the impact of nuclear quadrupole deformation \(\beta_{2}\) and nuclear diffuseness \(a_{0}\) of \({}^{129}\)Xe on various flow observables in Xe-Xe collisions at \(\sqrt{s_{\rm NN}}=5.44\) TeV. We found that \(\beta_{2}\) has a strong influence on central collisions while \(a_{0}\) mostly influences the mid-central collisions. The relative change of flow observables induced by a change in \(\beta_{2}\) and \(a_{0}\) is also found to be insensitive to the values of parameters controlling the strength of the interaction among final state particles. Our study demonstrates the potential for constraining the initial condition of heavy ion collisions using future system scans at the LHC.

## 1 Introduction

Ultra-relativistic heavy-ion collisions conducted at both the Relativistic Heavy-Ion Collider (RHIC) and the Large Hadron Collider (LHC) provide a pivotal platform for the comprehensive study of Quantum Chromodynamics (QCD) across both perturbative and nonperturbative regimes. These collisions afford the remarkable opportunity to recreate a novel state of matter, quark-gluon plasma (QGP), characterized by extreme temperatures and densities in the early stages of high-energy heavy-ion interactions. Over the past two decades, a dedicated endeavor has been directed towards extracting precise insights into the properties and the dynamic evolution of QGP [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11]. Anisotropic flow, which quantifies the anisotropic expansion of the produced particles, has been a powerful tool for QGP studies [5; 6; 7; 8; 9; 10; 11]. It is characterized by the Fourier coefficients \(v_{n}\) of the azimuthal particle distribution [12]: \[f(\varphi)=\frac{1}{2\pi}\left[1+2\sum_{n=1}^{\infty}v_{n}\cos[n(\varphi-\Psi_{n})]\right] \tag{1}\] where \(\varphi\) is the azimuthal angle of the final particles, \(\Psi_{n}\) is the \(n\)th-order flow symmetry plane, and \(v_{n}\) is called the flow coefficient. From Eq. (1), the flow coefficient \(v_{n}\) can be defined as: \[v_{n}=\langle\cos[n(\varphi-\Psi_{n})]\rangle\,. \tag{2}\] Here, the angular bracket \(\langle\rangle\) denotes the average over all particles in one event. The \(v_{n}\) and flow angle \(\Psi_{n}\) are the magnitude (amplitude) and angle (orientation) of the flow vector, defined as: \[V_{n}\equiv v_{n}\mathrm{e}^{in\Psi_{n}} \tag{3}\]
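To illustrate Eqs. (1)-(3), here is a minimal sketch of how the flow vector, its magnitude, and its angle could be estimated from the azimuthal angles of one event. This is illustrative only; the toy event and the simple event-plane estimate are our own construction, and real analyses use the cumulant methods described below:

```python
import numpy as np

def flow_vector(phi: np.ndarray, n: int):
    """Event-wise flow vector V_n = v_n * exp(i n Psi_n) from azimuthal angles."""
    qn = np.mean(np.exp(1j * n * phi))        # normalized Q-vector
    psi_n = np.angle(qn) / n                  # flow angle Psi_n (Eq. (3))
    v_n = np.mean(np.cos(n * (phi - psi_n)))  # Eq. (2), event-plane estimate
    return v_n, psi_n

# toy event: particles drawn from f(phi) with a built-in v2 = 0.1 modulation
rng = np.random.default_rng(0)
phi = rng.uniform(0, 2 * np.pi, 200000)
keep = rng.uniform(0, 1, phi.size) < (1 + 0.2 * np.cos(2 * phi)) / 1.2
print(flow_vector(phi[keep][:500], 2))
```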
For typical heavy-ion collisions, the nuclear density profile in the initial-state can be described by Woods-Saxon distribution: \[\rho(r,\theta,\phi) =\frac{\rho_{0}}{1+e^{[r-R(\theta,\phi)]/a_{0}}},\] \[R(\theta,\phi) =R_{0}(1+\beta_{2}[\cos\gamma Y_{2,0}+\sin\gamma Y_{2,2}]+\beta_ {3}\sum_{m=-3}^{3}\alpha_{3,m}Y_{3,m}+\beta_{4}\sum_{m=-4}^{4}\alpha_{4,m}Y_{4,m}) \tag{4}\] where \(a_{0}\) denotes the nuclear diffuseness, while \(R_{0}\) represents the half-width radius. The nuclear surface, denoted as \(R(\theta,\phi)\), is expanded in terms of spherical harmonics \(Y_{n,m}\), where we retain terms up to \(n=4\) as expressed in the Eq. (4). Furthermore, \(\beta_{2}\), \(\beta_{3}\), and \(\beta_{4}\) stand for the quadrupole, octupole, and hexadecapole deformation parameters, respectively. The parameter \(\gamma\) characterizes the triaxial shape, depicting any imbalance present within the axes of the spheroid. Analogous to \(\gamma\), \(\alpha_{3,m}\) and \(\alpha_{4,m}\) describe the inequality of axes and satisfy the normalization condition. In recent years, various observables have been investigated for their sensitivities to nuclear structure parameters in heavy-ion collisions [22, 23, 24, 25, 26, 27, 28, 29]. As mentioned above, the anisotropic flow reflects the initial spatial anisotropies in the overlap region of the colliding nucleus. Thus, it serves as an ideal probe of initial conditions and can be utilized for the nuclear structure study. The flow coefficient \(v_{n}\) has been found to be sensitive to the deformation, characterized by deformation parameters \(\beta_{n}\), in \({}^{96}\)Ru-\({}^{96}\)Ru, \({}^{96}\)Zr-\({}^{96}\)Zr,\({}^{238}\)U-\({}^{238}\)U, \({}^{197}\)Au-\({}^{197}\)Au collisions [22, 23, 24, 25, 26]. Beyond \(v_{n}\), the spotlight extends to multi-particle cumulants of \(v_{n}\) and nonlinear flow, underlining their potential for discerning the parameters \(\beta_{n}\), \(a_{0}\), and \(R_{0}\)[25, 26, 27]. Moreover, the mean transverse momentum of the produced charged hadrons, denoted as \([p_{T}]\), which reflects the initial overlap region's size, is a valuable probe for exploring neutron skin thickness and nuclear deformation [28]. The fluctuations of \(v_{n}\) and \([p_{T}]\), as well as the correlations between them, quantified by Pearson correlation coefficient(PCC) and denoted as \(\rho(v_{n}^{2},[p_{T}])\), emerge as a nuanced avenue for constraining deformation parameters \(\beta_{n}\) and the triaxial parameter \(\gamma\)[29]. Almost all these studies are conducted within the RHIC energies (GeV energy scale), while a similar study at the LHC energies ranges (TeV energy scale) is still lacking at the moment. In particular, \({}^{129}\)Xe is the nucleus believed to have quadrupole and triaxial deformation [30], determined from low-energy nuclear theory and experiments [31, 32, 33, 34, 35, 36]. Nevertheless, only very few selected flow observables, such as \(v_{n}\{2\}\) and \(\rho(v_{n}^{2},[p_{T}])\), have been studied so far [37, 38]. The realm of systematic investigations targeting more intricate flow observables, based on multi-particle correlations in the final state that potentially tie into the many-body interactions within nuclei prior to collisions, are currently unavailable at the LHC. 
This paper will present comprehensive investigations of the nuclear structure using various flow observables, such as flow coefficients, flow fluctuations, correlations between flow coefficients, and nonlinear flow modes, based on AMPT model simulations. We also study how final state effects influence these flow observables, to ensure that the nuclear parameters can be constrained without bias from them.

## 2 A Multi-Phase Transport Model

A Multi-Phase Transport (AMPT) Model [39] is widely applied in ultra-relativistic nuclear collisions for investigating the initial conditions and transport characteristics of the Quark-Gluon Plasma (QGP) [39; 40; 41; 42; 43; 44; 45; 46; 47]. This paper employs the AMPT model incorporating the string melting scenario. The model encompasses a sequence of processes, including the initial conditions of parton production, interactions among partons, hadronization through coalescence, and, finally, hadronic rescattering. More specifically, the nucleons are generated using the HIJING model [48] to establish their spatial and momentum distributions, after which they convert into partons. The interactions among these partons are governed by Zhang's Parton Cascade model (ZPC) [49]. In this model, the parton-scattering cross-section \(\sigma\) can be characterized by the following: \[\sigma=\frac{9\pi\alpha_{s}^{2}}{2\mu^{2}} \tag{5}\] where \(\alpha_{s}\) is the QCD coupling constant and \(\mu\) is the screening mass. This cross-section controls the dynamic expansion of the QGP phase. Subsequent to the parton cascade, partons combine to form hadrons (hadronization), employing a coalescence model [50]. Following hadronization, the interactions among the resulting hadrons in the final state are described by the ART model [51]. The AMPT simulations for the Xe-Xe collisions are performed by parameterizing the nucleon density profile with the Woods-Saxon distribution shown in Eq. (4). To investigate the effect of nuclear deformation and diffuseness, and also the sensitivity to the system's dynamic evolution, several sets of values for \(\beta_{2}\), \(\gamma\), \(a_{0}\), and \(\sigma\) are used for comparisons. Here we use sets 1-4 to study the effect of nuclear deformations, i.e., by changing \(\beta_{2}\) from 0 (spherical) to 0.18 (deformed, obtained from Ref. [30]) and changing \(\gamma\) from 0 (prolate) to \(27^{\circ}\) (triaxial) to \(60^{\circ}\) (oblate). Also, comparing the results from sets 2 (\(a_{0}=0.57\) from Ref. [30]) and 5 (\(a_{0}=0.492\) from Ref. [52]) can provide information about nuclear diffuseness. For the impact of transport properties, we use \(\sigma=3.0\) mb (set 6) [53; 54] instead of \(\sigma=6.0\) mb (set 3) [55]. The observables are shown as a function of centrality, where the centrality in this study is determined by the impact parameter in Xe-Xe collisions. More detailed information concerning the input parameters of the AMPT model can be found in Table 1.

\begin{table} \begin{tabular}{c c c c c} \hline set & \(\beta_{2}\) & \(\gamma\) & \(a_{0}\) & \(\sigma\) \\ \hline Set 1 & 0 & 0 & 0.57 & 6.0 mb \\ Set 2 & 0.18 & 0 & 0.57 & 6.0 mb \\ Set 3 & 0.18 & 27\({}^{\circ}\) & 0.57 & 6.0 mb \\ Set 4 & 0.18 & 60\({}^{\circ}\) & 0.57 & 6.0 mb \\ Set 5 & 0.18 & 0 & 0.492 & 6.0 mb \\ Set 6 & 0.18 & 27\({}^{\circ}\) & 0.57 & 3.0 mb \\ \hline \end{tabular} \end{table} Table 1: Parameter sets used in this study.

## 3 Analysis details

### Observables

Experimentally, flow coefficients cannot be obtained directly via Eq. (2) but from two- and multi-particle correlations/cumulants [56; 57; 14; 58]: \[v_{n}\{2\}\equiv\sqrt{c_{n}\{2\}} \tag{6}\] where \(c_{n}\{2\}\) is the two-particle cumulant. In this analysis, \(v_{2}\{2\}\), \(v_{3}\{2\}\), and \(v_{4}\{2\}\) are studied. In addition, the four-particle cumulants of \(v_{n}\), denoted as \(v_{n}\{4\}\), can be obtained via the four-particle cumulants \(c_{n}\{4\}\): \[v_{n}\{4\}\equiv\sqrt[4]{-c_{n}\{4\}}. \tag{7}\]
The higher order cumulants of \(v_{n}\) are defined in a similar way, denoted as \(v_{n}\{6\}\), \(v_{n}\{8\}\), etc. The two- and multi-particle cumulants of \(v_{n}\) have different contributions from flow fluctuations \(\sigma_{v_{n}}\). It is well known that for Gaussian-type flow fluctuations and in the case \(\sigma_{v_{n}}\ll\bar{v}_{n}\), we have [59]: \[\begin{split} v_{n}\{2\}^{2}&\approx\bar{v}_{n}^{2}+\sigma_{v_{n}}^{2},\\ v_{n}\{4\}^{2}&\approx\bar{v}_{n}^{2}-\sigma_{v_{n}}^{2},\\ v_{n}\{6\}^{2}&\approx\bar{v}_{n}^{2}-\sigma_{v_{n}}^{2},\\ v_{n}\{8\}^{2}&\approx\bar{v}_{n}^{2}-\sigma_{v_{n}}^{2},\end{split} \tag{8}\] where \(\sigma_{v_{n}}\) is the standard deviation of the \(v_{n}\) distribution, which represents the event-by-event fluctuations of \(v_{n}\). Then the mean flow coefficient \(\bar{v}_{n}\) (also known as the \(v_{n}\) with respect to the flow symmetry plane) and the flow fluctuation \(\sigma_{v_{n}}\) can be extracted from the combination of \(v_{n}\{2\}\) and \(v_{n}\{4\}\) according to Eq. (8): \[\begin{split}\bar{v}_{n}&\approx\sqrt{\frac{v_{n}\{2\}^{2}+v_{n}\{4\}^{2}}{2}},\\ \sigma_{v_{n}}&\approx\sqrt{\frac{v_{n}\{2\}^{2}-v_{n}\{4\}^{2}}{2}}\end{split} \tag{9}\] For central and semi-central collisions, the lower order flow coefficients \(v_{n}\) (for \(n=2,3\)) are linearly correlated with the initial eccentricity coefficients \(\varepsilon_{n}\) [13, 60]. In contrast, higher harmonic flow \(v_{n}\) (for \(n>3\)) not only has a linear response to the corresponding initial \(\varepsilon_{n}\) but also has contributions from the lower order \(\varepsilon_{2}\) and/or \(\varepsilon_{3}\) [61, 62, 63]. The latter is called the nonlinear flow mode. For example, \(V_{4}\) and \(V_{5}\) can be decomposed into linear and nonlinear components: \[\begin{split} V_{4}&=V_{4}^{\rm NL}+V_{4}^{\rm L}\approx\chi_{4,22}(V_{2})^{2}+V_{4}^{\rm L},\\ V_{5}&=V_{5}^{\rm NL}+V_{5}^{\rm L}\approx\chi_{5,32}V_{2}V_{3}+V_{5}^{\rm L}.\end{split} \tag{10}\] Here \(V_{n}^{\rm NL}\) and \(V_{n}^{\rm L}\) are the nonlinear and linear (also called leftover) components, respectively. Their magnitudes are denoted as \(v_{n,mk}\) and \(v_{n}^{\rm L}\). Besides, \(\chi_{n,mk}\) is the nonlinear coefficient representing the strength of the nonlinear response from lower order eccentricities [63]. The correlation between different order flow symmetry planes can be studied by calculating the ratio between \(v_{n,mk}\) and \(v_{n}\{2\}\) [64]: \[\begin{split}\rho_{4,22}&=\frac{v_{4,22}}{v_{4}\{2\}},\\ \rho_{5,32}&=\frac{v_{5,32}}{v_{5}\{2\}}\end{split} \tag{11}\] \(\rho_{4,22}\) and \(\rho_{5,32}\) can be used to study the correlations between \(\Psi_{2}\) and \(\Psi_{4}\), as well as the correlations between the three planes \(\Psi_{2}\), \(\Psi_{3}\) and \(\Psi_{5}\). Studies of nonlinear flow modes, i.e., \(v_{n,mk}\), \(\rho_{n,mk}\), and \(\chi_{n,mk}\), have been performed before; they can provide further constraints on the initial conditions [61; 14; 65].
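As a worked example of Eq. (9), here is a minimal sketch converting measured \(v_{2}\{2\}\) and \(v_{2}\{4\}\) into the mean flow and its fluctuation (the input values are made up for illustration):

```python
import math

def mean_and_fluctuation(v2_2: float, v2_4: float):
    """Eq. (9): extract (mean v2, sigma_v2) from the two- and
    four-particle cumulant flow coefficients."""
    mean_v2 = math.sqrt((v2_2**2 + v2_4**2) / 2.0)
    sigma_v2 = math.sqrt((v2_2**2 - v2_4**2) / 2.0)
    return mean_v2, sigma_v2

# illustrative (made-up) mid-central values: v2{2} = 0.10, v2{4} = 0.08
print(mean_and_fluctuation(0.10, 0.08))  # -> approximately (0.091, 0.042)
```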
The correlations between \(v_{n}^{2}\) and \(v_{m}^{2}\) can be quantified via normalized symmetric cumulants NSC(\(m,n\)), defined as [14]: \[\text{NSC}(m,n)=\frac{\langle v_{m}^{2}\,v_{n}^{2}\rangle-\langle v_{m}^{2}\rangle\langle v_{n}^{2}\rangle}{\langle v_{m}^{2}\rangle\langle v_{n}^{2}\rangle}, \tag{12}\] where the angular bracket \(\langle\rangle\) represents an average over all events. It allows one to study whether \(v_{n}^{2}\) and \(v_{m}^{2}\) are correlated, anti-correlated or uncorrelated, corresponding to NSC(\(m,n\)) \(>0\), \(<0\) and \(=0\), respectively. In this paper, NSC(\(3,2\)) and NSC(\(4,2\)) will be studied with the AMPT model to see if the results could bring extra information about the initial conditions and the structure of \({}^{129}\)Xe.

### Multi-particle correlation

All the flow observables introduced in section 3.1 can be obtained via the multi-particle correlation method [56; 57; 58; 14]. To begin with, the flow coefficient \(v_{n}\) can be calculated using two-particle correlations: \[v_{n}\{2\}\equiv\sqrt{c_{n}\{2\}}=\langle\langle\cos n(\varphi_{1}-\varphi_{2})\rangle\rangle^{1/2}, \tag{13}\] where \(\varphi_{1}\) and \(\varphi_{2}\) are azimuthal angles from different particles. Double brackets \(\langle\langle\rangle\rangle\) denote the average over all particles in an event and then the average over all events. For multi-particle cumulants of \(v_{n}\) [58]: \[\begin{split} v_{n}\{4\}&\equiv\sqrt[4]{-c_{n}\{4\}},\\ v_{n}\{6\}&\equiv\sqrt[6]{\frac{1}{4}c_{n}\{6\}},\\ v_{n}\{8\}&\equiv\sqrt[8]{-\frac{1}{33}c_{n}\{8\}},\end{split} \tag{14}\] where: \[\begin{split} c_{n}\{4\}&=\langle v_{n}^{4}\rangle-2\langle v_{n}^{2}\rangle^{2},\\ c_{n}\{6\}&=\langle v_{n}^{6}\rangle-9\langle v_{n}^{4}\rangle\langle v_{n}^{2}\rangle+12\langle v_{n}^{2}\rangle^{3},\\ c_{n}\{8\}&=\langle v_{n}^{8}\rangle-16\langle v_{n}^{6}\rangle\langle v_{n}^{2}\rangle-18\langle v_{n}^{4}\rangle^{2}+144\langle v_{n}^{4}\rangle\langle v_{n}^{2}\rangle^{2}-144\langle v_{n}^{2}\rangle^{4},\end{split} \tag{15}\] and \[\begin{split}\langle v_{n}^{4}\rangle&=\langle\langle\cos(n\varphi_{1}+n\varphi_{3}-n\varphi_{2}-n\varphi_{4})\rangle\rangle,\\ \langle v_{n}^{6}\rangle&=\langle\langle\cos(n\varphi_{1}+n\varphi_{3}+n\varphi_{5}-n\varphi_{2}-n\varphi_{4}-n\varphi_{6})\rangle\rangle,\\ \langle v_{n}^{8}\rangle&=\langle\langle\cos(n\varphi_{1}+n\varphi_{3}+n\varphi_{5}+n\varphi_{7}-n\varphi_{2}-n\varphi_{4}-n\varphi_{6}-n\varphi_{8})\rangle\rangle.\end{split} \tag{16}\] The magnitude of the nonlinear flow mode \(v_{n,mk}\) can be obtained via multi-particle correlations [63; 64]. For \(n=4,5\), which is studied in this paper, we have: \[\begin{split} v_{4,22}&=\frac{\langle\langle\cos(4\varphi_{1}-2\varphi_{2}-2\varphi_{3})\rangle\rangle}{\sqrt{\langle\langle\cos(2\varphi_{1}+2\varphi_{2}-2\varphi_{3}-2\varphi_{4})\rangle\rangle}},\\ v_{5,32}&=\frac{\langle\langle\cos(5\varphi_{1}-3\varphi_{2}-2\varphi_{3})\rangle\rangle}{\sqrt{\langle\langle\cos(3\varphi_{1}+2\varphi_{2}-3\varphi_{3}-2\varphi_{4})\rangle\rangle}}.\end{split} \tag{17}\] They quantify the magnitude of the nonlinear mode in high-order flow coefficients.
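A minimal sketch of the direct Q-vector evaluation behind Eqs. (13)-(16) is given below. The single-event formulas with self-correlations removed follow the standard direct-cumulant method (they are not written out explicitly in this paper, so treat them as an assumed standard ingredient); the event average and the cumulant combination of Eq. (15) are then taken over many events:

```python
import numpy as np

def single_event_correlations(phi: np.ndarray, n: int):
    """Single-event <2> and <4> for harmonic n via Q-vectors,
    with self-correlations removed."""
    M = len(phi)
    qn = np.sum(np.exp(1j * n * phi))
    q2n = np.sum(np.exp(2j * n * phi))
    two = (np.abs(qn)**2 - M) / (M * (M - 1))
    four = (np.abs(qn)**4 + np.abs(q2n)**2
            - 2 * np.real(q2n * np.conj(qn)**2)
            - 2 * (2 * (M - 2) * np.abs(qn)**2 - M * (M - 3)))
    four /= M * (M - 1) * (M - 2) * (M - 3)
    return two, four

# Over events: c_n{2} = <<2>>, c_n{4} = <<4>> - 2 <<2>>^2, then Eqs. (6) and (14).
```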
By subtracting \(v_{4,22}\) and \(v_{5,32}\) in quadrature from \(v_{4}\) and \(v_{5}\), respectively, we can easily calculate the magnitudes of the linear modes: \[\begin{split} v_{4}^{\mathrm{L}}&=\sqrt{v_{4}^{2}\{2\}-v_{4,22}^{2}},\\ v_{5}^{\mathrm{L}}&=\sqrt{v_{5}^{2}\{2\}-v_{5,32}^{2}}.\end{split} \tag{18}\] The nonlinear coefficient \(\chi_{n,mk}\) also describes the nonlinear contributions but is independent of \(v_{2}\) and \(v_{3}\). It can be derived by taking the ratio of \(v_{n,mk}\) and the corresponding lower-order \(v_{n}\) (\(n=2,3\)): \[\begin{split}\chi_{4,22}&=\frac{v_{4,22}}{\sqrt{\langle v_{2}^{4}\rangle}}=\frac{\langle\langle\cos(4\varphi_{1}-2\varphi_{2}-2\varphi_{3})\rangle\rangle}{\langle\langle\cos(2\varphi_{1}+2\varphi_{2}-2\varphi_{3}-2\varphi_{4})\rangle\rangle},\\ \chi_{5,32}&=\frac{v_{5,32}}{\sqrt{\langle v_{2}^{2}v_{3}^{2}\rangle}}=\frac{\langle\langle\cos(5\varphi_{1}-3\varphi_{2}-2\varphi_{3})\rangle\rangle}{\langle\langle\cos(3\varphi_{1}+2\varphi_{2}-3\varphi_{3}-2\varphi_{4})\rangle\rangle}.\end{split} \tag{19}\] The \(\rho_{n,mk}\), the correlation between different order flow symmetry planes, can be obtained by combining Eqs. (13) and (17): \[\begin{split}\rho_{4,22}&=\frac{v_{4,22}}{v_{4}\{2\}}=\frac{\langle\langle\cos(4\varphi_{1}-2\varphi_{2}-2\varphi_{3})\rangle\rangle}{\sqrt{\langle\langle\cos(2\varphi_{1}+2\varphi_{2}-2\varphi_{3}-2\varphi_{4})\rangle\rangle\langle\langle\cos(4\varphi_{1}-4\varphi_{2})\rangle\rangle}},\\ \rho_{5,32}&=\frac{v_{5,32}}{v_{5}\{2\}}=\frac{\langle\langle\cos(5\varphi_{1}-3\varphi_{2}-2\varphi_{3})\rangle\rangle}{\sqrt{\langle\langle\cos(3\varphi_{1}+2\varphi_{2}-3\varphi_{3}-2\varphi_{4})\rangle\rangle\langle\langle\cos(5\varphi_{1}-5\varphi_{2})\rangle\rangle}}.\end{split} \tag{20}\] Expanding Eq. (12) with multi-particle correlations, the normalized symmetric cumulants \(\mathrm{NSC}(m,n)\) can be expressed as: \[\mathrm{NSC}(m,n)=\frac{\langle\langle\cos(m\varphi_{1}+n\varphi_{3}-m\varphi_{2}-n\varphi_{4})\rangle\rangle-\langle\langle\cos(m\varphi_{1}-m\varphi_{3})\rangle\rangle\langle\langle\cos(n\varphi_{2}-n\varphi_{4})\rangle\rangle}{\langle\langle\cos(m\varphi_{1}-m\varphi_{3})\rangle\rangle\langle\langle\cos(n\varphi_{2}-n\varphi_{4})\rangle\rangle}.\] All these observables are now expressed in terms of two- and multi-particle correlations and can thus be calculated using the _Generic Framework_ [14; 66] or its latest implementation, the _Generic Algorithm_ [58].

## 4 Results

### Study on the nuclear deformation

The centrality dependence of \(v_{2}\{2\}\) and \(v_{3}\{2\}\) in Xe-Xe collisions at 5.44 TeV is shown in Fig. 1. Here, \(v_{2}\{2\}\) increases significantly in the Ultra-Central Collisions (UCC) region when changing \(\beta_{2}\) from 0 (red diamonds) to 0.18 (the other markers). In the UCC region, where the two colliding nuclei almost fully overlap, the eccentricity of the overlapping region is determined by the shape or nuclear structure of the colliding nuclei. As is well known, the \(\beta_{2}\) parameter characterizes the ellipsoidal deformation of the Woods-Saxon nucleon density profile. A non-zero \(\beta_{2}\) of \({}^{129}\)Xe enhances the initial eccentricity \(\varepsilon_{2}\) compared to the one with a spherical shape where \(\beta_{2}=0\), and consequently leads to an increase in \(v_{2}\) because of the linear correlation between \(\varepsilon_{n}\) and \(v_{n}\), i.e., \(v_{n}\propto\varepsilon_{n}\), for \(n=2\) or 3 [67, 68].
In Fig. 1(b), \(v_{3}\{2\}\) shows negligible differences when changing the \(\beta_{2}\) values, because the triangularity \(\varepsilon_{3}\) of the overlapping region does not depend on \(\beta_{2}\) [24]. Figure 1 also shows consistent results of \(v_{n}\{2\}\) (\(n=2,3\)) with variations in the triaxial parameter \(\gamma\) across the 0-30% centrality range. This is not a surprise, as probing the 3-D structure usually requires correlations involving more than two particles. In light of preceding investigations [22], a potential sensitivity of \(\varepsilon_{2}\) to \(\gamma\) was indicated for the 0-0.2% centrality range, as concluded from initial-state model simulations. However, the current study, which relies on the final-state information of the AMPT model, does not probe such phenomena: the computational demands of simulating the full dynamic evolution in AMPT for such a narrow centrality class are prohibitive.

Figure 1: Centrality dependence of \(v_{n}\{2\}\) (\(n=2,3\)) in Xe–Xe collisions at \(\sqrt{s_{\rm NN}}\) = 5.44 TeV in AMPT.

The value of \(v_{2}\{2\}\) receives contributions not only from the initial eccentricity \(\varepsilon_{2}\) but also from the event-by-event eccentricity fluctuations. To explore the effects of the initial event-by-event eccentricity fluctuations and, correspondingly, the final-state elliptic flow fluctuations, both \(v_{2}\{2\}\) and \(v_{2}\{4\}\) are used. This combined approach is particularly insightful as these two observables carry opposite contributions from flow fluctuations, as shown in Eq. (8). As a result, the centrality dependence of \(v_{2}\{4\}\), as well as \(\bar{v}_{2}\) and \(\sigma_{v_{2}}\), in Xe-Xe collisions at 5.44 TeV is presented in Fig. 2. As shown in Fig. 2(a), \(v_{2}\{4\}\) exhibits a slight increase when changing \(\beta_{2}\) from 0 (red diamonds) to 0.18 (the other markers), despite the sizable statistical uncertainties in the presented centrality region. It was previously suggested that \(v_{2}\{4\}\) is influenced by the nuclear diffuseness \(a_{0}\), with a weak sensitivity to \(\beta_{2}\), using different nuclei at the RHIC energy [25]. This aligns with our AMPT results shown in Figs. 2(a) and 9(a). In Figs. 2(b) and 2(c), \(\bar{v}_{2}\) and \(\sigma_{v_{2}}\) show significant increases in the UCC region when changing \(\beta_{2}\) from 0 to 0.18. The sensitivity of \(\bar{v}_{2}\) to \(\beta_{2}\) primarily arises from its linear correlation with \(\varepsilon_{2}\). In the case of the flow fluctuations \(\sigma_{v_{2}}\), a deformed nucleus assumes varied orientations compared to a spherical shape. These random orientations result in stronger fluctuations of the elliptic flow. In Fig. 2, \(v_{2}\{4\}\), \(\bar{v}_{2}\), and \(\sigma_{v_{2}}\) stay unchanged with variations in \(\gamma\). These observables therefore do not lend themselves to constraining the triaxial parameter characterizing the nucleus \({}^{129}\)Xe. An evident sensitivity of \(v_{2}\{6\}\) to \(\beta_{3}\) is reported in a previous study [25]. The same work also discusses the correlation between the sensitivity to \(\beta_{n}\) and the number of particles used in the multi-particle correlations. Following the same concept, it is expected that observables like \(v_{2}\{6\}\) and \(v_{2}\{8\}\) may yield distinctive insights into the nuclear structure, compared to \(v_{2}\{2\}\) and \(v_{2}\{4\}\) discussed above.
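Before turning to the higher-order cumulants, here is a minimal sketch of how \(v_{2}\{4\}\), \(v_{2}\{6\}\) and \(v_{2}\{8\}\) follow from the event-averaged moments via Eqs. (14)-(15):

```python
def vn_from_moments(m2, m4, m6, m8):
    """Eqs. (14)-(15): v_n{4}, v_n{6}, v_n{8} from event-averaged moments
    m2k = <v_n^{2k}> (assumed already corrected for self-correlations,
    e.g. via the Q-vector method above). Assumes the cumulants carry the
    signs expected for flow (c4 < 0, c6 > 0, c8 < 0)."""
    c4 = m4 - 2 * m2**2
    c6 = m6 - 9 * m4 * m2 + 12 * m2**3
    c8 = m8 - 16 * m6 * m2 - 18 * m4**2 + 144 * m4 * m2**2 - 144 * m2**4
    vn4 = (-c4) ** 0.25
    vn6 = (c6 / 4) ** (1 / 6)
    vn8 = (-c8 / 33) ** 0.125
    return vn4, vn6, vn8
```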
The centrality dependence of \(v_{2}\{6\}\) and \(v_{2}\{8\}\) is shown in Fig. 3. Given the sizable uncertainties, the current study refrains from drawing a firm conclusion regarding the sensitivity of these two observables to either \(\beta_{2}\) or \(\gamma\) across the 0-30% centrality interval. The \(v_{n}\) from multi-particle correlations, i.e., \(v_{2}\{4\}\), \(v_{2}\{6\}\), and \(v_{2}\{8\}\), were reported to exhibit a slight difference at the precision of 1-2% in Pb-Pb collisions [69]. This difference is believed to originate from deviations from a Bessel-Gaussian shape, particularly a non-zero skewness of the event-by-event \(v_{2}\) distribution. To determine whether a similar pattern persists in Xe-Xe collisions and to explore the discrepancies between higher-order multi-particle cumulants, the ratios \(v_{2}\{m\}/v_{2}\{4\}\) for \(m=\) 2, 6, and 8 are presented as a function of centrality in Fig. 4. The ratio \(v_{2}\{2\}/v_{2}\{4\}\) increases dramatically toward the central region in Fig. 4(a). Such an increase is dominated by flow fluctuations in the most central region. Although the sensitivity of \(v_{2}\{2\}/v_{2}\{4\}\) to \(\beta_{2}\) and \(\gamma\) is obscured by uncertainties, a weak sensitivity to \(\beta_{2}\) was observed previously in a similar study in U-U collisions at \(\sqrt{s_{\rm NN}}=193\) GeV [26]. Subsequent investigations with increased statistics are anticipated to elucidate the presence of this sensitivity; however, the current study is constrained by the substantial computing demands involved. Concerning higher-order cumulants with \(m=6,8\), the ratios \(v_{2}\{6\}/v_{2}\{4\}\) and \(v_{2}\{8\}/v_{2}\{4\}\) are close to unity within uncertainties, and they stay unchanged when varying the nuclear structure configurations \(\beta_{2}\) and \(\gamma\).

Figure 2: Centrality dependence of \(v_{2}\{4\},\bar{v}_{2},\sigma_{v_{2}}\) in Xe–Xe collisions at \(\sqrt{s_{\rm NN}}=\) 5.44 TeV in AMPT.

Figure 3: Centrality dependence of \(v_{2}\{6\},v_{2}\{8\}\) in Xe–Xe collisions at \(\sqrt{s_{\rm NN}}\) = 5.44 TeV in AMPT.

Figure 4: Centrality dependence of \(v_{2}\{2\}/v_{2}\{4\}\) in Xe–Xe collisions at \(\sqrt{s_{\rm NN}}\) = 5.44 TeV in AMPT.

Higher harmonic flow (\(n>3\)), such as \(v_{4}\), consists of a linear component \(v_{4}^{\rm L}\) stemming from the initial \(\varepsilon_{4}\) and a nonlinear component \(v_{4}^{\rm NL}\) originating from \(\varepsilon_{2}^{2}\). The centrality dependences of \(v_{4}\{2\}\), \(v_{4}^{\rm L}\), and \(v_{4}^{\rm NL}\) (denoted as \(v_{4,22}\)) are presented in Fig. 5. In Figs. 5(a) and 5(b), \(v_{4}\{2\}\) and \(v_{4,22}\) exhibit an increasing trend moving from central to mid-central collisions, while \(v_{4}^{\rm L}\) in Fig. 5(c) has a flat distribution in the presented centrality region. Notably, a comparison of the magnitudes of \(v_{4}\{2\}\) and \(v_{4}^{\rm L}\) reveals the dominance of \(v_{4}^{\rm L}\) over \(v_{4,22}\) in central collisions, while in the mid-central region \(v_{4,22}\) converges with \(v_{4}^{\rm L}\). This is consistent with previous measurements in Pb-Pb collisions at the LHC [64]. The linear component \(v_{4}^{\rm L}\) shows an absence of sensitivity to the quadrupole deformation parameter \(\beta_{2}\) across the 0-30% centrality range. A similar observation has been reported recently in Ref. [26], where both \(v_{4}^{\rm L}\) and \(v_{4}\{2\}\) show no sensitivity to \(\beta_{2}\).
In contrast, the sensitivity of both \(v_{4,22}\) and \(v_{4}^{\rm L}\) to the hexadecapole deformation parameter \(\beta_{4}\) is seen [26], owing to their shared provenance from \(\varepsilon_{4}\). The nonlinear component \(v_{4,22}\), as illustrated in Fig. 5(b), shows a reduction of about 20% for zero \(\beta_{2}\) within the UCC region, albeit with a magnitude smaller than that of \(v_{4}\{2\}\). Since the nonlinear flow \(v_{n,mk}\) probes a smaller spatial distribution than the flow coefficient \(v_{n}\), it can impose more stringent constraints on initial conditions and nuclear structure parameters. Variations in the \(\gamma\) parameter show that \(v_{4,22}\), \(v_{4}^{\rm L}\), and \(v_{4}\{2\}\) all exhibit diminished sensitivities to \(\gamma\) within the 0-30% centrality range, and thus they are unable to probe the triaxial nuclear structure.

Figure 5: Centrality dependence of nonlinear modes including (a) \(v_{4}\), (b) \(v_{4,22}\) and (c) linear \(v_{4}\) in Xe–Xe collisions at \(\sqrt{s_{\rm NN}}\) = 5.44 TeV in AMPT.

The nonlinear coefficient \(\chi_{4,22}\), introduced in Eq. (10), is not solely affected by transport properties but also by initial conditions [70]. It was previously found to exhibit a weak dependence on the nuclear deformation \(\beta_{2}\) in central collisions at the RHIC isobar runs [26; 27; 71]. Verifying whether the same conclusion holds in the context of Xe-Xe collisions remains pertinent. The nonlinear correlation \(\rho_{4,22}\), introduced in Eq. (11), demonstrates an insensitivity to the influences of final state interactions [70], rendering it an effective tool for probing initial conditions. In Fig. 6, the centrality dependences of \(\chi_{4,22}\) and \(\rho_{4,22}\) are presented. In panel (a), \(\chi_{4,22}\) stays unchanged with different nuclear structure configurations in the presented centrality region, despite the large uncertainties in the central collisions. Considering the relationship \(V_{4}^{\rm NL}=\chi_{4,22}\,(V_{2})^{2}\), the sensitivity of \(v_{4,22}\) to \(\beta_{2}\) primarily stems from the sensitivity conveyed by \((V_{2})^{2}\). A similar scenario can be observed in the case of \(v_{5,32}\) and \(\chi_{5,32}\), as shown in Fig. 15 in the appendix, suggesting that the sensitivity arises from \(v_{2}v_{3}\). Furthermore, the results in Fig. 1 have already indicated that \(v_{3}\) is insensitive to \(\beta_{2}\), thereby emphasizing that the dominant sensitivity of \(v_{5,32}\) to \(\beta_{2}\) emerges from \(v_{2}\). In Fig. 6(b), \(\rho_{4,22}\) shows a distinct drop with \(\beta_{2}=0\) in the UCC region. This reduction is caused by the behaviour of \(v_{4,22}\) in Fig. 5(b) with \(\beta_{2}=0\), as \(\rho_{4,22}=v_{4,22}/v_{4}\{2\}\) and no such sensitivity is observed in \(v_{4}\{2\}\). From a physics viewpoint, \(\rho_{4,22}\) signifies the correlation between \(\Psi_{4}\) and \(\Psi_{2}\). When the initial geometry is spherical (\(\beta_{2}=0\)), \(\Psi_{4}\) and \(\Psi_{2}\) can take arbitrary orientations and thus have negligible correlation. When the geometry has an anisotropy (\(\beta_{2}>0\)), they acquire specific orientations, and as a result their correlation increases. When changing the triaxial parameter \(\gamma\), the results for \(\chi_{4,22}\) and \(\rho_{4,22}\) remain consistent within errors, so neither serves as an ideal probe of the triaxial structure of \({}^{129}\)Xe.
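To summarize the decomposition used in this subsection, here is a minimal sketch implementing Eqs. (18) and (11), with made-up input values for illustration:

```python
import math

def decompose_v4(v4_2: float, v4_22: float):
    """Split v4{2} into its linear part (Eq. (18)) and compute the
    symmetry-plane correlation rho_{4,22} (Eq. (11))."""
    v4_lin = math.sqrt(v4_2**2 - v4_22**2)
    rho_422 = v4_22 / v4_2
    return v4_lin, rho_422

# illustrative (made-up) values: central-like vs mid-central-like behaviour
print(decompose_v4(0.020, 0.006))   # linear-dominated (central-like)
print(decompose_v4(0.030, 0.020))   # sizable nonlinear part (mid-central-like)
```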
The normalized symmetric cumulant (NSC) quantifies the correlation between flow coefficients of different order and has been systematically studied in Pb-Pb collisions [72; 73; 74]. It was found that NSC(\(3,2\)) is insensitive to the dynamic evolution but carries a unique sensitivity to the initial conditions [72; 73]; it is therefore potentially a good probe of the nuclear structure. In fact, NSC(\(3,2\)) has previously been studied in U-U collisions, where it showed sensitivity to \(\beta_{2}\) [26]. NSC(\(4,2\)), by contrast, is sensitive to both initial conditions and transport properties [72; 73]. In Figure 7(a), a small and statistically insignificant increase in NSC(\(3,2\)) can be observed when decreasing the \(\beta_{2}\) value from 0.18 to 0, and no significant variation is observed when changing \(\gamma\). This indicates that the anti-correlation between \(\langle v_{3}^{2}\rangle\) and \(\langle v_{2}^{2}\rangle\) is reduced by quadrupole deformation and is independent of \(\gamma\). In Fig. 7(b), NSC(\(4,2\)) exhibits little to no sensitivity to either \(\beta_{2}\) or \(\gamma\), suggesting that the correlation between \(\langle v_{4}^{2}\rangle\) and \(\langle v_{2}^{2}\rangle\) remains largely independent of the nuclear deformation.

Figure 6: Centrality dependence of nonlinear modes including (a) \(\chi_{4,22}\) and (b) \(\rho_{4,22}\) in Xe–Xe collisions at \(\sqrt{s_{\rm NN}}\) = 5.44 TeV in AMPT.

Figure 7: Centrality dependence of NSC(\(m,n\)) in Xe–Xe collisions at \(\sqrt{s_{\rm NN}}=5.44\) TeV in AMPT.
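The NSC estimator itself is short enough to sketch. The following assumes the usual definition, NSC\((m,n)=(\langle v_{m}^{2}v_{n}^{2}\rangle-\langle v_{m}^{2}\rangle\langle v_{n}^{2}\rangle)/(\langle v_{m}^{2}\rangle\langle v_{n}^{2}\rangle)\), with hypothetical event-by-event inputs; real measurements build these ingredients from four- and two-particle correlators:

```python
import numpy as np

def nsc(vm_sq, vn_sq):
    """Normalized symmetric cumulant from event-by-event v_m^2, v_n^2."""
    cov = np.mean(vm_sq * vn_sq) - np.mean(vm_sq) * np.mean(vn_sq)
    return cov / (np.mean(vm_sq) * np.mean(vn_sq))

rng = np.random.default_rng(1)
shared = rng.normal(size=100_000)  # common geometry fluctuation
v2_sq = np.clip(4.0e-3 + 1.0e-3 * shared, 1e-8, None)
v3_sq = np.clip(1.5e-3 - 3.0e-4 * shared, 1e-8, None)  # anti-correlated
print("NSC(3,2) ~", round(nsc(v3_sq, v2_sq), 3))       # negative by construction
```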
### Study on the nuclear diffuseness

The radial profile of the nucleus is determined by the nuclear diffuseness \(a_{0}\) together with the radius \(R_{0}\), both of which impact the initial spatial anisotropy and consequently influence the flow observables. Recent studies have highlighted the significant impact of \(a_{0}\) on various flow observables in the mid-central region (20-60% centrality) [25], underscoring the importance of verifying this within the context of Xe-Xe collisions using the AMPT model. This section explores the potential of flow observables for investigating the \(a_{0}\) introduced in Eq. (4). Generally, a larger nuclear diffuseness \(a_{0}\) results in a significantly increased total hadronic cross-section, leading to a more diffuse QGP. Fig. 8 shows the centrality dependence of \(v_{2}\{2\}\) and \(v_{3}\{2\}\) in Xe-Xe collisions at 5.44 TeV using two distinct values of \(a_{0}\). Calculations with the larger \(a_{0}\) value (shown by black markers) lead to smaller \(v_{2}\) in semi-central collisions, in agreement with the results from the isobar runs of Ru-Ru/Zr-Zr [25]. Meanwhile, no obvious change in \(v_{3}\) is seen when changing the \(a_{0}\) value across the 0-60% centrality range.

Figure 8: Centrality dependence of \(v_{n}\{2\}(n=2,3)\) in Xe–Xe collisions at \(\sqrt{s_{\rm NN}}=5.44\) TeV in AMPT.

As discussed in section 4.1, the increase of \(\sigma_{v_{2}}\) is mainly due to the random orientations of deformed nuclei and is expected to be less influenced by the diffuseness of the nuclei. To verify this, the results for \(v_{2}\{4\}\), \(\bar{v}_{2}\), and \(\sigma_{v_{2}}\) are presented in Fig. 9. It is observed that \(v_{2}\{4\}\) and \(\bar{v}_{2}\) are reduced by the larger \(a_{0}\) in the semi-central region, while \(\sigma_{v_{2}}\) remains the same. This observation aligns with the earlier discussion, affirming that the diffuseness primarily impacts the eccentricity rather than its fluctuations. In exploring \(a_{0}\), we can again split \(v_{4}\) into its linear and nonlinear components to see how they respond to changes in \(a_{0}\). Figure 10 shows the centrality dependence of \(v_{4}\{2\}\) and its linear and nonlinear components. A larger \(a_{0}\) value reduces both \(v_{4}\{2\}\) and \(v_{4,22}\) in the 20-40% centrality range. Figure 10(c) does not reveal any sensitivity of \(v_{4}^{\rm L}\) to \(a_{0}\), owing to the large uncertainties. The reduction of the nonlinear part \(v_{4,22}\) propagates into \(v_{4}\{2\}\); this is different from what was seen for \(\beta_{2}\). Moreover, \(v_{4,22}\) is much smaller than \(v_{4}^{\rm L}\) in the most central region, which is why no effect of \(\beta_{2}\) on \(v_{4}\{2\}\) is visible even though \(\beta_{2}\) increases \(v_{4,22}\) significantly in that region. For \(a_{0}\), more sensitivity is found in mid-central collisions, where the nonlinear component \(v_{4,22}\) is about half of the total \(v_{4}\{2\}\); its variation with \(a_{0}\) therefore has a larger effect on \(v_{4}\{2\}\) in this region.

Figure 10: Centrality dependence of nonlinear modes including (a) \(v_{4}\), (b) \(v_{4,22}\) and (c) linear \(v_{4}\) in Xe–Xe collisions at \(\sqrt{s_{\rm NN}}\) = 5.44 TeV in AMPT.

As also introduced in section 4.1, \(\chi_{4,22}\) is affected not only by the dynamic evolution but also by the initial conditions, while \(\rho_{4,22}\) depends only on the initial state. It is thus crucial to check whether the nonlinear modes can probe the initial conditions, particularly the diffuseness of the nuclei. Here \(\chi_{4,22}\) and \(\rho_{4,22}\) for the two \(a_{0}\) values are presented in Fig. 11. Neither of them is affected by \(a_{0}\) in the 0-60% centrality region. As \(\rho_{4,22}=v_{4,22}/v_{4}\{2\}\), the sensitivity to \(a_{0}\) is, to a large extent, canceled by taking this ratio. The insensitivity of \(\chi_{4,22}\) to \(a_{0}\) leads to a conclusion similar to that in the discussion of \(\beta_{2}\): the sensitivity of \(v_{4,22}\) to \(a_{0}\) is mainly due to the \((V_{2})^{2}\) term in \(V_{4}^{\rm NL}\approx\chi_{4,22}(V_{2})^{2}\). Because of the limited statistics, there are large uncertainties in the results for NSC(3,2) and NSC(4,2), and no firm conclusion about their sensitivity to the \(a_{0}\) parameter can be drawn. Those results are therefore not discussed here but are provided in the appendix.

Figure 11: Centrality dependence of nonlinear modes including (a) \(\chi_{4,22}\) and (b) \(\rho_{4,22}\) in Xe–Xe collisions at \(\sqrt{s_{\rm NN}}\) = 5.44 TeV in AMPT.

### Influence from the transport properties

The results shown in this study have indicated that \(v_{2}\{2\}\), \(\bar{v}_{2}\), \(\sigma_{v_{2}}\), \(v_{4,22}\), and \(\rho_{4,22}\) can potentially probe the nuclear deformation, while \(v_{2}\{2\}\), \(v_{2}\{4\}\), \(\bar{v}_{2}\), \(v_{4}\{2\}\), and \(v_{4,22}\) can probe the nuclear diffuseness \(a_{0}\). In order to achieve unbiased constraints on the nuclear structure parameters in experiments, one must ensure that the chosen observables are sensitive only to the initial conditions and are not influenced by the dynamic evolution of the created system. In particular, it has been found in the RHIC isobar runs that ratios of observables in Zr-Zr and Ru-Ru can largely cancel out the effects of final-state interactions and thus mainly reflect the impact of the initial stages. Here we present the observables mentioned above (\(v_{2}\{2\}\), \(v_{2}\{4\}\), \(\bar{v}_{2}\), \(\sigma_{v_{2}}\), \(v_{4}\{2\}\), \(v_{4,22}\), \(\rho_{4,22}\)) as ratios of Xe-Xe to Pb-Pb, checking whether the ratio of the two collision systems minimizes the influence of the dynamic evolution.
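The system-ratio construction is simple to implement; a sketch with hypothetical inputs (uncorrelated-error propagation is assumed, which is an approximation when the two cross-section samples share settings):

```python
import numpy as np

def ratio(a, da, b, db):
    """r = a/b with uncorrelated errors: dr = r*sqrt((da/a)^2 + (db/b)^2)."""
    r = a / b
    return r, r * np.hypot(da / a, db / b)

# Hypothetical v2{2} in one centrality bin for Xe-Xe and Pb-Pb,
# each computed with 3 mb and 6 mb partonic cross sections.
xe3, pb3 = (0.072, 0.001), (0.066, 0.001)
xe6, pb6 = (0.078, 0.001), (0.071, 0.001)

r3 = ratio(*xe3, *pb3)   # Xe-Xe / Pb-Pb at 3 mb
r6 = ratio(*xe6, *pb6)   # Xe-Xe / Pb-Pb at 6 mb
dr = ratio(*r3, *r6)     # "double ratio": 3 mb over 6 mb
print(f"double ratio = {dr[0]:.3f} +- {dr[1]:.3f}")  # ~1 if evolution cancels
```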
In Fig. 12, the centrality dependence of \(v_{2}\{2\}\), \(v_{2}\{4\}\), \(\bar{v}_{2}\), and \(\sigma_{v_{2}}\) in the ratio of Xe-Xe to Pb-Pb collisions is presented, using two different partonic cross sections. The bottom panels of each figure show the ratio of the results obtained with 3 mb and 6 mb (the "double ratio"). The ratios in Fig. 12 are all consistent with unity, indicating that these observables, including \(v_{2}\{2\}\), \(v_{2}\{4\}\), \(\bar{v}_{2}\), and \(\sigma_{v_{2}}\), in the ratio of Xe-Xe to Pb-Pb are less dependent on the dynamic evolution and are potentially good probes of the initial conditions. In addition, \(v_{4}\{2\}\) and its nonlinear modes \(v_{4,22}\) and \(\rho_{4,22}\) in the ratio of Xe-Xe to Pb-Pb are also investigated; their centrality dependence is presented in Fig. 13. Despite large uncertainties, the results with different partonic cross sections are consistent with each other, and the "double ratio" is compatible with unity. Such results show that \(v_{4}\{2\}\), \(v_{4,22}\), and \(\rho_{4,22}\) in the ratio of Xe-Xe to Pb-Pb might not be affected by the dynamic evolution. Considering that \(v_{4}\{2\}\) and \(v_{4,22}\) are sensitive to \(a_{0}\) in the mid-central region, while \(v_{4,22}\) and \(\rho_{4,22}\) are enhanced by non-zero \(\beta_{2}\) in the most central region, these observables are valuable for the exploration of initial conditions and nuclear structure parameters.

## 5 Summary

This paper presents a comprehensive exploration of the influence of nuclear deformation and nuclear diffuseness on various flow observables in Xe-Xe collisions at \(\sqrt{s_{\rm NN}}=5.44\) TeV, using the AMPT event generator. We observe that the elliptic flow coefficients \(v_{2}\{2\}\) and \(\bar{v}_{2}\), the elliptic flow fluctuation \(\sigma_{v_{2}}\), as well as the nonlinear flow mode observables \(v_{4,22}\) and \(\rho_{4,22}\), are enhanced in central collisions in the presence of a nuclear quadrupole deformation \(\beta_{2}\). These enhancements come from the increased eccentricities and their fluctuations in the initial geometry in the presence of a deformed shape of \({}^{129}\)Xe. Thus, the aforementioned flow observables can potentially constrain the nuclear deformation through data-model comparisons in high-energy heavy-ion collisions in the near future. On the other hand, most flow observables do not exhibit much sensitivity to the triaxiality parameter, underscoring the importance of utilizing combined flow observables like \(v_{n}-[p_{\rm T}]\) correlations in future studies. In addition, a larger nuclear diffuseness \(a_{0}\) for \({}^{129}\)Xe leads to smaller values of \(v_{2}\{2\}\), \(\bar{v}_{2}\), \(v_{4}\{2\}\), \(v_{4,22}\), and the \(v_{2}\) from multi-particle correlations (\(v_{2}\{4\},v_{2}\{6\},v_{2}\{8\}\)) in mid-central collisions. Among them, \(v_{4}\{2\}\), \(v_{2}\{4\}\), \(v_{2}\{6\}\), and \(v_{2}\{8\}\) are only sensitive to \(a_{0}\), while \(\sigma_{v_{2}}\) and \(\rho_{4,22}\) are only sensitive to \(\beta_{2}\). The differing sensitivities of these observables allow for separate constraints on the values of \(a_{0}\) and \(\beta_{2}\), which will help fine-tune other model parameters and enable more precise predictions.
We also investigate the influence of the dynamic evolution on the flow observables by varying the parton cross-section in the AMPT model. The parton cross-section does not significantly affect most of the observables in the ratio of Xe-Xe to Pb-Pb, indicating that final-state effects largely cancel when comparing the two collision systems. Future collisions of different nuclear species at varying energies will provide more insights into heavy-ion collisions and improve our knowledge of nuclear structure.

## 6 Acknowledgements

M. Zhao and Y. Zhou are funded by the European Union (ERC, InitialConditions), VILLUM FONDEN (grant number 00025462), and Danmarks Frie Forskningsfond (Independent Research Fund Denmark). J. Jia's work is supported by the US Department of Energy (grant number DE-FG02-87ER40331).

Figure 12: Centrality dependence of \(v_{2}\{2\}\), \(v_{2}\{4\}\), \(\bar{v}_{2}\), \(\sigma_{v_{2}}\) in Xe–Xe, Pb–Pb collisions at \(\sqrt{s_{\rm NN}}\) = 5.44 TeV and \(\sqrt{s_{\rm NN}}\) = 5.02 TeV respectively in AMPT. The lower panels also present the ratio of different partonic cross sections.

Figure 13: Centrality dependence of \(v_{4}\{2\}\), \(v_{4,22}\), \(\rho_{4,22}\) in Xe–Xe, Pb–Pb collisions at \(\sqrt{s_{\rm NN}}\) = 5.44 TeV and \(\sqrt{s_{\rm NN}}\) = 5.02 TeV respectively in AMPT. The ratio of different partonic cross sections is also presented in the lower panels.

## 7 Appendix

### nuclear deformation

Figure 14: Centrality dependence of nonlinear modes including (a) \(v_{5}\), (b) \(v_{5,32}\) and (c) linear \(v_{5}\) in Xe–Xe collisions at \(\sqrt{s_{\rm NN}}\) = 5.44 TeV in AMPT.

Figure 15: Centrality dependence of nonlinear modes including (a) \(\chi_{5,32}\) and (b) \(\rho_{5,32}\) in Xe–Xe collisions at \(\sqrt{s_{\rm NN}}\) = 5.44 TeV in AMPT.

### nuclear diffuseness
2309.00121
Beyond Self-Attention: Deformable Large Kernel Attention for Medical Image Segmentation
Medical image segmentation has seen significant improvements with transformer models, which excel in grasping far-reaching contexts and global contextual information. However, the increasing computational demands of these models, proportional to the squared token count, limit their depth and resolution capabilities. Most current methods process 3D volumetric image data slice-by-slice (called pseudo 3D), missing crucial inter-slice information and thus reducing the model's overall performance. To address these challenges, we introduce the concept of \textbf{Deformable Large Kernel Attention (D-LKA Attention)}, a streamlined attention mechanism employing large convolution kernels to fully appreciate volumetric context. This mechanism operates within a receptive field akin to self-attention while sidestepping the computational overhead. Additionally, our proposed attention mechanism benefits from deformable convolutions to flexibly warp the sampling grid, enabling the model to adapt appropriately to diverse data patterns. We designed both 2D and 3D adaptations of the D-LKA Attention, with the latter excelling in cross-depth data understanding. Together, these components shape our novel hierarchical Vision Transformer architecture, the \textit{D-LKA Net}. Evaluations of our model against leading methods on popular medical segmentation datasets (Synapse, NIH Pancreas, and Skin lesion) demonstrate its superior performance. Our code implementation is publicly available at: https://github.com/mindflow-institue/deformableLKA
Reza Azad, Leon Niggemeier, Michael Huttemann, Amirhossein Kazerouni, Ehsan Khodapanah Aghdam, Yury Velichko, Ulas Bagci, Dorit Merhof
2023-08-31T20:21:12Z
http://arxiv.org/abs/2309.00121v1
# Beyond Self-Attention: Deformable Large Kernel Attention for Medical Image Segmentation

###### Abstract

Medical image segmentation has seen significant improvements with transformer models, which excel in grasping far-reaching contexts and global contextual information. However, the increasing computational demands of these models, proportional to the squared token count, limit their depth and resolution capabilities. Most current methods process 3D volumetric image data slice-by-slice (called pseudo 3D), missing crucial inter-slice information and thus reducing the model's overall performance. To address these challenges, we introduce the concept of **Deformable Large Kernel Attention (D-LKA Attention)**, a streamlined attention mechanism employing large convolution kernels to fully appreciate volumetric context. This mechanism operates within a receptive field akin to self-attention while sidestepping the computational overhead. Additionally, our proposed attention mechanism benefits from deformable convolutions to flexibly warp the sampling grid, enabling the model to adapt appropriately to diverse data patterns. We designed both 2D and 3D adaptations of the D-LKA Attention, with the latter excelling in cross-depth data understanding. Together, these components shape our novel hierarchical Vision Transformer architecture, the D-LKA Net. Evaluations of our model against leading methods on popular medical segmentation datasets (Synapse, NIH Pancreas, and Skin lesion) demonstrate its superior performance. Our code implementation is publicly available on GitHub.

## 1 Introduction

Medical image segmentation plays a crucial role in computer-assisted diagnostics, aiding medical professionals in analyzing complex medical images. This process not only reduces the laboriousness of manual tasks and the dependence on medical expertise but also enables faster and more accurate diagnoses. The automation of segmentation offers the potential for faster and more accurate diagnostic outcomes, facilitating appropriate treatment strategies and enabling the execution of image-guided surgical procedures. The need for rapid and precise segmentation algorithms is therefore a driving motivation for this research.

Since the mid-2010s, Convolutional Neural Networks (CNNs) have become the preferred technique for many computer vision applications. Their ability to automatically extract complex feature representations from raw data without the need for manual feature engineering has generated significant interest within the medical image analysis community. Many successful CNN architectures such as U-Net [48], Fully Convolutional Networks [44], DeepLab [16], or SegCaps (segmentation capsules) [38] have been developed. These architectures have achieved great success in the task of semantic segmentation and have outperformed the previous state-of-the-art (SOTA) methods [3, 35, 36].

The problem of identifying objects at different scales is a key concern in computer vision research [34, 41]. In CNNs, the size of a detectable object is closely linked to the receptive field dimensions of the corresponding network layer. If an object extends beyond the boundaries of this receptive field, this may lead to under-segmentation outcomes. Conversely, using excessively large receptive fields compared to an object's actual size can limit recognition, as background information may exert undue influence on predictions [27].
A promising approach to address this issue involves employing multiple kernels with distinct sizes in parallel, similar to the mechanism of an _Inception Block_ [53]. However, increasing the kernel sizes to accommodate larger objects is limited in practice due to the exponential increase in parameters and computational requirements [27]. Consequently, various strategies, including pyramid pooling techniques [28] and dilated convolutions [63] at varying scales, have emerged to capture multi-scale contextual information. Another intuitive concept entails directly incorporating multi-scale image pyramids or their associated feature representations into the network architecture. Yet, this approach poses challenges, particularly concerning the feasibility of managing training and inference times [41]. The use of Encoder-Decoder networks, such as U-Net, has proven advantageous in this context. Such networks encode appearance and location in shallower layers, while deeper layers capture higher semantic information and context from the broader receptive fields of neurons [40]. Some methods combine features from different layers or predict features from layers of different sizes to use information from multiple scales [10]. Forecasting features from layers of varied scales has likewise emerged, effectively enabling the integration of insights across multiple scales [41]. However, most Encoder-Decoder structures face a challenge: they frequently fail to maintain consistent features across different scales and mainly use the last decoder layer to generate segmentation results [3, 27, 37].

Semantic segmentation is a task that involves predicting the semantic category for each pixel in an image based on a predefined label set. This task requires extracting high-level features while preserving the initial spatial resolution [42, 46]. CNNs are well suited to capture local details and low-level information, albeit at the expense of overlooking global context. This gap in handling global information has been a focus of the vision transformer (ViT) architecture, which has achieved remarkable success in several computer vision tasks, including semantic segmentation. The cornerstone of the ViT is the _attention mechanism_, which facilitates the aggregation of information across the entire input sequence. This capability empowers the network to incorporate long-range contextual cues beyond the limited receptive field sizes of CNNs [24, 52]. However, this strategy usually limits the ability of ViTs to effectively model local information [9]. This limitation can impede their ability to detect local textures, which is crucial for various diagnostic and prognostic tasks. This lack of local representation can be attributed to the particular way ViT models process images: they partition an image into a sequence of patches and model their dependencies using self-attention mechanisms. This approach may not be as effective as the convolution operations employed by CNN models for extracting local features within receptive fields, which may explain the superior performance of CNN models in local feature extraction [6, 22].

In recent years, innovative approaches have been developed to address the insufficient representation of local textures within Transformer models. One such approach involves integrating CNN and ViT features through complementary methods to combine their strengths and mitigate any local representation shortcomings [15].
TransUNet [15] is an early example of this approach, incorporating Transformer layers within the CNN bottleneck to model both local and global dependencies. HiFormer [29] proposes a solution that combines a Swin Transformer module and a CNN-based encoder to generate two multi-scale feature representations that are integrated via a Double-Level Fusion module. UNETR [26] employs a Transformer-based encoder and a CNN decoder for 3D medical image segmentation. CoTr [61] and TransBTS [57] bridge the CNN encoder and decoder with the Transformer to enhance segmentation performance at low-resolution stages.

An alternate strategy to enhance local feature representation is to redesign the self-attention mechanism within pure Transformer models. In this vein, Swin-Unet [13] integrates a Swin Transformer [43] block of linear computational complexity within a U-shaped structure as a multi-scale backbone. MISSFormer [32] employs the Efficient Transformer [60] to address parameter issues in vision transformers by incorporating a non-invertible downsampling operation on input blocks. D-Former [59] introduces a pure transformer-based pipeline featuring a double attention module to capture fine-grained local attention and interaction with diverse units in a dilated manner. Nonetheless, certain specific limitations remain. These include computational inefficiency, as evidenced in the TransUNet model, the heavy dependence on a CNN backbone, as observed in HiFormer, and the lack of consideration of multi-scale information. Additionally, current segmentation architectures often take a slice-by-slice approach to process 3D input volumes, inadvertently disregarding potential correlations between neighboring slices. This oversight limits the comprehensive use of volumetric information, consequently compromising both localization accuracy and context integration. Furthermore, it is crucial to recognize that lesions in the medical domain often exhibit deformations in their shape. Therefore, any learning algorithm intended for medical image analysis must be endowed with the ability to capture and comprehend these deformations. Simultaneously, the algorithm should maintain computational efficiency to facilitate the processing of 3D volumetric data.

**Our contributions:** To address the challenges outlined above, we propose a solution in the form of the Deformable-LKA module (Figure 2), which serves as a fundamental building block within our network design. This module is explicitly designed to effectively handle contextual information while simultaneously preserving local descriptors. This balance between the two aspects within our architecture enhances its ability to achieve precise semantic segmentation. Notably, our model introduces a dynamic adaptation of receptive fields based on the data, diverging from the conventional fixed filter masks found in traditional convolutional operations. This adaptive approach allows us to overcome the inherent limitations associated with static methods. This innovative approach extends to the development of both 2D and 3D versions of the D-LKA Net architecture (Figure 1). In the case of the 3D model, the D-LKA mechanism is tailored to suit a 3D context, thus enabling the seamless exchange of information across different volumetric slices. Finally, our contribution is further emphasized by its computational efficiency.
We achieve this through a design that relies solely on the D-LKA concept, resulting in a remarkable performance on various segmentation benchmarks that establishes our method as a new SOTA approach.

## 2 Method

In this section, we begin by providing an overview of the methodology. First, we revisit the concept of Large Kernel Attention (LKA) as introduced by Guo et al. [23]. Then, we introduce our deformable LKA module. Building on this, we introduce both 2D and 3D network architectures for segmentation tasks.

### Large Kernel Attention

Large convolution kernels provide a receptive field similar to that of the self-attention mechanism. A large convolution kernel can be constructed with far fewer parameters and computations by combining a depth-wise convolution, a depth-wise dilated convolution, and a \(1\times 1\) convolution. The kernel sizes of the depth-wise convolution and the depth-wise dilated convolution used to construct a \(K\times K\) kernel for an input of dimension \(H\times W\) with \(C\) channels are:

\[DW=\left(2d-1\right)\times\left(2d-1\right), \tag{1}\]

\[DW\text{-}D=\left\lceil\frac{K}{d}\right\rceil\times\left\lceil\frac{K}{d}\right\rceil, \tag{2}\]

with a kernel size of \(K\) and a dilation rate of \(d\). The number of parameters \(P\left(K,d\right)\) and floating-point operations (FLOPs) \(F\left(K,d\right)\) are calculated by:

\[P\left(K,d\right)=C\left(\left\lceil\frac{K}{d}\right\rceil^{2}+\left(2d-1\right)^{2}+3+C\right), \tag{3}\]

\[F\left(K,d\right)=P\left(K,d\right)\times H\times W. \tag{4}\]

The number of FLOPs grows linearly with the size of the input image. The number of parameters increases quadratically with the number of channels and the kernel size; however, since both are usually small, these are not limiting factors. To minimize the number of parameters for a fixed kernel size \(K\), the derivative of Equation 3 with respect to the dilation rate \(d\) can be set to zero:

\[\frac{d}{d\hat{d}}P\left(K,\hat{d}\right)\overset{!}{=}0=C\cdot\left(8\hat{d}-\frac{2K^{2}}{\hat{d}^{3}}-4\right). \tag{5}\]

For example, when the kernel size is \(K=21\), this results in \(d\approx 3.37\). Extending the formulas to the three-dimensional case is straightforward. For an input of size \(H\times W\times D\) with \(C\) channels, the number of parameters \(P_{3d}\left(K,d\right)\) and FLOPs \(F_{3d}\left(K,d\right)\) are:

\[P_{3d}\left(K,d\right)=C\left(\left\lceil\frac{K}{d}\right\rceil^{3}+\left(2d-1\right)^{3}+3+C\right), \tag{6}\]

\[F_{3d}\left(K,d\right)=P_{3d}\left(K,d\right)\times H\times W\times D, \tag{7}\]

with kernel size \(K\) and dilation \(d\).
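These counting formulas are easy to evaluate programmatically. A small sketch of ours, mirroring Eqs. (1)-(7); the brute-force search stands in for the stationary-point condition of Eq. (5):

```python
import math

def lka_kernels(K, d):
    """Kernel sizes of the depth-wise (DW) and depth-wise dilated (DW-D)
    convolutions that emulate a K x K kernel (Eqs. 1-2)."""
    return 2 * d - 1, math.ceil(K / d)

def params_flops_2d(K, d, C, H, W):
    """Parameter and FLOP counts of the 2D decomposition (Eqs. 3-4)."""
    P = C * (math.ceil(K / d) ** 2 + (2 * d - 1) ** 2 + 3 + C)
    return P, P * H * W

def params_flops_3d(K, d, C, H, W, D):
    """Three-dimensional counterparts (Eqs. 6-7)."""
    P = C * (math.ceil(K / d) ** 3 + (2 * d - 1) ** 3 + 3 + C)
    return P, P * H * W * D

def best_dilation(K):
    """Integer dilation minimizing the d-dependent part of Eq. (3)."""
    return min(range(1, K + 1),
               key=lambda d: math.ceil(K / d) ** 2 + (2 * d - 1) ** 2)

K = 21
d = best_dilation(K)  # 3, consistent with the continuous optimum d ~ 3.37
print("kernels (DW, DW-D):", lka_kernels(K, d))
print("2D params/FLOPs:", params_flops_2d(K, d, C=64, H=56, W=56))
```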
### Deformable Large Kernel Attention

The concept of utilizing large kernels for medical image segmentation is extended by incorporating deformable convolutions [20]. Deformable convolutions enable adjusting the sampling grid with learned offsets, allowing free-form deformation. An additional convolutional layer learns the deformation from the feature maps, producing an offset field. Learning the deformation based on the features themselves results in an adaptive convolution kernel. This flexible kernel shape can improve the representation of lesions or organ deformations, resulting in an enhanced definition of object boundaries. The convolutional layer responsible for calculating the offsets follows the kernel size and dilation of its corresponding convolutional layer. Bilinear interpolation is employed to compute pixel values for offsets that do not fall on the image grid. As shown in Figure 2, the D-LKA module can be formulated as:

\[\text{Attention} =\text{Conv1}\times 1(\text{DDW-D-Conv}(\text{DDW-Conv}(\text{F}^{\prime}))), \tag{8}\]
\[\text{Output} =\text{Conv1}\times 1(\text{Attention}\otimes\text{F}^{\prime})+\text{F},\]

where the input feature is denoted by \(F\in\mathbb{R}^{C\times H\times W}\) and \(F^{\prime}=GELU(Conv(F))\). The attention component \(\in\mathbb{R}^{C\times H\times W}\) is represented as an attention map, with each value indicating the relative importance of corresponding features. The operator \(\otimes\) denotes an element-wise product operation. Notably, LKA departs from conventional attention methods by not requiring additional normalization functions such as sigmoid or Softmax. According to [56], these normalization functions tend to neglect high-frequency information, thereby decreasing the performance of self-attention-based methods.

In the 2D version of the approach, the convolution layers are substituted with deformable convolutions, because deformable convolutions improve the ability to capture objects characterized by irregular shapes and sizes. Such objects are commonly found in medical image data, making this augmentation especially significant. However, extending the concept of deformable LKA to the 3D domain presents certain challenges. The primary constraint arises from the additional convolution layer needed for offset generation. In contrast to the 2D case, this layer cannot be executed in a depth-wise manner due to the nature of the input and output channels: the input channels correspond to features, while the output channels enlarge to \(3\cdot k\times k\times k\) for a kernel size of \(k\). The intricate nature of large kernels leads to an expansion of the channel count along the third dimension, causing a substantial rise in parameters and FLOPs. Consequently, an alternative approach is implemented for the 3D scenario: a sole deformable convolution layer is introduced into the existing LKA framework, following the depth-wise convolutions. This strategic design adaptation mitigates the challenges posed by the extension to three dimensions.

Figure 2: Architecture of the deformable LKA module.
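A PyTorch-style sketch of the 2D module in Eq. (8) is given below. It is our reading of the description, not the authors' released code: we assume the \(K=21\), \(d=3\) decomposition (a \(5\times 5\) depth-wise and a \(7\times 7\), dilation-3 depth-wise dilated convolution), realize both as `torchvision` deformable convolutions, and let each offset predictor share its kernel size and dilation with its convolution, as stated above.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableLKA2D(nn.Module):
    """Deformable LKA attention (Eq. 8): 1x1 proj + GELU, deformable
    depth-wise 5x5, deformable depth-wise dilated 7x7 (dilation 3),
    1x1 conv -> attention map, element-wise gate, 1x1 conv, residual."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj_in = nn.Conv2d(dim, dim, 1)
        self.act = nn.GELU()
        # Offset predictors mirror the geometry of their deformable convs.
        self.offset1 = nn.Conv2d(dim, 2 * 5 * 5, 5, padding=2)
        self.ddw = DeformConv2d(dim, dim, 5, padding=2, groups=dim)
        self.offset2 = nn.Conv2d(dim, 2 * 7 * 7, 7, padding=9, dilation=3)
        self.ddw_d = DeformConv2d(dim, dim, 7, padding=9, dilation=3, groups=dim)
        self.attn_proj = nn.Conv2d(dim, dim, 1)
        self.proj_out = nn.Conv2d(dim, dim, 1)

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        f_prime = self.act(self.proj_in(f))
        x = self.ddw(f_prime, self.offset1(f_prime))
        x = self.ddw_d(x, self.offset2(x))
        attn = self.attn_proj(x)                  # no sigmoid/softmax
        return self.proj_out(attn * f_prime) + f  # gate F', add residual F

print(DeformableLKA2D(32)(torch.randn(1, 32, 64, 64)).shape)
```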
### 2D D-LKA Net

The architecture of the 2D network is illustrated in Figure 1. The first variant uses a MaxViT [54] as the encoder component for efficient feature extraction, while the second variant incorporates deformable LKA layers for more refined, superior segmentation. In a more formal description, the encoder generates four hierarchical output representations. A convolutional stem first reduces the dimensions of the input image to \(\frac{H}{4}\times\frac{W}{4}\times C\). Subsequently, feature extraction is carried out through four stages of MaxViT blocks, each followed by downsampling layers. As the process advances to the decoder, four stages of D-LKA layers are implemented, each stage encompassing two D-LKA blocks. Patch-expanding layers are then applied to achieve resolution upsampling while also decreasing channel dimensions. Finally, a linear layer is responsible for generating the ultimate output.

Figure 1: Proposed network architecture of the 3D D-LKA model on the left and the 2D D-LKA model on the right.

The structure of the 2D D-LKA block comprises LayerNorm, deformable LKA, and a Multi-Layer Perceptron (MLP). The integration of residual connections ensures effective feature propagation, even across deeper layers. This arrangement can be mathematically represented as:

\[x_{1}=D\text{-}LKA\text{-}Attn(LN(x_{in}))+x_{in}, \tag{9}\]

\[x_{out}=MLP(LN(x_{1}))+x_{1}, \tag{10}\]

\[MLP=Conv_{1}(GeLU(Conv_{d}(Conv_{1}(x)))), \tag{11}\]

with input features \(x_{in}\), layer normalization \(LN\), deformable LKA attention \(D\text{-}LKA\text{-}Attn\), depth-wise convolution \(Conv_{d}\), linear layers \(Conv_{1}\), and GeLU activation function \(GeLU\).

### 3D D-LKA Net

The 3D network architecture, depicted in Figure 1, is structured hierarchically using an encoder-decoder design. Initially, a patch embedding layer reduces the input image dimensions from \((H\times W\times D)\) to \((\frac{H}{4}\times\frac{W}{4}\times\frac{D}{2})\). Within the encoder, a sequence of three D-LKA stages is employed, each containing three D-LKA blocks. After each stage, a downsampling step reduces the spatial resolution by half while doubling the channel dimension. The central bottleneck includes another set of two D-LKA blocks. The decoder structure is symmetric to that of the encoder. To double the feature resolution while reducing the channel count, transpose convolutions are utilized. Each decoder stage employs three D-LKA blocks to promote long-range feature dependencies. The final segmentation output is produced by a \(3\times 3\times 3\) convolutional layer, succeeded by a \(1\times 1\times 1\) convolutional layer to match class-specific channel requirements. To establish a direct connection between the input image and the segmentation output, a skip connection is formed using convolutions. Additional skip connections perform a fusion of features from other stages based on simple addition. The ultimate segmentation map is produced through a combination of \(3\times 3\times 3\) and \(1\times 1\times 1\) convolutional layers. The 3D D-LKA block includes layer normalization followed by D-LKA attention, with residual connections applied. The subsequent section employs a \(3\times 3\times 3\) convolutional layer, followed by a \(1\times 1\times 1\) convolutional layer, both accompanied by residual connections. This entire process can be summarized as follows:

\[x_{1}=DAttn(LN(x_{in}))+x_{in}, \tag{12}\]

\[x_{out}=Conv_{1}(Conv_{3}(x_{1}))+x_{1}, \tag{13}\]

with input features \(x_{in}\), layer normalization \(LN\), deformable LKA \(DAttn\), convolutional layer \(Conv_{1}\), and output features \(x_{out}\). \(Conv_{3}\) refers to a feed-forward network with two convolutional layers and activation functions.
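For concreteness, the block wiring of Eqs. (9)-(11) can be sketched as follows. The attention module is passed in, so any spatial mixer with a matching interface can be used for testing; the MLP expansion ratio is our assumption, as it is not stated here.

```python
import torch
import torch.nn as nn

class LayerNorm2d(nn.Module):
    """LayerNorm over channels for (B, C, H, W) tensors (helper of ours)."""
    def __init__(self, dim: int):
        super().__init__()
        self.ln = nn.LayerNorm(dim)

    def forward(self, x):
        return self.ln(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)

class DLKABlock2D(nn.Module):
    """x1 = Attn(LN(x)) + x; out = MLP(LN(x1)) + x1, with
    MLP = Conv1(GELU(Conv_d(Conv1(.)))) as in Eq. (11)."""
    def __init__(self, dim: int, attn: nn.Module, expand: int = 4):
        super().__init__()
        hidden = dim * expand  # expansion ratio assumed, not from the paper
        self.norm1, self.norm2, self.attn = LayerNorm2d(dim), LayerNorm2d(dim), attn
        self.mlp = nn.Sequential(
            nn.Conv2d(dim, hidden, 1),                               # Conv_1
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden),  # Conv_d
            nn.GELU(),
            nn.Conv2d(hidden, dim, 1),                               # Conv_1
        )

    def forward(self, x):
        x = self.attn(self.norm1(x)) + x
        return self.mlp(self.norm2(x)) + x

# A plain 1x1 conv stands in for the D-LKA attention just to test the wiring.
block = DLKABlock2D(32, attn=nn.Conv2d(32, 32, 1))
print(block(torch.randn(1, 32, 64, 64)).shape)
```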
## 3 Experiments

### Experimental Setup

We have implemented both the 2D and 3D models in the PyTorch framework and performed training on a single RTX 3090 GPU. For the 2D method, a batch size of 20 was used, along with Stochastic Gradient Descent (SGD) employing a base learning rate of 0.05, a momentum of 0.9, and a weight decay of 0.0001. The training process consisted of 400 epochs, employing a combination of cross-entropy and Dice loss, represented as follows:

\[\mathcal{L}_{total}=0.6\cdot\mathcal{L}_{dice}+0.4\cdot\mathcal{L}_{ce}. \tag{14}\]

Consistent with [15], identical data augmentation techniques were applied. For the 3D model, a batch size of 2 was chosen, and stochastic gradient descent was employed with a base learning rate of 0.01 and a weight decay of \(3e^{-5}\). The input images were in the form of patches sized \(128\times 128\times 64\). The training process consisted of 1000 epochs, with 250 patches utilized per epoch. Data augmentation techniques consistent with nnFormer [64] and UNETR++ [51] were employed.
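The combined objective of Eq. (14) is straightforward to implement. A sketch assuming a common soft-Dice formulation; the authors' exact Dice variant (e.g., its smoothing and class weighting) may differ:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiceCELoss(nn.Module):
    """L_total = 0.6 * L_dice + 0.4 * L_ce, cf. Eq. (14)."""

    def __init__(self, w_dice: float = 0.6, w_ce: float = 0.4, smooth: float = 1e-5):
        super().__init__()
        self.w_dice, self.w_ce, self.smooth = w_dice, w_ce, smooth

    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # logits: (B, C, H, W); target: (B, H, W) integer class labels.
        ce = F.cross_entropy(logits, target)
        probs = torch.softmax(logits, dim=1)
        one_hot = F.one_hot(target, logits.shape[1]).permute(0, 3, 1, 2).float()
        dims = (0, 2, 3)
        inter = (probs * one_hot).sum(dims)
        denom = probs.sum(dims) + one_hot.sum(dims)
        dice = 1.0 - ((2.0 * inter + self.smooth) / (denom + self.smooth)).mean()
        return self.w_dice * dice + self.w_ce * ce

criterion = DiceCELoss()
loss = criterion(torch.randn(2, 9, 64, 64), torch.randint(0, 9, (2, 64, 64)))
print(float(loss))
```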
### Datasets

**Synapse Multi-Organ Segmentation**: First, we evaluate the performance of our method using the well-established Synapse multi-organ segmentation dataset [14]. This dataset consists of 30 cases with a total of 3779 axial abdominal clinical CT images. Each CT volume comprises between \(85\) and \(198\) slices, each with dimensions of \(512\times 512\) pixels. The voxel spatial resolution varies in the range of \(([0.54\sim 0.54]\times[0.98\sim 0.98]\times[2.5\sim 5.0])\)\(\mathrm{mm^{3}}\). Our evaluation follows the setting presented in [15, 51] for the 2D and 3D versions.

**Skin Lesion Segmentation**: Our comprehensive experiments also extend to skin lesion segmentation datasets. Specifically, we leverage the ISIC 2017 dataset [19], which contains 2000 dermoscopic images for training, 150 for validation, and 600 for testing. Furthermore, we adopt the division scheme described in previous literature [1, 2, 5, 21] for the ISIC 2018 dataset [18]. Additionally, the \(\mathrm{PH}^{2}\) dataset [45] is used as a dermoscopic image repository designed for both segmentation and classification tasks. This dataset consists of 200 dermoscopic images containing 160 nevi and 40 melanomas.

**NIH Pancreas Segmentation**: The publicly available NIH Pancreas dataset consists of 82 contrast-enhanced 3D abdominal CT volumes, each accompanied by manual annotations [49]. In our configuration, we use 62 samples for training and reserve the remaining samples for testing.

### Quantitative and Qualitative Results

#### 3.3.1 2D results

**Synapse Dataset:** In Table 1, we present a comprehensive comparison of the leading performances achieved by other SOTA techniques in contrast to our proposed approach. The results in terms of the Dice Similarity Coefficient (DSC) reveal that D-LKA Net is superior to the previously established SOTA methods. Specifically, it outperforms ScaleFormer [31] by 1.41%, DAEFormer [4] by 1.64%, and other approaches by even larger margins. Notably, significant improvements are seen in the segmentation of specific anatomical regions such as the right kidney, left kidney, stomach, and pancreas. In particular, the pancreas achieves a significantly improved segmentation result, showing an impressive 2.04% improvement over the second-best performing method. Given that the segmentation of smaller abdominal organs, such as the gallbladder or pancreas, has historically been a challenge for existing SOTA approaches, this notable performance improvement represents a significant step toward more accurate segmentation results. A qualitative comparison of different methods is shown in Figure 3. Compared to DAEFormer [4], our approach produces fewer misclassifications for the _stomach_. While UNet [48] and Swin-UNet [12] sometimes classify distant tissue as parts of the _liver_, _gallbladder_, or _stomach_, our approach reduces these misclassifications and better represents the shape of the organs.

Figure 3: Qualitative comparison of the results from UNet [48], Swin UNet [12], DAEFormer [4] and D-LKA Net.

**Skin Lesion Segmentation Results:** The comparative outcomes for the skin lesion segmentation benchmarks, including ISIC 2017, ISIC 2018, and \(\mathrm{PH}^{2}\), in contrast to leading methods, are detailed in Table 2. Notably, our D-LKA Net consistently outperforms its competitors across various evaluation metrics. This consistent superiority observed across different datasets underscores D-LKA Net's robust generalization capabilities. A qualitative comparison of the results is presented in Figure 4. In comparison to the baseline method, D-LKA Net better follows the complex outline of the lesions. In contrast to Swin-UNet and HiFormer-B, which tend to over- or under-segment certain areas, our approach achieves a more accurate segmentation.

Figure 4: 2D visualizations on the ISIC 2018 [18] dataset.

#### 3.3.2 3D results

**Synapse Dataset:** We compare our 3D approach with previous SOTA methods on the Synapse dataset. The results are presented in Table 3. We achieve a 0.27% improvement in _DSC_ over the previous SOTA method UNETR++ [51]. Compared to nnFormer [64], an improvement of 0.92% is achieved. For the HD95 metric, D-LKA Net reaches the second-best result. In comparison to UNETR++, a small performance increase is observed for the _spleen_, _left kidney_, and _aorta_. A significant increase is reported for the _right kidney_ and the small organs _gallbladder_ and _pancreas_. An increase in the segmentation performance of these small organs is especially important. In terms of parameters, we have the lowest count, with only 42.35 M, while still achieving excellent segmentation performance. The number of FLOPs is 66.96 G, the second lowest; only UNETR++ has fewer FLOPs. Compared to SOTA approaches like Swin-UNETR and nnFormer, we need only about 17% and 31% of the computations, respectively, while achieving better performance. Figure 5 shows qualitative results on the Synapse dataset. In comparison to nnFormer, we capture the _aorta_ better as a whole and do not mistake other tissue for the organ. In comparison to UNETR++, we segment the _pancreas_ better, whereas UNETR++ tends to under-segment. Our approach also segments the _liver_ and _stomach_ more accurately than UNETR++, which tends to over-segment these organs.

**Pancreas Dataset:** The results on the NIH Pancreas dataset are presented in Table 4. Our approach achieves the best performance in all four metrics. In comparison to the closest competitor, UNETR++, an increase of 0.63% in DSC and 0.82% in Jaccard, and a decrease of 1.04 in HD95 and 0.26 in ASD, can be noted. D-LKA Net also has the lowest number of parameters, with 62.07 M. Figure 6 shows a qualitative comparison of different approaches.
UNETR fails to segment the pancreas as a single object.

\begin{table} \begin{tabular}{l|c c|c c c c c c c|c c c} \hline \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{Params (M)} & \multirow{2}{*}{FLOPs (G)} & \multirow{2}{*}{Spl} & \multirow{2}{*}{RKid} & \multirow{2}{*}{LKid} & \multirow{2}{*}{Gal} & \multirow{2}{*}{Liv} & \multirow{2}{*}{Sto} & \multirow{2}{*}{Aor} & \multirow{2}{*}{Pan} & \multicolumn{2}{c}{Average} \\ \cline{5-12} & & & & & & & & & & DSC \(\uparrow\) & HD95 \(\downarrow\) \\ \hline \hline TransUNet [15] & 96.07 & 88.91 & 85.08 & 77.02 & 81.87 & 63.16 & 94.08 & 75.62 & 87.23 & 55.86 & 77.49 & 31.69 \\ Swin-UNet [11] & 27.17 & 6.16 & 90.66 & 79.61 & 83.28 & 66.53 & 94.29 & 76.60 & 85.47 & 56.58 & 79.13 & 21.55 \\ LeViT-UNet-384 [62] & 52.17 & 25.55 & 88.86 & 80.25 & 84.61 & 62.23 & 93.11 & 72.76 & 87.33 & 59.07 & 78.53 & 16.84 \\ MISSFormer [33] & 42.46 & 9.89 & 91.92 & 82.00 & 85.21 & 68.65 & 94.41 & 80.81 & 86.99 & 65.67 & 81.96 & 18.20 \\ ScaleFormer [31] & 11.6 & 48.93 & 89.40 & 83.31 & 86.36 & 74.97 & 95.12 & 80.14 & 88.73 & 64.85 & 82.86 & 16.81 \\ HiFormer-B [30] & 25.51 & 8.045 & 90.99 & 79.77 & 85.23 & 65.23 & 94.61 & 81.08 & 86.21 & 59.52 & 80.39 & 14.70 \\ DAEFormer [4] & 48.07 & 27.89 & 91.82 & 82.39 & 87.66 & 71.65 & 95.08 & 80.77 & 87.84 & 63.93 & 82.63 & 16.39 \\ TransDeepLab [7] & 21.14 & 16.31 & 89.00 & 79.88 & 84.08 & 69.16 & 93.53 & 78.40 & 86.04 & 61.19 & 80.16 & 21.25 \\ PVT-CASCADE [47] & 35.28 & 6.40 & 90.1 & 80.37 & 82.23 & 70.59 & 94.08 & 83.69 & 83.01 & 64.43 & 81.06 & 20.23 \\ \hline LKA Baseline & 85.82 & 13.62 & 91.45 & 81.93 & 84.93 & 71.05 & 94.87 & 83.71 & 87.48 & 66.76 & 82.77 & 17.42 \\ **2D D-LKA Net** & 101.64 & 19.92 & 91.22 & 84.92 & 88.38 & 73.79 & 94.88 & 84.94 & 88.34 & 67.71 & 84.27 & 20.04 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison results of the proposed method evaluated on the Synapse dataset. Blue indicates the best result, and red displays the second-best. Parameters are reported in millions (M) and FLOPS in billions (G). DSC is presented for abdominal organs spleen (Spl), right kidney (RKid), left kidney (LKid), gallbladder (Gal), liver (Liv), stomach (Sto), aorta (Aor), and pancreas (Pan).

Table 2: Performance comparison of the proposed method against the SOTA approaches on skin lesion segmentation benchmarks.

\begin{table} \begin{tabular}{l|c c c c|c c c c|c c c c} \hline \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{Params (M)} & \multirow{2}{*}{FLOPs (G)} & \multirow{2}{*}{Spl} & \multirow{2}{*}{RKid} & \multirow{2}{*}{LKid} & \multirow{2}{*}{Gal} & \multirow{2}{*}{Liv} & \multirow{2}{*}{Sto} & \multirow{2}{*}{Aor} & \multirow{2}{*}{Pan} & \multicolumn{2}{c}{Average} \\ \cline{5-12} & & & & & & & & & & DSC \(\uparrow\) & HD95 \(\downarrow\) \\ \hline \hline UNETR [26] & 92.49 & 75.76 & 85.00 & 84.52 & 85.60 & 56.30 & 94.57 & 70.46 & 89.80 & 60.47 & 78.35 & 18.59 \\ Swin-UNETR [25] & 62.83 & 384.2 & 95.37 & 86.26 & 86.99 & 66.54 & 95.72 & 77.01 & 91.12 & 68.80 & 83.48 & 10.55 \\ nnFormer [64] & 150.5 & 213.4 & 90.51 & 86.25 & 86.57 & 70.17 & 96.84 & 86.83 & 92.04 & 83.35 & 86.57 & 10.63 \\ UNETR++ [51] & 42.96 & 47.98 & 95.77 & 87.18 & 87.54 & 71.25 & 96.42 & 86.01 & 92.52 & 81.10 & 87.22 & 7.53 \\ \hline LKA Baseline & 28.94 & 48.79 & 90.49 & 87.54 & 87.57 & 63.81 & 96.96 & 84.89 & 93.22 & 82.46 & 85.87 & 14.35 \\ **D-LKA Net** & 42.35 & 66.96 & 95.88 & 88.50 & 87.64 & 72.14 & 96.25 & 85.03 & 92.87 & 81.64 & 87.49 & 9.57 \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison results of the proposed 3D method against SOTA approaches on the Synapse dataset.
UNETR++ has smaller artifacts in the segmentation result. Our approach follows the highly irregular shape of the organ better than the other approaches.

Figure 5: Qualitative results on the Synapse dataset. The boxes mark regions of failure.

Figure 6: Qualitative results on the Pancreas Dataset.

### Ablation Studies

**Robustness.** To enhance the robustness of our evaluation and to analyze statistical significance, we undertake 5 training runs for each method on the 2D version of the Synapse dataset. This practice not only ensures a more comprehensive assessment but also enables us to visualize the variations in performance (please refer to the supplementary file for the visualizations). In our evaluation, we observed a stable increase in performance for the aorta, gallbladder, left and right kidney, liver, pancreas, and stomach, where the median performance is higher than for other SOTA methods. Only the spleen segmentation performance is slightly worse. Furthermore, significant performance improvements are achieved for the gallbladder, pancreas, and stomach.

**Deformable LKA Influence.** We continue our ablation study by determining the effectiveness of D-LKA. For this purpose, we construct a network employing 3D LKA without a deformable layer and another version using 3D LKA with an extra convolutional layer instead of the deformable one. The results of this analysis are presented in Table 5. Introducing the additional 3D convolutional layer results in a notable improvement of 0.99% in DSC when compared to the 3D LKA baseline. However, this modification also increases the number of parameters within the network. Replacing the 3D convolutional layer with a deformable convolutional layer leads to an additional performance boost, as indicated by a 0.63% increase in DSC. This alteration, similar to the previous one, also introduces more parameters and FLOPs into the network. Since the network's size remains relatively small, the increase in these metrics is acceptable.

\begin{table} \begin{tabular}{l|c c|c c c c} \hline Methods & Params (M) & FLOPs (G) & DSC \(\uparrow\) & Jaccard \(\uparrow\) & HD95 \(\downarrow\) & ASD \(\downarrow\) \\ \hline \hline UNETR [26] & 92.45 & **63.53** & 77.42 & 63.95 & 15.07 & 5.09 \\ UNETR++ [51] & 96.77 & 102.44 & 80.59 & 68.08 & 8.63 & 2.25 \\ \hline **D-LKA Net** & **62.07** & 166.63 & **81.22** & **68.90** & **7.59** & **1.99** \\ \hline \hline \end{tabular} \end{table} Table 4: Results on the Pancreas dataset. The Dice score, Jaccard index, HD95, and ASD are reported.
\begin{table} \begin{tabular}{l|c c|c c} \hline \hline Methods & Params & FLOPs & DSC \(\uparrow\) & HD95 \(\downarrow\) \\ \hline \hline 3D LKA & 28.94 & 48.79 & 85.87 & 14.35 \\ 3D LKA + 3D conv. & 37.73 & 58.64 & 86.86 & 12.69 \\ 3D LKA + 3D deform. conv. & 42.35 & 66.96 & 87.49 & 9.57 \\ \hline \hline \end{tabular} \end{table} Table 5: Results for the experiments on the influence of the deformable convolution layer on the Synapse dataset.

**Skip Connections.** Lastly, we assess the effect of the skip connections on the segmentation process. The results are shown in Table 6. We remove all skip connections and gradually add them back to the network, starting with the highest-level skip connection. The results show that skip connections are crucial for obtaining optimal segmentation performance. Additionally, we highlight that the highest-level skip connection is vital for achieving the best segmentation result, improving the DSC performance by 0.42%.

## 4 Conclusion

In this paper, we propose a novel hierarchical hybrid Vision Transformer and CNN architecture using Deformable Large Kernel Attention (D-LKA Net). This attention mechanism enables the network to learn a deformation grid for accessing more relevant information than conventional attention strategies. Furthermore, the Large Kernel Attention mechanism can aggregate global information similarly to self-attention, overcoming the local restrictions of CNN mechanisms. In addition, we present a 3D version of our proposed network, which includes cross-slice feature extraction for an even stronger representation capability. Our models achieve SOTA results on several publicly available segmentation datasets. Overall, we believe that the proposed D-LKA Net is a robust and powerful choice for medical image segmentation.
2309.12525
Restricting the Splitting Types of a Positive Density Set of Places in Number Field Extensions
We prove necessary and sufficient conditions for a finite group $G$ with an ordering of $G$-extensions to satisfy the following property: for every positive density set of places $A$ of a number field $K$ and every splitting type given by a conjugacy class $c$ in $G$, $0\%$ of $G$-extensions avoid this splitting type for each $p\in A$.
Brandon Alberts
2023-09-21T22:55:48Z
http://arxiv.org/abs/2309.12525v1
# Restricting the splitting types of a positive density set of places in number field extensions

###### Abstract.

We prove necessary and sufficient conditions for a finite group \(G\) with an ordering of \(G\)-extensions to satisfy the following property: for every positive density set of places \(A\) of a number field \(K\) and every splitting type given by a conjugacy class \(c\) in \(G\), \(0\%\) of \(G\)-extensions avoid this splitting type for each \(p\in A\).

## 1. Introduction

Let \(K\) be a number field, \(G\subseteq S_{n}\) a transitive group, and \(\mathcal{F}(K,G)\) the set of degree \(n\) extensions \(F/K\) with \(\operatorname{Gal}(\widetilde{F}/K)\cong G\). There has been significant study into the proportion of \(F\in\mathcal{F}(K,G)\) that satisfy certain local conditions. That is, if \(\Sigma_{p}\) is a set of isomorphism classes of \(G\)-étale algebras over \(K_{p}\) for each place \(p\) of \(K\), what proportion of the elements \(F\in\mathcal{F}(K,G)\) satisfy \(F\otimes K_{p}\in\Sigma_{p}\) for each \(p\)? More explicitly, we wish to determine the value of

\[\lim_{X\to\infty}\frac{\#\{F\in\mathcal{F}(K,G):\forall p,F\otimes K_{p}\in\Sigma_{p},\ \operatorname{Ht}(F)\leqslant X\}}{\#\{F\in\mathcal{F}(K,G):\operatorname{Ht}(F)\leqslant X\}}, \tag{1}\]

where \(\operatorname{Ht}:\mathcal{F}(K,G)\to\mathbb{R}^{+}\) is a height function satisfying the Northcott property: only finitely many fields have bounded height. Most often \(\operatorname{Ht}\) is taken to be the discriminant, the product of ramified primes, or another multiplicative counting function as defined by Wood [10]. We give a partial answer to this question when \(\Sigma=(\Sigma_{p})\) restricts the splitting type at a positive proportion of places. Such a \(\Sigma\) is called **nonadmissible** by the author in [1], due to the divergence of a certain corresponding Dirichlet series. We measure the nonadmissibility of \(\Sigma\) with the following (lower) density:

\[\delta_{\operatorname{NA}}(\Sigma)=\liminf_{x\to\infty}\frac{\#\{|p|\leqslant x:\exists\text{ an unramified }G\text{-\'{e}tale algebra }F_{p}/K_{p}\text{ s.t. }F_{p}\notin\Sigma_{p}\}}{\pi_{K}(x)},\]

where \(|p|:=\operatorname{Nm}_{K/\mathbb{Q}}(p)\) is the norm of the prime down to \(\mathbb{Q}\), and \(\pi_{K}(x)=\#\{p\text{ place of }K:|p|\leqslant x\}\).
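For example, to illustrate the definition: fix a quadratic extension \(M/K\) and take \(\Sigma_{p}\) to exclude exactly the totally split unramified \(G\)-étale algebra whenever \(p\) splits in \(M\), placing no condition at the remaining places. The places at which some unramified algebra is excluded are then precisely the primes split in \(M\), so the Chebotarev density theorem gives \(\delta_{\operatorname{NA}}(\Sigma)=1/2\).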
**Theorem 1.1**.: _Let \(K\) be a number field, \(G\subset S_{n}\) a transitive group, \(\mathcal{F}(K,G)\) the set of degree \(n\) extensions \(F/K\) with \(\operatorname{Gal}(\widetilde{F}/K)\cong G\), and \(\operatorname{Ht}:\mathcal{F}(K,G)\to\mathbb{R}^{+}\) a height function. Suppose the inverse Galois problem has a positive solution for \(G\), that is \(\mathcal{F}(K,G)\neq\emptyset\). Then the following are equivalent:_

1. _For every family of local conditions_ \(\Sigma=(\Sigma_{p})\) _with_ \(\delta_{\operatorname{NA}}(\Sigma)>0\)_,_ \[\lim_{X\to\infty}\frac{\#\{F\in\mathcal{F}(K,G):\forall p,F\otimes K_{p}\in\Sigma_{p},\ \operatorname{Ht}(F)\leqslant X\}}{\#\{F\in\mathcal{F}(K,G):\operatorname{Ht}(F)\leqslant X\}}=0.\]
2. \(\operatorname{Ht}\) _admits no accumulating subfields._

We say that \(\operatorname{Ht}\) **admits an accumulating subfield** if there exists a proper normal subgroup \(H\trianglelefteq G\) and a \(G/H\)-extension \(L/K\) such that

\[\limsup_{X\to\infty}\frac{\#\{F\in\mathcal{F}(K,G):\widetilde{F}^{H}=L,\ \operatorname{Ht}(F)\leqslant X\}}{\#\{F\in\mathcal{F}(K,G):\operatorname{Ht}(F)\leqslant X\}}>0.\]

In this case, the field \(L\) occurs as a subfield of the Galois closure \(\widetilde{F}\) for a positive proportion of extensions \(F\in\mathcal{F}(K,G)\). Importantly, \(L\) is a subfield of the Galois closure but not necessarily of \(F\).

### Acknowledgements

The author thanks the anonymous referees for helpful suggestions.

### History

Previous work on the asymptotic proportion of \(G\)-extensions satisfying certain local conditions has been primarily restricted to those \(G\) for which the asymptotic growth rate of \(\#\{F\in\mathcal{F}(K,G):\operatorname{Ht}(F)\leqslant X\}\) is known as \(X\) tends to infinity. When this growth rate is understood well enough, the techniques can be extended to evaluate the asymptotic rates of growth of the numerator and denominator of (1) separately. Such number field counting techniques have been used to compute the limit (1) in the following cases:

* \(G\) abelian, \(\operatorname{Ht}\) a multiplicative counting function, and \(\Sigma\) is such that, for all but finitely many places \(p\), \(\Sigma_{p}\) contains all \(G\)-étale algebras over \(K_{p}\) [10].
* \(G=S_{n}\) for \(n=3,4,5\), \(\operatorname{Ht}=\operatorname{disc}\), and \(\Sigma\) is such that, for all but finitely many places \(p\), \(\Sigma_{p}\) contains all \(G\)-étale algebras with squarefree discriminant over \(K_{p}\) [1].
* \(G=S_{3}\), \(\operatorname{Ht}\) a multiplicative counting function, and \(\Sigma\) is such that, for all but finitely many places \(p\), \(\Sigma_{p}\) contains all \(G\)-étale algebras over \(K_{p}\) [2].
* \(G=D_{4}\), \(\operatorname{Ht}\) the conductor, and \(\Sigma\) is such that, for all places \(p\), \(\Sigma_{p}\) contains all \(G\)-étale algebras of squarefree conductor (with an extra condition at \(p=2\)) [1].

Moreover, under extra hypotheses in the above cases the limit (1) is proven to be multiplicative of the form

\[\left(\sum_{(F_{p})\in\prod_{p|2}\Sigma_{p}}\sigma_{2}\big{(}(F_{p})_{p|2}\big{)}\right)\prod_{p\nmid 2}\left(\sum_{F_{p}\in\Sigma_{p}}\sigma_{p}(F_{p})\right)\]

for explicit functions \(\sigma_{p}:\Sigma_{p}\to\mathbb{R}_{\geq 0}\). These extra assumptions require \(\{F\in\mathcal{F}(K,G):\forall p,\ F\otimes K_{p}\in\Sigma_{p}\}\neq\emptyset\), that \(\operatorname{Ht}\) is a so-called fair counting function if \(G\) is abelian, and that \(G\) is generated by minimal weight elements of \(\operatorname{Ht}\) if \(G=S_{3}\). The distinct behavior at primes above \(2\) accounts for certain local conditions that are not realizable by global extensions, called inviable by Wood [10]. When \(G\) and \(\operatorname{Ht}\) satisfy these hypotheses, the corresponding cases of Theorem 1.1 follow by bounding \(\Sigma\) above by larger families that, for all but finitely many places, contain all \(G\)-étale algebras over \(K_{p}\). The primary strength of Theorem 1.1 is that it applies to all groups \(G\) and all heights \(\operatorname{Ht}\), regardless of what is known about the asymptotic growth rate of \(\#\{F\in\mathcal{F}(K,G):\operatorname{Ht}(F)\leqslant X\}\).
It is better compared to theorems that determine number fields by their split primes, such as the following result due to Bauer:

**Proposition 1.2** (Bauer [12, Proposition 13.9]).: _If \(L/K\) is Galois and \(M/K\) is a finite extension, then_

\[\{p\text{ split in }L/K\}\supseteq\{p\text{ lying under at least one degree }1\text{ place }\beta\text{ of }M\}\]

_if and only if \(L\subseteq M\)._

Suppose \(\Sigma=(\Sigma_{p})\) has \(\Sigma_{p}=\{K_{p}^{\oplus n}\}\) for all \(p\) split in some finite extension \(M/K\). Then Bauer's result implies

\[\{F\in\mathcal{F}(K,G):\forall p,\ F\otimes K_{p}\in\Sigma_{p},\ \operatorname{Ht}(F)\leq X\}\subseteq\{F/K:\widetilde{F}\subseteq M\},\]

which is necessarily a finite set. With the numerator of (1) finite, the limit necessarily exists, and whether it is zero or not depends on whether there are infinitely many \(G\)-extensions of \(K\). Theorem 1.1 is best understood as a generalization of results like Bauer's, following from the Chebotarev Density Theorem.

### Accumulating Subfields

The presence of accumulating subfields in Theorem 1.1 is worthy of recognition, as this concept plays an important role in number field counting. Malle predicted that

\[\#\{F\in\mathcal{F}(K,G):|\operatorname{disc}(F/K)|\leq X\}\sim c(K,G)X^{1/a(G)}(\log X)^{b(K,G)-1}\]

as \(X\) tends to infinity [13, 14]. Malle gives \(a(G)\) and \(b(K,G)\) explicitly in terms of the index function \(\operatorname{ind}(g)=n-\#\{\text{orbits of }g\}\). This is known to hold when \(G\) is abelian or \(G=S_{n}\) with \(n=3,4,5\), and is an important step for studying (1) in these cases. However, it is notable that Malle's conjecture is wrong for some groups and base fields. Kluners famously demonstrated this by proving that \(G=C_{3}\wr C_{2}\subseteq S_{6}\) with \(K=\mathbb{Q}\) is a counterexample [15]. Malle predicted a growth rate \(cX^{1/2}\); however, Kluners proved that

\[\#\{F\in\mathcal{F}(\mathbb{Q},C_{3}\wr C_{2}):\widetilde{F}^{C_{3}\times C_{3}}=\mathbb{Q}(\zeta_{3}),\ |\operatorname{disc}(F/\mathbb{Q})|\leq X\}\gg X^{1/2}\log X.\]

The field \(\mathbb{Q}(\zeta_{3})\) is an accumulating subfield. There are more subtle examples where accumulating subfields are known to cause issues with heuristic predictions. The leading constant \(c(K,G)\) in Malle's prediction is more mysterious than \(a(G)\) and \(b(K,G)\). In the case that \(G=S_{n}\), Bhargava [1] predicted that \(c(K,S_{n})\) is a particular convergent Euler product. Bhargava's expression can be generalized to other groups \(G\), but it is known not to be equal to \(c(K,G)\) in some cases with accumulating subfields. Examples include

* \(G=D_{4}\subseteq S_{4}\) [10] and \(G=H_{9}\subseteq S_{9}\) [14], for which \(c(\mathbb{Q},G)\) is explicitly expressed as a convergent sum of special values of \(L\)-functions, where only the first summand agrees with Bhargava's construction.
* The cases \(G=C_{2}\wr H\) for certain groups \(H\) [15] and \(G=S_{n}\times A\) for \(n=3,4,5\) and \(A\) abelian [13, 16]. The authors for these cases do not give an explicit formula for the leading constant, but they do remark that their methods imply the leading constant is given by a sum of Euler products, only one of which agrees with Bhargava's construction.

This is not an exhaustive list. At the time of this writing, all known counting results for cases with accumulating subfields either disagree with Malle's predicted growth rate or have a leading coefficient that disagrees with Bhargava's construction.
While accumulating subfields have been known to thwart existing heuristics, previous examples have only been proven in cases where more is known about the growth rate of \(\#\{F\in\mathcal{F}(K,G):|\operatorname{disc}(F/K)|\leq X\}\). We detail below the existing heuristics for (1), which predict that the limit is zero. Theorem 1.1 does not depend on knowledge of this growth rate, showing that _any_ accumulating subfield can violate these heuristics.

## 2. Heuristic Predictions

The Malle-Bhargava principle [2, 14] generalizes Malle's predictions for counting \(G\)-extensions by proposing that the generating series \[\sum_{\begin{subarray}{c}F\in\mathcal{F}(K,G)\\ \forall p,\ F\otimes K_{p}\in\Sigma_{p}\end{subarray}}|\mathrm{disc}(F/K)|^{-s}\] should be arithmetically equivalent to the Euler product \[\prod_{p}\Bigg{(}\frac{1}{|G|}\sum_{\begin{subarray}{c}f_{p}\in\mathrm{Hom}(G_{K_{p}},G)\\ F_{p}\in\Sigma_{p}\end{subarray}}|\mathrm{disc}(F_{p}/K_{p})|^{-s}\Bigg{)},\] where \(F_{p}/K_{p}\) is the \(G\)-étale algebra corresponding to \(f_{p}\in\mathrm{Hom}(G_{K_{p}},G)\). Arithmetically equivalent here means that both series have their rightmost pole at the same place of the same order. Malle's prediction is reproduced via a Tauberian theorem on the Euler product.

The constant term of each Euler factor is determined by the unramified étale algebras in \(\Sigma_{p}\). For each finite place, there are exactly \(|G|\) possible unramified homomorphisms \(f_{p}:G_{K_{p}}/I_{p}\to G\) corresponding to unramified \(G\)-étale algebras, depending on the image of Frobenius. If \(\Sigma_{p}\) contains all unramified \(G\)-étale algebras then the constant term is \(1\); otherwise the constant term is \(<1\). In the event that \(\Sigma\) is nonadmissible, infinitely many \(p\) have constant term \(<1\). An Euler product of the form \[\prod_{p}\big{(}c_{p}+O(p^{-s})\big{)}\] with \(c_{p}\leq 1\) for all \(p\) and \(c_{p}<1\) for infinitely many \(p\) necessarily diverges to \(0\) for all \(s\) with positive real part. (Indeed, each constant term is of the form \(k_{p}/|G|\) for an integer \(k_{p}\), so \(c_{p}<1\) forces \(c_{p}\leq 1-1/|G|\), and the logarithm of the partial products then tends to \(-\infty\).) These cases are not typically considered, in part because it is not clear how one should interpret the divergence. Three possible interpretations come to mind immediately:

1. Divergence to \(0\) predicts that there are zero such extensions.
2. The function \(0\) has no poles, which suggests the generating Dirichlet series is holomorphic. A Tauberian theorem would then imply there are at most finitely many such extensions.
3. The Malle-Bhargava principle is only an expression of the "main term". The divergence to \(0\) suggests that \(0\%\) of \(G\)-extensions satisfy these local conditions.

There is some credence to predicting that zero such extensions exist. There are uncountably many ways to choose mutually disjoint nonadmissible \(\Sigma\): for each place \(p\) for which \(\Sigma_{p}\) does not contain all \(G\)-étale algebras, choose between \(\Sigma_{p}\) and the complement \(\Sigma_{p}^{c}\). Given that there are only countably many number fields, comparing cardinalities implies that uncountably many of these local restrictions are satisfied by exactly zero number fields.

Theorem 1.1 suggests that the third interpretation applies best to _all_ families \(\Sigma\), instead of just most families, as long as there are no accumulating subfields. That the Malle-Bhargava principle is only a prediction for the "main term" can be seen in other settings as well, such as for counting cubic extensions.
The number of \(S_{3}\)-cubic extensions of \(\mathbb{Q}\) with bounded discriminant admits a secondary term of order \(X^{5/6}\) [1, 10]; however, the corresponding Euler product has no pole at \(s=5/6\). This example shows that, at best, the Malle-Bhargava principle can make predictions only for the main term.

## 3. The Proof

We call a sequence of field extensions \(F_{1},F_{2},...,F_{m}\) containing \(K\) independent over \(K\) if \[\left(\prod_{i\neq j}\widetilde{F}_{i}\right)\cap\widetilde{F}_{j}=K\] for each \(j=1,...,m\). These are precisely the sequences of fields for which \[\operatorname{Gal}(\widetilde{F}_{1}\widetilde{F}_{2}\cdots\widetilde{F}_{m}/K)\cong\prod_{i=1}^{m}\operatorname{Gal}(\widetilde{F}_{i}/K).\]

**Lemma 3.1**.: _Let \(K\) be a number field, \(G\subset S_{n}\) a transitive group, and \(\Sigma=(\Sigma_{p})\) a family of local conditions with \(\delta_{\operatorname{NA}}(\Sigma)>0\). Let \(F_{1},...,F_{m}\in\mathcal{F}(K,G)\) be independent over \(K\) such that for all places \(p\) and each \(i=1,2,...,m\), \(F_{i}\otimes K_{p}\in\Sigma_{p}\). Then there exists a constant \(C_{|G|,\delta_{\operatorname{NA}}(\Sigma)}\) depending only on \(|G|\) and \(\delta_{\operatorname{NA}}(\Sigma)\) such that \(m\leqslant C_{|G|,\delta_{\operatorname{NA}}(\Sigma)}\)._

Proof.: For each conjugacy class \(c\subseteq G\), set \[A_{c}=\{p:\text{the unramified étale algebra }F_{p}/K_{p}\text{ with }\operatorname{Fr}_{p}(F_{p}/K_{p})\in c\text{ is in }\Sigma_{p}\}.\] We will bound the density of one of these sets using the pigeonhole principle. Certainly the upper density \[\delta^{+}(A_{c}):=\limsup_{x\to\infty}\frac{\#\{p\in A_{c}:p\leqslant x\}}{\pi_{K}(x)}\leqslant 1.\] Let \(\kappa(G)\) be the number of conjugacy classes in \(G\). We can bound \[\begin{aligned}\sum_{c}\delta^{+}(A_{c})&=\limsup_{x\to\infty}\frac{1}{\pi_{K}(x)}\sum_{p\leqslant x}|\Sigma_{p}|\\ &\leqslant\kappa(G)-\liminf_{x\to\infty}\frac{1}{\pi_{K}(x)}\sum_{p\leqslant x}\left(\kappa(G)-|\Sigma_{p}|\right)\\ &\leqslant\kappa(G)-\liminf_{x\to\infty}\frac{1}{\pi_{K}(x)}\sum_{\begin{subarray}{c}p\leqslant x\\ \Sigma_{p}\text{ is not everything}\end{subarray}}1\\ &=\kappa(G)-\delta_{\operatorname{NA}}(\Sigma).\end{aligned}\] In particular, \[\delta_{\operatorname{NA}}(\Sigma)\leqslant\sum_{c}(1-\delta^{+}(A_{c}))\] is a sum of nonnegative numbers. By the pigeonhole principle, at least one summand is at least \(\frac{\delta_{\operatorname{NA}}(\Sigma)}{\kappa(G)}\), so we conclude that there exists a conjugacy class \(c\) for which \[\delta^{+}(A_{c})\leqslant 1-\frac{\delta_{\operatorname{NA}}(\Sigma)}{\kappa(G)}.\]

The compositum \(M=F_{1}F_{2}\cdots F_{m}\) is necessarily an extension of degree \(n^{m}\) whose Galois closure has Galois group \(G^{m}\) by independence. Moreover, any prime for which \(\rho_{i}(\operatorname{Fr}_{p}(M/K))\in c\) for at least one projection map \(\rho_{i}:G^{m}\to G\) necessarily belongs to \(A_{c}\), as each \(F_{i}\) satisfies the local conditions of \(\Sigma\).
Thus, we compare \[\{p:\rho_{i}(\operatorname{Fr}_{p}(M/K))\in c\text{ for at least one }i\}\subseteq A_{c}.\] By the Chebotarev density theorem, we can then compare the (upper) densities \[\frac{\#\{(g_{i})\in G^{m}:g_{i}\in c\text{ for at least one }i\}}{|G|^{m}}\leq 1-\frac{\delta_{\operatorname{NA}}(\Sigma)}{\kappa(G)}.\] The left-hand side is given by \[\frac{\#\{(g_{i})\in G^{m}:g_{i}\in c\text{ for at least one }i\}}{|G|^{m}}=1-\frac{\#\{(g_{i})\in G^{m}:g_{i}\notin c\text{ for all }i\}}{|G|^{m}}=1-\frac{(|G|-|c|)^{m}}{|G|^{m}}=1-\left(1-\frac{|c|}{|G|}\right)^{m}.\] In particular, this implies the left-hand side is an increasing function of \(m\) with \[\lim_{m\to\infty}\frac{\#\{(g_{i})\in G^{m}:g_{i}\in c\text{ for at least one }i\}}{|G|^{m}}=1>1-\frac{\delta_{\operatorname{NA}}(\Sigma)}{\kappa(G)},\] where the strict inequality follows from \(\delta_{\operatorname{NA}}(\Sigma)>0\). Thus, \(m\) is bounded in terms of \(|c|\), \(\kappa(G)\), and \(\delta_{\operatorname{NA}}(\Sigma)\). Given that there are at most \(|G|\) possibilities for \(|c|\) and \(\kappa(G)\), the dependence on \(|c|\) and \(\kappa(G)\) can be dropped in exchange for a dependence on \(|G|\).

We now prove Theorem 1.1. Suppose first that \(\operatorname{Ht}\) admits no accumulating subfields. Let \(F_{1},F_{2},...,F_{m}\) be a maximal set of independent \(G\)-extensions satisfying the local conditions in \(\Sigma\), which exists by Lemma 3.1. Then any other \(G\)-extension \(F\) satisfying the local conditions in \(\Sigma\) has \[\left(\prod_{i=1}^{m}\widetilde{F}_{i}\right)\cap\widetilde{F}\neq K.\] In particular, the number of possible fields \(F\) is bounded by \[\sum_{\begin{subarray}{c}H\trianglelefteq G\\ H\neq G\end{subarray}}\sum_{K\neq L\subseteq\prod_{i}\widetilde{F}_{i}}\#\{F\in\mathcal{F}(K,G):\widetilde{F}^{H}=L,\ \operatorname{Ht}(F)\leq X\}.\] The sums are finite. As \(\operatorname{Ht}\) admits no accumulating subfields, each summand accounts for proportion \(0\) of \(\{F\in\mathcal{F}(K,G):\operatorname{Ht}(F)\leq X\}\), and the result follows.

Conversely, suppose \(\operatorname{Ht}\) has an accumulating subfield given by a proper normal subgroup \(H\trianglelefteq G\) and a \(G/H\)-extension \(L/K\). Take \(\Sigma=(\Sigma_{p})\) such that \[\Sigma_{p}=\{F_{p}/K_{p}:L\otimes K_{p}\subseteq\widetilde{F}_{p}\}.\] Any prime which is not split in \(L\) necessarily has \(K_{p}^{\oplus n}\notin\Sigma_{p}\), so by \(H\neq G\) and Chebotarev density we must have \(\delta_{\operatorname{NA}}(\Sigma)>0\). However, it is certainly the case that \[\{F\in\mathcal{F}(K,G):\forall p,F\otimes K_{p}\in\Sigma_{p},\ \operatorname{Ht}(F)\leq X\}\supseteq\{F\in\mathcal{F}(K,G):\widetilde{F}^{H}=L,\ \operatorname{Ht}(F)\leq X\}.\] Given that \(L\) is an accumulating subfield, this implies \[\limsup_{X\to\infty}\frac{\#\{F\in\mathcal{F}(K,G):\forall p,F\otimes K_{p}\in\Sigma_{p},\ \operatorname{Ht}(F)\leq X\}}{\#\{F\in\mathcal{F}(K,G):\operatorname{Ht}(F)\leq X\}}\geq\limsup_{X\to\infty}\frac{\#\{F\in\mathcal{F}(K,G):\widetilde{F}^{H}=L,\ \operatorname{Ht}(F)\leq X\}}{\#\{F\in\mathcal{F}(K,G):\operatorname{Ht}(F)\leq X\}}>0.\]
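The constant \(C_{|G|,\delta_{\operatorname{NA}}(\Sigma)}\) above is not made explicit, but the proof of Lemma 3.1 yields one admissible choice, sketched below in Python (our own illustration under the worst case \(|c|=1\), using \(\kappa(G)\leq|G|\) if one wishes to drop the dependence on \(\kappa(G)\); the function name is ours):

```python
import math

def independence_bound(order_G, kappa_G, delta_na):
    """One admissible C_{|G|, delta_NA} from the proof of Lemma 3.1: any
    independent sequence of length m with (1 - 1/|G|)^m < delta_NA/kappa(G)
    would contradict the density bound, since |c| >= 1 for every class c."""
    assert 0 < delta_na <= kappa_G <= order_G
    threshold = delta_na / kappa_G          # delta_NA(Sigma) / kappa(G)
    ratio = 1 - 1 / order_G                 # worst case: |c| = 1
    # Smallest m with ratio^m < threshold:
    return math.ceil(math.log(threshold) / math.log(ratio))

# Example: G = S_3 has |G| = 6 and kappa(G) = 3; with delta_NA(Sigma) = 1/2,
# independent sequences as in Lemma 3.1 have length below:
print(independence_bound(6, 3, 0.5))  # -> 10
```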
2310.20392
Configuring Timing Parameters to Ensure Execution-Time Opacity in Timed Automata
Timing information leakage occurs whenever an attacker successfully deduces confidential internal information by observing some timed information such as events with timestamps. Timed automata are an extension of finite-state automata with a set of clocks evolving linearly and that can be tested or reset, making this formalism able to reason on systems involving concurrency and timing constraints. In this paper, we summarize a recent line of works using timed automata as the input formalism, in which we assume that the attacker has access (only) to the system execution time. First, we address the following execution-time opacity problem: given a timed system modeled by a timed automaton, given a secret location and a final location, synthesize the execution times from the initial location to the final location for which one cannot deduce whether the secret location was visited. This means that for any such execution time, the system is opaque: either the final location is not reachable, or it is reachable with that execution time for both a run visiting and a run not visiting the secret location. We also address the full execution-time opacity problem, asking whether the system is opaque for all execution times; we also study a weak counterpart. Second, we add timing parameters, which are a way to configure a system: we identify a subclass of parametric timed automata with some decidability results. In addition, we devise a semi-algorithm for synthesizing timing parameter valuations guaranteeing that the resulting system is opaque. Third, we report on problems when the secret has itself an expiration date, thus defining expiring execution-time opacity problems. We finally show that our method can also apply to program analysis with configurable internal timings.
Étienne André, Engel Lefaucheux, Didier Lime, Dylan Marinho, Jun Sun
2023-10-31T12:10:35Z
http://arxiv.org/abs/2310.20392v1
# Configuring Timing Parameters to Ensure Execution-Time Opacity in Timed Automata

###### Abstract

Timing information leakage occurs whenever an attacker successfully deduces confidential internal information by observing some timed information such as events with timestamps. Timed automata are an extension of finite-state automata with a set of clocks evolving linearly and that can be tested or reset, making this formalism able to reason on systems involving concurrency and timing constraints. In this paper, we summarize a recent line of works using timed automata as the input formalism, in which we assume that the attacker has access (only) to the system execution time. First, we address the following execution-time opacity problem: given a timed system modeled by a timed automaton, given a secret location and a final location, synthesize the execution times from the initial location to the final location for which one cannot deduce whether the secret location was visited. This means that for any such execution time, the system is opaque: either the final location is not reachable, or it is reachable with that execution time for both a run visiting and a run not visiting the secret location. We also address the full execution-time opacity problem, asking whether the system is opaque for all execution times; we also study a weak counterpart. Second, we add timing parameters, which are a way to configure a system: we identify a subclass of parametric timed automata with some decidability results. In addition, we devise a semi-algorithm for synthesizing timing parameter valuations guaranteeing that the resulting system is opaque. Third, we report on problems when the secret has itself an expiration date, thus defining expiring execution-time opacity problems. We finally show that our method can also apply to program analysis with configurable internal timings.

## 1 Introduction

Complex timed systems often combine hard real-time constraints with concurrency. Information leakage, notably through side channels (see, e.g., [24, 32]), can have dramatic consequences on the security of such systems. Among harmful information leaks, _timing information leakage_ (see, e.g., [23, 26, 39, 34, 36]) is the ability for an attacker to deduce internal information by observing timing information. In this paper, we focus on timing leakage through the total execution time, i.e., when the system works as an almost black box and the attacker's abilities are limited to knowing the model and observing the total execution time. We consider here the formalism of timed automata (TAs) [2], which is a popular extension of finite-state automata with clocks measuring time, i.e., variables evolving linearly at the same rate. Such clocks can be tested against integer constants in locations ("invariants") or along transitions ("guards"), and can be reset to 0 when taking transitions.

**Context and related works.** Franck Cassez proposed in [19] a first definition of _timed_ opacity for TAs: the system is opaque if an attacker can never deduce whether some sequence of actions (possibly with timestamps) was performed, by only observing a given set of observable actions together with their timestamps. It is then proved in [19] that it is undecidable whether a TA is opaque, even for the restricted class of event-recording automata [2] (a subclass of TAs). This notably relates to the undecidability of timed language inclusion for TAs [1]. Security problems for TAs are surveyed in [13].
The aforementioned negative result leaves hope only if the definition or the setting is changed, which was done in three main lines of works. The different studied options were to reduce the expressiveness of the formalism [36, 37], to constrain the system to evolve in a time-bounded setting [4], or to consider a weaker attacker, who has access only to the _execution time_ [9, 8], rather than to all observable actions with their timestamps. We present here a summary of our recent works in this latter setting [9, 8].

**Contributions.** In the setting of TAs, we denote by _execution time_ the time from the system start to the time a given (final) location is entered. Therefore, given a secret location, a TA is execution-time opaque (ET-opaque) for an execution time \(d\) if there exist at least two runs of duration \(d\) from the initial location to a final location: one visiting the secret location, and another one _not_ visiting the secret location. In other words, if an attacker measures such an execution time from the initial location to the target location \(\ell_{\text{f}}\), then this attacker is not able to deduce whether the system visited \(\ell_{\text{priv}}\). Deciding whether at least one such \(d\) exists can be seen as an _existential_ version of ET-opacity (called \(\exists\)-ET-opacity). Then, a TA is _fully ET-opaque_ if it is ET-opaque _for all execution times_: that is, for each possible execution time \(d\), either the final location is unreachable, or the final location is reachable via at least two runs, one visiting the secret location, and another one not visiting it. We define a _weak_ version of ET-opacity by only requiring that runs visiting the secret location on the way to the final location have a counterpart of the same duration not visiting the secret location on the way to the final location, but not necessarily the opposite: the TA is _weakly ET-opaque_ if, for each run visiting the secret location, there exists a run of the same duration not visiting it.

We also consider an _expiring version_ of ET-opacity, where the secret is subject to an expiration date \(\Delta\). That is, we consider that an attack is successful only when the attacker can decide that the secret location was visited less than \(\Delta\) time units before the system completion. Conversely, if the attacker exhibits an execution time \(d\) for which it is certain that the secret location was visited, but this location was visited strictly more than \(\Delta\) time units prior to the system completion, then this attack is useless, and can be seen as a failed attack. The system is therefore _fully expiring ET-opaque_ if the set of execution times for which the private location was visited within \(\Delta\) time units prior to system completion (referred to as "secret times") is exactly equal to the set of execution times for which the private location was either not visited or visited more than \(\Delta\) time units prior to system completion (referred to as "non-secret times"). Moreover, it is _weakly expiring ET-opaque_ when the secret times are included in the non-secret times, and not necessarily conversely.

Finally, we study the aforementioned problems for a _parametric_ extension of TAs, i.e., parametric timed automata (PTAs) [3], where integer constants compared to clocks can be made (rational-valued) timing parameters, i.e., unknown constants.
Interesting problems include emptiness problems, i.e., deciding the emptiness of the set of parameter valuations such that (expiring) ET-opacity holds, and synthesis problems, i.e., synthesizing all parameter valuations such that (expiring) ET-opacity holds.

**About this manuscript.** This manuscript mainly summarizes results from two recent works, providing unified notations and concept names for the sake of consistency:

1. defining and studying ET-opacity problems [9] in TAs (Section 3) and PTAs (Section 4); these notions from [9] are presented differently (including the problem names) in this paper for the sake of consistency; and
2. defining and studying expiring execution-time opacity (exp-ET-opacity) problems [8] in both TAs and PTAs (Section 5).

In addition, we prove a few original results on weak ET-opacity (that were not addressed in [9] because we had not yet defined the concept of weak ET-opacity when writing [9]) and on exp-ET-opacity. These original results are Propositions 2 and 4 and Theorems 5, 6 and 9.

In Tables 1 and 2, we summarize the decidability results recalled in this paper for ET-opacity and exp-ET-opacity. We denote a problem with a check mark (\(\surd\)) if it is decidable, with a cross (\(\times\)) if it is undecidable, and with a question mark (?) if it is open (or not considered in the aforementioned papers [9, 8]). We emphasize using a bold font the original results of this paper. The p-emptiness (resp. p-synthesis) problem asks for the non-existence (resp. the synthesis) of parameter valuations for which ET-opacity is enforced. The \(\Delta\)-p-emptiness (resp. \(\Delta\)-p-synthesis) problem asks for the non-existence (resp. the synthesis) of a parameter valuation and an expiring bound \(\Delta\) for which exp-ET-opacity is enforced. L/U-PTAs denote the lower-bound/upper-bound parametric timed automata [27] subclass of PTAs. These notions will be formally defined in the paper.

\begin{table}
\begin{tabular}{|l l|l|l|l|}
\hline
 & & **\(\exists\)-ET-opaque** & **weakly ET-opaque** & **fully ET-opaque** \\
\hline
Decision & TA & \(\surd\) (Proposition 2) & \(\surd\) (**Proposition 4**) & \(\surd\) (Proposition 3) \\
\hline
p-emptiness & L/U-PTA & \(\surd\) (Theorem 2) & \(\times\) (**Theorem 6**) & \(\times\) (Theorem 4) \\
 & PTA & \(\times\) (Theorem 1) & \(\times\) (**Theorem 5**) & \(\times\) (Theorem 3) \\
\hline
p-synthesis & L/U-PTA & \(\times\) (Proposition 5) & \(\times\) (**Corollary 5**) & \(\times\) (Corollary 3) \\
 & PTA & \(\times\) (Corollary 1) & \(\times\) (**Corollary 4**) & \(\times\) (Corollary 2) \\
\hline
\end{tabular}
\end{table}
Table 1: Summary of the results for ET-opacity [9]

\begin{table}
\begin{tabular}{|l l|l|l|l|}
\hline
 & & **\(\exists\)-exp-ET-opaque** & **weakly exp-ET-opaque** & **fully exp-ET-opaque** \\
\hline
Decision & TA & \(\surd\) (**Theorem 9**) & \(\surd\) (Theorem 8) & \(\surd\) (Theorem 8) \\
\hline
\(\Delta\)-emptiness & TA & ? & \(\surd\) (Corollary 6) & \(\surd\) (Theorem 11) \\
\(\Delta\)-computation & TA & ? & \(\surd\) (Theorem 10) & ? \\
\hline
\(\Delta\)-p-emptiness & L/U-PTA & ? & \(\times\) (Theorem 12) & \(\times\) (Theorem 12) \\
 & PTA & ? & \(\times\) (Theorem 13) & \(\times\) (Theorem 13) \\
\hline
\(\Delta\)-p-synthesis & L/U-PTA & ? & \(\times\) (Corollary 7) & \(\times\) (Theorem 12) \\
 & PTA & ? & \(\times\) (Corollary 8) & \(\times\) (Corollary 8) \\
\hline
\end{tabular}
\end{table}
Table 2: Summary of the results for exp-ET-opacity [8]

**Outline.** Section 2 recalls the necessary preliminaries, notably (parametric) timed automata. Section 3 defines and reviews execution-time opacity problems in timed automata. Section 4 defines and reviews execution-time opacity problems in _parametric_ timed automata. Section 5 defines and reviews _expiring_ execution-time opacity problems in (parametric) timed automata. Section 6 briefly reports on our existing implementation of some of the problems using the parametric timed model checker IMITATOR [7]. Section 7 concludes the paper and reports on perspectives.

## 2 Preliminaries

We denote by \(\mathbb{N},\mathbb{Z},\mathbb{Q}_{\geq 0},\mathbb{R}_{\geq 0}\) the sets of non-negative integers, integers, non-negative rationals and non-negative reals, respectively.

### Clocks, parameters and constraints

_Clocks_ are real-valued variables that all evolve over time at the same rate. Throughout this paper, we assume a set \(\mathbb{X}=\{x_{1},\ldots,x_{H}\}\) of _clocks_. A _clock valuation_ is a function \(\mu:\mathbb{X}\to\mathbb{R}_{\geq 0}\), assigning a non-negative value to each clock. We write \(\vec{0}\) for the clock valuation assigning \(0\) to all clocks. Given a constant \(d\in\mathbb{R}_{\geq 0}\), \(\mu+d\) denotes the valuation s.t. \((\mu+d)(x)=\mu(x)+d\), for all \(x\in\mathbb{X}\). A _(timing) parameter_ is an unknown rational-valued constant of a model. Throughout this paper, we assume a set \(\mathbb{P}=\{p_{1},\ldots,p_{M}\}\) of _parameters_. A _parameter valuation_ \(v\) is a function \(v:\mathbb{P}\to\mathbb{Q}_{\geq 0}\). As often, we choose _real_-valued clocks and _rational_-valued parameters, because irrational constants render reachability undecidable in TAs [31] (see [6] for a survey on the impact of these domains in (P)TAs). We assume \(\bowtie\in\{<,\leq,=,\geq,>\}\). A constraint \(C\) is a conjunction of inequalities over \(\mathbb{X}\cup\mathbb{P}\) of the form \(x\bowtie\sum_{1\leq i\leq M}\alpha_{i}p_{i}+d\), with \(p_{i}\in\mathbb{P}\), and \(\alpha_{i},d\in\mathbb{Z}\). Given \(C\), we write \(\mu\models v(C)\) if the expression obtained by replacing each \(x\) with \(\mu(x)\) and each \(p\) with \(v(p)\) in \(C\) evaluates to true.

### Timed automata

A TA is a finite-state automaton extended with a finite set of real-valued clocks. We also add to the standard definition of TAs a special private location, which will be used to define our subsequent opacity concepts.

**Definition 1** (Timed automaton [2]).: A TA \(\mathcal{A}\) is a tuple \(\mathcal{A}=(\Sigma,L,\ell_{0},\ell_{priv},\ell_{\mathrm{f}},\mathbb{X},I,E)\), where: 1. \(\Sigma\) is a finite set of actions, 2. \(L\) is a finite set of locations, 3. \(\ell_{0}\in L\) is the initial location, 4. \(\ell_{priv}\in L\) is a special private location, 5. \(\ell_{\mathrm{f}}\in L\) is the final location, 6. \(\mathbb{X}\) is a finite set of clocks, 7. \(I\) is the invariant, assigning to every \(\ell\in L\) a constraint \(I(\ell)\) over \(\mathbb{X}\) (called _invariant_), 8. \(E\) is a finite set of edges \(e=(\ell,g,a,R,\ell^{\prime})\) where \(\ell,\ell^{\prime}\in L\) are the source and target locations, \(a\in\Sigma\), \(R\subseteq\mathbb{X}\) is a set of clocks to be reset, and \(g\) is a constraint over \(\mathbb{X}\) (called _guard_).

**Example 1**.: In Fig. 1, we give an example of a TA with three locations \(\ell_{0}\), \(\ell_{1}\) and \(\ell_{2}\), three edges, three actions \(\{a,b,c\}\), and one clock \(x\). \(\ell_{0}\) is the initial location, \(\ell_{2}\) is the private location, while \(\ell_{1}\) is the final location.
\(\ell_{0}\) has an invariant \(x\leq 3\) and the edge from \(\ell_{0}\) to \(\ell_{2}\) has a guard \(x\geq 1\).

Figure 1: A TA example

**Concrete semantics of timed automata.** We recall the concrete semantics of a TA using a timed transition system (TTS) [27].

**Definition 2** (Semantics of a TA).: Given a TA \(\mathcal{A}=(\Sigma,L,\ell_{0},\ell_{priv},\ell_{\mathrm{f}},\mathbb{X},I,E)\), the semantics of \(\mathcal{A}\) is given by the TTS \(\mathfrak{T}_{\mathcal{A}}=(\mathfrak{S},\mathfrak{s}_{0},\Sigma\cup\mathbb{R}_{\geq 0},\rightarrow)\), with 1. \(\mathfrak{S}=\{(\ell,\mu)\in L\times\mathbb{R}_{\geq 0}^{H}\mid\mu\models I(\ell)\}\), 2. \(\mathfrak{s}_{0}=(\ell_{0},\vec{0})\), 3. \(\rightarrow\) consists of the discrete and (continuous) delay transition relations: 1. discrete transitions: \((\ell,\mu)\stackrel{{ e}}{{\mapsto}}(\ell^{\prime},\mu^{\prime})\), if \((\ell,\mu),(\ell^{\prime},\mu^{\prime})\in\mathfrak{S}\), and there exists \(e=(\ell,g,a,R,\ell^{\prime})\in E\) such that \(\mu^{\prime}=[\mu]_{R}\) and \(\mu\models g\), where \([\mu]_{R}\) denotes the valuation obtained from \(\mu\) by resetting the clocks in \(R\) to \(0\) and keeping the other clocks unchanged. 2. delay transitions: \((\ell,\mu)\stackrel{{ d}}{{\mapsto}}(\ell,\mu+d)\), with \(d\in\mathbb{R}_{\geq 0}\), if \(\forall d^{\prime}\in[0,d],(\ell,\mu+d^{\prime})\in\mathfrak{S}\).

Moreover we write \((\ell,\mu)\stackrel{{(d,e)}}{{\longrightarrow}}(\ell^{\prime},\mu^{\prime})\) for a combination of a delay and discrete transition if \(\exists\mu^{\prime\prime}:(\ell,\mu)\stackrel{{ d}}{{\mapsto}}(\ell,\mu^{\prime\prime})\stackrel{{ e}}{{\mapsto}}(\ell^{\prime},\mu^{\prime})\).

Given a TA \(\mathcal{A}\) with concrete semantics \((\mathfrak{S},\mathfrak{s}_{0},\Sigma\cup\mathbb{R}_{\geq 0},\rightarrow)\), we refer to the states of \(\mathfrak{S}\) as the _concrete states_ of \(\mathcal{A}\). A _run_ of \(\mathcal{A}\) is an alternating sequence of concrete states of \(\mathcal{A}\) and pairs of delays and edges starting from the initial state \(\mathfrak{s}_{0}\), of the form \((\ell_{0},\mu_{0}),(d_{0},e_{0}),(\ell_{1},\mu_{1}),\cdots\) with \(i=0,1,\ldots\), \(e_{i}\in E\), \(d_{i}\in\mathbb{R}_{\geq 0}\) and \((\ell_{i},\mu_{i})\stackrel{{(d_{i},e_{i})}}{{\longrightarrow}}(\ell_{i+1},\mu_{i+1})\).

**Definition 3** (Duration of a run).: Given a finite run \(\rho:(\ell_{0},\mu_{0}),(d_{0},e_{0}),(\ell_{1},\mu_{1}),\cdots,(d_{n-1},e_{n-1}),(\ell_{n},\mu_{n})\), the _duration_ of \(\rho\) is \(\textit{dur}(\rho)=\sum_{0\leq i\leq n-1}d_{i}\). We also say that \(\ell_{n}\) is reachable in time \(\textit{dur}(\rho)\).

**Example 2**.: Consider again the TA \(\mathcal{A}\) in Fig. 1. Consider the following run \(\rho\) of \(\mathcal{A}\): \((\ell_{0},x=0),(1.4,a),(\ell_{2},x=1.4),(0.4,b),(\ell_{1},x=1.8)\). Note that we write "\(x=1.4\)" instead of "\(\mu\) such that \(\mu(x)=1.4\)". We have \(\textit{dur}(\rho)=1.4+0.4=1.8\).

### Parametric timed automata

A PTA is a TA extended with a finite set of timing parameters allowing one to model unknown constants.

**Definition 4** (Parametric timed automaton [4]).: A PTA \(\mathcal{P}\) is a tuple \(\mathcal{P}=(\Sigma,L,\ell_{0},\ell_{priv},\ell_{\mathrm{f}},\mathbb{X},\mathbb{P},I,E)\), where: 1. \(\Sigma\) is a finite set of actions; 2. \(L\) is a finite set of locations; 3. \(\ell_{0}\in L\) is the initial location; 4. \(\ell_{priv}\in L\) is a special private location; 5. \(\ell_{\mathrm{f}}\in L\) is the final location; 6. \(\mathbb{X}\) is a finite set of clocks; 7. \(\mathbb{P}\) is a finite set of parameters;
8. \(I\) is the invariant, assigning to every \(\ell\in L\) a constraint \(I(\ell)\) over \(\mathbb{X}\cup\mathbb{P}\) (called _invariant_); 9. \(E\) is a finite set of edges \(e=(\ell,g,a,R,\ell^{\prime})\) where \(\ell,\ell^{\prime}\in L\) are the source and target locations, \(a\in\Sigma\), \(R\subseteq\mathbb{X}\) is a set of clocks to be reset, and \(g\) is a constraint over \(\mathbb{X}\cup\mathbb{P}\) (called _guard_).

**Example 3**.: In Fig. 2, we give an example of a PTA with three locations \(\ell_{0}\), \(\ell_{1}\) and \(\ell_{2}\), three edges, three actions \(\{a,b,c\}\), one clock \(x\) and two parameters \(\{p_{1},p_{2}\}\). \(\ell_{0}\) is the initial location, \(\ell_{2}\) is the private location, while \(\ell_{1}\) is the final location. \(\ell_{0}\) has an invariant \(x\leq 3\) and the edge from \(\ell_{0}\) to \(\ell_{2}\) has a guard \(x\geq p_{1}\).

**Definition 5** (Valuation of a PTA).: Given a parameter valuation \(v\), we denote by \(v(\mathcal{P})\) the non-parametric structure where all occurrences of a parameter \(p_{i}\) have been replaced by \(v(p_{i})\).

_Remark 1_.: We have a direct correspondence between the valuation of a PTA and the definition of a TA given in Definition 1. TAs were originally defined with integer constants in [1] (as done in Definition 1), while our definition of PTAs allows _rational_-valued constants. By assuming a rescaling of the constants (i.e., by multiplying all constants in a TA by the least common multiple of their denominators), we obtain an equivalent (integer-valued) TA, as defined in Definition 1. So we assume in the following that \(v(\mathcal{P})\) is a TA.

**Example 4**.: Consider again the PTA in Fig. 2 and let \(v\) be such that \(v(p_{1})=1\) and \(v(p_{2})=2\). Then \(v(\mathcal{P})\) is the TA depicted in Fig. 1.

**Lower/upper parametric timed automaton.** While most decision problems are undecidable for the general class of PTAs (see [5] for a survey), lower/upper parametric timed automata (L/U-PTAs) [27] form the most well-known subclass of PTAs with some decidability results: for example, reachability-emptiness ("the emptiness of the set of valuations for which a given location is reachable"), which is undecidable for PTAs [3], becomes decidable for L/U-PTAs [27]. Various other results were studied for this subclass (e.g., [17, 28, 12]).

**Definition 6** (Lower/upper parametric timed automaton [27]).: An L/U-PTA is a PTA where the set of parameters is partitioned into lower-bound parameters and upper-bound parameters, where each upper-bound (resp. lower-bound) parameter \(p_{i}\) must be such that, for every guard or invariant constraint \(x\bowtie\sum_{1\leq i\leq M}\alpha_{i}p_{i}+d\), we have:

* \(\bowtie\in\{\leq,<\}\) implies \(\alpha_{i}\geq 0\) (resp. \(\alpha_{i}\leq 0\)), and
* \(\bowtie\in\{\geq,>\}\) implies \(\alpha_{i}\leq 0\) (resp. \(\alpha_{i}\geq 0\)).

**Example 5**.: The PTA in Fig. 2 is an L/U-PTA with \(\{p_{1}\}\) as lower-bound parameter set, and \(\{p_{2}\}\) as upper-bound parameter set.

Figure 2: A PTA example

## 3 Execution-time opacity problems in timed automata

Throughout this paper, the attacker model is as follows: the attacker knows the TA modeling the system, and can only observe the execution time between the start of the system and the time it reaches the final location. The attacker cannot observe actions, nor the values of the clocks, nor whether some locations are visited. Its goal will be to deduce from its observations whether the private location was visited.
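To make Definitions 1 to 3 concrete before defining execution times, here is a minimal Python sketch of a one-clock TA together with the delay-then-discrete moves of Definition 2, instantiated on the TA of Fig. 1. Since the figure is not reproduced here, the guard \(x\leq 2\) on the \(b\)-edge and the unguarded \(c\)-edge are assumptions chosen to be consistent with Examples 1, 2 and 6:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

Guard = Callable[[float], bool]  # a single-clock constraint: x -> bool

@dataclass
class TA:
    """Definition 1, specialized to a single clock x (enough for Fig. 1)."""
    init: str
    priv: str
    final: str
    invariants: Dict[str, Guard]
    edges: List[Tuple[str, Guard, str, bool, str]]  # (src, guard, action, reset_x, tgt)

# The TA of Fig. 1; the guards on b (x <= 2) and c (none) are assumptions
# consistent with Examples 1, 2 and 6.
fig1 = TA(
    init="l0", priv="l2", final="l1",
    invariants={"l0": lambda x: x <= 3, "l2": lambda x: True, "l1": lambda x: True},
    edges=[("l0", lambda x: x >= 1, "a", False, "l2"),
           ("l2", lambda x: x <= 2, "b", False, "l1"),
           ("l0", lambda x: True, "c", False, "l1")],
)

def step(ta: TA, loc: str, x: float, d: float, action: str):
    """One delay-then-discrete move of Definition 2. The invariants used here
    are convex upper bounds, so checking the delay endpoint suffices.
    Returns the successor (location, clock value), or None if disabled."""
    if not ta.invariants[loc](x + d):
        return None
    for (src, guard, a, reset, tgt) in ta.edges:
        if src == loc and a == action and guard(x + d):
            new_x = 0.0 if reset else x + d
            if ta.invariants[tgt](new_x):  # target state must satisfy its invariant
                return (tgt, new_x)
    return None

# Replay the run of Example 2: (l0, x=0) --(1.4, a)--> (l2, x=1.4) --(0.4, b)--> (l1, x=1.8)
s = step(fig1, "l0", 0.0, 1.4, "a")
print(s)                         # ('l2', 1.4)
print(step(fig1, *s, 0.4, "b"))  # ('l1', 1.8), up to floating-point rounding
```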
### Defining the execution times

Let us first introduce two key concepts necessary to define our notion of execution-time opacity.

Given a TA \(\mathcal{A}\) and a run \(\rho\), we say that \(\ell_{\mathit{priv}}\) is _visited on the way to \(\ell_{\mathrm{f}}\) in \(\rho\)_ if \(\rho\) is of the form \[(\ell_{0},\mu_{0}),(d_{0},e_{0}),(\ell_{1},\mu_{1}),\cdots,(\ell_{m},\mu_{m}),(d_{m},e_{m}),\cdots,(\ell_{n},\mu_{n})\] for some \(m,n\in\mathbb{N}\) such that \(\ell_{m}=\ell_{\mathit{priv}}\), \(\ell_{n}=\ell_{\mathrm{f}}\) and \(\forall 0\leq i\leq n-1,\ell_{i}\neq\ell_{\mathrm{f}}\). We denote by \(\mathit{Visit}^{\mathit{priv}}(\mathcal{A})\) the set of those runs, and refer to them as _private_ runs. We denote by \(\mathit{DVisit}^{\mathit{priv}}(\mathcal{A})\) the set of all the durations of these runs.

Conversely, we say that \(\ell_{\mathit{priv}}\) is _avoided on the way to \(\ell_{\mathrm{f}}\) in \(\rho\)_ if \(\rho\) is of the form \[(\ell_{0},\mu_{0}),(d_{0},e_{0}),(\ell_{1},\mu_{1}),\cdots,(\ell_{n},\mu_{n})\] with \(\ell_{n}=\ell_{\mathrm{f}}\) and \(\forall 0\leq i<n,\ell_{i}\notin\{\ell_{\mathit{priv}},\ell_{\mathrm{f}}\}\). We denote the set of those runs by \(\overline{\mathit{Visit}}^{\mathit{priv}}(\mathcal{A})\), referring to them as _public_ runs, and by \(D\overline{\mathit{Visit}}^{\mathit{priv}}(\mathcal{A})\) the set of all the durations of these public runs.

Therefore, \(\mathit{DVisit}^{\mathit{priv}}(\mathcal{A})\) (resp. \(D\overline{\mathit{Visit}}^{\mathit{priv}}(\mathcal{A})\)) is the set of all the durations of the runs for which \(\ell_{\mathit{priv}}\) is visited (resp. avoided) on the way to \(\ell_{\mathrm{f}}\). These sets can be seen as the sets of execution times from the initial location \(\ell_{0}\) to the final location \(\ell_{\mathrm{f}}\) while visiting (resp. not visiting) the private location \(\ell_{\mathit{priv}}\). Observe that, from the definition of the duration of a run (Definition 3), this "execution time" does not include the time spent in \(\ell_{\mathrm{f}}\).

**Example 6**.: Consider again the TA in Fig. 1. We have \(\mathit{DVisit}^{\mathit{priv}}(\mathcal{A})=[1,2]\) and \(D\overline{\mathit{Visit}}^{\mathit{priv}}(\mathcal{A})=[0,3]\).

### Defining execution-time opacity

We now introduce formally the concept of "ET-opacity for a set of durations (or execution times) \(D\)": a system is _ET-opaque for execution times \(D\)_ whenever, for any duration in \(D\), it is not possible to deduce whether the system visited \(\ell_{\mathit{priv}}\) or not. In other words, if an attacker measures an execution time within \(D\) from the initial location to the target location \(\ell_{\mathrm{f}}\), then this attacker is not able to deduce whether the system visited \(\ell_{\mathit{priv}}\).

**Definition 7** (Execution-time opacity (ET-opacity) for \(D\)).: Given a TA \(\mathcal{A}\) and a set of execution times \(D\), we say that \(\mathcal{A}\) is _execution-time opaque (ET-opaque) for execution times \(D\)_ if \(D\subseteq(\mathit{DVisit}^{\mathit{priv}}(\mathcal{A})\cap D\overline{\mathit{Visit}}^{\mathit{priv}}(\mathcal{A}))\).

In the following, we will be interested in the existence of such an execution time. We say that a TA is \(\exists\)-ET-opaque if it is ET-opaque for a non-empty set of execution times.
**Definition 8** (\(\exists\)-ET-opacity).: A TA \(\mathcal{A}\) is _\(\exists\)-ET-opaque_ if \((\mathit{DVisit}^{\mathit{priv}}(\mathcal{A})\cap D\overline{\mathit{Visit}}^{\mathit{priv}}(\mathcal{A}))\neq\emptyset\).

If one does not have the ability to tune the system (i.e., change internal delays, or add some Thread.sleep() statements in a program), one may first be interested in knowing whether the system is ET-opaque for all execution times. In other words, if a system is _fully ET-opaque_, then for any possible measured execution time, an attacker is not able to deduce whether \(\ell_{priv}\) was visited or not.

**Definition 9** (full ET-opacity).: A TA \(\mathcal{A}\) is _fully ET-opaque_ if \(\textit{DVisit}^{priv}(\mathcal{A})=D\overline{\textit{Visit}}^{priv}(\mathcal{A})\).

That is, a system is fully ET-opaque if, for any execution time \(d\), a run of duration \(d\) reaches \(\ell_{\text{f}}\) after visiting \(\ell_{priv}\) iff another run of duration \(d\) reaches \(\ell_{\text{f}}\) without visiting \(\ell_{priv}\).

_Remark 2_.: This definition is symmetric: a system is not fully ET-opaque iff, for some execution time, an attacker can deduce \(\ell_{priv}\) or \(\neg\ell_{priv}\). For instance, if there is no run to \(\ell_{\text{f}}\) visiting \(\ell_{priv}\), but still a run to \(\ell_{\text{f}}\) (not visiting \(\ell_{priv}\)), the system is not fully ET-opaque w.r.t. Definition 9.

We finally define weak ET-opacity, not considered in [10], but defined in the specific context of expiring opacity [9]. We therefore reintroduce this definition in the "normal" opacity setting considered in this section, as follows:

**Definition 10** (weak ET-opacity).: A TA \(\mathcal{A}\) is _weakly ET-opaque_ if \(\textit{DVisit}^{priv}(\mathcal{A})\subseteq D\overline{\textit{Visit}}^{priv}(\mathcal{A})\).

That is, a TA is weakly ET-opaque whenever, for any run reaching the final location after visiting the private location, there exists another run of the same duration reaching the final location but not visiting the private location; the converse does not necessarily hold.

_Remark 3_.: Our notion of weak ET-opacity may still leak some information: on the one hand, if a run indeed visits the private location, there exists an equivalent run not visiting it, and therefore the system is ET-opaque; _but_ on the other hand, there may exist execution times for which the attacker can deduce that the private location was _not_ visited. This remains acceptable in some cases, which motivates us to define a weak version of ET-opacity. Also note that the "initial-state opacity" for real-time automata considered in [37] can be seen as _weak_ in the sense that their language inclusion is also unidirectional.

**Example 7**.: Consider again the PTA \(\mathcal{P}\) in Fig. 2 and let \(v\) be such that \(v(p_{1})=1\) while \(v(p_{2})=2\) (i.e., the TA in Fig. 1). Recall that \(\textit{DVisit}^{priv}(v(\mathcal{P}))=[1,2]\) and \(D\overline{\textit{Visit}}^{priv}(v(\mathcal{P}))=[0,3]\). Hence, it holds that \(\textit{DVisit}^{priv}(v(\mathcal{P}))\subseteq D\overline{\textit{Visit}}^{priv}(v(\mathcal{P}))\) and therefore \(v(\mathcal{P})\) is weakly ET-opaque. However, \(\textit{DVisit}^{priv}(v(\mathcal{P}))\neq D\overline{\textit{Visit}}^{priv}(v(\mathcal{P}))\) and therefore \(v(\mathcal{P})\) is not fully ET-opaque. Now consider again the PTA \(\mathcal{P}\) in Fig. 2 and let \(v^{\prime}\) be such that \(v^{\prime}(p_{1})=0\) while \(v^{\prime}(p_{2})=3\). This time, \(\textit{DVisit}^{priv}(v^{\prime}(\mathcal{P}))=D\overline{\textit{Visit}}^{priv}(v^{\prime}(\mathcal{P}))=[0,3]\) and therefore \(v^{\prime}(\mathcal{P})\) is fully ET-opaque.
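Example 7 can be checked mechanically on a sampled grid of durations. The sketch below is a toy discretized exploration (not the symbolic zone-based analysis used by actual tools), and it again assumes the guard \(x\leq p_{2}\) on the \(b\)-edge and an unguarded \(c\)-edge, consistently with Example 6:

```python
from fractions import Fraction

STEP = Fraction(1, 4)
GRID = [k * STEP for k in range(13)]   # sampled durations in [0, 3]

def duration_sets(p1, p2):
    """Sampled private/public duration sets for the PTA of Fig. 2 (clock x is
    never reset). Private runs: leave l0 at t1 with p1 <= t1 <= 3 (guard
    x >= p1, invariant x <= 3), then reach l1 at t2 with t1 <= t2 <= p2
    (assumed guard x <= p2 on b). Public runs: go directly to l1, any t <= 3."""
    priv = {t2 for t1 in GRID if p1 <= t1 <= 3
               for t2 in GRID if t1 <= t2 <= p2}
    pub = {t for t in GRID if t <= 3}
    return priv, pub

for (p1, p2) in [(1, 2), (0, 3)]:       # the valuations v and v' of Example 7
    priv, pub = duration_sets(p1, p2)
    print((p1, p2), "weakly ET-opaque:", priv <= pub, "fully ET-opaque:", priv == pub)
# -> (1, 2) weakly ET-opaque: True fully ET-opaque: False
# -> (0, 3) weakly ET-opaque: True fully ET-opaque: True
```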
### Decision and computation problems

#### 3.3.1 Computation problem for ET-opacity

We can now define the ET-opacity t-computation problem, which consists in computing the possible execution times ensuring ET-opacity.

**ET-opacity t-computation problem:** Input: A TA \(\mathcal{A}\). Problem: Compute the execution times \(D\) such that \(\mathcal{A}\) is ET-opaque for \(D\).

Let us illustrate that this computation problem is certainly not easy. For the TA \(\mathcal{A}\) in Fig. 3, the set of execution times for which \(\mathcal{A}\) is ET-opaque is exactly \(\mathbb{N}\); that is, only integer times ensure ET-opacity (as the system can only leave \(\ell_{priv}\) and hence enter \(\ell_{\text{f}}\) at an integer time), while non-integer times violate ET-opacity.

Figure 3: TA for which the set of execution times ensuring ET-opacity is \(\mathbb{N}\)

#### 3.3.2 Decision problems

We define the three following decision problems, each taking as input a TA \(\mathcal{A}\): the **\(\exists\)-ET-opacity decision problem** (decide whether \(\mathcal{A}\) is \(\exists\)-ET-opaque), the **weak ET-opacity decision problem** (decide whether \(\mathcal{A}\) is weakly ET-opaque), and the **full ET-opacity decision problem** (decide whether \(\mathcal{A}\) is fully ET-opaque).

### Answering the ET-opacity t-computation problem

**Proposition 1** (Solvability of the ET-opacity t-computation problem [10, Proposition 5.2]).: _The ET-opacity t-computation problem is solvable for TAs._

This positive result can be put in perspective with the negative result of [20] that proves that it is undecidable whether a TA (and even the more restricted subclass of event-recording automata (ERAs) [3]) is opaque, in the sense that the attacker can deduce some actions, by looking at observable actions together with their timing. The difference in our setting is that only the global time is observable, which can be seen as a single action, occurring once only at the end of the computation. In other words, our attacker is less powerful than the attacker in [20].

### Checking for \(\exists\)-ET-opacity

The following result was not strictly speaking proved in [10], and we provide here an original proof for it.

**Proposition 2** (Decidability of the \(\exists\)-ET-opacity decision problem).: _The \(\exists\)-ET-opacity decision problem is decidable in \(\mathtt{5EXPTIME}\) for TAs._

Proof.: Let \(\mathcal{A}\) be a TA. Suppose we add a Boolean variable _priv_ to \(\mathcal{A}\) which is initially false and set to true on every edge going into the location \(\ell_{\textit{priv}}\). This Boolean variable (not strictly part of the TA syntax) can also be simulated by adding a copy of \(\mathcal{A}\) instead, and jumping to that copy on edges going into location \(\ell_{\textit{priv}}\). Then the \(\exists\)-ET-opacity decision problem amounts to checking the following parametric TCTL formula [21], with \(p\) a parameter: \[\exists p\big{(}\exists\Diamond_{=p}(\ell_{\text{f}}\wedge\textit{priv})\wedge\exists\Diamond_{=p}(\ell_{\text{f}}\wedge\neg\textit{priv})\big{)}\] From [19], this can be checked in 5EXPTIME, since the size of the TA it is checked on is at most twice that of \(\mathcal{A}\), and the size of the formula is constant w.r.t. the size of \(\mathcal{A}\).
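The location-duplication trick used in this proof can be sketched as follows (a minimal illustration with our own naming; guards are kept as opaque strings because only the discrete structure is duplicated, and the \(x\leq 2\) guard on the \(b\)-edge is again an assumption about Fig. 1):

```python
def add_priv_flag(edges, l_priv):
    """Duplicate a TA's discrete structure with a Boolean priv flag, as in the
    proof of Proposition 2. edges: dict src -> list of (guard, action, resets,
    target). Every edge entering l_priv sets the flag; it is never unset."""
    doubled = {}
    for src, outs in edges.items():
        for flag in (False, True):
            doubled[(src, flag)] = [(g, a, r, (tgt, flag or tgt == l_priv))
                                    for (g, a, r, tgt) in outs]
    return doubled

# The TA of Fig. 1, with guards as strings:
edges = {"l0": [("x >= 1", "a", (), "l2"), ("", "c", (), "l1")],
         "l2": [("x <= 2", "b", (), "l1")],
         "l1": []}
doubled = add_priv_flag(edges, "l2")
# The final location is split into (l1, False) and (l1, True); the parametric
# TCTL formula of Proposition 2 then asks for one duration p reaching both.
for state in sorted(doubled):
    print(state, "->", [tgt for (_, _, _, tgt) in doubled[state]])
```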
### Checking for full ET-opacity

The following result matches [10, Proposition 5.3] but we provide an original proof, also fixing a complexity issue in [10, Proposition 5.3].

**Proposition 3** (Decidability of the full ET-opacity decision problem).: _The full ET-opacity decision problem is decidable in 5EXPTIME for TAs._

Proof.: As before, we can write a parametric TCTL formula for this problem, with \(p\) a parameter: \[\forall p\big{(}\exists\Diamond_{=p}(\ell_{\mathrm{f}}\wedge\mathit{priv})\Leftrightarrow\exists\Diamond_{=p}(\ell_{\mathrm{f}}\wedge\neg\mathit{priv})\big{)}\] This formula can be checked in 5EXPTIME [19].

### Checking for weak ET-opacity

The _weak_ notion of ET-opacity had not been defined in [10]. Nevertheless, the proof of Proposition 3 can be adapted in a very straightforward manner to prove its weak counterpart as follows:

**Proposition 4** (Decidability of the weak ET-opacity decision problem).: _The weak ET-opacity decision problem is decidable in 5EXPTIME for TAs._

Proof.: Let \(\mathcal{A}\) be a TA. As before, we can write a parametric TCTL formula for this problem, with \(p\) a parameter: \[\forall p\big{(}\exists\Diamond_{=p}(\ell_{\mathrm{f}}\wedge\mathit{priv})\Rightarrow\exists\Diamond_{=p}(\ell_{\mathrm{f}}\wedge\neg\mathit{priv})\big{)}\] This formula can be checked in 5EXPTIME [19].

## 4 Execution-time opacity problems in parametric timed automata

We now extend opacity problems to parametric timed automata. We first address the parametric problems related to \(\exists\)-ET-opacity in Section 4.1. The decision problems associated with full ET-opacity and weak ET-opacity will then be considered in Sections 4.2 and 4.3, respectively. Following the usual concepts for parametric timed automata, we consider both _emptiness_ and _synthesis_ problems. An emptiness problem aims at _deciding_ whether the set of parameter valuations for which a given property holds in the valuated TA is empty, while a synthesis problem aims at _synthesizing_ the set of parameter valuations for which a given property holds in the valuated TA.

### \(\exists\)-ET-opacity

#### 4.1.1 Problems

**Emptiness problem for \(\exists\)-ET-opacity.** Let us consider the following decision problem, i.e., the problem of checking the _emptiness_ of the set of parameter valuations guaranteeing \(\exists\)-ET-opacity.

**\(\exists\)-ET-opacity p-emptiness problem:** Input: A PTA \(\mathcal{P}\). Problem: Decide the emptiness of the set of parameter valuations \(v\) such that \(v(\mathcal{P})\) is \(\exists\)-ET-opaque.

The negation of the \(\exists\)-ET-opacity p-emptiness problem consists in deciding whether there exists at least one parameter valuation for which \(v(\mathcal{P})\) is \(\exists\)-ET-opaque.

**Synthesis problem for \(\exists\)-ET-opacity.** The synthesis counterpart allows for a higher-level problem by also synthesizing the internal timings guaranteeing \(\exists\)-ET-opacity.

**\(\exists\)-ET-opacity p-synthesis problem:** Input: A PTA \(\mathcal{P}\). Problem: Synthesize the set \(V\) of parameter valuations such that \(v(\mathcal{P})\) is \(\exists\)-ET-opaque, for all \(v\in V\).

#### 4.1.2 Undecidability in general

With the rule of thumb that all non-trivial decision problems are undecidable for general PTAs [5], the following result is not surprising, and follows from the undecidability of reachability-emptiness for PTAs [3].

**Theorem 1** (Undecidability of the \(\exists\)-ET-opacity p-emptiness problem [9, Theorem 6.1]).: _The \(\exists\)-ET-opacity p-emptiness problem is undecidable for general PTAs._

Since the emptiness problem is undecidable, the synthesis problem is immediately unsolvable as well.
**Corollary 1**.: _The \(\exists\)-ET-opacity p-synthesis problem is unsolvable for general PTAs._

Nevertheless, in [9] we proposed a procedure solving this problem. While this procedure is not guaranteed to terminate, its result is correct when termination can be achieved. See [9, Section 8] for details.

#### 4.1.3 The subclass of L/U-PTAs

**Decidability.** We now show that the \(\exists\)-ET-opacity p-emptiness problem is decidable for L/U-PTAs. Despite early positive results for L/U-PTAs [27, 17], more recent results (notably [28, 11, 12]) mostly proved undecidable properties of L/U-PTAs, and therefore this positive result is welcome.

**Theorem 2** (Decidability of the \(\exists\)-ET-opacity p-emptiness problem [9, Theorem 6.2]).: _The \(\exists\)-ET-opacity p-emptiness problem is decidable for L/U-PTAs._

**Intractability of synthesis for lower/upper parametric timed automata.** Even though the \(\exists\)-ET-opacity p-emptiness problem is decidable for L/U-PTAs (Theorem 2), the _synthesis_ of the parameter valuations remains intractable in general, as shown in the following Proposition 5. By intractable we mean more precisely that the solution, if it can be computed, cannot (in general, i.e., for some sufficiently complex solutions) be represented using any formalism for which the emptiness of the intersection with equality constraints is decidable. That is, no formalism in which one can decide the emptiness of the valuation set of the computed solution intersected with an equality test between variables can be used to represent the solution. For example, let us question whether we could represent the solution of the \(\exists\)-ET-opacity p-synthesis problem for L/U-PTAs using the formalism of a _finite union of polyhedra_: testing whether a finite union of polyhedra intersected with "equality constraints" (typically \(p_{1}=p_{2}\)) is empty or not _is_ decidable. The Parma Polyhedra Library [15] can typically compute the answer to this question. Therefore, from the following Proposition 5, finite unions of polyhedra cannot be used to represent the solution of the \(\exists\)-ET-opacity p-synthesis problem for L/U-PTAs. As finite unions of polyhedra are a very common formalism (not to say the _de facto_ standard) to represent the solutions of various timing parameter synthesis problems, the synthesis is then considered to be infeasible in practice, or _intractable_ (following the vocabulary used in [29, Theorem 2]).

**Proposition 5** (Intractability of the \(\exists\)-ET-opacity p-synthesis problem [10, Proposition 6.4]).: _In case a solution to the \(\exists\)-ET-opacity p-synthesis problem for L/U-PTAs can be computed, this solution may not be representable using any formalism for which the emptiness of the intersection with equality constraints is decidable._

### Parametric full ET-opacity

We address here the following decision problem, which asks about the emptiness of the parameter valuation set guaranteeing full ET-opacity. We also define the full ET-opacity p-synthesis problem, this time _synthesizing_ the timing parameters guaranteeing full ET-opacity.

#### 4.2.1 Problem definitions

**Full ET-opacity p-emptiness problem:** Input: A PTA \(\mathcal{P}\). Problem: Decide the emptiness of the set of parameter valuations \(v\) such that \(v(\mathcal{P})\) is fully ET-opaque.

Equivalently, we are interested in deciding whether there exists at least one parameter valuation for which \(v(\mathcal{P})\) is fully ET-opaque.
We also define the _full ET-opacity p-synthesis problem_, aiming at synthesizing (ideally the entire set of) parameter valuations \(v\) for which \(v(\mathcal{P})\) is fully ET-opaque.

**Full ET-opacity p-synthesis problem:** Input: A PTA \(\mathcal{P}\). Problem: Synthesize the set \(V\) of parameter valuations such that \(v(\mathcal{P})\) is fully ET-opaque, for all \(v\in V\).

#### 4.2.2 Undecidability for general PTAs

Considering that Theorem 1 shows the undecidability of the \(\exists\)-ET-opacity p-emptiness problem, the undecidability of the full ET-opacity p-emptiness problem is not surprising, but does not follow immediately.

**Theorem 3** (Undecidability of the full ET-opacity p-emptiness problem [10, Theorem 7.2]).: _The full ET-opacity p-emptiness problem is undecidable for general PTAs._

The proof relies on a reduction from the problem of reachability-emptiness in constant time, a problem itself proved undecidable in the same paper [10, Lemma 7.1]. Since the emptiness problem is undecidable, the synthesis problem is immediately unsolvable as well.

**Corollary 2**.: _The full ET-opacity p-synthesis problem is unsolvable for PTAs._

#### 4.2.3 Undecidability for lower/upper parametric timed automata

Let us now study the full ET-opacity p-emptiness problem for L/U-PTAs. While it is well-known that L/U-PTAs enjoy a monotonicity for reachability properties ("enlarging an upper-bound parameter or decreasing a lower-bound parameter preserves reachability") [28], we show in the following example that this is not the case for full ET-opacity.

**Example 8**.: Consider the PTA in Fig. 4. First, assume \(v\) such that \(v(p)=0.5\). Then, \(v(\mathcal{P})\) is not fully ET-opaque: indeed, \(\ell_{\mathrm{f}}\) can be reached in 1 time unit by visiting \(\ell_{\mathit{priv}}\), but not without visiting \(\ell_{\mathit{priv}}\). Second, assume \(v^{\prime}\) such that \(v^{\prime}(p)=1\). Then, \(v^{\prime}(\mathcal{P})\) is fully ET-opaque: indeed, \(\ell_{\mathrm{f}}\) can be reached for any duration in \([0,1]\) by runs both visiting and not visiting \(\ell_{\mathit{priv}}\). Finally, let us enlarge \(p\) further, and assume \(v^{\prime\prime}\) such that \(v^{\prime\prime}(p)=2\). Then, \(v^{\prime\prime}(\mathcal{P})\) becomes again not fully ET-opaque: indeed, \(\ell_{\mathrm{f}}\) can be reached in 2 time units without visiting \(\ell_{\mathit{priv}}\), but cannot be reached in 2 time units by visiting \(\ell_{\mathit{priv}}\). As a side note, remark that this PTA is actually an upper-bound parametric timed automaton (U-PTA) [18]; that is, monotonicity for this problem does not even hold for U-PTAs.

In fact, we show that, while the \(\exists\)-ET-opacity p-emptiness problem is decidable for L/U-PTAs (Theorem 2), the full ET-opacity p-emptiness problem becomes undecidable for this same class. This confirms (after previous works in [18, 29, 12, 13]) that L/U-PTAs stand at the frontier between decidability and undecidability.

**Theorem 4** (Undecidability of the full ET-opacity p-emptiness problem for L/U-PTAs [10, Theorem 7.4]).: _The full ET-opacity p-emptiness problem is undecidable for L/U-PTAs._

Since the emptiness problem is undecidable, the synthesis problem is immediately unsolvable as well.
**Corollary 3**.: _The full ET-opacity p-synthesis problem is unsolvable for L/U-PTAs._

_Remark 4_.: Since L/U-PTAs are a subclass of PTAs (put differently: "any L/U-PTA is a PTA"), the negative results proved for L/U-PTAs (Theorem 4 and Corollary 3) immediately imply those previously shown for general PTAs (Theorem 3 and Corollary 2). However, in [10], a smaller number of clocks and parameters is needed to prove the aforementioned negative results for general PTAs, which justifies the two versions of the proofs in [10].

Figure 4: No monotonicity for full ET-opacity in L/U-PTAs

### Parametric weak ET-opacity

#### 4.3.1 Problem definitions

**Weak ET-opacity p-emptiness problem:** Input: A PTA \(\mathcal{P}\). Problem: Decide the emptiness of the set of parameter valuations \(v\) such that \(v(\mathcal{P})\) is weakly ET-opaque.

#### 4.3.2 Undecidability for general PTAs

We provide below an original result in the context of _weak_ opacity, but partially inspired by the construction used in the proof of Theorem 3.

**Theorem 5** (Undecidability of the weak ET-opacity p-emptiness problem).: _The weak ET-opacity p-emptiness problem is undecidable for general PTAs._

Proof.: We reduce from the reachability-emptiness problem in bounded time, which is undecidable from [11, Theorem 3.12]. (This is different from the proof of [10, Theorem 7.2], which reduces from the reachability-emptiness problem in _constant_ time, which is undecidable according to [10, Lemma 7.1].) Consider an arbitrary PTA \(\mathcal{P}\), with initial location \(\ell_{0}\) and a final location \(\ell_{\mathrm{f}}\). We add the following locations and transitions in \(\mathcal{P}\) to obtain a PTA \(\mathcal{P}^{\prime}\), as in Fig. 5: (\(i\)) a new initial location \(\ell_{0}{}^{\prime}\), with outgoing transitions in \(0\)-time (due to their guard \(x=0\), where \(x\) is a new clock not belonging to \(\mathcal{P}\), and never reset in \(\mathcal{P}^{\prime}\)) to \(\ell_{0}\) and to a new location \(\ell_{priv}\), (\(ii\)) a new location \(\ell_{pub}\) with an incoming transition from \(\ell_{\mathrm{f}}\) guarded by \(x=1\), and (\(iii\)) a new final location \(\ell_{\mathrm{f}}{}^{\prime}\) with incoming transitions from \(\ell_{pub}\) and \(\ell_{priv}\) both guarded by \(x=1\).

First, note that, due to the guarded transitions, \(\ell_{\mathrm{f}}{}^{\prime}\) is reachable for any parameter valuation via runs visiting \(\ell_{priv}\), (only) for an execution time equal to \(1\). That is, for all \(v\), \(\mathit{DVisit}^{\mathit{priv}}(v(\mathcal{P}^{\prime}))=\{1\}\). We now show that there exists a valuation \(v\) such that \(v(\mathcal{P}^{\prime})\) is weakly ET-opaque (with \(\ell_{priv}\) as private location, and \(\ell_{\mathrm{f}}{}^{\prime}\) as final location) iff there exists a valuation \(v\) such that \(\ell_{\mathrm{f}}\) is reachable in \(v(\mathcal{P})\) for an execution time \(\leq 1\).

* Assume there exists some valuation \(v\) such that \(\ell_{\mathrm{f}}\) is reachable from \(\ell_{0}\) in \(\mathcal{P}\) for an execution time \(\leq 1\). Then, due to our construction, \(\ell_{pub}\) is visited on the way to \(\ell_{\mathrm{f}}{}^{\prime}\) in \(v(\mathcal{P}^{\prime})\) (only) for the execution time \(1\). Therefore, \(D\overline{\mathit{Visit}}^{\mathit{priv}}(v(\mathcal{P}^{\prime}))=\{1\}=\mathit{DVisit}^{\mathit{priv}}(v(\mathcal{P}^{\prime}))\) and then \(v(\mathcal{P}^{\prime})\) is weakly ET-opaque (and also fully ET-opaque, which plays no role here).
* Conversely, if \(\ell_{\mathrm{f}}\) is not reachable from \(\ell_{0}\) in \(\mathcal{P}\) for any valuation for an execution time \(\leq 1\), then no run reaches \(\ell_{\mathrm{f}}{}^{\prime}\) in time \(1\) without visiting \(\ell_{priv}\), for any valuation of \(\mathcal{P}^{\prime}\). Therefore, for any valuation \(v\), \(\mathit{DVisit}^{\mathit{priv}}(v(\mathcal{P}^{\prime}))=\{1\}\not\subseteq D\overline{\mathit{Visit}}^{\mathit{priv}}(v(\mathcal{P}^{\prime}))=\emptyset\). Hence, there is no valuation \(v\) such that \(v(\mathcal{P}^{\prime})\) is weakly ET-opaque.

Figure 5: Reduction from reachability-emptiness for the proof of Theorem 5

Therefore, there exists a valuation \(v\) such that \(v(\mathcal{P}^{\prime})\) is weakly ET-opaque iff there exists a valuation \(v\) such that \(\ell_{\mathrm{f}}\) is reachable in \(v(\mathcal{P})\) for an execution time \(\leq 1\), which is undecidable from [10, Theorem 3.12]. This concludes the proof.

Since the emptiness problem is undecidable, the synthesis problem is immediately unsolvable as well.

**Corollary 4**.: _The weak ET-opacity p-synthesis problem is unsolvable for general PTAs._

#### 4.3.3 Undecidability for lower/upper parametric timed automata

We provide below another original result in the context of _weak_ opacity, this time for L/U-PTAs, largely inspired by the proof of Theorem 4, even though our construction needed to be changed.

**Theorem 6** (Undecidability of the weak ET-opacity p-emptiness problem for L/U-PTAs).: _The weak ET-opacity p-emptiness problem is undecidable for L/U-PTAs._

Proof.: Let us recall from [10, Theorem 3.12] that the reachability-emptiness problem is undecidable over bounded time for PTAs with (at least) 3 clocks and 2 parameters. Assume a PTA \(\mathcal{P}\) with 3 clocks and 2 parameters, say \(p_{1}\) and \(p_{2}\), and a final location \(\ell_{\mathrm{f}}\). Take 1 as a time bound. From [10, Theorem 3.12], it is undecidable whether there exists a parameter valuation for which \(\ell_{\mathrm{f}}\) is reachable in \(\mathcal{P}\) in time \(\leq 1\). The idea of our proof is that, as in [28, 9], we "split" each of the two parameters used in \(\mathcal{P}\) into a lower-bound parameter (\({p_{1}}^{l}\) and \({p_{2}}^{l}\)) and an upper-bound parameter (\({p_{1}}^{u}\) and \({p_{2}}^{u}\)). Each constraint of the form \(x<p_{i}\) (resp. \(x\leq p_{i}\)) is replaced with \(x<{p_{i}}^{u}\) (resp. \(x\leq{p_{i}}^{u}\)) while each constraint of the form \(x>p_{i}\) (resp. \(x\geq p_{i}\)) is replaced with \(x>{p_{i}}^{l}\) (resp. \(x\geq{p_{i}}^{l}\)); \(x=p_{i}\) is replaced with \({p_{i}}^{l}\leq x\leq{p_{i}}^{u}\). The idea is that the PTA \(\mathcal{P}\) is exactly equivalent to our construction with duplicated parameters only when \({p_{1}}^{l}={p_{1}}^{u}\) and \({p_{2}}^{l}={p_{2}}^{u}\). The crux of the rest of this proof is that we will "rule out" any parameter valuation not satisfying these equalities, so as to use directly the undecidability result of [10, Theorem 3.12].

Now, consider the extension of \(\mathcal{P}\) given in Fig. 6, and let \(\mathcal{P}^{\prime}\) be this extension. We assume that \(x\) is an extra clock not used in \(\mathcal{P}\). The syntax "\(\mathbb{X}\setminus\{x\}\gets 0\)" denotes that all clocks of the original PTA \(\mathcal{P}\) are reset, but not the new clock \(x\). The guard on the transition from \({\ell_{0}}^{\prime}\) to \(\ell_{4}\) stands for two different transitions guarded with \({p_{1}}^{l}<x\leq{p_{1}}^{u}\) and \({p_{2}}^{l}<x\leq{p_{2}}^{u}\), respectively.

Figure 6: Undecidability of the weak ET-opacity p-emptiness problem for L/U-PTAs

Let us first make the following observations:
Let us first make the following observations:

1. for any parameter valuation, one can take the transition from \({\ell_{0}}^{\prime}\) to \(\ell_{\mathit{priv}}\) at time 2 and then to \({\ell_{\mathrm{f}}}^{\prime}\) in 0-time (i.e., at time 2), i.e., \({\ell_{\mathrm{f}}}^{\prime}\) is always reachable in time 2 while visiting location \(\ell_{\mathit{priv}}\); put differently, \(\{2\}\subseteq\mathit{DVisit}^{\mathit{priv}}(v(\mathcal{P}^{\prime}))\) for any parameter valuation \(v\);
2. the original automaton \(\mathcal{P}\) can only be entered whenever \({p_{1}}^{l}\leq{p_{1}}^{u}\) and \({p_{2}}^{l}\leq{p_{2}}^{u}\); going from \({\ell_{0}}^{\prime}\) to \(\ell_{0}\) takes exactly 1 time unit (due to the \(x=1\) guard);
3. to reach \({\ell_{\mathrm{f}}}^{\prime}\) without visiting \(\ell_{\mathit{priv}}\), a run must go through \(\mathcal{P}\) and visit \(\ell_{\mathrm{f}}\), and its duration is necessarily 2; put differently, \(D\overline{\mathit{Visit}}^{\mathit{priv}}(v(\mathcal{P}^{\prime}))\subseteq\{2\}\) for any parameter valuation \(v\);
4. from [10, Theorem 3.12], it is undecidable whether there exists a parameter valuation for which there exists a run reaching \(\ell_{\mathrm{f}}\) from \(\ell_{0}\) in time \(\leq 1\), i.e., reaching \(\ell_{\mathrm{f}}\) from \({\ell_{0}}^{\prime}\) in time \(\leq 2\).

Figure 6: Undecidability of the weak ET-opacity p-emptiness problem for L/U-PTAs

Let us consider the following cases depending on the valuations:

1. for valuations \(v\) such that \({p_{1}}^{l}>{p_{1}}^{u}\) or \({p_{2}}^{l}>{p_{2}}^{u}\), then thanks to the transitions from \({\ell_{0}}^{\prime}\) to \(\ell_{0}\), there is no way to enter the original PTA \(\mathcal{P}\) (and therefore to reach \({\ell_{\mathrm{f}}}^{\prime}\) without visiting \(\ell_{\mathit{priv}}\)); hence, \(D\overline{\mathit{Visit}}^{\mathit{priv}}(v(\mathcal{P}^{\prime}))=\emptyset\), and therefore \(\{2\}\subseteq\mathit{DVisit}^{\mathit{priv}}(v(\mathcal{P}^{\prime}))\not\subseteq D\overline{\mathit{Visit}}^{\mathit{priv}}(v(\mathcal{P}^{\prime}))\), i.e., \(\mathcal{P}^{\prime}\) is not weakly ET-opaque for any of these valuations.
2. for valuations \(v\) such that \({p_{1}}^{l}<{p_{1}}^{u}\) or \({p_{2}}^{l}<{p_{2}}^{u}\), then the transition from \({\ell_{0}}^{\prime}\) to \(\ell_{4}\) can be taken, and therefore there exist runs reaching \({\ell_{\mathrm{f}}}^{\prime}\) after a duration \(>2\) (for example of duration 3) and visiting \(\ell_{\mathit{priv}}\). Since no run can reach \({\ell_{\mathrm{f}}}^{\prime}\) without visiting \(\ell_{\mathit{priv}}\) for a duration \(\neq 2\), then \(\{3\}\subseteq\mathit{DVisit}^{\mathit{priv}}(v(\mathcal{P}^{\prime}))\not\subseteq D\overline{\mathit{Visit}}^{\mathit{priv}}(v(\mathcal{P}^{\prime}))\subseteq\{2\}\), and again \(\mathcal{P}^{\prime}\) is not weakly ET-opaque for any of these valuations.
3. for valuations such that \({p_{1}}^{l}={p_{1}}^{u}\) and \({p_{2}}^{l}={p_{2}}^{u}\), then the behavior of the modified \(\mathcal{P}\) (with duplicate parameters) is exactly the one of the original \(\mathcal{P}\). Also, note that the transition from \({\ell_{0}}^{\prime}\) to \(\ell_{4}\) cannot be taken. In contrast, the transition from \({\ell_{0}}^{\prime}\) to \(\ell_{\mathit{priv}}\) can still be taken, and therefore there exists a run of duration 2 visiting \(\ell_{\mathit{priv}}\) and reaching \({\ell_{\mathrm{f}}}^{\prime}\). Hence, \(\mathit{DVisit}^{\mathit{priv}}(v(\mathcal{P}^{\prime}))=\{2\}\) for any such valuation \(v\).
* Now, assume there exists such a parameter valuation \(v\) for which there exists a run of \(v(\mathcal{P})\) of duration \(\leq 1\) reaching \(\ell_{\mathrm{f}}\). As a consequence, there exists a run of \(v(\mathcal{P}^{\prime})\) of duration 2 (including the 1 time unit to go from \({\ell_{0}}^{\prime}\) to \(\ell_{0}\)) reaching \({\ell_{\mathrm{f}}}^{\prime}\) without visiting \(\ell_{\mathit{priv}}\). Hence, \(D\overline{\mathit{Visit}}^{\mathit{priv}}(v(\mathcal{P}^{\prime}))=\{2\}\). Therefore \(\mathit{DVisit}^{\mathit{priv}}(v(\mathcal{P}^{\prime}))=D\overline{\mathit{Visit}}^{\mathit{priv}}(v(\mathcal{P}^{\prime}))=\{2\}\). As a consequence, the modified automaton \(\mathcal{P}^{\prime}\) is weakly ET-opaque (and actually fully ET-opaque--which plays no role in this proof) for such a parameter valuation.
* Conversely, assume there exists no parameter valuation for which there exists a run of \(\mathcal{P}\) of duration \(\leq 1\) reaching \(\ell_{\mathrm{f}}\). In that case, \({\ell_{\mathrm{f}}}^{\prime}\) can never be reached without visiting \(\ell_{\mathit{priv}}\): \(D\overline{\mathit{Visit}}^{\mathit{priv}}(v(\mathcal{P}^{\prime}))=\emptyset\), and therefore \(\{2\}\subseteq\mathit{DVisit}^{\mathit{priv}}(v(\mathcal{P}^{\prime}))\not\subseteq D\overline{\mathit{Visit}}^{\mathit{priv}}(v(\mathcal{P}^{\prime}))\), i.e., \(v(\mathcal{P}^{\prime})\) is not weakly ET-opaque for any such parameter valuation \(v\).

As a consequence, there exists a parameter valuation \(v^{\prime}\) for which \(v^{\prime}(\mathcal{P}^{\prime})\) is weakly ET-opaque iff there exists a parameter valuation \(v\) for which there exists a run in \(v(\mathcal{P})\) of duration \(\leq 1\) reaching \(\ell_{\mathrm{f}}\)--which is undecidable from [10, Theorem 3.12].

**Corollary 5**.: _The weak ET-opacity p-synthesis problem is unsolvable for L/U-PTAs._

## 5 Expiring execution-time opacity problems

In [5], the authors consider a time-bounded notion of the opacity of [20], where the attacker has to disclose the secret before an upper bound, using partial observability. This can be seen as secrecy with an _expiration date_. The rationale is that retrieving a secret "too late" is useless; this is understandable, e.g., when the secret depends on the status of the memory: if the cache was overwritten since, then knowing the secret is probably useless in most situations. In addition, the analysis in [5] is carried over a time-bounded horizon; this means there are two time bounds in [5]: one for the secret expiration date, and one for the bounded-time execution of the system.

In this section, we review a recent work of ours [9] in which we incorporate this secret expiration date into our notion of ET-opacity: we only consider the former notion of time bound from [5] (the secret expiration date), and lift the assumption regarding the latter (the bounded-time execution of the system). More precisely, we consider an _expiring version of ET-opacity_, where the secret is subject to an expiration date; this can be seen as a combination of both concepts from [10] and [5]. That is, we consider that an attack is successful only when the attacker can decide that the secret location was entered less than \(\Delta\) time units before the system completion.
Conversely, if the attacker exhibits an execution time \(d\) for which it is certain that the secret location was visited, but this location was entered strictly more than \(\Delta\) time units prior to the system completion, then this attack is useless, and can be seen as a failed attack. The system is therefore _fully_ exp-ET-opaque if the set of execution times for which the private location was entered within \(\Delta\) time units prior to system completion is exactly equal to the set of execution times for which the private location was either not visited or entered \(>\Delta\) time units prior to system completion. In addition, when the former (secret) set of execution times is _included_ in the latter (non-secret) set of times, we say that the system is _weakly_ exp-ET-opaque; this encodes situations when the attacker might be able to deduce that no secret location was visited, but is not able to confirm that the secret location _was_ indeed visited.

On the one hand, our attacker model is _less powerful_ than [5], because our attacker has only access to the execution time (and to the input model); in that sense, our attacker capability is identical to [10]. On the other hand, we lift the time-bounded horizon analysis from [5], allowing us to analyze systems without any assumption on their execution time; therefore, we only import from [5] the notion of _expiring secret_. We summarize in Table 3 our different notions of ET-opacity and expiring ET-opacity; we will formally define expiring ET-opacity in the following.

\begin{table}
\begin{tabular}{|p{56.9pt}|p{142.3pt}|p{142.3pt}|}
\hline
 & **Secret runs** & **Non-secret runs** \\ \hline
ET-opacity & Runs visiting the private location (= private runs) & Runs not visiting the private location (= public runs) \\ \hline
exp-ET-opacity & Runs visiting the private location \(\leq\Delta\) time units before the system completion & (i) Runs not visiting the private location and (ii) runs visiting the private location \(>\Delta\) time units before the system completion \\ \hline
\end{tabular}

\begin{tabular}{|p{142.3pt}|p{199.2pt}|}
\hline
**The system is (resp. expiring)...** & **if...** \\ \hline
ET-opaque & \{durations of secret runs\} \(\cap\) \{durations of non-secret runs\} \(\neq\emptyset\) \\
weakly ET-opaque & \{durations of secret runs\} \(\subseteq\) \{durations of non-secret runs\} \\
fully ET-opaque & \{durations of secret runs\} \(=\) \{durations of non-secret runs\} \\ \hline
\end{tabular}
\end{table}
Table 3: Summary of the definitions for ET-opacity and expiring ET-opacity [10, 9]

### Exp-ET-opacity

Let us first introduce some notions dedicated to expiring ET-opacity (hereafter referred to as exp-ET-opacity). Let \(\mathbb{R}^{\infty}_{\geq 0}=\mathbb{R}_{\geq 0}\cup\{+\infty\}\). Given a TA \(\mathcal{A}\) and a finite run \(\rho\) in \(\mathfrak{T}_{\mathcal{A}}\), the _duration_ between two states of \(\rho:\mathfrak{s}_{0},(d_{0},e_{0}),\mathfrak{s}_{1},\cdots,\mathfrak{s}_{k}\) is \(\mathit{dur}_{\rho}(\mathfrak{s}_{i},\mathfrak{s}_{j})=\sum_{i\leq m\leq j-1}d_{m}\). We also define the _duration_ between two locations \(\ell_{1}\) and \(\ell_{2}\) as the duration \(\mathit{dur}_{\rho}(\ell_{1},\ell_{2})=\mathit{dur}_{\rho}(\mathfrak{s}_{i},\mathfrak{s}_{j})\) with \(\rho:\mathfrak{s}_{0},(d_{0},e_{0}),\mathfrak{s}_{1},\cdots,\mathfrak{s}_{i},\cdots,\mathfrak{s}_{j},\cdots,\mathfrak{s}_{k}\), where \(\mathfrak{s}_{j}\) is the first occurrence of a state with location \(\ell_{2}\) and \(\mathfrak{s}_{i}\) is the last state of \(\rho\) with location \(\ell_{1}\) before \(\mathfrak{s}_{j}\).
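For readers who prefer code to notation, the following small Python sketch (ours, with a deliberately naive run representation: a list of locations plus a list of delays \(d_{m}\)) computes the duration \(\mathit{dur}_{\rho}(\ell_{1},\ell_{2})\) exactly as defined above.

```python
# dur_rho(l1, l2): duration from the last occurrence of l1 before the first
# occurrence of l2, up to that first occurrence of l2 (our own illustration).

def dur(locations, delays, l1, l2):
    """`locations` lists the locations of states s_0 ... s_k;
    delays[m] is the delay d_m elapsed between s_m and s_{m+1}."""
    j = locations.index(l2)                              # first state at l2
    i = max(m for m in range(j) if locations[m] == l1)   # last l1 before s_j
    return sum(delays[i:j])                              # d_i + ... + d_{j-1}

# Example run: l0 --1.0--> lpriv --0.5--> lpriv --0.3--> lf
print(dur(["l0", "lpriv", "lpriv", "lf"], [1.0, 0.5, 0.3], "lpriv", "lf"))  # 0.3
```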
We choose this definition so that it coincides with the notions of opacity introduced in Definition 11 below. Indeed, we want to make sure that revealing a secret (\(\ell_{1}\) in this definition) is not a failure if it is done after a given time. Thus, as soon as the system reaches its final state (\(\ell_{2}\)), we will be interested in knowing how long the secret has been present, and thus the last time it was entered (\(\mathfrak{s}_{i}\)).

Given \(\Delta\in\mathbb{R}^{\infty}_{\geq 0}\), we define \(\mathit{Visit}^{priv}_{\leq\Delta}(\mathcal{A})\) (resp. \(\mathit{Visit}^{priv}_{>\Delta}(\mathcal{A})\)) as the set of runs \(\rho\in\mathit{Visit}^{priv}(\mathcal{A})\) s.t. \(\mathit{dur}_{\rho}(\ell_{priv},\ell_{\mathrm{f}})\leq\Delta\) (resp. \(\mathit{dur}_{\rho}(\ell_{priv},\ell_{\mathrm{f}})>\Delta\)). We refer to the runs of \(\mathit{Visit}^{priv}_{\leq\Delta}(\mathcal{A})\) as _secret_ runs; their durations are denoted by \(\mathit{DVisit}^{priv}_{\leq\Delta}(\mathcal{A})\). Similarly, the durations of the runs of \(\mathit{Visit}^{priv}_{>\Delta}(\mathcal{A})\) are denoted by \(\mathit{DVisit}^{priv}_{>\Delta}(\mathcal{A})\).

We define below two notions of ET-opacity w.r.t. a time bound \(\Delta\). We will compare two sets:

1. the set of execution times for which the private location was entered at most \(\Delta\) time units prior to system completion; and
2. the set of execution times for which either the private location was not visited at all, or it was last entered more than \(\Delta\) time units prior to system completion (which, in our setting, is somehow similar to _not_ visiting the private location, in the sense that entering it "too early" is considered of little interest).

If both sets match, the system is fully (\(\leq\Delta\))-ET-opaque. If the former is included in the latter, then the system is weakly (\(\leq\Delta\))-ET-opaque.

**Definition 11** (Expiring execution-time opacity).: Given a TA \(\mathcal{A}\) and a bound (i.e., an expiration date for the secret) \(\Delta\in\mathbb{R}^{\infty}_{\geq 0}\), we say that \(\mathcal{A}\) is _fully exp-ET-opaque_ w.r.t. the expiration date \(\Delta\), denoted by _fully (\(\leq\Delta\))-ET-opaque_, if
\[\mathit{DVisit}^{priv}_{\leq\Delta}(\mathcal{A})=\mathit{DVisit}^{priv}_{>\Delta}(\mathcal{A})\cup D\overline{\mathit{Visit}}^{priv}(\mathcal{A}).\]
Moreover, \(\mathcal{A}\) is _weakly exp-ET-opaque_ w.r.t. the expiration date \(\Delta\), denoted by _weakly (\(\leq\Delta\))-ET-opaque_, if
\[\mathit{DVisit}^{priv}_{\leq\Delta}(\mathcal{A})\subseteq\mathit{DVisit}^{priv}_{>\Delta}(\mathcal{A})\cup D\overline{\mathit{Visit}}^{priv}(\mathcal{A}).\]
Finally, \(\mathcal{A}\) is \(\exists\)-_ET-opaque_ w.r.t. the expiration date \(\Delta\), denoted by \(\exists\)-(\(\leq\Delta\))-_ET-opaque_, if
\[\mathit{DVisit}^{priv}_{\leq\Delta}(\mathcal{A})\cap\big{(}\mathit{DVisit}^{priv}_{>\Delta}(\mathcal{A})\cup D\overline{\mathit{Visit}}^{priv}(\mathcal{A})\big{)}\neq\emptyset.\]

**Example 9**.: Consider again the PTA in Fig. 2; let \(v\) be such that \(v(p_{1})=1\) and \(v(p_{2})=2.5\). Fix \(\Delta=1\).
We have:

* \(D\overline{\mathit{Visit}}^{priv}(v(\mathcal{P}))=[0,3]\)
* \(\mathit{DVisit}^{priv}_{>\Delta}(v(\mathcal{P}))=(2,2.5]\)
* \(\mathit{DVisit}^{priv}_{\leq\Delta}(v(\mathcal{P}))=[1,2.5]\)

Therefore, we say that \(v(\mathcal{P})\) is:

* \(\exists\)-(\(\leq 1\))-ET-opaque, as \([1,2.5]\cap\big{(}(2,2.5]\cup[0,3]\big{)}\neq\emptyset\)
* weakly (\(\leq 1\))-ET-opaque, as \([1,2.5]\subseteq\big{(}(2,2.5]\cup[0,3]\big{)}\)
* not fully (\(\leq 1\))-ET-opaque, as \([1,2.5]\neq\big{(}(2,2.5]\cup[0,3]\big{)}\)

As noted in Remark 3, despite the weak (\(\leq 1\))-ET-opacity of \(v(\mathcal{P})\), the attacker can deduce some information about the visit of the private location for some execution times. For example, if a run has a duration of 3 time units, it cannot be a private run, and therefore the attacker can deduce that the private location was not visited at all.

### Exp-ET-opacity problems in timed automata

#### 5.2.1 Problem definitions

We define seven different problems in the context of (non-parametric) TAs:

**\(\exists\)-exp-ET-opacity decision problem:**
Input: A TA \(\mathcal{A}\) and a bound \(\Delta\in\mathbb{R}_{\geq 0}^{\infty}\)
Problem: Decide whether \(\mathcal{A}\) is \(\exists\)-(\(\leq\Delta\))-ET-opaque.

**Full (resp. weak) exp-ET-opacity decision problem:**
Input: A TA \(\mathcal{A}\) and a bound \(\Delta\in\mathbb{R}_{\geq 0}^{\infty}\)
Problem: Decide whether \(\mathcal{A}\) is fully (resp. weakly) (\(\leq\Delta\))-ET-opaque.

**Full (resp. weak) exp-ET-opacity \(\Delta\)-emptiness problem:**
Input: A TA \(\mathcal{A}\)
Problem: Decide the emptiness of the set of bounds \(\Delta\) such that \(\mathcal{A}\) is fully (resp. weakly) (\(\leq\Delta\))-ET-opaque.

**Full (resp. weak) exp-ET-opacity \(\Delta\)-computation problem:**
Input: A TA \(\mathcal{A}\)
Problem: Compute the maximal set \(\mathcal{D}\) of bounds such that \(\mathcal{A}\) is fully (resp. weakly) (\(\leq\Delta\))-ET-opaque for all \(\Delta\in\mathcal{D}\).

**Example 10**.: Consider again the PTA in Fig. 2; let \(v\) be such that \(v(p_{1})=1\) and \(v(p_{2})=2.5\) (as in Example 9). Let us exemplify some of the problems defined above.

* Given \(\Delta=1\), the weak exp-ET-opacity decision problem asks whether \(v(\mathcal{P})\) is weakly (\(\leq 1\))-ET-opaque--the answer is "yes" from Example 9.
* The answer to the weak exp-ET-opacity \(\Delta\)-emptiness problem is therefore "no", because the set of bounds \(\Delta\) such that \(v(\mathcal{P})\) is weakly (\(\leq\Delta\))-ET-opaque is not empty.
* Finally, the weak exp-ET-opacity \(\Delta\)-computation problem asks to compute all the corresponding bounds: in this example, the solution is \(\Delta\in\mathbb{R}_{\geq 0}^{\infty}\), i.e., the set of all possible (non-negative) values for \(\Delta\).
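The set comparisons underlying Definition 11 and the examples above are mechanical once the duration sets are known. The following sketch (ours) re-checks the verdicts of Example 9, representing the duration sets as sympy interval sets.

```python
# A toy re-check (ours) of Example 9 with sympy interval sets.
from sympy import Interval, Union

secret     = Interval(1, 2.5)            # DVisit^priv_{<=Delta}
late_priv  = Interval.Lopen(2, 2.5)      # DVisit^priv_{>Delta}
public     = Interval(0, 3)              # durations of runs avoiding l_priv
non_secret = Union(late_priv, public)    # simplifies to Interval(0, 3)

print(not secret.intersect(non_secret).is_empty)  # exists-(<=1)-ET-opaque: True
print(secret.is_subset(non_secret))               # weakly (<=1)-ET-opaque: True
print(secret == non_secret)                       # fully  (<=1)-ET-opaque: False
```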
**Relations with the ET-opacity problems.** Note that, when considering \(\Delta=+\infty\), \(\mathit{DVisit}^{priv}_{>\Delta}(\mathcal{A})=\emptyset\) and all the execution times of runs visiting \(\ell_{priv}\) are in \(\mathit{DVisit}^{priv}_{\leq\Delta}(\mathcal{A})\). Therefore, full (\(\leq+\infty\))-ET-opacity matches full ET-opacity. We can therefore notice that answering the full exp-ET-opacity decision problem for \(\Delta=+\infty\) is decidable (Proposition 3). However, the emptiness and computation problems cannot be reduced to the full ET-opacity problems from Section 4.1.3. Conversely, it is possible to answer the full ET-opacity decision problem by checking the full exp-ET-opacity decision problem with \(\Delta=+\infty\). Moreover, the ET-opacity t-computation problem reduces to the full exp-ET-opacity \(\Delta\)-computation problem: if \(+\infty\in\mathcal{D}\), we get the answer. Recall that we summarize our different definitions of (expiring) ET-opacity in Table 3.

#### 5.2.2 Results

In general, the link between the full and weak notions of the three aforementioned problems is not obvious. However, for a fixed value of \(\Delta\), we establish the following theorem.

**Theorem 7** ([9, Theorem 1]).: _The full exp-ET-opacity decision problem reduces to the weak exp-ET-opacity decision problem._

We can now study the aforementioned problems.

**Theorem 8** (Decidability of the full (resp. weak) exp-ET-opacity decision problem [9, Theorems 2 and 5]).: _The full (resp. weak) exp-ET-opacity decision problem is decidable in_ NEXPTIME_._

_Remark 5_.: In Proposition 3, we established that the full (\(\leq+\infty\))-ET-opacity decision problem is in \(\mathsf{5EXPTIME}\). Theorem 8 thus extends our former results in three ways: 1. by including the parameter \(\Delta\), 2. by reducing the complexity, and 3. by considering as well the _weak_ notion of ET-opacity (considered separately in Proposition 4).

We complete these results from [9] with the following result, analogous to Proposition 2.

**Theorem 9** (Decidability of the \(\exists\)-exp-ET-opacity decision problem).: _The \(\exists\)-exp-ET-opacity decision problem is decidable in \(\mathsf{PSPACE}\)._

Proof.: The full (resp. weak) exp-ET-opacity decision problem was solved in [9] by building two non-deterministic finite automata whose languages represent the secret and the non-secret durations of the system, respectively. These automata being of exponential size and over a unary alphabet, testing the equality or inclusion of their languages leads to the \(\mathsf{NEXPTIME}\) algorithm quoted in Theorem 8. Similarly, the \(\exists\)-exp-ET-opacity decision problem can be decided by testing whether the intersection of the languages of these automata is empty. This can be done in \(\mathsf{NLOGSPACE}\) in the size of the automata (classically, by first building the product of these two automata, and then checking the reachability of a pair of final states; a toy illustration is sketched after Theorem 11 below), hence the \(\mathsf{PSPACE}\) algorithm.

**Theorem 10** (Solvability of the weak exp-ET-opacity \(\Delta\)-computation problem [9, Theorems 3 and 5]).: _The weak exp-ET-opacity \(\Delta\)-computation problem is solvable._

**Corollary 6** (Decidability of the weak exp-ET-opacity \(\Delta\)-emptiness problem [9, Corollary 1]).: _The weak exp-ET-opacity \(\Delta\)-emptiness problem is decidable._

In contrast to the weak exp-ET-opacity \(\Delta\)-computation problem, we only show below that the full exp-ET-opacity \(\Delta\)-emptiness problem is decidable; the computation problem remains open.

**Theorem 11** (Decidability of the full exp-ET-opacity \(\Delta\)-emptiness problem [9, Theorems 4 and 5]).: _The full exp-ET-opacity \(\Delta\)-emptiness problem is decidable._
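The product-and-reachability check invoked in the proof of Theorem 9 can be illustrated on ordinary finite automata. The following self-contained Python toy (ours; the NFA encoding below is an assumption made for this example) decides intersection non-emptiness of two NFA languages by a breadth-first search of the product automaton, looking for a jointly accepting state.

```python
# Intersection non-emptiness of two NFAs via the product construction
# (our own toy version of the argument used in the proof of Theorem 9).
from collections import deque

def intersection_nonempty(nfa1, nfa2):
    """Each NFA is (initial, accepting_set, delta), where delta maps
    (state, letter) to a set of successor states."""
    (i1, acc1, d1), (i2, acc2, d2) = nfa1, nfa2
    alphabet = {a for (_, a) in list(d1) + list(d2)}
    seen, queue = {(i1, i2)}, deque([(i1, i2)])
    while queue:
        q1, q2 = queue.popleft()
        if q1 in acc1 and q2 in acc2:
            return True                       # a common word is accepted
        for a in alphabet:
            for s1 in d1.get((q1, a), ()):
                for s2 in d2.get((q2, a), ()):
                    if (s1, s2) not in seen:
                        seen.add((s1, s2))
                        queue.append((s1, s2))
    return False

# Unary languages, as in the proof: L1 = {a^n : n even}, L2 = {a^n : n >= 3}
n1 = (0, {0}, {(0, "a"): {1}, (1, "a"): {0}})
n2 = (0, {3}, {(0, "a"): {1}, (1, "a"): {2}, (2, "a"): {3}, (3, "a"): {3}})
print(intersection_nonempty(n1, n2))  # True (witness: a^4)
```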
### Exp-ET-opacity in parametric timed automata

We now study exp-ET-opacity problems for PTAs: we will be interested in the synthesis and in the emptiness of the set of valuations ensuring that a system is fully (resp. weakly) exp-ET-opaque.

#### 5.3.1 Definitions

We define the following problems, where we ask for parameter valuations \(v\) and for valuations of \(\Delta\) s.t. \(v(\mathcal{P})\) is fully (resp. weakly) (\(\leq\Delta\))-ET-opaque.

**Full (resp. weak) exp-ET-opacity \(\Delta\)-p-emptiness problem:**
Input: A PTA \(\mathcal{P}\)
Problem: Decide whether the set of parameter valuations \(v\) and valuations of \(\Delta\) such that \(v(\mathcal{P})\) is fully (resp. weakly) (\(\leq\Delta\))-ET-opaque is empty.

**Full (resp. weak) exp-ET-opacity \(\Delta\)-p-synthesis problem:**
Input: A PTA \(\mathcal{P}\)
Problem: Synthesize the set of parameter valuations \(v\) and valuations of \(\Delta\) such that \(v(\mathcal{P})\) is fully (resp. weakly) (\(\leq\Delta\))-ET-opaque.

**Example 11**.: Consider again the PTA \(\mathcal{P}\) in Fig. 2. For this PTA, the answer to the weak exp-ET-opacity \(\Delta\)-p-emptiness problem is false, as there exists such a valuation (e.g., the valuation given in Example 10). Moreover, we can show that, for all \(\Delta\) and \(v\):

* \(D\overline{\mathit{Visit}}^{priv}(v(\mathcal{P}))=[0,3]\)
* if \(v(p_{1})>3\) or \(v(p_{1})>v(p_{2})\), it is not possible to reach \(\ell_{\mathrm{f}}\) with a run visiting \(\ell_{\mathit{priv}}\), and therefore \(\mathit{DVisit}^{priv}_{>\Delta}(v(\mathcal{P}))=\mathit{DVisit}^{priv}_{\leq\Delta}(v(\mathcal{P}))=\emptyset\)
* if \(v(p_{1})\leq 3\) and \(v(p_{1})\leq v(p_{2})\), then \(\mathit{DVisit}^{priv}_{\leq\Delta}(v(\mathcal{P}))=[v(p_{1}),\min(\Delta+3,v(p_{2}))]\)

Recall that the full exp-ET-opacity \(\Delta\)-p-synthesis problem aims at synthesizing the valuations such that \(\mathit{DVisit}^{priv}_{\leq\Delta}(v(\mathcal{P}))=\mathit{DVisit}^{priv}_{>\Delta}(v(\mathcal{P}))\cup D\overline{\mathit{Visit}}^{priv}(v(\mathcal{P}))\). The answer to this problem is therefore the set of valuations of timing parameters and of \(\Delta\) s.t.:
\[v(p_{1})=0\wedge\Big{(}\big{(}\Delta\leq 3\wedge 3\leq v(p_{2})\leq\Delta+3\big{)}\vee\big{(}v(p_{2})<\Delta\wedge v(p_{2})=3\big{)}\Big{)}.\]

#### 5.3.2 Results

**The subclass of lower/upper parametric timed automata.**

**Theorem 12** (Undecidability of the full (resp. weak) exp-ET-opacity \(\Delta\)-p-emptiness problem [8, Theorem 6]).: _The full (resp. weak) exp-ET-opacity \(\Delta\)-p-emptiness problem is undecidable for L/U-PTAs._

The synthesis problems are therefore immediately unsolvable as well.

**Corollary 7** ([8, Corollary 2]).: _The full (resp. weak) exp-ET-opacity \(\Delta\)-p-synthesis problem is unsolvable for L/U-PTAs._

**The full class of parametric timed automata.** The undecidability of the emptiness problems for L/U-PTAs proved above (Theorem 12) immediately implies undecidability for the larger class of PTAs. However, as in Remark 4, the full proof (given in [9]) of the result stated below uses fewer clocks and parameters than for L/U-PTAs (Theorem 12).

**Theorem 13** (Undecidability of the full (resp. weak) exp-ET-opacity \(\Delta\)-p-emptiness problem [9, Theorem 7]).: _The full (resp. weak) exp-ET-opacity \(\Delta\)-p-emptiness problem is undecidable for general PTAs._

Again, the synthesis problems are therefore immediately unsolvable as well.

**Corollary 8** ([9, Corollary 3]).: _The full (resp. weak) exp-ET-opacity \(\Delta\)-p-synthesis problem is unsolvable for PTAs._

## 6 Implementation and application to Java programs

A motivation for the works on ET-opacity (described in Sections 3 and 4) is the analysis of programs. More precisely, we are interested in deciding whether a program, e.g., written in Java, is ET-opaque, i.e., whether an attacker is incapable of deducing internal behavior by only looking at its execution time.
A second motivation is the _configuration_ of internal timing values of a program, e.g., changing some internal delays, or tuning some Thread.sleep() statements in the program, so that the program becomes ET-opaque--justifying notably the results in Section 4.

**Semi-algorithm and implementation.** Despite the negative theoretical results (notably Theorem 1), we addressed in [10] the \(\exists\)-ET-opacity p-synthesis problem for the full class of PTAs. Our method may not terminate (due to the undecidability) but, if it does, its result is correct. Our workflow [10] can be summarized as follows:

1. We slightly modify the original PTA (by adding a Boolean flag \(b\) and a final synchronization action);
2. We perform _self-composition_ (i.e., parallel composition with a copy of itself) of this modified PTA, a method commonly used in security analyses [33, 16];
3. We perform reachability-synthesis (i.e., the synthesis of parameter valuations for which a given location is reachable) on \(\ell_{\mathrm{f}}\) with contradictory values of \(b\).

Reachability-synthesis is implemented in IMITATOR [7], a parametric timed model checker taking as input networks of (extensions of) parametric timed automata, and synthesizing parameter valuations for which a number of properties (including reachability) hold.

**Analysis of Java programs.** In addition, we are interested in analyzing programs too. In order to apply our method to the analysis of programs, we need a systematic way of translating a program (e.g., a Java program) into a PTA. In general, precisely modeling the execution time of a program using models like TAs is highly non-trivial due to complications of hardware pipelining, caching, OS scheduling, etc. The reader is referred to the rich literature in, e.g., [30, 31]. In [10], we instead make the following simplistic assumption on the execution time of a program statement and focus on solving the parameter synthesis problem. We assume that the execution time of a program statement other than Thread.sleep(n) is within a range \([0,\epsilon]\), where \(\epsilon\) is a small integer constant (in milliseconds), whereas the execution time of the statement Thread.sleep(n) is within a range \([n,n+\epsilon]\). In fact, we choose to keep \(\epsilon\) _parametric_ to be as general as possible, and to not depend on particular architectures (a toy rendering of this timing model is sketched at the end of this section).

Our test subjects are a set of benchmark programs from the DARPA Space/Time Analysis for Cybersecurity (STAC) program.1 These programs are being released publicly to facilitate researchers to develop methods and tools for identifying STAC vulnerabilities in the programs. These programs are simple yet non-trivial, and were built on purpose to highlight vulnerabilities that can be easily missed by existing security analysis tools. We _manually_ translated these programs to PTAs, following the method described above, and using a number of assumptions (such as collapsing loops with predefined duration).

Footnote 1: [https://github.com/Apogee-Research/STAC/](https://github.com/Apogee-Research/STAC/)

In addition, we applied our method to a set of PTA examples from the literature, notably from [27, 24, 16, 34]. Experiments reported in [9] show that we can decide whether these benchmarks (including the programs) are fully ET-opaque or \(\exists\)-ET-opaque. When adding timing parameters, we additionally answer the \(\exists\)-ET-opacity p-synthesis problem, i.e., we synthesize the parameter valuations \(v\) and the associated execution times \(D\) such that \(v(\mathcal{P})\) is ET-opaque.
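Coming back to the timing model above, here is a rough Python illustration (ours; it renders only the timing assumption, not the actual PTA translation used with IMITATOR). For a straight-line block of statements, it returns the interval of possible total durations, keeping \(\epsilon\) symbolic as a coefficient.

```python
# Duration interval of a straight-line block under the timing model above:
# every statement other than Thread.sleep(n) takes a time in [0, eps];
# Thread.sleep(n) takes a time in [n, n + eps]. Each bound is returned as
# (constant, eps-coefficient), i.e. constant + coefficient*eps.

def block_duration(sleeps, n_other):
    """`sleeps` lists the arguments of the Thread.sleep(n) calls (in ms);
    `n_other` counts the remaining statements of the block."""
    total_sleep = sum(sleeps)
    n_statements = len(sleeps) + n_other
    lower = (total_sleep, 0)             # every non-sleep statement may take 0
    upper = (total_sleep, n_statements)  # every statement may take up to +eps
    return lower, upper

# Example block: Thread.sleep(10); x = f(y); Thread.sleep(5); return x;
print(block_duration([10, 5], 2))  # ((15, 0), (15, 4)), i.e. [15, 15 + 4*eps]
```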
Our method allows us to exhibit cases when the system can never be made ET-opaque (including by tuning internal delays), or is always ET-opaque, or is ET-opaque only for some execution times and internal timing parameters. To summarize, the following problems can be answered using our framework:

* \(\exists\)-ET-opacity decision problem
* full ET-opacity decision problem
* weak ET-opacity decision problem (not considered in our experiments in [9], but can be easily adapted)
* \(\exists\)-ET-opacity p-synthesis problem, but without guarantee of termination, due to the undecidability of Theorem 1.

However, our procedure cannot in its current form answer either the full ET-opacity p-synthesis problem or the weak ET-opacity p-synthesis problem. The expiring opacity problems of Section 5 were not addressed either.

## 7 Conclusion and perspectives

In this paper, we recalled (and proved a few original) results related to ET-opacity in TAs. Our notion of ET-opacity consists in considering an attacker model that can only observe the execution time of the system, i.e., the time from the initial location to a final location. The secret consists in deciding whether a special private location was visited or not. In contrast to another notion of opacity with a more powerful attacker able to observe some actions together with their timestamps, which leads to the undecidability of the decision problem for TAs [19], our notion of ET-opacity yields decidability results for TAs. Parameterizing the problems using timing parameters brings undecidability for PTAs, but the subclass of L/U-PTAs gives mildly positive results. When in addition we consider that the secret has an expiration date, similarly to the concepts introduced in [4], we are able to not only _decide_ problems for TAs, but also to _synthesize_ valuations for the expiration date such that the TA is weakly exp-ET-opaque. However, the problems extended with timing parameters all become undecidable. Recall that we summarized in Tables 1 and 2 the decidability results recalled in this paper, with a bold emphasis on the original results of this paper.

We also reported here on an implementation using IMITATOR, which is able to answer the non-parametric problems (\(\exists\)-ET-opacity decision problem, full ET-opacity decision problem, weak ET-opacity decision problem), as well as a parameter synthesis problem (\(\exists\)-ET-opacity p-synthesis problem), without guarantee of termination for the latter.

**Perspectives.** The main theoretical future work concerns the open problems in Table 2 (mainly the full exp-ET-opacity \(\Delta\)-computation problem): it is unclear whether we can _compute_ the exact set of expiration dates \(\Delta\) for which a TA is fully (\(\leq\Delta\))-ET-opaque. In terms of synthesis, we have so far no procedure able (whenever it terminates) to answer the full ET-opacity p-synthesis problem or the weak ET-opacity p-synthesis problem. Synthesis procedures to answer the expiring opacity problems (defined in Section 5) for PTAs remain to be designed too. These procedures cannot be both exact and guaranteed to terminate, due to the aforementioned undecidability results. Exact analysis of opacity for programs, including a more precise modeling of the cache, is also on our agenda, following works such as [21, 22]. A different direction is that of _control_: can we turn a non-opaque system into an opaque system, by restraining its possible behaviors?
A first step with our notion of ET-opacity was presented in [8], with only an _untimed_ controller. In addition, in [25], Gardey _et al._ propose several definitions of non-interference, related to various notions of simulation: they consider not only the _verification_ problem ("is the system non-interferent?") but also the (timed) _control_ problem ("synthesize a controller that will restrict the system in order to enforce non-interference"). Extending our current line of work on ET-opacity to _timed_ controllers remains to be done.

**Acknowledgments.** We are grateful to Clemens Dubslaff and Maurice ter Beek for the opportunity to give an invited talk at TiCSA 2023, and for useful suggestions on this manuscript.
2309.12243
Electrical operation of hole spin qubits in planar MOS silicon quantum dots
Silicon hole quantum dots have been the subject of considerable attention thanks to their strong spin-orbit coupling enabling electrical control. The physics of silicon holes is qualitatively different from germanium holes and requires a separate theoretical description. In this work, we theoretically study the electrical control and coherence properties of silicon hole dots with different magnetic field orientations. We discuss possible experimental configurations to optimize the electric dipole spin resonance (EDSR) Rabi time, the phonon relaxation time, and the dephasing due to random telegraph noise. Our main findings are: (i) The in-plane $g$-factor is strongly influenced by the presence of the split-off band, as well as by any shear strain. The $g$-factor is a non-monotonic function of the top gate electric field, in agreement with recent experiments. This enables coherence sweet spots at specific values of the top gate field and specific magnetic field orientations. (ii) Even a small ellipticity (aspect ratios $\sim 1.2$) causes significant anisotropy in the in-plane $g$-factor, which can vary by $50\% - 100\%$ as the magnetic field is rotated in the plane. (iii) EDSR Rabi frequencies are comparable to Ge, and the ratio between the relaxation time and the EDSR Rabi time $\sim 10^5$. For an out-of-plane magnetic field the EDSR Rabi frequency is anisotropic with respect to the orientation of the driving electric field, varying by $\approx 20\%$ as the driving field is rotated in the plane. Our work aims to stimulate experiments by providing guidelines on optimizing configurations and geometries to achieve robust, fast and long-lived hole spin qubits in silicon.
Zhanning Wang, Abhikbrata Sarkar, S. D. Liles, Andre Saraiva, A. S. Dzurak, A. R. Hamilton, Dimitrie Culcer
2023-09-21T16:42:00Z
http://arxiv.org/abs/2309.12243v1
# Electrical operation of hole spin qubits in planar MOS silicon quantum dots ###### Abstract Silicon hole quantum dots have been the subject of considerable attention thanks to their strong spin-orbit coupling enabling electrical control, a feature that has been demonstrated in recent experiments combined with the prospects for scalable fabrication in CMOS foundries. The physics of silicon holes is qualitatively different from germanium holes and requires a separate theoretical description, since many aspects differ substantially: the effective masses, cubic symmetry terms, spin-orbit energy scales, magnetic field response, and the role of the split-off band and strain. In this work, we theoretically study the electrical control and coherence properties of silicon hole dots with different magnetic field orientations, using a combined analytical and numerical approach. We discuss possible experimental configurations required to obtain a sweet spot in the qubit Larmor frequency, to optimize the electric dipole spin resonance (EDSR) Rabi time, the phonon relaxation time, and the dephasing due to random telegraph noise. Our main findings are: (i) The in-plane \(g\)-factor is strongly influenced by the presence of the split-off band, as well as by any shear strain that is typically present in the sample. The \(g\)-factor is a non-monotonic function of the top gate electric field, in agreement with recent experiments. This enables coherence sweet spots at specific values of the top gate field and specific magnetic field orientations. (ii) Even a small ellipticity (aspect ratios \(\sim 1.2\)) causes significant anisotropy in the in-plane \(g\)-factor, which can vary by \(50\%-100\%\) as the magnetic field is rotated in the plane. This is again consistent with experimental observations. (iii) EDSR Rabi frequencies are comparable to Ge, and the ratio between the relaxation time and the EDSR Rabi time \(\sim 10^{5}\). For an out-of-plane magnetic field the EDSR Rabi frequency is anisotropic with respect to the orientation of the driving electric field, varying by \(\approx 20\%\) as the driving field is rotated in the plane. Our work aims to stimulate experiments by providing guidelines on optimizing configurations and geometries to achieve robust, fast and long-lived hole spin qubits in silicon. ## I Introduction Silicon quantum devices have emerged as an ideal platform for scalable quantum computation, with remarkable advancements both theoretically and experimentally in recent years [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35]. Silicon devices offer several advantages, including weak hyperfine interaction with the possibility of isotopic purification to eliminate the hyperfine coupling altogether [36; 37; 38; 39; 40; 41; 15], absence of piezoelectric phonons [42; 43], and mature silicon micro-fabrication technology [44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54], making them competitive candidates to realize industrial-level scalable quantum computing architectures. Over the past few decades, numerous design proposals for qubits utilizing silicon quantum devices have been actively investigated, including the singlet-triplet transition qubit [46], single electron spin qubit [45; 56; 57], and acceptor or donor spin qubit [57; 58; 10; 59; 51; 57; 59]. Among the various platforms, silicon hole spin qubits exhibit additional desirable properties [60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76]. 
Firstly, hole systems possess strong spin-orbit coupling [77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 91; 92], which enables pure electrical manipulation of spin states via EDSR [93; 94; 95; 96], while the hole spin-3/2 is responsible for physics with no counterpart in electron systems [17; 80; 97; 98; 100; 99; 11; 101; 102]. Secondly, the absence of valley degeneracy avoids complications associated with the increase in Hilbert space that occurs for electrons [103; 104; 105; 106; 107; 108; 109; 110; 111; 112; 113]. Thirdly, whereas the hyperfine interaction is a strong decoherence source in other materials such as III-V group semiconductors [37; 38; 114; 115; 116; 117; 118; 119; 120; 121; 122; 123; 124], silicon can be isotopically purified [12; 125; 126; 55; 56; 127; 128; 129; 130; 131; 132; 133; 134; 135]. Recent years have witnessed key experiments on silicon hole qubits, including successful demonstrations on industrial standard complementary metal-oxide-semiconductor (CMOS) technologies [12; 13; 13; 49; 68; 127; 128; 129; 130; 132; 133], control of the number of holes and shell filling [63; 134], \(g\)-tensor manipulation in both nanowire and quantum dot systems [135; 136; 68; 129; 131], qubit operation at 25 K in the few-hole regime [136], single qubit operation above 4 K [137], long coherence time up to 10 ms in Si:B acceptors [24], dispersive readout [138; 139; 140; 53; 69; 74], Pauli spin blockade [141; 142], coupling between photons and hole spins [143], and the demonstration of the coupling between two hole qubits via anisotropic exchange [144]. In parallel with developments in silicon, considerable attention has been devoted to hole spin qubits in germanium [145; 146; 147; 20]. This includes spin state measurement and readout [61; 63; 148; 149; 150; 151; 152; 153; 154; 155; 156; 157; 158; 159; 160; 161; 162; 163; 164; 165; 166; 167], electrical control of spin states [172], \(g\)-tensor manipulation [178; 173; 174; 175; 176; 177; 178], coupling to a superconducting microwave resonator [179], fast EDSR Rabi oscillations up to 540 MHz [180] and relaxation times of up to 32 ms [181; 153]. Neverthe less, the understanding gained from the study of germanium hole qubits cannot be directly translated to silicon. In silicon, the split-off energy is very small (44 meV) compared with gallium arsenide (340 meV) and germanium (296 meV) [182; 183], necessitating the use of a full six-band Luttinger-Kohn model[184; 185; 186; 187; 188; 189; 190; 191; 192; 193]. Additionally, the larger effective mass of holes in silicon requires smaller dots to achieve the same orbital confinement energy splitting. The strength of the spin-orbit coupling in silicon is weaker than in germanium, while the cubic symmetry parameters are very strong and cannot be accounted for perturbatively. Moreover, previous studies have shown that, for silicon two-dimensional hole gases at experimentally relevant densities, the Schrieffer-Wolff transformation cannot be used to reduce the three-dimensional Hamiltonian to an effective two-dimensional one since the criteria for the applicability of the Schrieffer-Wolff transformation are not satisfied [193; 194]. Furthermore, the orbital magnetic field terms are not captured by such a Schrieffer-Wolff transformation, which will play an important role when the magnetic field is applied perpendicular to the gate electric field. 
Finally, strain effects in silicon are different from those in other materials, with axial and shear strains strongly affecting both spin dynamics and the in-plane \(g\)-factor in planar quantum dots. In particular, spatial strain gradients caused by thermal contraction of the gate electrodes have a very large effect in silicon due to the thin gate oxide [135].

Theoretical studies on silicon hole qubits have also advanced rapidly in line with experimental progress. The physics of spin-orbit coupling in silicon hole systems has been investigated with the aim of realizing fully electrically operated spin qubits [70; 193]. Studies have examined device geometry, dot orientation, and strain in the silicon quantum dot to optimize the quality of the EDSR Rabi frequency and qubit Larmor frequency. These studies have identified optimal operation points and the possibility of achieving fast EDSR Rabi oscillations even with small spin-orbit coupling in silicon hole systems [72; 195; 196; 197; 198; 199; 200; 201]. Decoherence due to hyperfine interactions and dephasing due to charge noise have also been studied, identifying experimental configurations for ultra-fast and highly coherent silicon hole spin qubits [39; 40; 75; 76; 43]. However, despite significant progress in both experiment and theory, a critical question remains unanswered: _When is it possible to minimize unwanted decoherence effects without reducing the efficiency of EDSR?_

In this paper, we focus on electrically-driven single hole spin qubits in planar silicon quantum dots and describe qubit dynamics in both perpendicular and in-plane magnetic fields. We adopt a hybrid analytical and computational approach which enables us to treat quantum dots with arbitrary confinement in a magnetic field of arbitrary orientation. For a perpendicular magnetic field we show that coherence sweet spots exist at certain values of the top gate field, which reflect the coupling of heavy- and light-hole states by the gate electric field. The EDSR Rabi frequency exhibits a maximum as a function of the top gate field, as does the relaxation rate. Large Rabi ratios (the ratio between the phonon relaxation time and the EDSR Rabi time) can be achieved, in excess of \(10^{6}\) at very small in-plane driving electric fields of 1 kV/m. For an in-plane magnetic field, we demonstrate that the qubit Zeeman splitting exhibits a large modulation as a function of the top gate electric field. Although extrema in the qubit Zeeman splitting exist as a function of the top gate field, these do not protect against charge noise, and one cannot identify coherence sweet spots, since the qubit is exposed to all three components of the noise electric field. At the same time, we find that the EDSR Rabi frequency reaches a maximum of about 100 MHz, with a minimum relaxation time of 1 ms, yielding a Rabi ratio of approximately \(10^{5}\) for an in-plane driving electric field of 1 kV/m. Importantly, the \(g\)-factor of elliptical dots is strongly anisotropic, with even a small aspect ratio (1.2) yielding a variation of the \(g\)-factor by a factor of 0.7-1.6 as the magnetic field is rotated in the plane. This is consistent with recent experimental observations [135]. Finally, we compare the properties and fabrication technologies of silicon and germanium and demonstrate that shear strain and axial strain are key factors leading to a large modulation of in-plane \(g\)-factors and the large Rabi ratio.
The EDSR Rabi frequencies for a given in-plane driving electric field are comparable in the two materials, which may reflect the fact that, while Ge has stronger spin-orbit coupling, Si has larger cubic symmetry terms \(\propto(\gamma_{3}-\gamma_{2})\), which enhance the effective spin-orbit coupling experienced by planar dots. Whereas we use characteristic values of the strain tensor components extracted from experiment, further investigation is needed to understand the role of strain and of the strain distribution throughout the sample.

Figure 1: **Schematic planar silicon quantum dot**. In this specific design, we focus on a single hole quantum dot in the silicon layer. By applying a gate electric field \(F_{z}\) via gate P1, holes accumulate in silicon and are confined vertically against the silicon oxide (indicated at the location of the two-dimensional hole gas). The single quantum dot is formed using gates P2-P5. The gates P2 and P4 provide confinement in the \(\hat{x}\)-direction, while gate P5 provides confinement in the \(\hat{y}\)-direction. P3 is used as the top gate of the quantum dot, accumulating a single hole in the potential well beneath. The resulting potential is indicated schematically below the gates.

The manuscript is organized as follows. In Section II, we introduce the Hamiltonian for the silicon hole quantum dot with an arbitrary magnetic field orientation, discuss the diagonalization technique, and outline the methodology used to determine the EDSR Rabi frequency, the relaxation time due to phonons, and the dephasing due to random telegraph noise. In Section III, we present the results in the presence of a perpendicular magnetic field as well as an in-plane magnetic field. We discuss the effect of the ellipticity of the quantum dot and the \(g\)-factor anisotropy in line with experimental observations. In Section IV, we compare the properties of silicon hole qubits with germanium hole qubits from the perspective of material parameters and fabrication details. We end with a summary and conclusions.

## II Model and methodology

In this section, we elucidate the properties of a single silicon hole spin qubit by introducing the model device and relevant experimental parameters. Furthermore, we provide a detailed discussion of the physical origin of the strain Hamiltonian, the confinement Hamiltonian, and the Zeeman Hamiltonian entering the envelope function approximation Hamiltonian \(H\). We also introduce the details of the numerical diagonalization used to obtain the relevant energy levels and wavefunctions of the system. Then, we present the formalism used to estimate the EDSR Rabi frequency, the relaxation time, and the dephasing time.

A schematic diagram of a possible realization of a silicon hole spin qubit is shown in Fig. 1. The gate electric field is applied along the \(\hat{z}\)-direction, denoted by \(\mathbf{F}=(0,0,F_{z})\). Our model is designed to describe a generic magnetic field \(\mathbf{B}=(B_{x},B_{y},B_{z})\) as illustrated in Fig. 1, using the vector potential \(\mathbf{A}=-(B_{z}y,B_{x}z,B_{y}x)\). We consider magnetic fields either in the \(xy\)-plane or parallel to the \(\hat{z}\)-direction (perpendicular to the qubit plane); we do not consider magnetic fields tilted out of the plane in this work.

### Diagonalization of silicon hole spin qubit Hamiltonian

The total Hamiltonian for a single silicon hole quantum dot qubit is given by \(H=H_{\rm LKBP}+H_{\rm conf}+H_{\rm gate}+H_{\rm Zeeman}\).
The perpendicular electric confinement potential is represented by \(H_{\rm gate}=eF_{z}z\) for \(z\in[-L/2,L/2]\), where \(L\) denotes the width of the quantum well in the \(\hat{z}\)-direction. This gate field induces structural inversion asymmetry (SIA) in the silicon hole system, thereby leading to a Rashba spin-orbit coupling. Moreover, the symmetry of the diamond lattice ensures there is no Dresselhaus-type spin-orbit coupling [202; 203; 204; 205; 206; 207; 208]. Although interface misalignment may induce a Dresselhaus-type spin-orbit coupling, it is expected to be negligible for the purposes of this paper [193], and is therefore not considered. In-plane confinement is modeled by a two-dimensional harmonic oscillator potential \(H_{\rm conf}=\hbar^{2}x^{2}/(2m^{*}a_{x}^{4})+\hbar^{2}y^{2}/(2m^{*}a_{y}^{4})\), where \(m^{*}\) is the in-plane effective mass, and \(a_{x}\) and \(a_{y}\) are the two axes of an elliptical dot. The Zeeman Hamiltonian can be written as \(H_{\rm Zeeman}=-2\kappa_{1}\mu_{B}\mathbf{J}\cdot\mathbf{B}-2\kappa_{2}\mu_{B}\mathbf{J}_{3}\cdot\mathbf{B}\), where \(\mathbf{J}=(J_{x},J_{y},J_{z})\) represents the angular momentum matrices for the direct sum of the spin-3/2 and spin-1/2 systems. The explicit matrix forms of \(\mathbf{J}\) can be found in the supplementary material. Additionally, in the anisotropic term of the Zeeman Hamiltonian, \(\mathbf{J}_{3}=(J_{x}^{3},J_{y}^{3},J_{z}^{3})\). Here \(\mu_{B}\) is the Bohr magneton, and \(\kappa_{1}=-0.42\), \(\kappa_{2}=0.01\) for silicon.

The quantum dot studied in this paper is produced by confining a two-dimensional hole gas in a metal-oxide-semiconductor structure grown along the \(\hat{z}\parallel[001]\) direction. The strong heavy hole - light hole splitting results in angular momentum quantization perpendicular to the two-dimensional plane. The heavy-hole states are characterized by the \(\hat{z}\)-component of the angular momentum, \(|J=3/2,M=\pm 3/2\rangle\), the light-hole states are characterized by \(|J=3/2,M=\pm 1/2\rangle\), while the split-off valence band has \(|J=1/2,M=\pm 1/2\rangle\). We orient the wave vectors \(k_{x},k_{y},k_{z}\) along [100], [010], and [001], respectively. The valence band and the effect of strain can be described by the Luttinger-Kohn-Bir-Pikus (LKBP) Hamiltonian in the basis of total angular momentum eigenstates \(\{\left|\frac{3}{2},\frac{3}{2}\right\rangle,\left|\frac{3}{2},-\frac{3}{2}\right\rangle,\left|\frac{3}{2},\frac{1}{2}\right\rangle,\left|\frac{3}{2},-\frac{1}{2}\right\rangle,\left|\frac{1}{2},\frac{1}{2}\right\rangle,\left|\frac{1}{2},-\frac{1}{2}\right\rangle\}\):

\[H_{\rm LKBP}=\begin{bmatrix}P+Q&0&-S&R&-\frac{1}{\sqrt{2}}S&\sqrt{2}R\\ 0&P+Q&R^{*}&S^{*}&-\sqrt{2}R^{*}&-\frac{1}{\sqrt{2}}S^{*}\\ -S^{*}&R&P-Q&0&-\sqrt{2}Q&\sqrt{\frac{3}{2}}S\\ R^{*}&S&0&P-Q&\sqrt{\frac{3}{2}}S^{*}&\sqrt{2}Q\\ -\frac{1}{\sqrt{2}}S^{*}&-\sqrt{2}R&-\sqrt{2}Q^{*}&\sqrt{\frac{3}{2}}S&P+\Delta_{0}&0\\ \sqrt{2}R^{*}&-\frac{1}{\sqrt{2}}S&\sqrt{\frac{3}{2}}S^{*}&\sqrt{2}Q^{*}&0&P+\Delta_{0}\end{bmatrix}\tag{1}\]

where \(P=P_{k}+P_{\varepsilon}\), \(Q=Q_{k}+Q_{\varepsilon}\), \(R=R_{k}+R_{\varepsilon}\), \(S=S_{k}+S_{\varepsilon}\), and \(\Delta_{0}=44\,\)meV is the energy splitting between the heavy-hole band and the split-off band; this splitting is small enough that the calculations must be extended to the six-band LKBP Hamiltonian (in germanium this is usually not necessary).
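Before unpacking the matrix elements of Eq. (1), we note that the Zeeman term defined above is straightforward to assemble numerically. The following numpy sketch (ours) builds \(\mathbf{J}\) and \(\mathbf{J}_{3}\) as the direct sum of the spin-3/2 and spin-1/2 blocks; the descending-\(M\) ordering inside each block is a convention chosen for this example only and may differ from the supplementary material.

```python
# Angular momentum matrices for the direct sum of spin-3/2 and spin-1/2,
# and the Zeeman Hamiltonian H_Zeeman = -2*k1*muB*J.B - 2*k2*muB*J^3.B (ours).
import numpy as np

def spin_matrices(j):
    m = np.arange(j, -j - 1, -1)                                  # m = j, ..., -j
    jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), 1)   # raising J_+
    jx = (jp + jp.T) / 2
    jy = (jp - jp.T) / (2 * 1j)
    jz = np.diag(m).astype(complex)
    return jx, jy, jz

def direct_sum(a, b):
    n, k = a.shape[0], b.shape[0]
    out = np.zeros((n + k, n + k), dtype=complex)
    out[:n, :n], out[n:, n:] = a, b
    return out

mu_B, k1, k2 = 5.788e-5, -0.42, 0.01        # mu_B in eV/T; silicon kappa values
J = [direct_sum(a, b) for a, b in zip(spin_matrices(1.5), spin_matrices(0.5))]
B = np.array([0.0, 0.0, 1.0])               # field of 1 T along z
H_zeeman = sum(-2 * mu_B * B[i] * (k1 * J[i] + k2 * J[i] @ J[i] @ J[i])
               for i in range(3))
print(np.round(np.linalg.eigvalsh(H_zeeman) / mu_B, 4))  # levels in units of mu_B*B
```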
Returning to Eq. (1): terms with subscript \(k\) are matrix elements from the Luttinger-Kohn (LK) Hamiltonian [184]. \(P_{k}=\frac{\hbar^{2}}{2m_{0}}\gamma_{1}\left(k_{x}^{2}+k_{y}^{2}+k_{z}^{2}\right)\) and \(Q_{k}=-\frac{\hbar^{2}}{2m_{0}}\gamma_{2}\left(2k_{z}^{2}-k_{x}^{2}-k_{y}^{2}\right)\) appear in the diagonal elements, while \(R_{k}=\frac{\sqrt{3}\hbar^{2}}{2m_{0}}[-\gamma_{2}\left(k_{x}^{2}-k_{y}^{2}\right)+2i\gamma_{3}k_{x}k_{y}]\) and \(S_{k}=\frac{\sqrt{3}\hbar^{2}}{m_{0}}\gamma_{3}\left(k_{x}-ik_{y}\right)k_{z}\) appear in the off-diagonal elements, which couple the heavy-hole band to the light-hole and split-off bands. Here \(\gamma_{1}=4.285\), \(\gamma_{2}=0.339\), \(\gamma_{3}=1.446\) are the Luttinger parameters for silicon [209; 210], \(m_{0}\) is the bare electron mass and \(\hbar\) is the reduced Planck constant. Terms with subscript \(\varepsilon\) are matrix elements from the Bir-Pikus (BP) Hamiltonian: \(P_{\varepsilon}=-a_{v}\left(\varepsilon_{xx}+\varepsilon_{yy}+\varepsilon_{zz}\right)\), \(Q_{\varepsilon}=-\frac{b_{v}}{2}\left(\varepsilon_{xx}+\varepsilon_{yy}-2\varepsilon_{zz}\right)\), \(R_{\varepsilon}=\frac{\sqrt{3}}{2}b_{v}\left(\varepsilon_{xx}-\varepsilon_{yy}\right)-id_{v}\varepsilon_{xy}\), \(S_{\varepsilon}=-d_{v}\left(\varepsilon_{xz}-i\varepsilon_{yz}\right)\). The material parameter \(a_{v}=2.38\,\)eV is the hydrostatic deformation potential constant, \(b_{v}=-2.10\,\)eV is the uniaxial deformation potential constant, and \(d_{v}=-4.85\,\)eV is the shear deformation potential constant [211; 212; 213]. The strain \(\varepsilon_{i,j}\), where \(i,j\in\{x,y,z\}\), is determined by the experimental configuration and the fabrication process.

In a quantum dot placed in a magnetic field, the momentum operators in the Luttinger-Kohn Hamiltonian are modified by the gauge potential. The new canonical conjugate momentum operators are given by \(\mathbf{p}+e\mathbf{A}\). To numerically diagonalize the total Hamiltonian \(H\), we use the following basis wave functions:
\[\Psi_{n_{x},n_{y},n_{z},i}(x,y,z)=\phi_{n_{z}}(z)\phi_{n_{x}}(x)\phi_{n_{y}}(y)\left|\chi_{i}\right\rangle\tag{2}\]
where \(n_{x},n_{y},n_{z}\) are the level numbers of the spatial wave functions and \(\left|\chi_{i}\right\rangle\) is the \(i\)-th spinor. The selection of wave functions depends on the shape of the confinement potentials, which would in principle necessitate the self-consistent solution of the Poisson and Schrodinger equations to account for the density-dependent properties of a device. Previous studies have demonstrated the efficiency of the variational approach in gallium arsenide and germanium [193]. However, the numerical generalization of the variational method becomes challenging as the number of energy levels increases. Our approach is instead to choose a complete set of basis wave functions in all directions, with a sufficient number of energy levels to capture the geometry of the quantum confinement. For the \(\hat{z}\)-direction, where our focus is on a triangular quantum well \(eF_{z}z\), we select sinusoidal wave functions derived from an infinite square well positioned symmetrically between \(z\in[-L/2,L/2]\). Incorporating the boundary conditions, this orthonormal and complete set of wave functions is
\[\phi_{n_{z}}(z)=\sqrt{\frac{2}{L}}\cos\left(\frac{n_{z}\pi z}{L}\right)\quad\text{for }n_{z}=1,3,5,\ldots\tag{3}\]
\[\phi_{n_{z}}(z)=\sqrt{\frac{2}{L}}\sin\left(\frac{n_{z}\pi z}{L}\right)\quad\text{for }n_{z}=2,4,6,\ldots\tag{4}\]
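As a quick numerical illustration of how this basis is used (our own sketch; the well width below is an arbitrary assumed value), one can build the matrix of the gate potential \(eF_{z}z\) in the basis of Eqs. (3)-(4) by quadrature. Parity forces the diagonal elements to vanish, so the triangular well only couples basis states of opposite parity.

```python
# Matrix elements <phi_m| z |phi_n> of the gate potential (up to the prefactor
# e*F_z) in the infinite-square-well basis of Eqs. (3)-(4), by grid quadrature.
import numpy as np

L = 10e-9                                    # well width in m (assumed value)
nz, npts = 8, 4001
z = np.linspace(-L / 2, L / 2, npts)
dz = z[1] - z[0]

def phi(n):                                  # Eqs. (3)-(4)
    if n % 2 == 1:
        return np.sqrt(2 / L) * np.cos(n * np.pi * z / L)
    return np.sqrt(2 / L) * np.sin(n * np.pi * z / L)

Z = np.array([[np.sum(phi(m) * z * phi(n)) * dz for n in range(1, nz + 1)]
              for m in range(1, nz + 1)])
print(np.max(np.abs(np.diag(Z))) < 1e-18)    # True: diagonal vanishes by parity
print(abs(Z[0, 1]) > 0)                      # opposite-parity states are coupled
```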
Incorporating the boundary conditions, these orthonormal complete set of wave functions are \[\phi_{n_{z}}(z)\,=\,\sqrt{\frac{2}{L}}\cos\left(\frac{n_{z}\pi z}{L}\right) \quad\text{ for }n_{z}=1,3,5,\ldots \tag{3}\] \[\phi_{n_{z}}(z)\,=\,\sqrt{\frac{2}{L}}\sin\left(\frac{n_{z}\pi z}{L}\right) \quad\text{ for }n_{z}=2,4,6,\ldots \tag{4}\] The in-plane wave functions we use are eigenstates of the two-dimensional harmonic oscillator: \[\phi_{n_{x}}(x)=\frac{1}{\sqrt{2^{n}n!}}\sqrt[4]{\frac{1}{\pi}}\frac{1}{\pi \frac{1}{\sqrt{a_{x}}}}\exp\left(-\frac{x^{2}}{2a_{x}^{2}}\right)H_{n_{x}} \left(\frac{x}{a_{x}}\right), \tag{5}\] \[\phi_{n_{y}}(y)=\frac{1}{\sqrt{2^{n}n!}}\sqrt[4]{\frac{1}{\pi}}\frac{1}{\sqrt{ a_{y}}}\exp\left(-\frac{y^{2}}{2a_{y}^{2}}\right)H_{n_{y}}\left(\frac{y}{a_{y}} \right), \tag{6}\] where the confinement frequency can be expressed as \(\omega_{x,y}\,=\,\hbar/(m^{*}a_{x,y}^{2})\), \(H_{n}\) represents Hermitian polynomials. To obtain high accuracy in numeric, we adopt 8 levels of Eq. 3, 14 levels of Eq. 5, and 14 levels of 6. We can then diagonalize the Hamiltonian \(H\) for specific device geometries, strains, electric fields, and magnetic fields. The ground state of the qubit Hamiltonian will be denoted by \(\left|0\right\rangle\) with energy \(E_{0}\), the first excited state will be denoted by \(\left|1\right\rangle\) with energy \(E_{4}\) and the qubit Zeeman splitting will be defined by \(\Delta E_{z}\,=\,E_{4}-E_{0}\). Consequently, the total Hamiltonian \(H\) will be diagonalized in the basis \(\{\left|0\right\rangle,\left|1\right\rangle,\left|2\right\rangle,...,\left| \right\rangle\}\). ### EDSR frequency and phonon relaxation time Electric dipole spin resonance (EDSR) methods are widely used to coherently drive transitions between spin states in silicon hole spin qubits. Within the ground state orbital, the spin of a hole qubit can be rotated by an alternating microwave signal (an alternating in-plane electric field), the frequency of this microwave signal should be matched with the qubit Zeeman splitting \(\Delta E_{z}\). Therefore, the EDSR frequency can be calculated by evaluating the transition matrix element \[f\,=\,\frac{e}{\hbar}\|\langle 4|\mathbf{r}\cdot\mathbf{E}_{\rm AC}|0\rangle\| \tag{7}\] In our case, the in-plane alternating driving electric field \(E_{\rm AC}\) is set to be \(1\,\)kV/m. The phonon relaxation time can be calculated using the method detailed in numerous studies [188; 189; 190; 188; 192; 189; 193; 188; 194; 189; 188; 195; 196; 187; 189; 197; 198; 199]. Unlike III-V semiconductors, silicon and germanium do not have piezoelectric phonons due to their non-polar nature [42; 43]. We assume that the silicon hole spin qubit is coupled with a thermal bath of bulk acoustic phonons along the polarization direction \(\alpha\in\ell,t_{1},t_{2}\) (one longitudinal direction and two transverse directions), with phonon wave vectors denoted by \(\mathbf{q}\). The energy of the acoustic phonons is \(\hbar\omega_{\alpha,\mathbf{q}}\). To calculate the phonon relaxation time, we consider the hole-phonon interaction Hamiltonian \(H_{\rm hp}^{\alpha}=\sum_{i,j}D_{i,j}\varepsilon_{i,j}^{\alpha}(\mathbf{r})\) for \(i,j\in\{x,y,z\}\), where \(D_{i,j}\) are deformation potential matrices. The detailed matrix elements can be found in the supplemental material. 
The phonon relaxation time can be calculated using the method detailed in numerous studies [187; 188; 189; 190; 192; 193; 194; 195; 196; 197; 198; 199]. Unlike III-V semiconductors, silicon and germanium do not have piezoelectric phonons due to their non-polar nature [42; 43]. We assume that the silicon hole spin qubit is coupled to a thermal bath of bulk acoustic phonons with polarization directions \(\alpha\in\{\ell,t_{1},t_{2}\}\) (one longitudinal and two transverse directions) and phonon wave vectors denoted by \(\mathbf{q}\). The energy of the acoustic phonons is \(\hbar\omega_{\alpha,\mathbf{q}}\). To calculate the phonon relaxation time, we consider the hole-phonon interaction Hamiltonian \(H_{\rm hp}^{\alpha}=\sum_{i,j}D_{i,j}\varepsilon_{i,j}^{\alpha}(\mathbf{r})\) for \(i,j\in\{x,y,z\}\), where the \(D_{i,j}\) are deformation potential matrices. The detailed matrix elements can be found in the supplemental material. The local strain \(\varepsilon_{i,j}^{\alpha}(\mathbf{r})\) has the form

\[\varepsilon_{i,j}^{\alpha}\,=\,\frac{i}{2}q\sqrt{\frac{\hbar}{2\rho V_{c}\omega_{\alpha,\mathbf{q}}}}\epsilon_{i,j}^{\alpha}\big{(}e^{-i\mathbf{q}\cdot\mathbf{r}}\hat{a}_{\alpha,\mathbf{q}}^{\dagger}+e^{i\mathbf{q}\cdot\mathbf{r}}\hat{a}_{\alpha,\mathbf{q}}\big{)} \tag{8}\]

where

\[\epsilon_{i,j}^{\alpha}\,=\,\hat{e}_{i}^{\alpha}\frac{q_{j}}{q}+\hat{e}_{j}^{\alpha}\frac{q_{i}}{q} \tag{9}\]

is a symmetric \(3\times 3\) matrix and \(\hat{e}^{\alpha}\) is the unit polarization vector of mode \(\alpha\). The transition rate between the first excited state \(E_{1}\) and the ground state \(E_{0}\) due to spontaneous phonon emission can be obtained from Fermi's golden rule,

\[\frac{1}{T_{1}}\,=\,\frac{2\pi}{\hbar}\sum_{\alpha}\frac{V_{c}}{(2\pi)^{3}}\int\mathrm{d}^{3}q\,\left|\langle 0|H_{\rm hp}^{\alpha}|1\rangle\right|^{2}\left(N_{q}^{\alpha}+1\right)\delta\left(\Delta E_{z}-\hbar\omega_{\alpha,\mathbf{q}}\right), \tag{10}\]

where \(V_{c}/(2\pi)^{3}\) is the reciprocal-space density of states and \(N_{q}^{\alpha}\) is the phonon occupation number following Bose-Einstein statistics: \(N_{q}^{\alpha}=\left[\exp\left(\hbar\omega_{q}^{\alpha}/\beta\right)-1\right]^{-1}\) with \(\beta=k_{B}T\), where \(T=0.1\,\)K and \(k_{B}\) is the Boltzmann constant. The phonon propagation velocity differs between polarization directions, with \(v_{\ell}\,=\,9000\) m/s for the longitudinal direction and \(v_{t_{1}}\,=\,v_{t_{2}}\,=\,5400\) m/s for the two transverse directions [211; 212; 220].

### Random telegraph noise coherence time

In a silicon hole quantum dot system, the spin-orbit coupling induced by the top gate field exposes the hole spin qubit to charge noise, primarily from charge defects. The charge defects can lead to fluctuations in the qubit energy spectrum, resulting in qubit dephasing [75; 221; 222; 223; 224; 225; 226]. In our model, we particularly focus on two key sources of charge-defect-induced dephasing: screened single charge defects and dipole charge defects. For the purposes of this discussion we have chosen to focus on defects whose electric fields lie primarily in the plane of the qubit. This is because, as will emerge below, regardless of the orientation of the magnetic field, the qubit Larmor frequency exhibits extrema as a function of the top gate electric field. By operating the qubit at these extrema one can protect against fluctuations in the out-of-plane electric field. Thus fluctuations in the in-plane electric field are most detrimental to the qubit, and this is what our model focuses on. The potential of a single defect can be modelled as [159; 224]:

\[U_{\mathrm{scr}}(q)=\frac{e^{2}}{2\epsilon_{0}\epsilon_{r}}e^{-qd}\frac{\Theta\left(2k_{F}-q\right)}{q+q_{\rm TF}}, \tag{11}\]

where \(q\) is the wave vector, \(q_{\rm TF}\) is the Thomas-Fermi wave vector for silicon, which is independent of the density of holes, and \(k_{F}\) is the Fermi wave vector. The relevant values can be found in Table 1. \(\Theta\) is the Heaviside step function.

Figure 3: **EDSR Rabi frequency for an out-of-plane magnetic field B\({}_{z}\).** a) The EDSR Rabi frequency is plotted as a function of the gate electric field \(F_{z}\) for two different out-of-plane magnetic field strengths: B\({}_{z}\) = 0.1 T (solid line with square markers) and B\({}_{z}\) = 0.2 T (dashed line with diamond markers). b) The EDSR Rabi frequency is shown as a function of the out-of-plane magnetic field \(B_{z}\) for two different gate electric field strengths: F\({}_{z}\) = 20 MV/m (solid line with square markers) and F\({}_{z}\) = 30 MV/m (dashed line with diamond markers). In this case, an in-plane AC electric field of 1 kV/m is applied. c) The EDSR Rabi frequency is plotted against the angle of the applied in-plane E\({}_{\mathrm{AC}}\) alternating driving electric field. The magnetic field is along the \(\hat{z}\)-direction with magnitude B\({}_{z}\) = 0.1 T. The top gate field is F\({}_{z}\) = 10 MV/m.

Figure 2: **Qubit Zeeman splitting for an out-of-plane magnetic field B\({}_{z}\).** a) The qubit Zeeman splitting is plotted as a function of the gate electric field for two different out-of-plane magnetic field strengths, B\({}_{z}\) = 1 T (solid line with square markers) and B\({}_{z}\) = 0.8 T (dashed line with diamond markers). A flat local minimum of the qubit Zeeman splitting is observed as a function of the gate electric field. b) The qubit Zeeman splitting is plotted as a function of the out-of-plane magnetic field B\({}_{z}\) for two different top gate field strengths, F\({}_{z}\) = 20 MV/m (solid line with square markers) and F\({}_{z}\) = 30 MV/m (dashed line with diamond markers). The parameters used to generate all the figures in this paper are provided in Table 1.
In position space, the single charge defect potential can be written as [159; 224]

\[U_{s}(\mathbf{r})=\frac{e^{2}}{4\pi\epsilon_{0}\epsilon_{r}}\frac{1}{q_{\rm TF}^{2}}\left(\frac{1}{\|\mathbf{r}-\mathbf{r}_{D}\|^{3}}\right) \tag{12}\]

where \(\mathbf{r}_{D}\,=\,(x_{D},y_{D},z_{D})\) is the position vector of the single charge defect, taking the center of the quantum dot as the origin. For silicon, the relative electrical permittivity is \(\epsilon_{r}\,=\,11.68\); \(\epsilon_{0}\) is the vacuum electrical permittivity. The defect location \(x_{D}\) is set to be \(30\,\mathrm{nm}\) from the center of the quantum dot. The resulting change in the dot's orbital splitting (the energy difference between the orbital ground state and first excited state) due to the defect is \(1\,\mu\)eV at F\({}_{z}\) = 1 MV/m, consistent with Refs. [227; 228]. We also include dipole charge defects, with

\[U_{\rm dip}\left(\mathbf{r}\right)=\frac{\mathbf{p}\cdot(\mathbf{r}-\mathbf{r}_{D})}{4\pi\epsilon_{0}\epsilon_{r}\|\mathbf{r}-\mathbf{r}_{D}\|^{3}} \tag{13}\]

The dipole moment is \(\mathbf{p}\,=\,e\mathbf{l}\), where \(\mathbf{l}\) is the dipole vector. In our calculations, we assume the size of the dipole to be \(0.1\,\mathrm{nm}\). The influence of the dipole charge defects on the dephasing time is typically much smaller than that of single charge defects [159; 223]. By considering both single charge defects and dipole charge defects, we can calculate the dephasing time in the quasi-static limit, which estimates an upper bound on the dephasing time, denoted \(T_{2}^{\star}\,=\,2\pi/\delta\omega\). We note that this analysis is applicable only to a single defect giving rise to random telegraph noise. The realistic but more complicated case of \(1/f\) noise from multiple defects will be considered in a different study.
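A minimal sketch of how these ingredients combine numerically is given below. The Thomas-Fermi wave vector value and the example splittings are assumed placeholder numbers (the actual values follow from Table 1 and the diagonalization), and in practice the Larmor shift \(\delta\omega\) is obtained by re-diagonalizing \(H\) with the defect potential added.

```python
import numpy as np

eps0, epsr = 8.854e-12, 11.68          # vacuum / relative permittivity
e, hbar = 1.602e-19, 1.055e-34
qTF = 1e9                              # Thomas-Fermi wave vector (assumed)

def U_single(r, r_D):
    """Screened single-charge defect potential of Eq. (12), in joules."""
    d = np.linalg.norm(np.asarray(r) - np.asarray(r_D))
    return e**2 / (4 * np.pi * eps0 * epsr) / qTF**2 / d**3

def U_dipole(r, r_D, l=0.1e-9):
    """Dipole defect potential of Eq. (13) for a dipole p = e*l along x."""
    dr = np.asarray(r) - np.asarray(r_D)
    p = np.array([e * l, 0.0, 0.0])
    return p @ dr / (4 * np.pi * eps0 * epsr * np.linalg.norm(dr)**3)

def T2_star(dEz_with_defect, dEz_without_defect):
    """Quasi-static dephasing time T2* = 2*pi/delta_omega, where
    delta_omega is the Larmor shift caused by switching the defect on."""
    domega = abs(dEz_with_defect - dEz_without_defect) / hbar
    return 2 * np.pi / domega

# e.g. a 2 neV shift of a 10 ueV splitting gives T2* ~ 2 us
print(T2_star((10e-6 + 2e-9) * e, 10e-6 * e))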
## III Results and discussion

In this section, we present the main findings of our numerical diagonalization. The section is divided into three parts, which focus on qubit dynamics in an out-of-plane magnetic field, dynamics in an in-plane magnetic field, and \(g\)-factor anisotropy, respectively. Our numerical model has identified extreme values of the qubit Zeeman splitting as a function of the top gate field for various parameters and magnetic field orientations. This is because the vertical electric field creates two opposing Stark effects: it simultaneously increases the HH-LH gap and enhances the HH-LH coupling. The sweet spot is the point where these two sources of Stark shift cancel each other. At this point the variation of the qubit Zeeman splitting vanishes to first order in the gate electric field, as shown in Fig. 2 and Fig. 6. This non-linear behavior of the qubit Zeeman splitting leads to similar non-linearities in other important properties of silicon hole spin qubits.

### Out-of-plane magnetic field

The qubit Zeeman splitting \(\Delta E_{z}\) as a function of the gate electric field F\({}_{z}\) and the out-of-plane magnetic field B\({}_{z}\) is plotted in Fig. 2. When B\({}_{z}\) = 0.1 T, the electric field can change the qubit Zeeman splitting by about 30%, and there is a local minimum in the qubit Zeeman splitting as a function of the top gate field. Next we discuss the EDSR Rabi frequency, shown in Fig. 3. We found that with an in-plane AC electric field of \(\sim 1\,\mathrm{kV/m}\) applied along the \(\hat{x}\)-direction, the EDSR Rabi frequency is about \(50\,\mathrm{MHz}\) at B\({}_{z}\) = 0.1 T. There also exists a local maximum of the EDSR frequency as a function of the gate electric field, and the EDSR Rabi frequency is linear in B\({}_{z}\). With an out-of-plane field, the Zeeman splitting term is typically of the order of \(\mu\)eV, which can be treated perturbatively. Therefore, when B\({}_{z}\) is small, the EDSR Rabi rate can be expanded as a function of \(\mathrm{B}_{z}\), with the leading order \(\propto\mathrm{B}_{z}\), similar to the finding of Ref. [156] for Ge. Interestingly, as shown in Fig. 3c, the EDSR Rabi frequency exhibits a slight anisotropy as the electric field is rotated in the plane, varying in magnitude by \(\sim 20\%\). This is traced to the presence of the cubic-symmetry \(\gamma_{3}\) terms in the Luttinger Hamiltonian. To study the number of operations allowed within one relaxation time, we plot the phonon-induced relaxation time and the Rabi ratio in Fig. 4. At \(\mathrm{B}_{z}\) = 0.1 T the relaxation time is several milliseconds, which allows \(\sim 10^{6}\) operations. The long relaxation time reflects the weak hole-phonon interactions in silicon, which is a lighter material with a faster phonon propagation speed than germanium. The relaxation rate scales as \(B^{5}\), consistent with Ref. [43].

Figure 4: **Relaxation time and Rabi ratio for an out-of-plane magnetic field B\({}_{z}\)**. a) The single-phonon relaxation time is calculated for an out-of-plane magnetic field B\({}_{z}\) = 0.1 T. This relaxation time represents the characteristic time scale for the decay of the qubit due to phonon-hole interactions. b) The Rabi ratio is plotted as a function of the top gate electric field \(F_{z}\). The Rabi ratio is \(\approx 10^{6}\), which provides an indication of the efficiency of the qubit operation, with higher values indicating a stronger qubit coherence.

Figure 5: **Dephasing time in the quasi-static limit in an out-of-plane magnetic field B\({}_{z}\)**. Both single charge defects and dipole charge defects are taken into account, but the dominant contribution to the dephasing potential comes from single charge defects. The dephasing time, which is estimated to be around \(2\,\mu\)s, exhibits a local maximum, suggesting the presence of an optimal operation point.
Our results in Fig. 2 and Fig. 6 show that extrema in the qubit Zeeman splitting as a function of the top gate field \(\mathrm{F}_{z}\) exist for all magnetic field orientations. Nevertheless, proper coherence sweet spots only exist for an out-of-plane magnetic field, as shown in Fig. 5. To understand this we must highlight the difference between extrema in the qubit Zeeman splitting and \(T_{2}^{*}\) sweet spots. The qubit states are Kramers conjugates. Time-reversal symmetry implies that the matrix elements giving rise to pure dephasing, which represent an energy difference between the up and down spin states, must involve the magnetic field; they cannot come from charge noise and spin-orbit coupling alone. Because the magnetic field enters the qubit states both through the Zeeman and the orbital terms, the composition of the qubit states differs depending on the magnetic field orientation. For an out-of-plane magnetic field, the in-plane and out-of-plane dynamics can be approximately decoupled. The main effect of the gate electric field is to give rise to Rashba-like terms acting on the heavy-hole spins. Unlike for Ge, the Schrieffer-Wolff approximation is not applicable to Si, as studied in [193], so this decomposition is not as easily visualized, although one can still envisage an effective spin-orbit Hamiltonian characterized by a Rashba constant. The Rashba term affects the qubit Zeeman splitting and is directly susceptible to charge noise perpendicular to the interface, which is the main way a defect affects the qubit spin dynamics. An in-plane electric field does not couple to the diagonal qubit matrix elements to leading order and can thus be disregarded in a first approximation.

### In-plane magnetic field

When the magnetic field is applied in the plane, our results (Fig. 6) show large qubit Zeeman splitting variations as a function of the top gate field, which is also observed in a recent experiment (Ref. [135]). A local minimum as a function of the gate electric field continues to exist; however, in an in-plane magnetic field the local minimum in the qubit Zeeman splitting does not protect the qubit from single charge defect noise, in contrast to the out-of-plane magnetic field case discussed above. The EDSR Rabi frequency in an in-plane field contains a dominant term \(\propto B_{x}\) as well as a small distortion \(\propto B_{x}^{2}\) due to the orbital magnetic field terms. For an in-plane field, the orbital term in the Luttinger-Kohn Hamiltonian can change the dispersion of holes significantly: away from the center of the Brillouin zone, the orbital term distorts the parabolicity of the dispersion. Therefore, an in-plane magnetic field will eventually result in stronger heavy-hole-light-hole mixing and significant modulation of the \(g\)-factor even when the amplitude of the field is small. Considering the transition matrix element in Eq. 7, this amplitude is determined by the in-plane AC electric field as well as by the shapes of the ground state wave function \(|0\rangle\) and the excited state wave function \(|1\rangle\). As a result, the EDSR Rabi frequency exhibits a non-linear behavior as a function of \(B_{x}\) due to the heavy-hole-light-hole admixture and the orbital magnetic field terms. We also observe a strong anisotropy in the EDSR Rabi frequency: applying the magnetic field parallel to the AC in-plane electric field results in an enhanced EDSR Rabi frequency, as shown in Fig. 7c). For an in-plane magnetic field, the orbital vector potential terms couple the in-plane and out-of-plane dynamics and no separation of the dynamics is possible. The net effect is that the qubit is sensitive to all components of the defect electric field, and an extremum in the qubit Zeeman splitting as a function of the perpendicular electric field does not translate into a coherence sweet spot for charge noise, as Fig. 9 shows.

### Ellipticity and in-plane \(g\)-factor anisotropy

In experimental studies, dots are often formed without explicitly attempting to keep them circular, leading to a notable anisotropy in the effective \(g\)-factors, as depicted in Fig. 10. Despite the presence of an anisotropy term, denoted \(\kappa_{2}\), in the Zeeman Hamiltonian, it is important to note that \(\kappa_{2}\) is typically smaller than \(\kappa_{1}\) in both group IV and group III-V hole systems. Therefore, it has limited impact on the energy spectrum of the qubit.

Figure 6: **Qubit Zeeman splitting for an in-plane magnetic field \(\mathrm{B}_{\parallel}\)**. a) The qubit Zeeman splitting is plotted as a function of the gate electric field for two different in-plane magnetic field strengths: \(\mathrm{B}_{x}\) = 1 T (solid line with square markers) and \(\mathrm{B}_{x}\) = 0.8 T (dashed line with diamond markers). Notably, there is a flat local maximum observed around \(\mathrm{F}_{z}\) = 11 MV/m. b) The qubit Zeeman splitting is shown as a function of the in-plane magnetic field \(\mathrm{B}_{x}\) for two different gate electric field strengths: \(\mathrm{F}_{z}\) = 20 MV/m (solid line with square markers) and \(\mathrm{F}_{z}\) = 30 MV/m (dashed line with diamond markers).

Figure 7: **EDSR Rabi frequency for an in-plane magnetic field \(\mathrm{B}_{\parallel}\)**. a) The EDSR Rabi frequency is plotted as a function of the gate electric field \(F_{z}\) for two different in-plane magnetic field strengths: \(\mathrm{B}_{x}\) = 1 T (solid line with square markers) and \(\mathrm{B}_{x}\) = 0.8 T (dashed line with diamond markers). b) The EDSR Rabi frequency is shown as a function of the in-plane magnetic field \(\mathrm{B}_{x}\) for two different gate electric field strengths: \(\mathrm{F}_{z}\) = 10 MV/m (solid line with square markers) and \(\mathrm{F}_{z}\) = 20 MV/m (dashed line with diamond markers). c) The EDSR Rabi frequency is plotted as a function of the in-plane magnetic field orientation. In this case, the magnitude of the in-plane magnetic field is fixed at 1 T, while the in-plane AC electric field remains at 1 kV/m. The magnetic field is rotated through \(2\pi\) in the \(xy\)-plane. Notably, the maximum EDSR Rabi frequency occurs when the magnetic field is aligned along the \(\hat{x}\)-direction, which coincides with the direction of the in-plane AC driving electric field.
In contrast, the orbital term in \(H_{\rm LK}\) and the effective mass contribute more strongly to the anisotropy of the Rashba spin-orbit coupling and of the effective \(g\)-factors. Additionally, while the linear Rashba term assumes a central role in circular dots, the cubic Rashba term becomes activated in elliptical quantum dots, resulting in enhanced Rashba spin-orbit coupling and faster EDSR Rabi oscillations. In possible experimental settings, the lateral confinement in the \(\hat{x}\) and \(\hat{y}\) directions can be independently adjusted using the electrostatic gates. This corresponds to in-situ control over \(\omega_{x,y}\), which are defined in Eqs. 5-6. Previous work by Qvist and Danon (Ref. [198]) investigated lateral confinement potentials, providing an analytical study of effective-mass anisotropy and of the size of the confinement potential, taking the linear Rashba term as an example in a perturbative approach to the four-band Luttinger-Kohn Hamiltonian. In contrast, our numerical calculations include all Rashba terms, involve tracing all non-commuting canonical momentum operators over higher excited states, and account for the non-parabolic behavior of the band structure based on a six-band Luttinger-Kohn Hamiltonian. Our results indicate that the \(g\)-factor exhibits an oscillating pattern when we rotate a constant in-plane magnetic field in the \(xy\)-plane. For example, when the aspect ratio is 1.2, we observe a \(g\)-factor variation of up to 100% as a function of the in-plane magnetic field angle. This substantial anisotropy in the in-plane \(g\)-factor is consistent with recent experimental observations (Ref. [135]).

Figure 8: **Relaxation time and Rabi ratio for an in-plane magnetic field \(\mathbf{B}_{\parallel}\)**. a) The single-phonon relaxation time is calculated for an in-plane magnetic field \(\mathbf{B}_{x}=1\,\)T. b) The Rabi ratio is plotted as a function of the gate electric field \(\mathbf{F}_{z}\); it is approximately \(10^{5}\).

Figure 9: **Dephasing time in the quasi-static limit for an in-plane magnetic field \(\mathbf{B}_{\parallel}\)**. Both single charge defects and dipole charge defects are taken into account, but the dominant contribution to the dephasing potential still comes from single charge defects. The dephasing time exhibits a local minimum, suggesting the absence of an optimal operation point.

## IV Comparison between germanium and silicon

A promising competitor to silicon in the area of semiconductor quantum dot hole spin qubits is germanium. However, owing to the different fabrication details of silicon and germanium hole quantum dots, the location of the sweet spot as a function of the top gate field, the strain in the sample, and the modulation of the in-plane and out-of-plane \(g\)-factors are expected to differ. As a comparison of the material characteristics, we list the important parameters of silicon and germanium relevant to fabricating hole spin qubits in Table 1. The in-plane effective mass of a hole in silicon (\(0.216m_{0}\)) is much heavier than that in germanium (\(0.057m_{0}\)). As a consequence, the heavy-hole-light-hole energy splitting in silicon (around \(5\,\mathrm{meV}\)) will be much smaller than that in germanium (around \(50\,\mathrm{meV}\)). A direct outcome of a small heavy-hole-light-hole energy splitting is that the presence of strain will efficiently lead to mixing between the light-hole and heavy-hole bands, which amplifies the Stark shift effect.
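As a rough consistency check, a particle-in-a-box estimate of the heavy-hole-light-hole splitting can be sketched from the out-of-plane masses in Table 1 alone. This is a minimal sketch assuming the \(L=13\,\)nm well width of Table 2 and neglecting strain entirely, so it only reproduces the order of magnitude quoted above.

```python
import numpy as np

hbar, m0, eV = 1.055e-34, 9.109e-31, 1.602e-19
L = 13e-9                                    # well width from Table 2
E0 = hbar**2 * np.pi**2 / (2 * m0 * L**2)    # bare confinement scale

# out-of-plane (m_HH, m_LH) in units of m0, from Table 1
masses = {"Si": (0.277, 0.201), "Ge": (0.204, 0.046)}
for mat, (mHH, mLH) in masses.items():
    # lowest-subband splitting between LH and HH in an infinite well
    dE_meV = E0 * (1 / mLH - 1 / mHH) / eV * 1e3
    print(f"{mat}: HH-LH splitting ~ {dE_meV:.1f} meV (strain neglected)")
```

Running this gives roughly 3 meV for Si and roughly 40 meV for Ge, an order of magnitude apart, consistent with the few-meV versus tens-of-meV splittings discussed above.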
Experimental data indicate that the in-plane \(g\)-factor in silicon can range between \(1.5\) and \(2.5\) [135], whereas in germanium hole quantum dots the \(g\)-factor only exhibits small variations, \(0.16\)-\(0.3\) [20; 160]. Furthermore, the larger effective mass in silicon limits the splitting of the quantum dot orbital levels, thereby restricting the size of silicon hole quantum dots. The strain present in both silicon and germanium hole quantum dots is another determining factor explaining the different \(g\)-factor modulations and Rabi ratios. In general, axial strain terms (i.e., \(P_{\varepsilon}\), \(Q_{\varepsilon}\), \(P_{\varepsilon}+Q_{\varepsilon}\), \(P_{\varepsilon}-Q_{\varepsilon}\) in \(H_{\mathrm{LKBP}}\)) change the heavy-hole-light-hole energy splitting directly, while shear strain terms (i.e., \(R_{\varepsilon}\), \(S_{\varepsilon}\) in \(H_{\mathrm{LKBP}}\)) intermix the heavy-hole and light-hole states. For silicon hole quantum dots based on metal-oxide-semiconductor platforms, strain naturally develops in the device due to the difference in thermal contraction between the metal electrodes and the silicon substrate. While strain engineering is common practice in the classical electronics industry, academic research into quantum dots has not focused on this aspect thus far, except for the occasional consideration of the choice of material stacks [229]. In the case of germanium hole spin qubits based on a homogeneously, uniaxially strained germanium quantum well in a SiGe/Ge/SiGe heterostructure, strain can be meticulously controlled. The substrate includes a fully strain-relaxed SiGe layer. The middle of the heterostructure comprises an epitaxially grown layer of strained germanium, hosting the hole qubit, with another layer of relaxed SiGe atop the Ge layer. The concentration of Si atoms in Si\({}_{x}\)Ge\({}_{1-x}\), represented by the composition fraction \(x\), also determines the strain in the pure Ge layer via Vegard's law. To quantitatively compare the strain in silicon and germanium hole spin qubit devices, we use typical parameters as listed in Ref. [156]. For instance, if the relaxed SiGe layer is Si\({}_{0.25}\)Ge\({}_{0.75}\) (\(x\) = 0.25), the axial compressive strain will be \(\epsilon_{xx}\) = \(-0.001\), which is \(\sim 5\) times larger than the strain present in the silicon metal-oxide-semiconductor quantum dot. In Table 2, we summarize various typical configurations, including strains, top gate fields, magnetic fields, and geometries, to reach the optimal operation points in the different materials. We note that the parameters used to fit experimental data, such as the dot geometry, shear strain, and axial strain, are estimates. It is crucial to include the non-uniform strain from the gate electrodes, as shown in Ref. [135]. For more precise results, direct strain profiling as in Ref. [230], or device-specific modelling, can be employed. Strain will be thoroughly investigated in future work. In this context, we note that we do not anticipate strain to change the existence of the optimal operation points of the qubits for a fast EDSR Rabi ratio and minimized dephasing time as a function of the top gate field.

Figure 10: **\(g\)-factor anisotropy**. The variation of the \(g\)-factor is plotted as a function of the in-plane magnetic field orientation \(\phi\). Here, \(\phi\) represents the angle of the magnetic field with respect to the \(\hat{x}\)-direction. The \(g\)-factor is determined for an in-plane magnetic field with magnitude of \(1\,\mathrm{T}\). The semi-major axis of the dot is \(a_{x}\) = \(24\,\mathrm{nm}\) and the semi-minor axis is \(a_{y}\) = \(20\,\mathrm{nm}\), giving an aspect ratio of \(1.2\). When the magnetic field is parallel to the semi-major axis (\(\phi\) = \(0^{\circ}\)), the \(g\)-factor has a maximum value, which is also observed in Ref. [135].

Figure 11: **A prototype double quantum dot in a strained germanium hole system**. The strained germanium quantum well is grown epitaxially on a strain-relaxed SiGe layer. The proportion of silicon in the SiGe can be tuned experimentally; therefore, the axial strain can be controlled. Gates B2 and T1 control the inter-dot tunneling. In this picture, we set the growth direction along the \(\hat{z}\)-direction.

Another important difference between Ge and Si concerns the applicability of the Schrieffer-Wolff transformation in analyzing qubit Hamiltonians. For Ge, a perturbative approach based on the Schrieffer-Wolff transformation has been demonstrated to be effective for an out-of-plane magnetic field [159], relying on the large heavy-hole-light-hole splitting in a low-density Ge system. In Si the heavy-hole-light-hole splitting is much smaller than in Ge, while the cubic-symmetry term in the Luttinger Hamiltonian is very strong. As a result, the Schrieffer-Wolff transformation cannot correctly account for the full density dependence (i.e., quantum dot radius dependence) of the hole states and the split-off band, and a full diagonalization of the Hamiltonian is needed to yield accurate results.

## V Conclusions and outlook

In this paper, starting from the diagonalization of the Luttinger-Kohn-Bir-Pikus Hamiltonian, we have developed a numerical method to study silicon hole spin qubits in different experimental configurations. We have shown that the gate electric field significantly modulates the qubit Zeeman splitting, the EDSR Rabi frequency and the relaxation time. We have shown that the dephasing time due to random telegraph noise stemming from single and dipole charge defects exhibits very different behaviors in in-plane and out-of-plane magnetic fields. We find that in an out-of-plane magnetic field coherence sweet spots can be identified as a function of the top gate field, at which random telegraph noise does not couple to the spin. However, in the case of in-plane fields the role of random telegraph noise can be reduced but not entirely removed, because the vector potential terms expose the qubit to all components of a defect's electric field. The numerical method we have developed in this work can be extended to many-hole spin qubits, to other materials, as well as to studies of the several qubits required for entanglement.

## VI Acknowledgments

We are grateful to Tetsuo Kodera for a series of stimulating discussions. This project is supported by the Australian Research Council Centre of Excellence in Future Low-Energy Electronics Technologies (project number CE170100039).
\begin{table} \begin{tabular}{l c c} \hline \hline Confinements & Silicon & Germanium \\ \hline Orbital energy splitting & 0.3 meV & 0.3 meV \\ HH-LH energy splitting & 7 meV & 100 meV \\ Typical \(\varepsilon_{xx}\) & 0.001 & 0.01 \\ Typical \(\varepsilon_{yy}\) & 0.001 & 0.01 \\ Typical \(\varepsilon_{zz}\) & -0.00077 & -0.0077 \\ Typical \(\varepsilon_{xz}\) & 0.0008 & 0 \\ \(\Delta E_{\text{Z}}\) (\(B_{x}\,=\,1\)T) & 100 \(\mu\)eV & 15 \(\mu\)eV \\ Sweet spot (\(B_{x}\,=\,1\)T) & 8 MV/m & 18 MV/m \\ \(\Delta E_{\text{Z}}\) (\(B_{z}\,=\,0.1\)T) & 10 \(\mu\)eV & 90 \(\mu\)eV \\ Sweet spot (\(B_{z}\,=\,0.1\)T) & 13 MV/m & 20 MV/m \\ \hline \hline \end{tabular} \end{table} Table 2: **Possible configurations for optimal operation points**. In this table, the diameter of the quantum dot is \(d_{x}\,=\,d_{y}\,=\,40\,\text{nm}\), and the width of the quantum well is L = 13 nm. We consider both in-plane and out-of-plane magnetic fields for silicon and germanium. The strains \(\varepsilon_{xx}\), \(\varepsilon_{yy}\) and \(\varepsilon_{zz}\) obey \(\varepsilon_{xx}\,=\,\varepsilon_{yy}\) and \(\varepsilon_{zz}\,=\,-\varepsilon_{xx}(2C_{12}/C_{11})\); the shear strain \(\varepsilon_{xz}\) is estimated from Ref. [135]. We use \(\Delta E_{\text{Z}}\) to denote the qubit Zeeman splitting. Note that there exist many possible combinations of parameters to get an optimal operation point as a function of the gate electric field.

\begin{table} \begin{tabular}{c c c} \hline \hline Parameters & Silicon & Germanium \\ \hline \(\gamma_{1}\) & 4.29 & 13.38 \\ \(\gamma_{2}\) & 0.34 & 4.24 \\ \(\gamma_{3}\) & 1.45 & 5.69 \\ \(\kappa_{1}\) & -0.42 & 3.41 \\ \(\kappa_{2}\) & 0.01 & 0.06 \\ \(m_{\text{HH}}\) & 0.277 \(m_{0}\) & 0.204 \(m_{0}\) \\ \(m_{\text{LH}}\) & 0.201 \(m_{0}\) & 0.046 \(m_{0}\) \\ \(m_{\text{HH}}^{\parallel}\) & 0.216 \(m_{0}\) & 0.057 \(m_{0}\) \\ \(m_{\text{LH}}^{\parallel}\) & 0.253 \(m_{0}\) & 0.109 \(m_{0}\) \\ \(\rho\) & 2329 kg/m\({}^{3}\) & 5330 kg/m\({}^{3}\) \\ \(v_{\ell}\) & 9000 m/s & 3570 m/s \\ \(v_{t_{1}}\) & 5400 m/s & 4850 m/s \\ \(v_{t_{2}}\) & 5400 m/s & 4850 m/s \\ \(a_{v}\) & 2.38 eV & 2.00 eV \\ \(b_{v}\) & -2.10 eV & -2.16 eV \\ \(d_{v}\) & -4.85 eV & -6.06 eV \\ \(\Delta_{0}\) & 44 meV & 296 meV \\ \hline \hline \end{tabular} \end{table} Table 1: **Comparison of silicon and germanium material parameters**. The relevant parameters defining the silicon and germanium hole spin qubits are collected from Refs. [209; 210; 220; 221]. In the table, the out-of-plane heavy-hole and light-hole band masses are defined as \(m_{\text{HH}}=m_{0}/(\gamma_{1}-2\gamma_{2})\) and \(m_{\text{LH}}=m_{0}/(\gamma_{1}+2\gamma_{2})\); the in-plane heavy-hole and light-hole effective masses are defined as \(m_{\text{HH}}^{\parallel}=m_{0}/(\gamma_{1}+\gamma_{2})\) and \(m_{\text{LH}}^{\parallel}=m_{0}/(\gamma_{1}-\gamma_{2})\). \(m_{0}\) is the bare electron mass and \(\gamma_{1}\), \(\gamma_{2}\), \(\gamma_{3}\) are Luttinger parameters. The density \(\rho\) is the bulk density of isotropic silicon or germanium. \(v_{\ell}\), \(v_{t_{1}}\), \(v_{t_{2}}\) are phonon propagation speeds along the three different polarizations. \(a_{v}\), \(b_{v}\), \(d_{v}\) are the hydrostatic, uniaxial, and shear deformation potential constants, respectively. The split-off band gap is denoted by \(\Delta_{0}\).
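Since the caption of Table 1 defines the effective masses directly in terms of the Luttinger parameters, the tabulated mass values can be reproduced with a few lines; the following is a minimal check using the parameter values from Table 1 itself.

```python
def hole_masses(g1, g2):
    """Effective masses (in units of m0) from the Luttinger parameters,
    using the definitions in the caption of Table 1."""
    return {"m_HH (out-of-plane)": 1 / (g1 - 2 * g2),
            "m_LH (out-of-plane)": 1 / (g1 + 2 * g2),
            "m_HH (in-plane)":     1 / (g1 + g2),
            "m_LH (in-plane)":     1 / (g1 - g2)}

# (gamma_1, gamma_2) for Si and Ge, Table 1
for mat, (g1, g2) in {"Si": (4.285, 0.339), "Ge": (13.38, 4.24)}.items():
    print(mat, {k: round(v, 3) for k, v in hole_masses(g1, g2).items()})
# reproduces 0.277/0.201/0.216/0.253 m0 for Si and 0.204/0.046/0.057/0.109 m0 for Ge
```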
2301.00065
Learning Koopman eigenfunctions of stochastic diffusions with optimal importance sampling and ISOKANN
For stochastic diffusion processes the dominant eigenfunctions of the corresponding Koopman operator contain important information about the slow-scale dynamics, that is, about the location and frequency of rare events. In this article, we reformulate the eigenproblem in terms of $\chi$-functions in the ISOKANN framework and discuss how optimal control and importance sampling allows for zero variance sampling of these functions. We provide a new formulation of the ISOKANN algorithm allowing for a proof of convergence and incorporate the optimal control result to obtain an adaptive iterative algorithm alternating between importance sampling and $\chi$-function approximation. We demonstrate the usage of our proposed method in experiments increasing the approximation accuracy by several orders of magnitude.
Alexander Sikorski, Enric Ribera Borrell, Marcus Weber
2022-12-30T22:20:35Z
http://arxiv.org/abs/2301.00065v1
Learning Koopman eigenfunctions of stochastic diffusions with optimal importance sampling and ISOKANN

###### Abstract

For stochastic diffusion processes the dominant eigenfunctions of the corresponding Koopman operator contain important information about the slow-scale dynamics, that is, about the location and frequency of rare events. In this article, we reformulate the eigenproblem in terms of \(\chi\)-functions in the ISOKANN framework and discuss how optimal control and importance sampling allows for zero variance sampling of these functions. We provide a new formulation of the ISOKANN algorithm allowing for a proof of convergence and incorporate the optimal control result to obtain an adaptive iterative algorithm alternating between importance sampling and \(\chi\)-function approximation. We demonstrate the usage of our proposed method in experiments increasing the approximation accuracy by several orders of magnitude.

## 1 Introduction

Many real-world stochastic processes contain rare events, for example folding and binding events in molecular systems. The analysis of the frequency and mechanism of these events often takes operators associated with the process and their dominant invariant subspaces into account [1, 6, 9, 11, 14]. Usually this type of analysis leads to a kind of chicken-and-egg problem. In order to compute the dominant invariant subspace of the Koopman operator of the process, one has to somehow also "sample those events which are usually rare". However, in order to know how to generate a bias on the process for observing these events, one would need information from the dominant invariant subspace of the Koopman operator. The key idea of this article is to take an iterative algorithm which approximates the dominant eigenfunctions of the operator and to use the intermediate approximations for sampling along the reaction paths and generating an optimal bias to observe the relevant events. The algorithm for approximating eigenfunctions is ISOKANN\({}^{1}\) [11] and the bias is computed according to the theory of optimal importance sampling and the change of path measures [4].

Footnote 1: An acronym for "Invariant subspaces of Koopman operators with artificial neural networks".

Briefly speaking, ISOKANN can be thought of as an Arnoldi-like method using neural networks as function representations and replacing the subspace projections by a transformation suitable to its application on neural networks. It does not compute the eigenfunctions themselves but rather the so-called \(\chi\)-functions, which span an invariant subspace of the Koopman operator and can be used to reconstruct the eigenfunctions. Furthermore the \(\chi\)-functions themselves allow for an interpretation as reaction coordinates indicating the locations of rare events and their reaction paths [3, 13]. Using this interpretation, the \(\chi\)-functions obtained during previous iterations can be used to adapt the sample locations for further iterations, e.g. by _\(\chi\)-stratified sampling_ (Sec. 2.6), thus providing better global coverage and facilitating exploration. The theory of optimal importance sampling, on the other hand, allows one to exploit the information locally by decreasing the variance of the samples required to approximate the action of the Koopman operator. This in turn allows ISOKANN to arrive at results either more quickly or more precisely.
It is due to this feedback loop, coupling local and global information, together with the representation of the \(\chi\)-functions as neural networks, that we expect ISOKANN to perform well even for complex high-dimensional systems. While these general ideas are far from being formalized yet, in this article we will try to construct the basic building blocks in a way amenable to their future analysis. In the following, we start by recalling basic knowledge about the eigenfunctions of Koopman operators (Sec. 2.1) and provide an abstract formulation of the ISOKANN problem in terms of \(\chi\)-functions (Sec. 2.2) encoding the invariant subspaces of the Koopman operator. This abstract formulation is incomplete without the choice of an adequate transformation, which we discuss in Section 2.3 and follow up with an explicit choice leading to 1D-ISOKANN (Sec. 2.4), which corresponds to the classical ISOKANN algorithm [11]. After proving convergence of 1D-ISOKANN we show how to reconstruct the eigenfunctions of the Koopman operator from the \(\chi\)-functions in Section 2.5. We then conclude Section 2 by providing an algorithmic description in the form of Algorithm 1 and discussing the actual sampling and computation procedure (Sec. 2.6). Section 3 starts with an introduction to the theory of optimal importance sampling of classical random variables (Sec. 3.1) before restating the result for the optimal sampling of path observables for diffusion processes with the Girsanov re-weighting in Section 3.2. After showing how to apply this result to obtain a zero-variance sampler for the evaluation of the Koopman operator (Sec. 3.3), we discuss how to integrate the result into the ISOKANN framework (Sec. 3.4). Finally (Sec. 3.5) we apply the 1D-ISOKANN algorithm to a one-dimensional double-well potential and observe the improvement in accuracy of the controlled versus the uncontrolled case.

## 2 ISOKANN Theory

Before introducing our key idea in the next chapter, we will in this chapter recall the basics of Koopman operator theory and summarize the ISOKANN method while also complementing it with some new results. In particular we provide a new dimension-agnostic formulation (6) which naturally leads to a possible extension of ISOKANN to higher dimensions, show how the classical 1D-ISOKANN can be seen as a special case of this formulation, and prove convergence of the algorithm to a \(\chi\)-function (Theorem 1). After showing how to reconstruct the dominant eigenfunctions from the ISOKANN result (Proposition 1) we conclude this section with a discussion of the actual implementation of ISOKANN, suggesting a new adaptive sampling scheme.

### The Koopman operator

Although our approach can be generalized to non-reversible stochastic processes (by shifting the focus from eigenfunctions to invariant subspaces), for simplicity we will restrict our explanations to the reversible case. More precisely, we will investigate potential-driven diffusion processes \(X=(X_{t})_{t\geq 0}\) of the form

\[\mathrm{d}X_{t}=b(X_{t})\mathrm{d}t+\sigma\mathrm{d}B_{t}, \tag{1}\]

taking values in the state space \(\mathbf{X}=\mathbb{R}^{n}\) with constant diffusion term \(\sigma\in\mathbb{R}^{n\times n}\) and force field \(b:\mathbf{X}\rightarrow\mathbb{R}^{n}\) given by the gradient of a smooth potential, \(b=-\nabla U\), \(U:\mathbf{X}\rightarrow\mathbb{R}\).\({}^{2}\) \(B\) is an \(n\)-dimensional Brownian motion.
Footnote 2: We choose \(b\) to be a gradient field for the process \(X_{t}\) to be reversible and hence admit a real eigendecomposition. Our principal results do not require reversibility when arguing in terms of invariant subspaces instead of eigenfunctions. However, for the sake of simplicity we here consider reversible systems only.

The Koopman operator for lag time \(T\), \(\mathbf{K}^{T}:L^{\infty}(\mathbf{X})\to L^{\infty}(\mathbf{X})\), applied to a function \(f\in L^{\infty}(\mathbf{X})\) is defined by its pointwise evaluation via

\[\left(\mathbf{K}^{T}f\right)(x)=\mathbf{E}\left[f(X_{T})\mid X_{0}=x\right] \tag{2}\]

i.e. the expectation value of \(f\) at time \(T\) when starting the system in \(X_{0}=x\). Recall that since the process is time-homogeneous the Koopman operator depends only on the lag time: for any start time \(t\geq 0\),

\[\mathbf{E}\left[f(X_{t+T})\mid X_{t}=x\right]=\left(\mathbf{K}^{T}f\right)(x).\]

The eigenfunctions \(v_{i}\in L^{\infty}(\mathbf{X})\) of \(\mathbf{K}^{T}\) satisfy for all lag times \(T\geq 0\)

\[\mathbf{K}^{T}v_{i} =\lambda_{i}(T)\,v_{i} \qquad i=1,2,... \tag{3}\]
\[\lambda_{i}(T) =\exp(Tq_{i}), \qquad 0=q_{1}>q_{2}\geq... \tag{4}\]

with time-dependent eigenvalues \(\lambda_{i}(T)\), exponential in time with rates \(q_{i}\) (which in turn are the eigenvalues of the corresponding infinitesimal generator of \(\mathbf{K}^{T}\)). In the following we will refer to \(v_{1},\ldots,v_{d}\) as the \(d\) _dominant eigenfunctions_ and call \(v_{1}\equiv 1\) the _trivial eigenfunction_. When clear from the context or of no importance we will omit the lag time \(T\) and simply speak of the Koopman operator \(\mathbf{K}\). The dominant eigenfunctions are of particular interest as they decay the slowest and hence dominate the long-time behavior of the system. The number \(d\) of eigenfunctions of interest depends on the time scales of the system and is usually chosen up to a spectral gap. There exist different approaches to estimate the eigenfunctions. Many depend on a discretization of the state space into cells, leading to a matrix representation of \(\mathbf{K}\). A classical method is starting trajectories in each such cell and counting how many of them end up in a certain cell. This sample-driven method can be interpreted as an approximate Ulam/Galerkin discretization onto indicator functions of the cells [7]. The eigenfunctions of \(\mathbf{K}\) are then approximated by the eigenfunctions of its (dense) matrix approximation. Unfortunately this scheme breaks down in high dimensions as the number of cells in a structured grid increases exponentially. One possible remedy to this problem is posed by the Square Root Approximation method (SQRA) [2]. It approximates the infinitesimal generator of the Koopman operator by a finite volume approximation where the volumes are implicitly defined by the Voronoi tessellation induced by some sample points. Using only evaluations of the potential \(U\) at these samples, it can be understood as a semi-parametric method resulting in a sparse matrix representation, which in turn can be used for the computation of the eigenfunctions. All these classical approaches, however, depend on a discretization _before_ solving the eigenproblem. As an alternative we will now summarize ISOKANN, a recent matrix-free approach learning a linear combination of the eigenfunctions by neural networks.
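To make the cell-counting construction above concrete, here is a minimal sketch under assumed toy dynamics (an overdamped particle with drift \(b(x)=-x\), i.e. \(U(x)=x^{2}/2\)); the grid, lag time and all numerical parameters are illustrative choices, not part of the method description above.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(x, T=1.0, dt=1e-3, sigma=1.0):
    """Euler-Maruyama integration of dX = -U'(X) dt + sigma dB for the
    toy potential U(x) = x^2/2, i.e. drift b(x) = -x, over lag time T."""
    for _ in range(int(T / dt)):
        x = x - x * dt + sigma * np.sqrt(dt) * rng.normal(size=x.shape)
    return x

def ulam_matrix(cells, n_per_cell=200):
    """Approximate K^T on the indicator functions of the grid cells:
    start trajectories in every cell and count where they end up."""
    n = len(cells) - 1
    P = np.zeros((n, n))
    for i in range(n):
        x0 = rng.uniform(cells[i], cells[i + 1], n_per_cell)
        xT = propagate(x0)
        j = np.clip(np.searchsorted(cells, xT) - 1, 0, n - 1)
        np.add.at(P, (i, j), 1.0)            # count cell-to-cell transitions
    return P / n_per_cell

P = ulam_matrix(np.linspace(-3, 3, 31))
lam, V = np.linalg.eig(P)
dominant = V[:, np.argsort(-lam.real)[:2]].real  # ~ v_1 (constant) and v_2
```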
### ISOKANN: Computing the dominant eigenspace

The ISOKANN algorithm [11] uses a nonparametric representation in the form of a neural network in order to learn the dominant invariant subspace by interleaving an Arnoldi-like power iteration with the approximation of the Koopman operator by Monte-Carlo simulations. To this end it will be useful to reformulate the eigenproblem in terms of the so-called _\(\chi\)-function_ \(\chi=(\chi_{i})_{i=1}^{d}:\mathbf{X}\rightarrow\mathbb{R}^{d}\) with \(0\leq\chi_{i}\leq 1\), \(\sum_{i}\chi_{i}=1\), satisfying the _\(\chi\)-equation_,

\[\chi=S\mathbf{K}\chi, \tag{5}\]

for an appropriately chosen matrix \(S:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\), which will be specified in the next section. The core idea here is that the components \(\chi_{i}\) of \(\chi\) span an invariant subspace of \(\mathbf{K}\), or more specifically they consist of linear combinations of eigenfunctions of \(\mathbf{K}\). Furthermore the action of \(\mathbf{K}\) on this space is then explicitly given by the matrix \(S^{-1}\). This definition of \(\chi\)-functions not only allows us to reconstruct the eigendecomposition of \(\mathbf{K}\) (Proposition 1) but also allows for an interpretation as macrostates in the form of _fuzzy memberships_ of \(d\) different sets [12] and leads to a direct characterization of rare transitions in the form of their holding times, reaction rates, exit paths etc. [3, 13]. Formally, ISOKANN then approximates the \(\chi\)-function representing the dominant eigenspace by an iterative sequence of approximations \(\chi_{n}:\mathbf{X}\rightarrow\mathbb{R}^{d}\) satisfying the _ISOKANN equation_,

\[\chi_{n+1}(x)=S_{n}\mathbf{K}^{T}\chi_{n}(x)=S_{n}\mathbf{E}\left[\chi_{n}(X_{T})\mid X_{0}=x\right], \tag{6}\]

with \(S_{n}=S_{n}(\mathbf{K}^{T}\chi_{n})\) being a linear map depending on the previous iteration and constructed in such a way as to provide convergence to a desired form of \(S\) in (5). Similar to the Arnoldi iteration, the iterative application of the Koopman operator leads to a decay of the eigenfunction components that is exponential with rates given by their eigenvalues. In the following section we will discuss how to construct \(S_{n}\) so as to compensate this decay just so that the dominant components prevail whilst the non-dominant ones vanish. In the limit, \(\chi_{n}\) spans the dominant subspace (Theorem 1),

\[\mathrm{span}\{(\chi_{n})_{1},...,(\chi_{n})_{d}\}\overset{n\to\infty}{\rightarrow}\mathrm{span}\{v_{1},...,v_{d}\}, \tag{7}\]

with \(v_{i}\) denoting the dominant eigenfunctions. This in turn allows us to learn the linear action of \(\mathbf{K}\) on that subspace, leading to

\[\chi_{n}\rightarrow\chi\text{ and }S_{n}\to S. \tag{8}\]

Let us note that this presentation of ISOKANN differs from the original variant [11] in that the iteration in the form of (6) allows general \(d\)-dimensional \(\chi\)-functions. However, we will see that for the case \(d=2\) this is equivalent to the original variant, which by slight abuse of notation we will refer to as 1D-ISOKANN.

**Illustrative example.** To illustrate this better, let us consider the following example given by the SDE (1) with the double-well potential

\[U(x)=(x^{2}-1)^{2}. \tag{9}\]

The double-well potential with two minima is shown in Figure 1; the two dominant eigenfunctions are given in Figure 2. In Figure 3 we show the resulting two \(\chi\)-functions, which are linear combinations of the eigenfunctions but allow us to grasp the slow time-scale dynamics better.
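For this example the action of the Koopman operator (2) at a single point can be estimated directly by Monte Carlo, without any grid. The following is a minimal sketch with illustrative parameter values (\(\sigma\), lag time and step size are not taken from the text).

```python
import numpy as np

def koopman_mc(f, x0, T=1.0, dt=1e-3, sigma=1.0, K=1000):
    """Monte-Carlo estimate of (K^T f)(x0) = E[f(X_T) | X_0 = x0] for the
    double-well SDE dX = -U'(X) dt + sigma dB with U(x) = (x^2 - 1)^2."""
    x = np.full(K, float(x0))
    for _ in range(int(T / dt)):
        drift = -4.0 * x * (x**2 - 1.0)     # -U'(x)
        x += drift * dt + sigma * np.sqrt(dt) * np.random.randn(K)
    return f(x).mean()

# e.g. the probability of ending in the right well when starting at x0 = -1
p_right = koopman_mc(lambda x: (x > 0).astype(float), x0=-1.0)
```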
In particular, the two \(\chi\)-functions can be interpreted as membership functions of the two wells of the potential. Their exit paths are characterized by the gradients \(-\nabla\chi\) and their holding probabilities or exit rates can be computed from the associated matrix \(S\) in (5) (see [3, 13]).

### Choice of the transformation \(S\)

In the previous description we were talking about an "appropriately chosen" linear transformation \(S\) or a sequence of \(S_{n}\). We will now explain the role that \(S\) plays in ISOKANN and suggest a way of determining a suitable \(S\). In principle the map \(S\) can be chosen arbitrarily such that (5) has a solution in \(\chi\). However, to obtain a useful result and for the ISOKANN iterations to converge with the corresponding choice of \(S_{n}\), we require some specific properties. In order to understand the role of \(S\) let us for now think about the classical _Arnoldi method_ for finding the dominant eigenfunctions of a matrix. In principle the Arnoldi method also takes the form of (6), where the application of \(S_{n}\) corresponds to a _Gram-Schmidt orthonormalization_. Here the orthogonalization ensures that the leading eigenvectors are projected out from the subsequent ones, and the following normalization step then ensures that the eigenvectors do not decay over multiple iterations. Note here that while the Gram-Schmidt orthonormalization on its own is a nonlinear procedure, the resulting action on a given set of input vectors can be expressed as a linear map, i.e. if \(\omega\) is the orthonormalization procedure, for each input matrix \(k\) we can find a linear map \(\Omega(k)\) (depending on \(k\)) such that

\[\omega(k)=\Omega(k)\,k. \tag{10}\]

It is in exactly this way that we understand \(S\) as a linear map computed non-linearly from the data in Eq. (6). For ISOKANN we want \(S\) to fulfill the same role: the goal of each linear transformation \(S_{n}\) is to counteract the decay of the dominant eigenfunction components contained in \(\chi_{n}\) after application of the Koopman operator \(\mathbf{K}\). However, we are interested in linear combinations of the eigenfunctions over a _continuous space_ represented by a neural network. In this setting orthonormalization is a hard problem, involving integration over the whole state space.\({}^{3}\)

Footnote 3: One might resort to restricting the \(\chi\) functions to a finite number of fixed points and orthonormalizing with respect to the resulting vectors. We expect this to work fine as long as the points where the eigenfunctions differ strongly, i.e. the individual metastabilities, are included. However, since these points are not allowed to change and are usually not known a priori, they do not lend themselves to an adaptive scheme like ISOKANN does.

On the other hand, orthonormality is a strong assumption which is not necessarily required. If we manage to choose \(S\) such that it amplifies the first \(d\) eigenfunctions so that they stay bounded away from zero, we will obtain a representation of the dominant subspace, which in turn allows for the reconstruction of the eigenfunctions (c.f. Section 2.5). We now motivate a heuristic approach, based on the PCCA+ algorithm [12], to construct a transformation \(S\), and will prove its convergence in the 1D setting. Let us shortly summarize the idea of the PCCA+ methodology.
In the context of metastable systems, the state-space regions where the individual eigenfunctions become extremal are representative of the respective metastabilities of that system. In practice, when plotting the state space \(\mathbf{X}\) over the respective eigenfunction components \(v_{i}\), the resulting set often resembles a simplex. PCCA+ can be understood as a method to identify this simplex structure and construct the linear transformation (in the space of eigenfunction components) mapping this set into the unit simplex (see also Figure 4), such that the image becomes "as big as possible" (specified by an optimization problem). We can similarly imagine this picture for the \(\chi\) functions: after sufficient time propagation (or power iterations), the \(\chi\) functions are mainly composed of the dominant eigenfunctions. Plotting \(\mathbf{X}\) over the \(\chi\) components thus will be close to the \(\mathbf{X}\) over \(v\) plot above modulo a linear transformation, i.e. a change of basis. We can thus apply PCCA+ to our intermediate \(\mathbf{K}\chi_{n}\) in order to find a transformation such that the resulting \(\chi_{n+1}\) fills this unit simplex. This inhibits the exponential decay of the non-trivial eigenfunctions by maintaining a set of \(d\) linearly independent components. Since the dominant eigenfunctions decay more slowly than the following non-dominant ones, they will dominate the behavior and prevail for \(n\to\infty\). Even though preliminary results have shown that using PCCA+ (with minor modifications\({}^{4}\)) to construct \(S\) in higher dimensions works fine, the focus of this paper is on the optimal control, so we will reserve a more detailed report for future work and hence give a proof only for the simpler case of a single time scale in the next section.

Footnote 4: In order for this to work PCCA+ has to be stable with respect to permutations, i.e. one has to ensure that the \(S_{n}\) do not suddenly change the orientation, which would prevent convergence.

### 1D-ISOKANN with explicit PCCA+

In the case where one is interested merely in the first non-trivial eigenfunction, \(v_{2}\), the PCCA+ solution can be computed explicitly, which in turn allows us to prove convergence of ISOKANN for \(d=2\) and to explicitly specify \(S\). Note that because of \(\chi_{1}+\chi_{2}=1\), the desired solution \(\chi:\mathbf{X}\to\mathbb{R}^{2}\) is fully determined by its first component \(\bar{\chi}:=\chi_{1}:\mathbf{X}\to\mathbb{R}\) alone,

\[\chi=\begin{pmatrix}\bar{\chi}\\ 1-\bar{\chi}\end{pmatrix}. \tag{11}\]

This representation allows us to solve the two-dimensional ISOKANN problem with just one scalar function, approximated by a series of scalar neural networks \(\bar{\chi}_{n}:\mathbf{X}\to\mathbb{R}\). This scalar representation is the approach taken in [11] and in the following we will refer to it as 1D-ISOKANN.\({}^{5}\)

Footnote 5: We see this abuse of notation justified as ISOKANN for \(d=1\) would otherwise only denote the trivial solution \(\chi\equiv 1\).

Let us start by constructing the explicit PCCA\({}^{+}\) solution. Recall that \(v_{1}\equiv 1\) and note that \(v_{2}(\mathbf{X})=[\min(v_{2}),\max(v_{2})]\). Thus the image of \(\mathbf{X}\) forms a line segment, i.e. a 1-dimensional simplex, in the \(v_{1}\)-\(v_{2}\)-plane.
PCCA\({}^{+}\) then constructs the unique map \(S\) which maps this simplex onto the unit simplex \(\{(x,y)\mid x+y=1,x>0,y>0\}\) (see Figure 4). To this end, let us introduce the map \(\bar{S}\) for bounded continuous functions \(\kappa\in C(\mathbf{X})\),

\[\bar{S}(\kappa)=\frac{\kappa-\min(\kappa)}{\max(\kappa)-\min(\kappa)} \tag{12}\]

such that \(\bar{S}(\kappa):\mathbf{X}\to[0,1]\) maps surjectively onto the unit interval. Even though \(\bar{S}\) (consisting of a shift and a scale) is only affine-linear in its argument, it can be seen as a linear map on the subspaces \(\{(1,\kappa)\mid\kappa\in C(\mathbf{X})\}\) or similarly \(\{(\kappa,1-\kappa)\mid\kappa\in C(\mathbf{X})\}\), and it is indeed the action of the PCCA\({}^{+}\) solution on a single component, i.e. there exists a matrix \(S\in\mathbb{R}^{2\times 2}\) such that

\[S\cdot\begin{pmatrix}\kappa\\ 1-\kappa\end{pmatrix}=\begin{pmatrix}\bar{S}(\kappa)\\ 1-\bar{S}(1-\kappa)\end{pmatrix}. \tag{13}\]

So the \(S\) determined by PCCA\({}^{+}\) is indeed a linear map depending non-linearly on the input \(\kappa\), just as in the case of the orthonormalization procedure (10), and the action on the first component is equivalently given by the affine-linear map \(\bar{S}\). Using \(\bar{S}\) to learn only the first component of the \(\chi\)-function, we can now formulate the following explicit iterative 1D-ISOKANN procedure:

**Theorem 1**.: _Let \(\chi\), \(\bar{S}\) and \(S\) be chosen as above in Eqs. (11) to (13). For generic \(\bar{\chi}_{0}:\mathbf{X}\to\mathbb{R}\), i.e. containing components of \(v_{1}\) and \(v_{2}\), the 1D-ISOKANN iteration_

\[\bar{\chi}_{n+1}=\bar{S}(\mathbf{K}\bar{\chi}_{n}) \tag{14}\]

_converges to the \(\chi\)-function_

\[\bar{\chi}=\lim_{n\to\infty}\bar{\chi}_{n}=\alpha v_{1}+\beta v_{2}, \tag{15}\]

_for some \(\alpha,\beta\in\mathbb{R}\), which in turn solves the ISOKANN problem_

\[\bar{\chi}=S\mathbf{K}\bar{\chi}. \tag{16}\]

Proof.: Noting that \(\bar{S}\) is merely a shift-scale, i.e. affine-linear, and that \(\mathbf{K}\mathbf{1}=\mathbf{1}\), one obtains

\[(\bar{S}\circ\mathbf{K})^{n}=\bar{S}\circ\mathbf{K}^{n}. \tag{17}\]

Looking at the eigendecomposition \(\bar{\chi}_{0}=\sum_{i=1}^{\infty}a_{i}v_{i}\), we have

\[\mathbf{K}^{n}\bar{\chi}_{0}=\sum_{i=1}^{\infty}a_{i}\lambda_{i}^{n}v_{i}=a_{1}\mathbf{1}+\sum_{i=2}^{\infty}a_{i}\lambda_{i}^{n}v_{i} \tag{18}\]

For large \(n\), the contribution of the faster eigenfunctions \(v_{i}\), \(i>2\), decays exponentially faster than the contribution of \(v_{1},v_{2}\). Hence the shift-scale is dominated by these slow eigenfunctions and with (17) we have, for some \(\alpha,\beta\in\mathbb{R}\),

\[\bar{\chi}=\lim_{n\to\infty}\left(\bar{S}\circ\mathbf{K}\right)^{n}\bar{\chi}_{0}=\lim_{n\to\infty}\bar{S}\circ\mathbf{K}^{n}\bar{\chi}_{0}=\alpha v_{1}+\beta v_{2}, \tag{19}\]

which proves (15). Noting that \(\bar{\chi}(\mathbf{X})=[0,1]\), that \(\mathbf{K}\) acts as a shift-scale on \(\bar{\chi}\), and that \(\bar{S}\) subsequently rescales \(\mathbf{K}\bar{\chi}\) back to the interval \([0,1]\), we have \(\bar{S}(\mathbf{K}\bar{\chi})=\bar{\chi}\), which, given the implicit construction of \(S\) from \(\bar{S}\) in Eq. (13), shows the fixed-point result in (16).

To summarize, we have shown how the representation of ISOKANN for \(d=2\) as a scalar problem reproduces the classical 1D-ISOKANN procedure [11], where the role of the linear map \(S\) (determined by PCCA\({}^{+}\)) is replaced by the (explicitly given) affine-linear \(\bar{S}\). This simpler representation allowed us to show convergence to a \(\chi\)-function and thus to solve the ISOKANN problem.
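The convergence mechanism of Theorem 1 can be sketched on a small matrix stand-in for the Koopman operator, where the shift-scale of Eq. (12) is applied after every application of \(\mathbf{K}\); the \(3\times 3\) matrix below is an arbitrary toy example, not derived from any system in the text.

```python
import numpy as np

# a small row-stochastic stand-in for the Koopman operator K on 3 states
K = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.1, 0.9]])

def shift_scale(k):
    """The affine map S-bar of Eq. (12): rescale onto [0, 1]."""
    return (k - k.min()) / (k.max() - k.min())

chi = np.random.rand(3)                 # generic initial chi_0
for _ in range(100):                    # 1D-ISOKANN iteration, Eq. (14)
    chi = shift_scale(K @ chi)

# chi now spans {v_1, v_2}: it equals the rescaled second eigenvector of K,
# up to the orientation chi -> 1 - chi
lam, V = np.linalg.eig(K)
v2 = V[:, np.argsort(-lam.real)[1]].real
print(chi, shift_scale(v2))
```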
### Restoring the eigenfunctions

At the beginning of the article we intended to compute the eigenfunctions of \(\mathbf{K}\). Whereas the \(\chi\) functions only span the corresponding invariant subspace, we now show how we can restore the eigenfunctions from the solution \(\chi\) to the ISOKANN problem \(S\mathbf{K}\chi=\chi\). This is equivalent to

\[\mathbf{K}\chi=S^{-1}\chi \tag{20}\]

which means that the action of \(\mathbf{K}\) on the subspace spanned by \(\chi\) is given by the matrix \(S^{-1}\). By means of a basis transformation (making use of the _Moore-Penrose pseudoinverse_ \(\chi^{+}\)) we can recover the eigenfunctions of \(\mathbf{K}\) from \(S\) and \(\chi\).

**Proposition 1**.: _Let \(\chi=(\chi_{i})_{i=1}^{d}\) be a column vector of \(\chi_{i}\) component functions and \(S\) be a full-rank matrix such that_

\[S\mathbf{K}\chi=\chi \tag{21}\]

_If \((X,\,\Lambda)\) is an eigendecomposition of \(Q:=\chi^{+}S^{-1}\chi\), i.e.,_

\[QX=X\Lambda, \tag{22}\]

_then \(E:=\chi X\) are the eigenvectors of \(\mathbf{K}\) with eigenvalues \(\Lambda\)._

Proof.: By assumption we have \(\mathbf{K}\chi=S^{-1}\chi\). Inserting this into (22), multiplying with \(\chi\) from the left, and noting that \(\chi^{+}\chi=\mathrm{Id}\) by definition, we arrive at

\[\mathbf{K}\chi X=\chi X\Lambda \tag{23}\]

which gives the desired result.
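In a sampled representation, where the \(\chi_{i}\) are evaluated on a finite set of points so that \(\chi\) becomes a tall matrix, Proposition 1 translates into a few lines of linear algebra; the \(\chi\) and \(S\) below are synthetic stand-ins, not outputs of an actual ISOKANN run.

```python
import numpy as np

rng = np.random.default_rng(1)

# chi sampled at n points: one column per component, with chi_1 + chi_2 = 1
chi1 = rng.random(200)
C = np.column_stack([chi1, 1.0 - chi1])

S = np.array([[1.5, -0.5], [-0.5, 1.5]])   # synthetic full-rank S

# the action of K on span(chi) is K chi = S^{-1} chi, sampled columnwise
KC = C @ np.linalg.inv(S).T

# Proposition 1: eigendecompose Q = chi^+ S^{-1} chi, then E = chi X
Q = np.linalg.pinv(C) @ KC
lam, X = np.linalg.eig(Q)
E = C @ X          # columns: eigenfunction values at the sample points
```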
### Computational procedure

With these theoretical considerations, let us now discuss how to apply the algorithm in practice, i.e. using neural networks as function approximators and Monte Carlo (MC) simulations for the Koopman evaluations. To this end let us recall the main formula for the iterative update (6): \[\chi_{n+1}(x_{m})\gets S_{n}\mathbf{K}^{T}\chi_{n}(x_{m})=S_{n}\mathbf{E} \left[\chi_{n}(X_{T})\mid X_{0}=x_{m}\right] \tag{24}\] Here we replaced the equality by a \(\leftarrow\), which indicates classical supervised learning (with the common mean squared error loss) along multiple (possibly random) training points \(x_{m}\), \(m=1,...,M\). For a procedural description see Algorithm 1. The main challenges posed by this iterative scheme consist of (a) a representation of the function(s) \(\chi_{n}\) and (b) the evaluation of the right-hand side, i.e. the computation of \(S_{n}\) and the evaluation of the Koopman operator.

For (a), the representation of the \(\chi_{n}\), we choose neural networks, as they promise good approximation properties in high dimensions and their differentiability will prove crucial for the following optimal control part. In general, any feed-forward architecture should be suitable; whilst convolutional networks could be especially suited due to the spatial structure of the state space \(\mathbf{X}\), in the example we confine ourselves to a fully connected architecture for simplicity. In any case, the update step for \(\chi_{n+1}\) consists of a classical supervised learning routine with the labeled data \[D_{n}=\left\{(x_{m},s_{m})\right\},\quad s_{m}=S_{n}\mathbf{K}\chi_{n}(x_{m}) \tag{25}\] generated by evaluation of the current \(\chi_{n}\). Because the \(\chi_{n}\) are not expected to change much between iterations, it makes sense to initialize \(\chi_{n+1}\) with the weights from \(\chi_{n}\) so as to transfer the already learned structure and speed up the learning. Note here that whilst we speak of different networks \(\chi_{n}\) for each iteration \(n\) to emphasize the iterative nature of (24), in practice we can update a single instance of the network. In this view, the learning procedure can be seen as iterative supervised batch learning, where the whole data batch \(D_{n}\) is generated along a set of points \(\{x_{m}\}\) using the current representation \(\chi_{n}\). The update step itself can be performed using any stochastic optimizer, such as classical stochastic gradient descent or ADAM, to minimize the empirical \(L^{2}\) error \[\min_{\chi_{n+1}}\sum_{(x_{m},s_{m})\in D_{n}}(\chi_{n+1}(x_{m})-s_{m})^{2}. \tag{26}\]

Considering (b), the evaluation of the right-hand side, let us start with the approximation of the Koopman operator. For a given training point \(x_{m}\), we use its representation as an expectation value and approximate the action of the Koopman operator by a Monte-Carlo sum. Each simulation consists of starting \(K>0\) trajectories at the point \(x_{m}\), propagating them according to the SDE (1) using an SDE integrator, such as the Euler-Maruyama scheme, for the lag time \(T\), and storing their end points \(y_{k,m},k=1,...,K\). The action of the Koopman operator at \(x_{m}\) is then approximated by the empirical average \[\kappa_{m}:=\mathbf{K}\chi_{n}(x_{m})=\mathbf{E}\left[\chi_{n}(X_{T})\mid X_{0 }=x_{m}\right]\approx\frac{1}{K}\sum_{k=1}^{K}\chi_{n}(y_{k,m}). \tag{27}\] To summarize, for each \(x_{m}\) we average the evaluation of \(\chi_{n}\) at \(K\) propagated positions obtained by SDE simulations.
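A minimal Python sketch of this Monte-Carlo evaluation, assuming a one-dimensional overdamped Langevin dynamics and an Euler-Maruyama integrator (all names and the example drift are illustrative):

```python
import numpy as np

def koopman_mc(chi, x0, b, sigma, T, dt=1e-3, K=20, rng=None):
    """Monte-Carlo estimate of (K^T chi)(x0) as in Eq. (27)."""
    rng = rng or np.random.default_rng()
    x = np.full(K, x0, dtype=float)          # K trajectory replicas started in x0
    for _ in range(int(T / dt)):             # Euler-Maruyama time stepping
        x += b(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(K)
    return np.mean(chi(x))                   # empirical average over the endpoints

# example: double-well potential V(x) = (x^2 - 1)^2 with drift b = -V'
b = lambda x: -4.0 * x * (x**2 - 1.0)
kappa_m = koopman_mc(chi=np.tanh, x0=-1.0, b=b, sigma=1.0, T=0.5)
```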
What now remains is the application of the shift-scale \(S_{n}\). In the case of 1D-ISOKANN the action of \(S_{n}\) is determined by the shift-scale \(\bar{S}\) from (12), which depends on the (global) extrema of the input function \(\kappa\). In practice we thus use the empirical extrema over the observed data \(\{\kappa_{m}\}\) to directly compute \(s_{m}=\left(\bar{S}(\kappa)\right)_{m}\) without explicitly constructing \(S_{n}\). In higher dimensions we compute the matrix \(S_{n}\) using PCCA\({}^{+}\) to find the transformation that maps the columns of the matrix \(K=\left[\kappa_{1}\,...\,\kappa_{M}\right]\) into the unit simplex. Note that the use of the empirical extrema requires that the training points \(x_{m}\) indeed cover the areas where \(\mathbf{K}\chi_{n}\) becomes (approximately) extremal.

This brings us to the choice of the \(M\) training points \(x_{m}\). In principle ISOKANN can be applied to find the \(\chi\) functions of a system based on a fixed set of precomputed or assimilated trajectories (replacing the SDE integration). However, its iterative nature makes it especially useful in the synthetic data regime, where the trajectories are computed on-line, as this allows adapting the training points \(x_{m}\), and as we will see the trajectory simulations too, to the information obtained so far. Since the \(\chi\)-functions can be interpreted as reaction coordinates [3], we suggest _"\(\chi\)-stratified" sampling_ of the \(x_{m}\), i.e. such that \(\chi(x_{m})\) is approximately uniform in \([0,1]\). In practice we achieve this by subsampling from the pool of start and end points of the previous simulations, \[P_{n}=\left(\bigcup_{m}x_{m}^{(n-1)}\right)\cup\left(\bigcup_{m,k}y_{k,m}^{(n -1)}\right). \tag{28}\] We then draw \(M\) stratified uniform samples \(u_{i}\in[0,1]\) by sampling uniformly from each of \(M\) equally sized partitions of the interval \([0,1]\), and finally choose those \(p_{i}\in P_{n}\) such that \(\chi_{n}(p_{i})\) is closest to one of the \(u_{i}\). We furthermore retain those samples which were extremal in \(\chi_{n}\) to facilitate a good approximation of the extrema by \(\bar{S}\) or PCCA\({}^{+}\) (see the sketch below). Heuristically speaking, we obtain samples \(x_{m}\) which are uniform in \(\chi\) and hence provide good coverage or "bridges" along the transition region, thus facilitating an efficient "flow of information" during the power-iteration process. Furthermore, the regions with a higher variation of \(\chi\), i.e. those that are "harder to learn", will also obtain more samples, which in turn is beneficial for the training of the neural network itself. Last but not least, the samples obtained this way followed the system's ergodic dynamics and will therefore approximate the stationary distribution (restricted to each level set of \(\chi\)). This allows us to evade the curse of dimensionality by restricting the sampling to physically meaningful samples along the reaction paths. To summarize, _\(\chi\)-stratified sampling_ allows us to sample uniformly along \(\chi\) and stationary conditioned on it, without much additional cost and adapted to the learning process.
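A Python sketch of this subsampling step, assuming a pool of candidate points and a vectorized \(\chi\) evaluation (all names illustrative; the returned set may deviate slightly from \(M\) points after deduplication):

```python
import numpy as np

def chi_stratified_subsample(pool, chi, M, rng=None):
    """Sketch of the chi-stratified subsampling of Section 2.6.

    pool : (P, dim) array of candidates (previous starts and endpoints, Eq. (28))
    chi  : callable mapping the pool to values in [0, 1]
    Returns points whose chi-values are approximately uniform in [0, 1].
    """
    rng = rng or np.random.default_rng()
    vals = chi(pool)                                       # chi on the whole pool
    u = (np.arange(M) + rng.random(M)) / M                 # one draw per stratum
    idx = np.abs(vals[None, :] - u[:, None]).argmin(axis=1)  # closest pool point per u_i
    # retain the extremal samples to support the empirical shift-scale / PCCA+
    idx = np.union1d(idx, [vals.argmin(), vals.argmax()])
    return pool[idx]
```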
The main ISOKANN routine can be summarized by three loops (\(N\) power iterations, \(M\) training points, \(K\) trajectories) which can be loosely tied to the three main ingredients of ISOKANN:

1. The power iteration learning the dominant subspace.
2. The neural network approximation of the next \(\chi\) iterate.
3. The Monte-Carlo simulation of the Koopman evaluation.

In the outermost loop (1) we perform the power iteration from \(\chi_{n}\) to \(\chi_{n+1}\). Whereas in theory we have convergence for \(n\to\infty\), we choose to terminate after a fixed number of iterations \(N\). This could be replaced by classical convergence criteria such as relative and absolute tolerances. Note that the rate of convergence, and hence the required number of power iterates \(N\), depends on the eigenvalues of the Koopman operator: a bigger spectral gap implies faster decay of the non-dominant spectrum and hence faster convergence. When training the neural network (2) we loop over a batch of labeled training data at the \(M\) training points \(x_{m}\), which in turn (3) require \(K\) individual trajectory simulations each. Both the number of training points \(M\) and the number of trajectories per point \(K\) depend on the step sizes chosen for the neural network optimizer. Since in practice the evaluation of the Koopman operator is rather expensive compared to the neural network update, it may be efficient to perform multiple update steps on the same batch of data before proceeding to the next iteration. Note that the variance of the training data, scaling with \(K^{-1}\), is particularly high for metastable systems due to the impact of rare transitions. Whereas above we proposed _\(\chi\)-stratified_ subsampling as a heuristic to deal with sampling in \(\mathbf{X}\) space, we will now address the problem of variance in the "\(\mathbf{K}\chi_{n}\)-direction" using the techniques of optimal control and importance sampling.

```
Require: N  (number of power iterations), M  (number of x-samples),
         K  (number of Koopman Monte-Carlo samples), chi_0 (initial neural network)
Ensure:  chi_N approximates a chi function (c.f. Eq. (5))
 1: for n = 0 to N-1 do
 2:     for m = 1 to M do
 3:         x_m <- sampleX0()                       # sample training point
 4:         for k = 1 to K do
 5:             y_{k,m} <- sampleXT(x_m)            # simulate trajectories
 6:         kappa_m <- (1/K) sum_k chi_n(y_{k,m})   # Koopman approximation
 7:     s <- S(kappa)                               # transformed target data
 8:     Delta <- grad_theta sum_m (chi_n(x_m) - s_m)^2   # compute loss gradient
 9:     chi_{n+1} <- optim(chi_n, Delta)            # train the neural network
10: return chi_N
```

**Subroutines**:

* sampleX0: subroutine sampling the starting points \(x_{m}\), either uniform or _\(\chi\)-stratified_ (Section 2.6).
* sampleXT: SDE solver, e.g. Euler-Maruyama, either uncontrolled or controlled (Section 3.4).
* S: empirical shift-scale or PCCA\({}^{+}\) (Section 2.4 or 2.3).
* optim: gradient based optimization of the neural network (e.g. 100 SGD steps).

**Algorithm 1** ISOKANN

## 3 Optimal sampling of Koopman eigenfunctions and \(\chi\)-functions

In this chapter we first recall importance sampling, before showing how the theory allows us to better sample eigenfunctions of the Koopman operator and ISOKANN \(\chi\)-functions. We conclude this chapter with a numerical example.

### Importance Sampling for random variables

Importance sampling allows one to express the expectation value of an observable \(f>0\) with respect to some distribution \(p\) by an expectation value with respect to some other distribution \(q\) (with \(p\ll q\)) via the formula \[Z:=\mathbf{E}_{p}[f]=\mathbf{E}_{q}\left[f\frac{\mathrm{d}p}{\mathrm{d}q} \right], \tag{29}\] where the observable \(f\) is reweighted by the Radon-Nikodym derivative \(\frac{\mathrm{d}p}{\mathrm{d}q}\). It is easy to see that by choosing \(q^{*}\) such that \(\frac{\mathrm{d}p}{\mathrm{d}q^{*}}=\frac{Z}{f}\) (i.e. \(q^{*}=\frac{f}{Z}p\)), we have \[Z=\mathbf{E}_{q^{*}}\left[f\frac{Z}{f}\right]=\mathbf{E}_{q^{*}}[Z]. \tag{30}\] Now, since \(Z\) is a constant, it can be computed with a single (reweighted) \(f\)-sample from \(q^{*}\). We therefore refer to this importance sampler with sampling distribution \(q^{*}\) as _zero-variance-sampler_, or _optimal-importance-sampler_. Note however that we needed to know the (a priori unknown) result \(Z\) in order to define the optimal sampling distribution \(q^{*}\).
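The following numpy snippet illustrates the zero-variance sampler (29)-(30) for a plain random variable. As the text remarks, the (normally unknown) result \(Z\) enters the construction of \(q^{*}\), so this example is purely illustrative: with \(p=\mathcal{N}(0,1)\) and \(f(x)=e^{-x}\) one has \(Z=e^{1/2}\) and \(q^{*}=\mathcal{N}(-1,1)\).

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.exp(-x)                    # positive observable
# target: Z = E_p[f] with p = N(0,1); analytically Z = exp(1/2)
x_p = rng.standard_normal(100_000)
print("plain MC  :", f(x_p).mean(), "+-", f(x_p).std() / np.sqrt(len(x_p)))

# optimal sampler q* = f p / Z = N(-1, 1); weights dp/dq* = Z / f
Z = np.exp(0.5)
x_q = rng.standard_normal(100_000) - 1.0
est = f(x_q) * (Z / f(x_q))                 # == Z for every sample: zero variance
print("optimal IS:", est.mean(), "+-", est.std())
```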
### Optimal Importance Sampling for Diffusion Processes

Since we are working with diffusion processes, importance sampling is further complicated in that the measures are path measures and admit no probability density function. However, importance sampling can still be generalized to stochastic processes. Let us first consider the diffusion process of Eq. (1). In general, one is interested in computing the expectation of path-dependent quantities. Let us define the _work_ along a trajectory over \([0,T]\) by the accumulation of a _running cost_ \(f\) and a _terminal cost_ \(g\) as: \[W_{t,T}(X)\coloneqq\int_{t}^{T}f(X_{s},s)\mathrm{d}s+g(X_{T}). \tag{31}\] One then is interested in estimating expectation values of the form \[\psi(x,t)\coloneqq\mathbf{E}_{X_{t}=x}\left[\exp\left(-W_{t,T}(X)\right) \right]=\mathbf{E}_{X_{t}=x}\left[\exp\left(-\int_{t}^{T}f(X_{s},s)\mathrm{d} s-g(X_{T})\right)\right]. \tag{32}\] Girsanov's theorem builds the bridge from importance sampling to diffusion processes by allowing us to sample from another diffusion process. In particular, it allows us to compute the change of measure in terms of the Radon-Nikodym derivative. To this end let us introduce the controlled process \(X^{u}=(X^{u}_{t})_{t\geq 0}\) \[\mathrm{d}X^{u}_{t}=(b(X^{u}_{t})+\sigma u(X^{u}_{t},t))\mathrm{d}t+\sigma \mathrm{d}B_{t}, \tag{33}\] with an admissible control term \(u\) acting as an external forcing to the original dynamics. Note that with zero control \(u=0\) one recovers the original dynamics \(X=X^{u=0}\). Let \(\mathcal{P}\) denote the path measure induced by \(X\) and \(\mathcal{Q}\) denote the measure induced by \(X^{u}\). According to Girsanov's theorem, the change of measure from \(\mathcal{Q}\) to \(\mathcal{P}\) (analogous to \(\frac{\mathrm{d}p}{\mathrm{d}q}\) above) along a given controlled trajectory \(X^{u}\) is given by \[G_{t,T}(X^{u})\coloneqq\frac{\mathrm{d}\mathcal{P}}{\mathrm{d}\mathcal{Q}} \big{|}_{[t,T]}(X^{u})=\exp\left(-\int_{t}^{T}u(X^{u}_{s},s)\cdot\mathrm{d}B_ {s}-\frac{1}{2}\int_{t}^{T}\left|u(X^{u}_{s},s)\right|^{2}\mathrm{d}s\right), \tag{34}\] which in turn provides an unbiased estimator of \(\psi\) in terms of the controlled process: \[\psi(x,0)=\mathbf{E}_{X_{0}=x}\left[\exp\left(-W_{0,T}(X)\right)\right]= \mathbf{E}_{X_{0}=x}\left[\exp\left(-W_{0,T}(X^{u})\right)G_{0,T}(X^{u})\right]. \tag{35}\] Note that even though the expected value of this estimator is the same for any control \(u\), its variance will vary. Analogous to the case above, there exists an optimal measure corresponding to an optimal control \(u^{*}\) for which the controlled estimator exhibits zero variance [5, 10]: **Theorem 2**.: _The optimal control \(u^{*}\) is given by_ \[u^{*}(x,t)=\sigma^{\top}\nabla_{x}\log\psi(x,t) \tag{36}\] _and leads to the zero-variance estimator for \(\psi\)_ \[\psi(x,0)=\mathbf{E}_{X_{0}=x}\left[\exp(-W_{0,T}(X))\right]\overset{a.s.}{=} G_{0,T}(X^{u^{*}})\exp(-W_{0,T}(X^{u^{*}}))\text{ with }X^{u^{*}}_{0}=x. \tag{37}\]

### Optimal sampling of eigenfunctions of the Koopman operator

We will now show how this optimal control theorem can be used to evaluate the Koopman operator. **Corollary 2.1**.: _Let \(h\in L^{\infty}(\mathbf{X})\) be a function. A single realization of the controlled process \(X^{u}\) starting in \(X^{u}_{0}=x\) with control_ \[u(x,t)=\sigma^{\top}\nabla_{x}\log(\mathbf{K}^{T-t}h)(x) \tag{38}\] _then gives the evaluation of \(\mathbf{K}^{T}h\) at that point \(x\):_ \[(\mathbf{K}^{T}h)(x)\overset{a.s.}{=}G_{0,T}(X^{u})h(X^{u}_{T}). \tag{39}\] Proof.: With \(f(x)=0\) and \(g(x)=-\log h(x)\), Eq. (32) becomes \[\psi(x,t)=\mathbf{E}_{X_{t}=x}\left[h(X_{T})\right]=\mathbf{E}_{X_{0}=x}\left[h( X_{T-t})\right]=(\mathbf{K}^{T-t}h)(x). \tag{40}\] Application of Theorem 2 then leads to the desired result. This result shows us that in order to compute the optimal control for evaluating \(\mathbf{K}h\) we need access to the derivatives of \(\mathbf{K}h\). This conundrum is in line with the general optimal importance sampling result and comes as no surprise. In the next section we will argue how this result can still be of use for ISOKANN, where the convergence of the \(\chi_{n}\) provides us with an approximate description which we will use to compute the control.
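As a concrete illustration of the reweighted estimator (35), the following Python sketch propagates the controlled SDE (33) and accumulates the Girsanov weight (34) along the way; the one-dimensional setting and all names are assumptions for illustration, and any admissible choice of `u` leaves the estimate unbiased while changing its variance.

```python
import numpy as np

def controlled_estimate(h, x0, b, sigma, u, T, dt=1e-3, K=1000, rng=None):
    """Unbiased estimate of E[h(X_T) | X_0 = x0] from the controlled process,
    Eqs. (33)-(35): endpoints are reweighted by the Girsanov factor (34)."""
    rng = rng or np.random.default_rng()
    x = np.full(K, x0, dtype=float)
    log_G = np.zeros(K)                        # log of the Girsanov weight
    for i in range(int(T / dt)):
        t = i * dt
        dB = np.sqrt(dt) * rng.standard_normal(K)
        ui = u(x, t)
        log_G -= ui * dB + 0.5 * ui**2 * dt    # integrand of Eq. (34)
        x += (b(x) + sigma * ui) * dt + sigma * dB
    return np.mean(np.exp(log_G) * h(x))
```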
Let us for now consider the case where the observable of interest \(h\) is an eigenfunction of \(\mathbf{K}^{T}\), i.e. \(h=v_{i}\) with eigenvalue \(\lambda_{i}(T)\). In this case we can replace the action of the Koopman operator by its eigenvalue, which in turn cancels out after application of \(\nabla\log\) and results in a time-independent control: \[u^{*}(x,t)=\sigma^{\top}\nabla_{x}\log(\mathbf{K}^{T-t}v_{i})(x)=\sigma^{\top} \nabla_{x}\log(\lambda_{i}(T-t)v_{i}(x))=\sigma^{\top}\frac{\nabla_{x}v_{i}(x) }{v_{i}(x)}. \tag{41}\] This simple example provides a good opportunity to get a feeling for how optimal importance sampling works. We can see that the control pushes the system in the direction of (relative) maximal ascent. If the system follows the forcing, its expected evaluation increases, but the path taken also has an increased probability, resulting in a decreasing Girsanov weight. Both increments happen on a commensurate "relative scale": the expectation value gets pushed by an amount relative to its current value (\(\frac{\nabla_{x}v}{v}\)), whereas the reweighting is adjusted relatively in magnitude due to the exponentiated integral (which may be seen as an infinite product over all time-points) in (34). In this way the increase of the observable is balanced exactly with the decreasing weight, so that no matter the path taken the two always equalize and one obtains a zero-variance sampler. Unfortunately, however, the control becomes singular whenever \(v_{i}(x)=0\). By the Perron-Frobenius theorem, every non-trivial (non-dominant) eigenfunction changes sign, i.e. crosses \(0\) at some point, so we have to find a way around this problem. We can alleviate it by shifting and (anticipating the form of \(\bar{S}\) in (12)) also rescaling the eigenfunction. Denote the shift-scaled eigenfunction by \[\chi:=\alpha v_{i}+\beta\mathbf{1},\quad\alpha,\beta\in\mathbb{R},\quad\chi>0. \tag{42}\] Due to the linearity of \(\mathbf{K}^{T}\), we obtain \[(\mathbf{K}^{T}\chi)(x)=(\mathbf{K}^{T}\alpha v_{i})(x)+\beta=\alpha\lambda_ {i}(T)v_{i}(x)+\beta \tag{43}\] and thus, after application of Corollary 2.1, the control in terms of \(\chi\) is \[u^{*}(x,t)=\sigma^{\top}\nabla_{x}\log(\mathbf{K}^{T-t}\chi)(x)=\sigma^{\top }\frac{\nabla\chi(x)}{\chi(x)+\frac{\beta}{\lambda_{i}(T-t)}-\beta}. \tag{44}\] In this case the control is time dependent (which makes sense, as the relative contributions of the dominant eigenfunctions to the expectation value change over time). From \(\lambda_{i}(t)\leq\lambda_{i}(0)=1\) we can conclude that \(u^{*}(\cdot,t)=\gamma\sigma^{\top}\frac{\nabla\chi}{\chi}\) with a prefactor \(\gamma\) that increases monotonically with \(\gamma(1)=1\), i.e. the control points in the same direction as in the pure eigenfunction case, starting weaker and increasing until hitting the full magnitude at \(t=T\). Note that the requirement for \(\chi\) functions to satisfy \(0\leq\chi\leq 1\), which so far was merely motivated by their interpretation as macrostates, now also facilitates their optimally controlled importance sampling.

### Application to ISOKANN

In the previous section we have shown how to obtain a zero-variance sampler for the Koopman operator in terms of the gradient of its solution, either for general observables \(h\) or for (shift-scaled) eigenfunctions \(v_{i}\) (\(\chi\)). In either case the solution has to be known a priori in order to compute the control. We will now argue how to integrate this result into the ISOKANN procedure. The main idea of using optimal importance sampling in ISOKANN is to use the intermediate results \(\chi_{n}\) and \(S_{n}\) to compute a _pseudo-optimal control_ so as to lower the sampling variance.
We therefore have to assume that using an approximation to the optimal control indeed leads to a variance reduction. While we do not know of a proof of this statement, it has been shown that the objective of the associated optimal control problem is convex in the control [8], which leads us to conjecture that the variance should be well-behaved for approximate optimal controls as well. Note that the importance sampler (35) is unbiased for any control. Thus, even if the above assumption does not hold, ISOKANN would still converge, albeit more slowly, as long as the variance does not become unbounded6, under the usual conditions for stochastic gradient descent convergence (i.e. a decaying learning rate). Footnote 6: This could always be ensured by e.g. clipping the control, thus bounding the Girsanov reweighting term and in turn also the overall sampling variance. Let us recall the equation for the control (38): \[u(x,t)=\sigma^{\top}\nabla_{x}\log(\mathbf{K}^{T-t}h)(x) \tag{45}\] In order to compute the differential of \(\mathbf{K}^{T-t}\) at \(\chi_{n}\) we assume sufficient convergence of ISOKANN together with (6), \[\chi_{n}\approx\chi_{n-1},\quad\chi_{n}=S_{n-1}\mathbf{K}^{T}\chi_{n-1}, \tag{46}\] to approximate the action of \(\mathbf{K}^{T}\) by \((S_{n-1})^{-1}\): \[(S_{n-1})^{-1}\chi_{n}=\mathbf{K}^{T}\chi_{n-1}\approx\mathbf{K}^{T}\chi_{n}. \tag{47}\] Using the semi-group property of \(\mathbf{K}\) and the matrix logarithm we can extend this to other lag times \(T-t\) to obtain the matrix approximation \(\widetilde{\mathbf{K}}^{T-t}\) \[\mathbf{K}^{T-t}\chi_{n}\approx\widetilde{\mathbf{K}}^{T-t}\chi_{n}:=\exp \left(\frac{T-t}{T}\log\left((S_{n-1})^{-1}\right)\right)\chi_{n}. \tag{48}\] Note that in the general \(d\)-dimensional case the expectation values in the ISOKANN iterations \(\mathbf{K}^{T}\chi_{n}\) are vector valued. Optimal importance sampling, however, works only in the scalar case. Therefore we have to compute an individual control for sampling each component \((\mathbf{K}^{T}\chi_{n})_{i}\) individually7. Footnote 7: In conjunction with _\(\chi\)-stratified_ sampling this results in multiple search directions, each exploiting the assumed location of one of the metastabilities. Since this also implies moving away from the respective other metastabilities (and beyond the current one), we hope that this interplay between the search directions may automatically provide a balance between exploration and exploitation. Thus, using the optimal control Corollary 2.1 together with the matrix approximation (48), we can compute the pseudo-optimal control for the \(i\)-th component of \(\mathbf{K}^{T}\chi_{n}\) explicitly: \[u_{i}^{*}(x,t)=\sigma^{\top}\nabla_{x}\log\left(\sum_{j}\widetilde{\mathbf{K }}_{ij}^{T-t}\chi_{j}(x)\right)=\sigma^{\top}\frac{\sum_{j}\widetilde{\mathbf{ K}}_{ij}^{T-t}\nabla_{x}\chi_{j}(x)}{\sum_{j}\widetilde{\mathbf{K}}_{ij}^{T-t} \chi_{j}(x)} \tag{49}\] In the case of 1D-ISOKANN the action of the Koopman operator on \(\chi_{n}\) converges to a shift-scale as in (43), and we can therefore estimate the parameters \(\alpha\), \(\beta\) and \(\lambda_{2}\) from the extrema of \(\mathbf{K}\chi_{n-1}\) so as to apply the explicit control for the shift-scaled eigenfunction (44). Now that we know how to compute the control, we can modify the algorithm (Line 5) by sampling the trajectories according to the controlled SDE (33).
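A sketch of how this control can be assembled from \(S_{n-1}\) and the network gradients, following Eqs. (48)-(49); `scipy`'s matrix functions stand in for the matrix exponential and logarithm, the scalar noise intensity and all names are illustrative assumptions, and in practice `grad_chi` would come from automatic differentiation of the network:

```python
import numpy as np
from scipy.linalg import expm, logm

def pseudo_optimal_control(S_prev, chi, grad_chi, sigma, T):
    """Pseudo-optimal control (49) built from the matrix approximation (48).

    S_prev   : (d, d) ISOKANN matrix S_{n-1}
    chi      : callable x -> (d,) component values chi_j(x)
    grad_chi : callable x -> (d, dim) gradients of the chi components
    Returns u(i, x, t) for the i-th component of K^T chi_n.
    """
    L = logm(np.linalg.inv(S_prev)).real      # generator of the (S^{-1})-semigroup
    def u(i, x, t):
        Kt = expm((T - t) / T * L)            # approximates K^{T-t} on span(chi), Eq. (48)
        num = Kt[i] @ grad_chi(x)             # sum_j Kt_ij * grad chi_j(x)
        den = Kt[i] @ chi(x)                  # sum_j Kt_ij * chi_j(x)
        return sigma * num / den              # Eq. (49), scalar sigma assumed
    return u
```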
In order to compute the reweighting (34) we have to either save the trajectory and noise, or integrate it on the fly in an additional SDE component with \[G=\exp(-g_{T}),\quad\mathrm{d}g_{t}=\frac{1}{2}u(X_{t}^{u},t)^{2}\mathrm{d}t+u (X_{t}^{u},t)\cdot\mathrm{d}B_{t},\quad g_{0}=0. \tag{50}\] Finally, for the Koopman Monte-Carlo approximation (Line 6) we average over the \(\chi\) evaluations at the endpoints of \(K\) independent trajectories starting in \(x_{m}\), weighted with their respective weights \(G_{k,m}\): \[\mathbf{K}\chi_{n}(x_{m})\approx\frac{1}{K}\sum_{k=1}^{K}\chi_{n}(X_{T,k,m}^{u })\,G_{k,m}. \tag{51}\] In this way (and under the above assumption) we obtain a feedback loop where a better approximation of the \(\chi\) functions results in a better approximation of the action of \(\mathbf{K}\), and hence in a better approximation of the optimal control. This pseudo-optimal control in turn decreases the sampling variance, which facilitates a better approximation of the power iterates, i.e. of the \(\chi\) function. As a proof of concept we will now illustrate the reduction of variance using the classic double-well potential.
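Before turning to the experiment, here is a compact Python sketch of the controlled sampler with the weight integrated on the fly (Eq. (50)) and the weighted Koopman average (Eq. (51)); the scalar setting and all names are illustrative assumptions:

```python
import numpy as np

def sampleXT_controlled(x0, b, sigma, u, T, dt=1e-3, rng=None):
    """One controlled trajectory with the weight integrated on the fly, Eq. (50).
    Returns the endpoint X_T^u and its Girsanov weight G."""
    rng = rng or np.random.default_rng()
    x, g = float(x0), 0.0
    for i in range(int(T / dt)):
        t = i * dt
        dB = np.sqrt(dt) * rng.standard_normal()
        ui = u(x, t)
        g += 0.5 * ui**2 * dt + ui * dB           # dg_t from Eq. (50)
        x += (b(x) + sigma * ui) * dt + sigma * dB
    return x, np.exp(-g)                          # G = exp(-g_T)

def koopman_controlled(chi, x0, b, sigma, u, T, K=20):
    """Weighted Monte-Carlo average, Eq. (51)."""
    samples = [sampleXT_controlled(x0, b, sigma, u, T) for _ in range(K)]
    return np.mean([chi(x) * G for x, G in samples])
```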
### Example: Controlled 1D-ISOKANN for the double well

Let us consider the controlled process as stated in (33) for the double-well potential (9), which leads to the simplest problem exhibiting metastable behavior and hence a challenging sample variance. In our experiments we compare the training performance of the ISOKANN algorithm both with and without the control (44). We start with a randomly initialized fully connected network \(\chi_{0}:\mathbb{R}\to\mathbb{R}\) with sigmoidal activation functions and \(2\) hidden layers, each of size \(5\) (i.e. with layer sizes \(1\times 5\times 5\times 1\)). For each network generation \(n\), we compute Monte Carlo approximations of the Koopman expectation at \(M=30\) positions. These are initially drawn uniformly from the interval \([-2,2]\) and subsequently obtained by _\(\chi\)-stratified_ subsampling as described in Section 2.6. From each starting position we then simulate \(K=20\) trajectories using the SROCK2 SDE integrator of strong order \(1\) with step-size \(\Delta t=0.001\). The next generation \(n+1\) is trained against these \(M\) training points by \(L=500\) stochastic gradient descent steps using the ADAM optimizer (with learning rate \(\eta=0.001\)). We repeat this evaluation-training procedure (corresponding to a single power iteration) for a total of \(N=50\) iterations. For each experiment we monitor the root of the training loss (26), i.e. the root mean squared error, \[\text{RMSE}\coloneqq\sqrt{\frac{1}{M}\sum_{m}(\chi_{n}(x_{m})-s_{m})^{2}} \tag{52}\] and the mean standard deviation of the MC estimator (27), \[\text{MSTD}\coloneqq\frac{1}{M}\sum_{m}\sqrt{\frac{1}{K}\sum_{k}(\chi_{n}(y_{ k,m})-\mu_{m})^{2}},\quad\text{where}\quad\mu_{m}=\frac{1}{K}\sum_{k}\chi_{n}(y_{ k,m}), \tag{53}\] over the training phase of \(N=50\) iterations with \(L=500\) training steps each.

Let us now compare the uncontrolled with the controlled experiment. Figure 5 shows the (square root of the) training loss together with the standard deviation of the Monte-Carlo estimator for the two cases of study. In Figure 5(a) we observe that the uncontrolled system quickly (after \(3\) iterations) approaches its plateau at an error of about \(10^{-1}\), but afterwards the training loss does not decrease any further. This comes as no surprise, since the training data exhibits noise of the same magnitude, and we cannot expect the average loss to be lower than the noise in the data. Note however that, even though the loss itself seems to have leveled off, this does not necessarily mean that the solution does not improve: whilst the empirical loss will necessarily remain at the level of the noise, the solution could still converge due to the inherent averaging of that noise in the SGD method. Looking at the controlled experiment, we observe in Figure 5(b) that the loss behaves similarly to the uncontrolled experiment for the first \(3\) iterations, reaching a value of \(10^{-1}\). From there on, however, the loss decreases further, getting close to \(10^{-3}\) after \(50\) power iterations and still not having hit a plateau. Notice that the training noise, in strong contrast to the uncontrolled case, decreases rapidly from the beginning of the training. It is furthermore interesting to see that the training loss seems to follow the noise level closely, indicating that the sampling variance is indeed of high importance.

Figure 5: Training performance over \(50\) power iterations / batches with \(500\) ADAM steps each. The blue line shows the training loss and the red line shows the standard deviation of the Monte-Carlo samples in the training data.

Looking more closely at these performance plots we can furthermore identify the individual training batches: the MSTD is piecewise constant along each such batch, since the training data (and hence its standard deviation) is updated only in between the neural network training loops. The RMSE, in contrast, plateaus during each batch, and we can observe how it jumps up slightly at the beginning of each new batch. These jumps are caused by overfitting to the previous batch (the plateau) without generalization to the following batch (the flank)8. Footnote 8: To increase the efficiency of the algorithm one could therefore take the plateau of each flank as an indication to interrupt the current training and generate a new training batch. Let us finally look at Figure 6, which shows the different learned \(\chi\) functions after the application of the ISOKANN algorithm, together with the evaluations of \(S\mathbf{K}\chi_{n-1}(x_{m})\) at the \(M\) random locations \(x_{m}\). The error bars represent the standard deviation of the individual Monte-Carlo estimators, i.e. the noise in the training data. We see that both learned \(\chi\) functions qualitatively match the expectation. The uncontrolled case, however, has problems reaching \(0\) resp. \(1\) at the boundaries, which can be understood as a result of the noise and the subsequent noisy estimation of the empirical shift-scale. Lastly, we note that the Monte-Carlo standard deviation for the Koopman evaluation at the \(\chi\)-sampled positions is considerably lower for the controlled approach.

## 4 Conclusion

In this article we started by enhancing ISOKANN with new theoretical results that underpin its strengths, namely a convergence proof (Thm. 1) and a method for reconstructing eigenfunctions from \(\chi\)-functions (Prop. 1). We also proposed a new adaptive sampling strategy, called \(\chi\)-stratified sampling, which complements ISOKANN well and deserves further investigation. Formulating ISOKANN in terms of the transformation \(S\) (6), we paved the way for higher-dimensional \(\chi\)-functions while generalizing the original 1D-ISOKANN.
However, whereas we argued for using PCCA\({}^{+}\) for the construction of \(S\) for \(d>1\), a more detailed study and a proof of convergence for this case remain open for future work. The second main contribution of this article is the introduction of importance sampling into ISOKANN. Whereas we know that the resulting estimator is unbiased, we argued only heuristically why the variance should not explode. A proof of convexity of the variance in the control, or even better, a proof of the convergence of the control in ISOKANN, is still missing. Note furthermore that the concept of optimal importance sampling may be useful for the iterative solution of Koopman evaluations in general (c.f. Cor. 2.1). An important next step would be to apply controlled ISOKANN to an actual molecular dynamics (MD) system so as to test how well the introduced techniques fare with the complexities of real-world problems. This, however, requires a way to run many trajectories with different start locations at low overhead, as well as a way to inject the optimal control into the MD simulations. We hope that once these interfaces are implemented, ISOKANN will enhance the research of molecular systems.

Figure 6: The blue line shows the learned \(\chi\) function at the end of training. The red dots show the training target, i.e. the Koopman evaluations at the \(x_{m}\), with the Monte-Carlo standard deviation as error bars.

## Acknowledgement

We thank Luca Donati and Luzie Helfmann for their support in proofreading and insightful discussions. This research has been funded by Deutsche Forschungsgemeinschaft (DFG) through grant CRC 1114 "Scaling Cascades in Complex Systems", Project Number 235221301, Project A05 "Probing scales in equilibrated systems by optimal nonequilibrium forcing".

## Code availability

The code used for the numerical examples is available as a Julia package on GitHub at https://github.com/axsk/OptImpSampling.jl with the tag jmp9. Footnote 9: In order to reproduce the experiments and plots of this paper run julia> Pkg.add("https://github.com/axsk/OptImpSampling.jl#jmp"); julia> import OptImpSampling; OptImpSampling.paperplots()
2309.00647
Improving Small Footprint Few-shot Keyword Spotting with Supervision on Auxiliary Data
Few-shot keyword spotting (FS-KWS) models usually require large-scale annotated datasets to generalize to unseen target keywords. However, existing KWS datasets are limited in scale, and gathering keyword-like labeled data is a costly undertaking. To mitigate this issue, we propose a framework that uses easily collectible, unlabeled reading speech data as an auxiliary source. Self-supervised learning has been widely adopted for learning representations from unlabeled data; however, it is known to be suitable for large models with enough capacity and is not practical for training a small footprint FS-KWS model. Instead, we automatically annotate and filter the data to construct a keyword-like dataset, LibriWord, enabling supervision on auxiliary data. We then adopt multi-task learning that helps the model to enhance its representation power from out-of-domain auxiliary data. Our method notably improves the performance over competitive methods in the FS-KWS benchmark.
Seunghan Yang, Byeonggeun Kim, Kyuhong Shim, Simyung Chang
2023-08-31T07:29:42Z
http://arxiv.org/abs/2309.00647v1
# Improving Small Footprint Few-shot Keyword Spotting with Supervision on Auxiliary Data ###### Abstract Few-shot keyword spotting (FS-KWS) models usually require large-scale annotated datasets to generalize to unseen target keywords. However, existing KWS datasets are limited in scale, and gathering keyword-like labeled data is a costly undertaking. To mitigate this issue, we propose a framework that uses easily collectible, unlabeled reading speech data as an auxiliary source. Self-supervised learning has been widely adopted for learning representations from unlabeled data; however, it is known to be suitable for large models with enough capacity and is not practical for training a small footprint FS-KWS model. Instead, we automatically annotate and filter the data to construct a keyword-like dataset, LibriWord, enabling supervision on auxiliary data. We then adopt multi-task learning that helps the model to enhance its representation power from out-of-domain auxiliary data. Our method notably improves the performance over competitive methods in the FS-KWS benchmark. S. Yang, B. Kim, K. Shim, S. Chang Qualcomm AI Research†, Qualcomm Korea YH, Seoul, Republic of Korea {seunghan, kshim, simychan}@qti.qualcomm.com Footnote †: Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc. **Index Terms**: few-shot learning, keyword spotting ## 1 Introduction Few-shot keyword spotting (FS-KWS) refers to the task of recognizing specific keywords in an audio signal where only a limited number of examples are available for each keyword. This task has gained significant attention in recent years due to its practical importance in various applications, such as voice assistants requiring user-defined keywords [1, 2, 3, 4, 5, 6, 7]. To solve this task, few-shot learning approaches have been proposed to learn a discriminative embedding space that can generate general representations for novel classes given only a few examples. They mainly leverage prior knowledge from related tasks or similar data distributions using large-scale annotated training datasets [8, 9, 10, 11]. However, the keyword spotting (KWS) datasets [12, 13, 14] that are accessible to the public are typically limited in size and have a small number of categories, and acquiring a comprehensive annotated dataset suitable for KWS is costly. Given the existing datasets, the performance of FS-KWS is limited, as reported in Fig. 1(a), even with the use of an advanced few-shot learning technique [15]. To this end, we propose to exploit auxiliary data, especially reading speech data from audiobooks, such as LibriSpeech [16] and MLS [17], which are publicly available in large quantities and easily collectable. These data are not built for the keyword spotting task (_e.g._, LibriSpeech [16] is for speech recognition), but they are expected to help the model learn a more robust embedding space. One way to leverage knowledge from a large-scale unlabeled dataset is through self-supervised learning (SSL) [18, 19, 20, 21]. SSL has gained widespread usage in various applications due to its many advantages, particularly its ability to produce robust feature representations without the need for labeled data. SSL has been successfully employed in the pre-training stage of a model [19, 21] or in combination with a target objective for fine-tuning [22, 23]. However, prior literature has established that SSL is only beneficial to large models, as evidenced by [24, 25].
Therefore, the lightweight nature of FS-KWS models presents a challenge for the use of SSL. In this paper, we empirically show that the existing SSL methods are also not suitable for training lightweight models in FS-KWS, as shown in Fig. 1(b). Instead, we automatically annotate the reading speech data with a word extraction technique [26] and construct a well-organized keyword-like dataset called _LibriWord_ by filtering and balancing the data, effectively enabling the use of auxiliary data with supervision on lightweight models. To effectively leverage both in-domain command-like data and out-of-domain auxiliary data, we propose a simple but effective multi-task learning (MTL) framework, called AuxSL, that incorporates an additional classifier with a supervised loss function for the out-of-domain data. AuxSL enables the model to enhance its representation power from out-of-domain auxiliary data while primarily learning keyword representations on in-domain data. The naive approach of combining the two datasets and applying a single loss function to the entire dataset, as shown in Fig. 1(c), results in limited performance improvement. Our proposed framework is evaluated through extensive experiments on the FS-KWS benchmark, demonstrating its superior discriminative capabilities. In Fig. 1(d), the results show a relative improvement of \(16\%\) in both closed- and open-set settings when using 5-shot samples, compared to the baseline.

Figure 1: _t-SNE visualization of unseen keyword embeddings using various learning methods on the FS-KWS benchmark [15]. We train four models using the in-domain splitGSC and out-of-domain reading speech datasets, including LibriSpeech and our proposed word-level labeled dataset, LibriWord: **(a)** metric learning [15] on splitGSC, **(b)** (a) + self-supervised learning [18] on LibriSpeech, **(c)** metric learning on both datasets (splitGSC + LibriWord), and **(d)** our proposed AuxSL on splitGSC and LibriWord. The closed-set classification accuracy (Acc.) and open-set detection rate (AUROC) indicate the performance of the models._

## 2 Related Works

### Small Footprint Keyword Spotting

The aim of conventional keyword spotting is to detect a small set of pre-defined speech signals, such as the activation phrases "Alexa" and "OK Google". Recent research has primarily focused on enhancing accuracy while minimizing memory requirements and reducing power consumption, allowing for deployment on always-on systems [27, 28, 29, 30]. Some studies have centered on the development of optimal architectures for keyword spotting [27, 28], while others have aimed to improve the loss function [29] or the training approach [30].

### Few-shot Keyword Spotting

Few-shot keyword spotting (FS-KWS) aims to support systems that require user-defined keywords. Unlike conventional keyword spotting, FS-KWS involves a scenario in which new keywords that were not seen during training are enrolled and tested on the device. To achieve FS-KWS, it is crucial to learn robust representations in the pre-training stage using a large-scale dataset. To further improve performance, various loss functions [2, 3, 4], architectures [31], and learning strategies [5, 6, 7] have been proposed. Modifying the learning strategy is closely related to our work. Kao et al. [5] present a two-stage framework that compares several popular SSL models to determine the best model for FS-KWS. Lee et al. [6] incorporate auxiliary synthetic data generated by TTS, while Shin et al.
[7] propose a novel approach that leverages the linguistically corresponding patterns between speech and text sequences. Our approach is distinct from prior studies, as we propose utilizing auxiliary data from the reading speech domain along with a supervised multi-task learning strategy, with the objective of training lightweight models. Recently, D-ProtoNets [15] has been proposed for few-shot open-set keyword spotting, which involves open-set classes at test time. Our proposed method is expected to improve open-set performance in addition to closed-set performance, as it helps to create robust representations.

## 3 Proposed Methods

### Problem Definition

**Training objectives for few-shot learning:** In the domain of few-shot learning, metric learning based techniques have been predominantly investigated for constructing a robust embedding space that enables models to extract discriminative embeddings for unseen keywords and perform keyword detection using distance metrics. One of the most popular metric learning-based methods is ProtoNets [8]. At each training iteration, ProtoNets create a classification scenario by constructing an \(N\)-way, \(K\)-shot problem from the training data, where \(N\) represents the number of classes and \(K\) denotes the number of support samples per class. ProtoNets first construct the prototype of each class by averaging the embeddings of its support samples, \(c_{n}=\frac{1}{K}\sum_{i=1}^{K}F_{\theta}(x_{n,i}^{s})\), where \(F_{\theta}(\cdot)\) is a feature extractor. Subsequently, the prototypical loss function forces the query samples to minimize the distance to their corresponding prototypes as follows: \[L_{FSL}=-\frac{1}{M}\sum_{i=1}^{M}\text{log}\;p_{\theta}(y=n|x_{n,i}^{q}),\quad \text{where}\] \[p_{\theta}(y=n|x_{n,i}^{q})=\frac{\text{exp}(-d(F_{\theta}(x_{n,i}^{q}),c_{n} ))}{\sum_{n^{\prime}=1}^{N}\text{exp}(-d(F_{\theta}(x_{n,i}^{q}),c_{n^{\prime }}))}. \tag{1}\] \(d(\cdot)\) can be any distance metric, \(x_{n,i}^{q}\) is the \(i\)-th query sample of class \(n\), and \(M\) indicates the number of query samples of class \(n\) in the \(N\)-way, \(K\)-shot problem. At the inference stage, prototypes of new classes are constructed by averaging the embeddings of enrolled samples. The test samples are then classified based on the classification probability in Eq. (1).

**Limitation of few-shot keyword spotting:** Despite the success of few-shot learning, a large-scale dataset is required to learn a robust embedding space. For keyword spotting, the available datasets are usually of limited size, making it difficult to learn general keyword representations. While collecting in-domain command-like data is an effective way to create a robust embedding space, it is challenging to obtain such datasets. To address this issue, we propose to use easily obtainable auxiliary data, especially reading speech data.
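As a reference for Eq. (1), here is a minimal Python sketch of the prototypical classification over one episode, assuming the embeddings have already been produced by the feature extractor; shapes and names are illustrative, and a numerically stable log-sum-exp would be used in practice.

```python
import numpy as np

def prototypical_loss(support_emb, query_emb):
    """Sketch of the prototypical loss, Eq. (1), for one N-way K-shot episode.

    support_emb : (N, K, D) embeddings F_theta of the support samples
    query_emb   : (N, M, D) embeddings of the query samples
    Uses squared Euclidean distance as d(.).
    """
    protos = support_emb.mean(axis=1)                       # prototypes c_n, (N, D)
    N, M, D = query_emb.shape
    q = query_emb.reshape(N * M, 1, D)
    d = ((q - protos[None]) ** 2).sum(-1)                   # (N*M, N) distances
    log_p = -d - np.log(np.exp(-d).sum(axis=1, keepdims=True))  # log softmax(-d)
    labels = np.repeat(np.arange(N), M)
    return -log_p[np.arange(N * M), labels].mean()          # negative log-likelihood
```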
### LibriWord

Instead of collecting human-labeled keyword spotting data, we create a dataset named LibriWord containing segmented utterances and corresponding word-level labels. The samples are obtained from the LibriSpeech corpus [16], which comprises roughly 1,000 hours of read English speech at 16 kHz. It contains numerous words, but lacks word-level alignments; only utterance-level transcriptions are provided. To obtain word-level segmented samples, we employ the Montreal Forced Aligner [32, 26], a word extraction technique previously used in [7, 33]. We then form a balanced dataset by organizing the extracted words based on the number of samples and eliminating similar keywords. Specifically, when a word partially overlaps with another word, such as with past tense, plural forms, or negative forms, we randomly retain one of them to simplify the representation learning process for small models. As a result, LibriWord includes 300 samples for each of the top 1,000 most frequently appearing keywords. Table 1 presents the metadata before and after the refinement process.

| Info. | LibriSpeech | LibriWord |
|---|---|---|
| # of samples | 4,806,675 | 300,000 |
| # of keywords | 58,997 | 1,000 |
| Audio length | [0.0s, 11.1s] | [0.3s, 2.2s] |
| Annotations | Word-level labels | Word-level labels |
| Annotator | Montreal Forced Aligner [26, 32] | Montreal Forced Aligner [26, 32] |

Table 1: _Metadata of LibriSpeech and the proposed LibriWord._

Learning with LibriWord, a comparatively smaller dataset than LibriSpeech, yields the benefits of reducing the burden of collecting auxiliary data and saving storage space, while achieving better performance (see Section 4.2).

### FS-KWS with Auxiliary Supervision on LibriWord

We propose a simple but effective multi-task learning framework, called AuxSL, which utilizes an additional classifier and a supervised loss function for the out-of-domain auxiliary data to mitigate potential representation skew that may arise when directly sharing the feature extractor between in-domain and out-of-domain data, motivated by [34]. Our multi-task learning objective is calculated as follows: \[L_{AuxSL}=L_{FSL}+\lambda L_{SL}, \tag{2}\] where \(L_{FSL}\) represents any few-shot learning loss function on in-domain training data and \(L_{SL}\) represents the supervised loss function on out-of-domain auxiliary data. \(\lambda\) is a balancing parameter for the auxiliary loss. Here, we keep the metric learning loss on in-domain data and use the conventional cross-entropy loss on auxiliary data, as depicted in Figure 2. This approach does not incur additional cost during inference, since the classifier is not used. For our experiments, we employ the dummy prototypical loss function of D-ProtoNets [15] for \(L_{FSL}\) to efficiently handle both closed- and open-set query samples. D-ProtoNets [15] are trained using the dummy prototypical loss, which incorporates a learnable open-set prototype specifically designed to represent the open-set class. The open-set prototype is trained jointly with the class-wise prototypes to enable query samples of both the closed and open set to be closely associated with their corresponding prototypes in the \(N+1\)-way classification task. During inference, if the probability of a query test sample \(x_{t}^{q}\) belonging to the open-set class \(N+1\), _i.e._, \(p_{\theta}(y=N+1|x_{t}^{q})\), exceeds a pre-defined threshold, it is classified as the open-set class.
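To make the multi-task objective concrete, here is a sketch of one AuxSL training step in PyTorch; for brevity it replaces the dummy-prototype loss of D-ProtoNets with a plain prototypical term and uses Euclidean distances via `torch.cdist`, so all names and simplifications are our own assumptions rather than the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def auxsl_step(encoder, classifier, support, query, q_labels, x_aux, y_aux,
               optimizer, lam=1.0):
    """One AuxSL training step implementing Eq. (2).

    support : (N, K, ...) in-domain support inputs; query: (Q, ...) with q_labels
    x_aux   : LibriWord batch with word labels y_aux for the extra classifier C_phi
    """
    N, K = support.shape[:2]
    protos = encoder(support.flatten(0, 1)).view(N, K, -1).mean(1)   # prototypes c_n
    d = torch.cdist(encoder(query), protos)                          # query-prototype distances
    loss_fsl = F.cross_entropy(-d, q_labels)                         # softmax over -d, as in Eq. (1)

    loss_sl = F.cross_entropy(classifier(encoder(x_aux)), y_aux)     # auxiliary CE on LibriWord

    loss = loss_fsl + lam * loss_sl                                  # Eq. (2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```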
## 4 Experiments

### Experimental Settings

**Dataset.** We use the standard benchmark splitGSC [15], built on the Google Speech Commands (GSC) dataset [12], for the keyword spotting task. splitGSC contains train, validation, and test splits designed for few-shot closed- and open-set keyword spotting. The splits include 15, 10, and 10 keywords for training, validation, and testing, respectively, with 24,400, 4,007, and 4,482 samples. Note that a silence category is included in all sets. The official background noise provided by GSC is added with a probability of 0.8. See [15] for more details.

**Implementation details.** We use three different small-footprint backbone models: BC-ResNet8 [27] and Res12 [35], both of which take 40-dimensional log Mel spectrograms as input with a window length of 30 ms and a frame shift of 10 ms, and DS-ResNet18 [28], which takes 40-dimensional Mel-frequency cepstral coefficient features as input. The network sizes of BC-ResNet8, Res12, and DS-ResNet18 are 321k, 8M, and 72k parameters, respectively. Each model is trained for 100 epochs using the Adam optimizer with an initial learning rate of 0.001, which is step-decayed by a factor of 0.5 every 20 epochs. Each epoch consists of 500 episodes, each containing 5 closed- and 5 open-set classes, with 5 support samples for prototypes and 5 query samples per class. For multi-task learning, we use a batch size of 64 with parallel sampling as in the episodic iterations, and set \(\lambda\) to 1.0. As the additional module, we use a single FC layer as the classifier. To evaluate the trained model, we use \(1,000\) episodes at test time, with 1 or 5 support samples and 15 query samples per class, including open-set classes. We report the average 1-shot and 5-shot accuracy and the threshold-free area under the receiver-operating characteristic curve (AUROC) for closed- and open-set performance over 3 different seeds.

### Analysis of LibriWord Dataset

In Figure 3, we present the results obtained by training with different dataset constructions after extracting words from the imbalanced LibriSpeech corpus, which has a highly skewed distribution of samples among its words. Notably, in this experiment we employ our final MTL architecture solely to investigate the impact of LibriWord's dataset configuration. Our findings indicate that constructing a dataset with a balanced number of samples per keyword leads to superior performance compared to using an equally sized but imbalanced dataset, or the entire LibriSpeech dataset, which is roughly 16 times larger than LibriWord. Using a balanced number of samples per keyword during training aids in building robust feature embeddings, a phenomenon that has also been observed on ImageNet, where research on handling imbalanced datasets is actively pursued [36, 37]. To address the drawbacks of imbalanced datasets, we constructed LibriWord in a balanced manner and demonstrate its empirical efficacy.

### Comparison of SSL and SL Methods on Auxiliary Data

In Table 2, we compare the performance of few-shot keyword spotting models trained using self-supervised learning (SSL) and supervised learning (SL) on auxiliary data. The baseline model, D-Proto [15], is trained with SL on splitGSC without using auxiliary data. We evaluate four SSL-based strategies: (1) PreT-Big, which uses a large-scale pre-trained feature extractor from LibriSpeech and fine-tunes only the classifier for keyword spotting, (2) PreT, which pre-trains a keyword spotting model on LibriSpeech using SimCLR and BYOL, (3) MTL with SSL, which uses SSL on LibriSpeech and SL on splitGSC, and (4) MTL with knowledge distillation (KD), which uses feature distillation [38] from the large-scale pre-trained feature extractor and SL on splitGSC. We also evaluate three SL strategies: (1) PreT, which pre-trains the keyword spotting model using the cross-entropy loss (CE), (2) ALL, which uses all data together to train the model, and (3) AuxSL, our proposed method that uses a different path for each dataset.
Balancing parameters for the MTL methods are set to 1, and all hyper-parameters are chosen based on validation performance.

Figure 2: **Our Proposed AuxSL Framework.** \(F_{\theta}(\cdot)\) is trained with both in-domain data for keyword spotting and out-of-domain auxiliary data from the reading speech source, and \(C_{\phi}(\cdot)\) is trained to classify auxiliary words, helping to learn robust representations.

Figure 3: The accuracy of models trained with balanced and imbalanced auxiliary data on BC-ResNet8 using the MTL path (our optimal framework). The 5-shot and 1-shot performance are denoted by circles and squares, respectively.
## 5 Discussion and Conclusion In this paper, we addressed the challenge of few-shot keyword spotting (FS-KWS) by proposing a framework that leverages commonly available reading speech data as auxiliary data. Our approach has two main contributions: (1) the creation of a well-organized and balanced keyword dataset, LibriWord, and (2) AuxSL: multi-task learning (MTL) with an additional classifier to minimize the domain gap between in-domain and auxiliary data. Our results have shown that creating a keyword-balanced dataset is a practical approach for training lightweight keyword-spotting models. Moreover, we demonstrated the superiority of the proposed learning approach through extensive experiments, as evidenced by the improved performance in the FS-KWS benchmark. While our approach has achieved promising results, we recognize that there may be other techniques to mitigate domain differences between datasets, such as RFN [39] and DSBN [40]. We leave it as future work to explore and analyze the effectiveness of these methods in a scenario where there is a large domain gap between in-domain and auxiliary data. \begin{table} \begin{tabular}{c|c|c|c c|c c} \hline \hline \multirow{2}{*}{Learning scheme on auxiliary data} & \multirow{2}{*}{Training strategy} & \multirow{2}{*}{Method} & \multicolumn{2}{c|}{5-shot} & \multicolumn{2}{c}{1-shot} \\ & & Acc. (\%) & AUROC & Acc. (\%) & AUROC \\ \hline \multirow{2}{*}{X} & Baseline (321k) & D-Proto (splitSC) & 82.0 (0.7) & 82.9 (0.1) & 65.8 (0.3) & 75.7 (0.1) \\ \hline \multirow{6}{*}{Self-supervised learning} & \multirow{2}{*}{PreT} & W2vec (LibriSpeech) \(\rightarrow\) D-Proto (splitSC) & 85.3 (0.9) & 76.7 (1.2) & 66.8 (1.5) & 70.5 (0.4) \\ & & Bubert (LibriSpeech) \(\rightarrow\) D-Proto (splitSC) & 90.4 (0.3) & 85.2 (0.6) & 76.2 (0.8) & 77.3 (0.9) \\ \cline{2-7} & \multirow{2}{*}{PreT} & SimCLR (LibriSpeech) \(\rightarrow\) D-Proto (splitSC) & 79.9 (1.0) & 81.2 (1.0) & 64.7 (1.0) & 75.0 (0.8) \\ & & BYOL (LibriSpeech) \(\rightarrow\) D-Proto (splitSC) & 82.0 (1.5) & 82.7 (1.0) & 65.7 (1.6) & 76.2 (1.0) \\ \cline{2-7} & \multirow{2}{*}{MTL} & SimCLR (LibriSpeech) \(\rightarrow\) D-Proto (splitSC) & 83.0 (0.8) & 83.1 (0.9) & 66.6 (1.6) & 76.0 (1.0) \\ & & BYOL (LibriSpeech) \(\rightarrow\) D-Proto (splitSC) & 81.7 (0.0) & 82.7 (0.0) & 65.2 (0.4) & 76.0 (0.5) \\ & & KD and W/W2vec (pitiSGC) \(\rightarrow\) D-Proto (splitSC) & 82.4 (0.7) & 83.3 (0.1) & 66.4 (0.6) & 76.4 (0.2) \\ & & KD and W/W2vec (pitiSGC) \(\rightarrow\) D-Proto (splitSC) & 83.7 (0.5) & 84.4 (0.5) & 67.8 (0.6) & 77.3 (0.9) \\ \hline \multirow{6}{*}{**Supervised learning**} & \multirow{2}{*}{PreT} & CE (LibriWord) \(\rightarrow\) D-Proto (pitilgGSC) & 82.8 (1.6) & 84.2 (1.1) & 68.7 (1.5) & 77.3 (0.8) \\ \cline{2-7} & & ALL & D-Proto (LibriWord + splitGSC) & 92.2 (0.9) & 86.6 (1.3) & 77.4 (2.1) & 75.3 (1.6) \\ \cline{1-1} \cline{2-7} & \multirow{2}{*}{**Ours: AuxSL**} & **CE (LibriWord) \(\rightarrow\) D-Proto (pitilgGSC)** & **95.6 (0.5)** & **95.2 (0.4)** & **87.1 (0.8)** & **89.4 (0.3)** \\ \cline{1-1} \cline{2-7} & & \multicolumn{1}{c}{} & & & & \\ \end{tabular} \end{table} Table 2: A performance comparison between models trained using self-supervised learning and supervised learning on auxiliary data. The Method column indicates the model training method, and the used dataset is indicated in parentheses. PreT-Big uses the pre-trained large feature extractor on Librispeech provided by torchaudio. In contrast, other methods use BC-ResNet8 as the feature extractor. 
We show the 5-shot and 1-shot closed- and open-set average performance and standard deviation.

| Training strategy | Backbone | 5-shot Acc. (%) | 5-shot AUROC | 1-shot Acc. (%) | 1-shot AUROC |
|---|---|---|---|---|---|
| Baseline | Res12 | 86.8 (0.2) | 85.9 (0.3) | 70.4 (0.2) | 78.5 (0.1) |
| | DS-ResNet18 | 69.2 (0.2) | 74.8 (1.7) | 53.7 (2.2) | 79.3 (1.2) |
| PreT (SimCLR) | Res12 | 83.3 (1.1) | 83.3 (0.9) | 68.7 (1.5) | 77.1 (0.5) |
| | DS-ResNet18 | 59.0 (2.7) | 69.3 (0.9) | 45.8 (2.0) | 65.7 (0.3) |
| KD (HuBERT / Wav2vec) | Res12 | 87.0 (0.5) | 85.3 (0.1) | 70.3 (0.8) | 78.1 (0.4) |
| | DS-ResNet18 | 70.8 (0.5) | 75.6 (0.5) | 55.0 (0.3) | 70.6 (0.3) |
| ALL | Res12 | 93.1 (0.2) | 86.9 (1.7) | 78.3 (0.4) | 75.4 (1.0) |
| | DS-ResNet18 | 88.0 (1.3) | 82.1 (2.2) | 71.7 (1.2) | 72.1 (2.3) |
| **Ours: AuxSL** | Res12 | **95.6 (0.3)** | **94.1 (0.6)** | **85.5 (0.2)** | **84.3 (1.4)** |
| | DS-ResNet18 | **90.8 (0.3)** | **89.2 (0.9)** | **77.4 (1.0)** | **80.2 (1.8)** |

Table 3: FS-KWS performance on various backbones.

Figure 4: A performance comparison across various model sizes.
2309.11992
UAV Swarm Deployment and Trajectory for 3D Area Coverage via Reinforcement Learning
Unmanned aerial vehicles (UAVs) are recognized as promising technologies for area coverage due to their flexibility and adaptability. However, the ability of a single UAV is limited, and for large-scale three-dimensional (3D) scenarios, UAV swarms can establish seamless wireless communication services. Hence, in this work, we consider a scenario of UAV swarm deployment and trajectory planning to satisfy 3D coverage while considering the effects of obstacles. In detail, we propose a hierarchical swarm framework to efficiently serve large-area users. Then, the problem is formulated to minimize the total trajectory loss of the UAV swarm. However, the problem is intractable due to its non-convex property, and we decompose it into the smaller issues of user clustering, UAV swarm hovering point selection, and swarm trajectory determination. Moreover, we design a Q-learning based algorithm to accelerate the solution efficiency. Finally, we conduct extensive simulations to verify the proposed mechanisms, and the designed algorithm outperforms the other reference methods.
Jia He, Ziye Jia, Chao Dong, Junyu Liu, Qihui Wu, Jingxian Liu
2023-09-21T12:04:11Z
http://arxiv.org/abs/2309.11992v1
# UAV Swarm Deployment and Trajectory for 3D Area Coverage via Reinforcement Learning

###### Abstract

Unmanned aerial vehicles (UAVs) are recognized as promising technologies for area coverage due to their flexibility and adaptability. However, the ability of a single UAV is limited, and for large-scale three-dimensional (3D) scenarios, UAV swarms can establish seamless wireless communication services. Hence, in this work, we consider the deployment and trajectory planning of a UAV swarm for 3D coverage under the effects of obstacles. In detail, we propose a hierarchical swarm framework to efficiently serve large-area users. The problem is then formulated to minimize the total trajectory loss of the UAV swarm. However, the problem is intractable due to its non-convexity, and we decompose it into the smaller subproblems of user clustering, UAV swarm hovering point selection, and swarm trajectory determination. Moreover, we design a Q-learning based algorithm to accelerate the solution efficiency. Finally, we conduct extensive simulations to verify the proposed mechanisms, and the designed algorithm outperforms the other referenced methods.

UAV swarm, 3D area coverage, swarm trajectory planning, reinforcement learning.

## I Introduction

Unmanned aerial vehicles (UAVs) are recognized as promising solutions to area coverage problems (ACPs) due to their inherent mobility and flexibility. In addition, UAVs are widely used in various practical scenarios, such as wireless data collection, mapping, disaster rescue, reconnaissance and surveillance [1, 2, 3, 4]. However, for a single UAV, limitations of payload, size, energy capacity, etc., restrict its service ability [5]. Thus, a single UAV may not ensure sufficient and reliable wireless communication services for ground users (GUs) [6]. Fortunately, a UAV swarm, consisting of multiple small UAVs, can establish seamless wireless communication services for large-area GUs. However, the following challenges related to the UAV swarm still need to be handled: 1) the efficient cooperation of UAVs to serve multiple GUs; 2) how to deploy the UAV swarm to achieve a tradeoff between coverage and energy efficiency; 3) the trajectory planning for the swarm considering the effects of obstacles. Hence, it is essential to design a reasonable UAV swarm framework and an efficient trajectory plan for large-scale 3D area coverage.

Several recent works are related to UAVs in ACP scenarios. For instance, Zhao _et al._ in [7] study the 3D mobile sensing and trajectory optimization problem with a single UAV. In [8], Zeng _et al._ investigate energy-efficient UAV communication by deriving a paradigm relating a UAV's energy consumption to its trajectory planning. He _et al._ in [9] consider the 3D UAV deployment problem in uneven terrains for GUs' connectivity and coverage. Jia _et al._ in [10] present a matching game theory-based algorithm to deal with the offloading decisions from users to UAVs. Hou _et al._ in [11] propose a joint neural network to optimize both the transmission power and the user association in a multi-UAV communication system with moving users. In [12], the utilization of UAV swarms shows improved coverage performance for ACPs. However, to the best of the authors' knowledge, the above works do not consider joint deployment and trajectory planning for UAV swarms, which is a significant issue. Hence, in this work, we consider a large-scale 3D coverage scenario with non-uniformly distributed GUs.
Then, we design a hierarchical UAV swarm framework consisting of a cluster head UAV (H-UAV) and tail UAVs (T-UAVs) to efficiently accomplish the coverage mission. In particular, we divide the large-scale 3D area into a discrete space that accounts for obstacles, and formulate the ACP as an optimization problem to minimize the total trajectory loss of the UAV swarm. Due to the non-convex property, we further decompose it into the smaller problems of GU clustering, UAV swarm hovering point (HP) selection, and trajectory planning. A K-means clustering based method is designed to cluster GUs and select the optimal HPs as deployment positions to satisfy the coverage requirements. Next, we model the trajectory planning problem as a Markov decision process (MDP) to capture the complex environment, and design a Q-learning based UAV swarm trajectory planning algorithm (QLUTP) to handle the problem. Finally, we conduct extensive simulations to verify the effectiveness of the proposed methods.

This paper is organized as follows. In Section II, we present the system model of large-scale 3D area coverage and provide the problem formulation. Section III proposes the algorithms of deployment and trajectory planning for the UAV swarm. Further, Section IV conducts simulations and analyzes the results to verify the proposed mechanisms. Finally, conclusions are drawn in Section V.

## II System model and problem formulation

### _3D Area Coverage Scenario_

As shown in Fig. 1, we consider a UAV swarm enabled coverage scenario, including a UAV swarm and a set of GUs. In general, it is assumed that all UAVs fly with a constant speed at the same altitude \(h\) within \([h_{\min},h_{\max}]\). Further, GUs are non-uniformly distributed in the coverage area, denoted as \(D=(u_{1},u_{2},...,u_{M})\). During the flight, the UAV swarm needs to select \(N\) HPs, i.e., \(\Omega=\{q_{1},q_{2},...,q_{N}\}\), to satisfy the coverage requirements. In this work, we adopt the regular grid method to describe the large-scale 3D space, denoted as \(\mathcal{G}\) [13]. Specifically, \(\mathcal{G}\) is abstracted as an \(L_{x}\times L_{y}\times L_{z}\) cuboid composed of small cubes with side length \(\Delta l\), and the center coordinates of a cube are denoted \((x,y,z)\). Further, the number of cubes is \(N_{x}\times N_{y}\times N_{z}\), i.e., \[N_{x}=\left\lceil\frac{L_{x}}{\Delta l}\right\rceil,N_{y}=\left\lceil\frac{L_{y}}{\Delta l}\right\rceil,\mathrm{and}\ N_{z}=\left\lceil\frac{L_{z}}{\Delta l}\right\rceil, \tag{1}\] in which the ceil function \(\lceil f(x)\rceil\) rounds a number upward to the nearest integer. The partition accuracy \(\Delta l\) is a hyper-parameter, and a smaller \(\Delta l\) increases the accuracy of the space \(\mathcal{G}\). However, due to the searching and storage consumption, the computational complexity generally grows exponentially with even a slight decrease of \(\Delta l\). Therefore, \(\Delta l\) must be chosen by a comprehensive analysis of the performance constraints of the UAV swarm and the practical scale of the ACP. Moreover, it is necessary to consider the distribution of obstacles in the 3D space \(\mathcal{G}\) to avoid unnecessary collisions. In detail, an obstacle is described as \(\{\mathrm{O}|(x^{o},y^{o}),Ob(x^{o},y^{o})\}\), where \((x^{o},y^{o})\) is the projection of a certain point on the horizontal plane, and \(Ob(x^{o},y^{o})\) is the height of the corresponding obstacle point.
Further, flag \(\mathcal{L}_{ob}\) represents the collision, i.e., \[\mathcal{L}_{ob}=\left\{\begin{array}{ll}0,&\mathcal{C}\notin\mathrm{O},\\ 1,&\mathcal{C}\in\mathrm{O},\end{array}\right. \tag{2}\] where \(\mathcal{C}\) is the position of the UAV swarm. \(\mathcal{L}_{ob}=0\) means there exist no collisions between UAVs and obstacles; otherwise, \(\mathcal{L}_{ob}=1\) means a collision occurs. Theoretically, the UAV swarm can fly above each GU for area coverage. However, this deployment policy results in additional energy consumption due to repeated coverage. Also, there are some isolated GUs which cannot be effectively covered. Therefore, a tradeoff between the coverage and energy performance should be considered. We define the GU coverage rate \(\Upsilon\) as an index to evaluate the coverage performance of the UAV swarm. In detail, \[\Upsilon=\frac{1}{M}\sum_{n=1}^{N}A_{n}, \tag{3}\] in which \(A_{n}\) is the number of covered GUs at the \(n\)th HP, i.e., \(q_{n}\in\Omega\), and \(M\) represents the total number of GUs located in the area. In Fig. 2, an illustration of the coverage rate within a UAV swarm is provided. Since the coverage radius \(r_{s}\) of the UAV swarm is related to \(\Upsilon\), it is calculated as: \[r_{s}=r_{j}+r_{i,j}, \tag{4}\] where \(r_{i,j}\) represents the communication radius between the H-UAV and a T-UAV, and \(r_{j}\) is the coverage radius of a single T-UAV.

### _UAV Swarm Model_

We propose a hierarchical UAV swarm framework composed of multiple small low-cost UAVs. All UAVs can be divided into two categories according to their roles in the swarm. The H-UAV, acting as the leader of the swarm, can establish wireless communication links with other UAVs or the ground base station for data transmission. T-UAVs, acting as followers, perform the coverage mission under the leadership of the H-UAV. Thus, the UAV swarm is defined as \(\left\{H_{i},T_{i,1},...,T_{i,j},...,T_{i,K}\right\}\), in which \(H_{i}\) and \(T_{i,j}\) represent the H-UAV and the \(j\)th T-UAV of swarm \(i\), respectively. In the swarm, we adopt the star topology, i.e., the H-UAV serves as the communication center and directly communicates with all T-UAVs. Besides, it is assumed that T-UAVs can only communicate with the H-UAV within the same swarm.

Fig. 1: UAV swarm enabled 3D area coverage scenario.

Fig. 2: An illustration of the coverage rate within a UAV swarm.

### _Communication Model_

#### II-C1 A2A Communication Model

We adopt the typical air-to-air (A2A) communication link (CL) model to characterize the communication between the H-UAV and T-UAVs [14]. Firstly, the positions of H-UAV \(H_{i}\) and T-UAV \(T_{i,j}\) are denoted by \(\mathcal{I}_{i}\) and \(\mathcal{I}_{i,j}\), respectively. Then, the received power of the H-UAV from T-UAVs is: \[P_{r,H}\left(\mathcal{I}_{i},\mathcal{I}_{i,j}\right)=P_{C,T}+G_{t}+G_{r}-P_{l}\left(\mathcal{I}_{i},\mathcal{I}_{i,j}\right), \tag{5}\] where \(G_{t}\) and \(G_{r}\) are the constant antenna gains of the T-UAV and H-UAV, respectively. \(P_{C,T}\) is the constant transmission power for UAVs. \(P_{l}\left(\mathcal{I}_{i},\mathcal{I}_{i,j}\right)\) is the large-scale fading coefficient, defined as: \[P_{l}\left(\mathcal{I}_{i},\mathcal{I}_{i,j}\right)=10a\mathrm{log}_{10}\left(\frac{4\pi\left\|\mathcal{I}_{i}-\mathcal{I}_{i,j}\right\|_{2}f_{c}}{v_{c}}\right), \tag{6}\] where \(a>0\) is the path loss exponent, \(v_{c}\) is the speed of light, and \(f_{c}\) is the electromagnetic wave frequency [15].
In addition, the H-UAV and T-UAV establish a CL if the received power satisfies: \[P_{r,H}\left(\mathcal{I}_{i},\mathcal{I}_{i,j}\right)\geq P_{0}, \tag{7}\] where \(P_{0}\) is set as the threshold power. Based on (7), the Euclidean distance \(\left\|\mathcal{I}_{i}-\mathcal{I}_{i,j}\right\|_{2}\) between \(H_{i}\) and \(T_{i,j}\) satisfies: \[\left\|\mathcal{I}_{i}-\mathcal{I}_{i,j}\right\|_{2}\leq\frac{v_{c}}{4\pi f_{c}}\exp\left\{\frac{\ln(10)}{10a}\left(P_{C,T}+G_{t}+G_{r}-P_{0}\right)\right\}. \tag{8}\] Note that since all UAVs fly at the same altitude, the horizontal distance between \(H_{i}\) and \(T_{i,j}\) is equal to the Euclidean distance, \(r_{i,j}=\left\|\mathcal{I}_{i}-\mathcal{I}_{i,j}\right\|_{2}\).

#### II-C2 A2G Link Model

As shown in Fig. 1, T-UAVs provide effective signal coverage for the corresponding GUs in the target area. In the air-to-ground (A2G) communication model, the channel gain of the line-of-sight link between T-UAVs and GUs is: \[h_{j,m}^{\mathrm{A2G}}=\frac{\beta_{0}}{d_{j,m}^{2}}, \tag{9}\] where \(\beta_{0}\) is the channel gain at the reference distance of one meter, and \(d_{j,m}\) represents the Euclidean distance between \(T_{i,j}\) and \(u_{m}\) [16]. Let \(P_{m,j}\) represent the transmitting power of GU \(u_{m}\); the transmission rate is then expressed as: \[R_{j,m}=B\log_{2}\left(1+\frac{P_{m,j}h_{j,m}^{\mathrm{A2G}}}{B\eta}\right), \tag{10}\] where \(B\) and \(\eta\) are the channel bandwidth and noise power density, respectively. The transmission rate \(R_{j,m}\) should be no less than the minimum achievable rate threshold \(R_{Th}\), i.e., \[R_{j,m}\geq R_{Th}, \tag{11}\] which guarantees that the T-UAV can establish a stable CL with the GU. According to (9) and (10), we obtain the maximum coverage range \(d_{j,m}\) of a T-UAV as: \[d_{j,m}=\sqrt{P_{m,j}\beta_{0}\Big{/}\left(B\eta\left(2^{R_{j,m}/B}-1\right)\right)}, \tag{12}\] where the transmission rate \(R_{j,m}\) is equal to the threshold \(R_{Th}\). Then, the coverage radius of a single T-UAV is calculated as: \[r_{j}=\sqrt{{d_{j,m}}^{2}-h^{2}}, \tag{13}\] where \(h\) is the flight altitude of the UAV swarm.

### _Problem Formulation_

Since the ACP aims to save energy via appropriate deployment of the UAV swarm, the objective is to minimize the flight trajectory loss of the UAV swarm. Thus, the optimization problem is detailed as: \[\textbf{P0}:\quad\min_{\rho} \left(\Delta l\sum_{n=1}^{N-1}Ed_{n}^{n+1}\right)_{\Omega\left(q_{1}\to q_{N},\Upsilon\right)}, \tag{14a}\] \[\mathrm{s.t.} \quad\text{C1:}\quad\rho\in\mathcal{P},\] (14b) \[\quad\text{C2:}\quad\mathcal{L}_{ob}=0,\] (14c) \[\quad\text{C3}-\text{C4:}\quad(7),\ (11),\] (14d) \[\quad\text{C5:}\quad\Upsilon\geq\Upsilon_{Th}, \tag{14e}\] where the variable \(\rho\) is a swarm trajectory path, \(\Delta l\) is the trajectory loss for each step, and \(Ed_{n}^{n+1}\) is the number of path steps from \(q_{n}\) to \(q_{n+1}\). All the HPs satisfying the coverage rate \(\Upsilon\) are denoted as \(\Omega(q_{1}\!\to\!q_{N},\Upsilon)\). Further, in constraint C1, the UAV swarm should fly within the tolerable space, i.e., \(\rho\in\mathcal{P}\). Constraint C2 addresses the distribution of obstacles, in which flag \(\mathcal{L}_{ob}=0\) means there exist no collisions. As illustrated in C3, the received power \(P_{r,H}\) should be no less than the threshold \(P_{0}\). In constraint C4, the transmission rate \(R_{j,m}\) should be no less than the threshold \(R_{Th}\). Constraint C5 guarantees that the selected HPs satisfy the limitation of the coverage rate \(\Upsilon\); thus, \(\Upsilon_{Th}\) is set to ensure the basic coverage performance.
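To make the link-budget geometry above concrete, here is a small numeric sketch chaining Eqs. (4), (8), (12) and (13) into the swarm coverage radius \(r_{s}\). All parameter values are illustrative assumptions (the paper does not specify them at this point), and Eq. (12) is used in the form reconstructed from Eqs. (9)-(11).

```python
import math

# Illustrative values only -- none of these numbers come from the paper.
a = 2.0                                       # path loss exponent in Eq. (6)
fc, vc = 2.4e9, 3.0e8                         # carrier frequency [Hz], speed of light [m/s]
P_CT, G_t, G_r, P0 = 20.0, 3.0, 3.0, -70.0    # A2A power/gains/threshold [dB], Eqs. (5)-(8)
P_mj, beta0 = 0.1, 1e-4                       # GU transmit power [W], reference channel gain
B, eta = 1e6, 1e-17                           # bandwidth [Hz], noise power density [W/Hz]
R_th, h = 1e5, 100.0                          # rate threshold [bit/s], flight altitude [m]

# Eq. (8): largest H-UAV/T-UAV separation that keeps the A2A link above P0.
r_ij = (vc / (4 * math.pi * fc)) * math.exp(
    math.log(10) / (10 * a) * (P_CT + G_t + G_r - P0))

# Eq. (12): largest A2G distance that still meets the rate threshold R_th.
d_jm = math.sqrt(P_mj * beta0 / (B * eta * (2 ** (R_th / B) - 1)))

# Eq. (13): ground-projected coverage radius of one T-UAV; Eq. (4): swarm radius.
r_j = math.sqrt(max(d_jm ** 2 - h ** 2, 0.0))
r_s = r_j + r_ij
print(f"r_ij = {r_ij:.0f} m, r_j = {r_j:.0f} m, r_s = {r_s:.0f} m")
```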
Since the objective is to minimize the flight trajectory loss of the UAV swarm, both the selection of deployment positions and the trajectory planning are non-convex. As for the problem complexity, the solution space grows exponentially with the problem scale due to the non-convex property of **P0**, which is intractable via traditional optimization approaches.

## III Deployment and Trajectory Planning for the UAV Swarm

To efficiently deal with **P0**, we decompose it into the smaller problems of GU clustering, HP selection for the UAV swarm, and trajectory planning. Firstly, in Section III-A, a K-means based method is designed to cluster GUs and select multiple HPs as deployment positions for the UAV swarm. Then, the ACP is modeled as an MDP in Section III-B. Further, a Q-learning based UAV swarm trajectory planning algorithm (QLUTP) is designed in Section III-C to minimize the flight trajectory loss.

### _GUs Clustering and HPs Selection_

We adopt the K-means clustering based method to select the optimal deployment positions of the HPs. The variables \(\Omega(q_{1}\to q_{N},\Upsilon)\) and \(N\) depend on the dispersion degree of the GUs and the scale of the coverage area. The details of the GU clustering and HP selection approach are presented in Algorithm 1; a compact code sketch of this procedure is given below, after the MDP definition. Firstly, we initialize the deployment of GUs \(\{u_{1},u_{2},...,u_{M}\}\), the coverage radius \(r_{s}\) and the number of clusters \(N\). From step 3 to 12, GUs are clustered according to distance to obtain the updated clusters. Further, we analyze the coverage range conditions of the GUs in each cluster, as shown in step 13. Finally, we obtain the deployment positions of the HPs satisfying the coverage constraints, as illustrated in steps 14 and 15.

```
1: Initialize the deployment of GUs \(\{u_{1},u_{2},...,u_{M}\}\), the coverage radius \(r_{s}\) and the number of clusters \(N\).
2: Create \(N\) points randomly as the starting centroids.
3: for \(episode=1\) to \(MAX_{eps}\) do
4:   Reset the starting clusters \(\{C_{1},C_{2},...,C_{N}\}\) as \(\emptyset\).
5:   for GUs \(i\in[1,M]\) do
6:     Calculate the Euclidean distance between the centroid and each data point.
7:     Assign the data point to the nearest cluster.
8:   end for
9:   for cluster \(n\in[1,N]\) do
10:    Reset the centroid of each cluster.
11:  end for
12: end for
13: Count the number of data points within the coverage radius \(r_{s}\) for each cluster.
14: Calculate the coverage rate \(\Upsilon\) according to (3).
15: Return the positions of updated clusters \(\{C_{1},C_{2},...,C_{N}\}\) and the coverage rate \(\Upsilon\).
```
**Algorithm 1** GUs Clustering and Hovering Points Selection

### _MDP for Trajectory Planning_

Reinforcement learning (RL) studies the sequential decision-making process, in which the agent acts as the subject and interacts with the objective environment. Correspondingly, the UAV swarm is abstracted as an agent in the environment \(\mathcal{G}\) with obstacles. Thus, the optimization problem **P0** is reformulated in the form of an MDP. An MDP is defined by a tuple \(\{S,A,P,R,\gamma\}\), where \(S\) is the state space, \(A\) is the action space, \(\{P:S\times A\rightarrow\Delta(S)\}\) is the state transition probability function from state \(s\in S\) to another state \(s^{\prime}\in S\) with action \(a\in A\), \(\{R:S\times A\rightarrow\mathcal{R}\}\) is the immediate reward function of the agent, and \(\gamma\in[0,1]\) is a discount factor. In addition, \(\pi\) represents the policy, and \(Q_{\pi}(s,a)\) represents the action value function.
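As promised above, the following is a minimal Python sketch of Algorithm 1, assuming the GU positions are given as an \(M\times 2\) NumPy array; the function name and the plain-NumPy K-means loop are our own illustrative choices, not code from the paper.

```python
import numpy as np

def cluster_and_select_hps(gu_xy, n_clusters, r_s, iters=50, seed=0):
    """K-means over GU positions (steps 3-12 of Algorithm 1),
    followed by the coverage-rate check of Eq. (3) (steps 13-15)."""
    gu_xy = np.asarray(gu_xy, dtype=float)
    rng = np.random.default_rng(seed)
    # step 2: random starting centroids drawn from the GU positions
    centroids = gu_xy[rng.choice(len(gu_xy), n_clusters, replace=False)].copy()
    for _ in range(iters):
        # steps 5-8: assign every GU to its nearest centroid
        dists = np.linalg.norm(gu_xy[:, None, :] - centroids[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # steps 9-11: reset each centroid to the mean of its cluster
        for n in range(n_clusters):
            if np.any(labels == n):
                centroids[n] = gu_xy[labels == n].mean(axis=0)
    # steps 13-14: fraction of GUs within the swarm coverage radius of their HP
    covered = np.linalg.norm(gu_xy - centroids[labels], axis=-1) <= r_s
    return centroids, covered.mean()
```

For instance, calling `cluster_and_select_hps(positions, n_clusters=6, r_s=500.0)` returns six candidate HPs together with the achieved coverage rate \(\Upsilon\).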
#### III-B1 State space

The state space \(S\) indicates the environment of the agent. In order to make the agent aware of the environment state, we introduce the positions of the HPs \(\Omega(q_{1}\to q_{N},\Upsilon)\) and the obstacles \(\{\mathrm{O}|(x^{o},y^{o}),Ob(x^{o},y^{o})\}\).

#### III-B2 Action space

The action space provides six actions for the agent, i.e., \(A=\{a_{f},a_{b},a_{r},a_{l},a_{u},a_{d}\}\). The elements represent the actions of flying forward, backward, right, left, up and down, respectively. At each time step, the agent selects one action from \(A\).

#### III-B3 Transition probability

The transition probability function \(P\) represents the dynamic characteristic of the environment. It is intractable to model accurately due to the model-free property.

#### III-B4 Discount factor \(\gamma\)

The discount factor \(\gamma\) denotes the effect of the future reward, and a larger \(\gamma\) indicates a decision policy focusing on the long-term reward.

#### III-B5 Reward function

The reward function leads the agent toward the goals of the task in the RL process. Further, \(R_{t+1}\) is the instantaneous reward of performing action \(a_{t}\) under the current state \(s_{t}\), i.e., \[R_{t+1}(s_{t},a_{t})=(-0.1\times\Delta l)(1-\mathcal{H}_{n})+100\mathcal{H}_{n}-100\mathcal{L}_{ob}, \tag{15}\] where \(\Delta l\) is the trajectory loss for each step, and \(\mathcal{H}_{n}\) is the flag for the end of the trajectory planning process, i.e., \[\mathcal{H}_{n}=\left\{\begin{array}{cc}1,&p_{i}\in\Omega,\\ 0,&p_{i}\notin\Omega,\end{array}\right. \tag{16}\] where \(p_{i}\) is the current position of the swarm. (A code sketch of the action space and this reward is given below.)

#### III-B6 Action value function

The action value function \(Q_{\pi}(s,a)\) indicates the expected future reward for performing action \(a_{t}\) under the current state \(s_{t}\), i.e., \[Q_{\pi}(s,a)=E_{\pi}\left[R_{t+1}+\gamma R_{t+2}+\gamma^{2}R_{t+3}+...\,\middle|\,s_{t}=s,a_{t}=a\right], \tag{17}\] where \(a_{t}\) and \(s_{t}\) represent the current action and state, respectively. The cumulative discounted reward \(G_{t}\) is denoted as: \[G_{t}=R_{t+1}+\gamma R_{t+2}+\gamma^{2}R_{t+3}+..., \tag{18}\] where the reward of each future time step is counted in the present step, but weighted by the corresponding power of the discount factor \(\gamma\).

### _Q-learning based QLUTP_

The Q-learning algorithm is a fundamental RL method. In detail, the agent chooses action \(a\in A(s)\) under state \(s\) with the instruction of the action value function \(Q(s,a)\). Moreover, all the action value functions \(Q_{\pi}(s,a)\) are stored in the action value table \(\mathcal{Q}\), which is updated during the training process. Then, according to the updated \(\mathcal{Q}\), we obtain the optimal policy \(\pi^{*}\). The goal is to maximize the accumulated reward \(\mathcal{R}\), i.e., \[\pi^{*}=\arg\,\max_{\pi}\mathcal{R}. \tag{19}\] According to the principle of the value-based Q-learning algorithm, the policy \(\pi^{*}\) depends on the action value table \(\mathcal{Q}\). Thus, the policy \(\pi^{*}(a|s)\) leads the agent to perform the action \(a^{\prime}\) under state \(s^{\prime}\) with the maximum \(Q(s^{\prime},a^{\prime})\), i.e., \[\pi^{*}(a|s)=\left\{\begin{array}{ll}1,&a=\arg\max_{a^{\prime}\in A(s^{\prime})}Q\left(s^{\prime},a^{\prime}\right),\\ 0,&\text{otherwise},\end{array}\right. \tag{20}\] where \(\max_{a^{\prime}\in A(s^{\prime})}Q\left(s^{\prime},a^{\prime}\right)\) represents the maximum value when selecting action \(a^{\prime}\) at \(s^{\prime}\).
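As referenced in Section III-B5, here is a minimal sketch of the action space and the reward of Eqs. (15)-(16), assuming positions are integer grid cells and that the hovering points and obstacles are given as Python sets; the per-step loss \(\Delta l\) is an assumed value.

```python
DELTA_L = 1.0  # per-step trajectory loss Delta_l (assumed grid step)

# The six actions of A: forward, backward, right, left, up, down (unit grid moves).
ACTIONS = {0: (1, 0, 0), 1: (-1, 0, 0), 2: (0, 1, 0),
           3: (0, -1, 0), 4: (0, 0, 1), 5: (0, 0, -1)}

def reward(pos, hovering_points, obstacles):
    """Instantaneous reward R_{t+1} of Eq. (15) for the swarm at grid cell `pos`."""
    h_n = 1.0 if pos in hovering_points else 0.0   # terminal flag of Eq. (16)
    l_ob = 1.0 if pos in obstacles else 0.0        # collision flag of Eq. (2)
    return (-0.1 * DELTA_L) * (1.0 - h_n) + 100.0 * h_n - 100.0 * l_ob
```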
The details of the updating process of the action value table \(\mathcal{Q}\) are presented in Algorithm 2. In detail, as in step 6, it is necessary to update \(Q\left(s_{t},a_{t}\right)\) at time step \(t\) in the learning process, according to \[Q\left(s_{t},a_{t}\right)\gets Q\left(s_{t},a_{t}\right)+\alpha\delta_{t}, \tag{21}\] where \(\alpha\) is the learning rate, and the action value error \(\delta_{t}\) is denoted as \[\delta_{t}=R_{t+1}+\gamma\max_{a^{\prime}\in A(s^{\prime})}Q\left(s^{\prime},a^{\prime}\right)-Q\left(s_{t},a_{t}\right). \tag{22}\] The agent performs the action \(a^{\prime}\) corresponding to the optimal value \(\max_{a^{\prime}\in A(s^{\prime})}Q\left(s^{\prime},a^{\prime}\right)\) under the next state \(s^{\prime}\) when updating \(Q(s,a)\), resulting in an overestimation problem concerning the sampled action. Therefore, the action selection strategy \(\varepsilon-\mathrm{greedy}\) is introduced in the traditional Q-learning method, i.e., \[\pi^{*}(a|s)=\left\{\begin{array}{ll}1-\varepsilon,&a=\arg\max_{a^{\prime}\in A(s^{\prime})}Q\left(s^{\prime},a^{\prime}\right),\\ \varepsilon,&a=\mathrm{random\,\,\,action},\end{array}\right. \tag{23}\] where \(\varepsilon\in(0,1)\) represents the probability of exploring a random action \(a\). However, the traditional Q-learning method does not converge well, due to the slow updating speed of \(\mathcal{Q}\). In order to accelerate the update of \(\mathcal{Q}\), an improved action selection strategy \(\varepsilon(ep)-\mathrm{greedy}\) is introduced, in which the probability of random selection \(\varepsilon(ep)\) is a function of the episode index instead of a constant. As the number of episodes grows, the probability of selecting a random action decreases, which results in faster updating of \(\mathcal{Q}\). Thus, we design the QLUTP method to obtain an optimal trajectory \(\rho\) that minimizes the total flight trajectory loss during the coverage missions. (A compact code sketch of this loop is given at the end of this section.)

## IV Simulation Results

In the simulation scenario, the coverage radius of the UAV swarm is 500 m. Further, the corresponding values of the simulation parameters of QLUTP are initially set as \((\alpha,\gamma,episode)=(0.6,0.6,40)\). In addition, MATLAB is used as the platform to evaluate the convergence and effectiveness of the proposed mechanisms.

As illustrated in Fig. 3, the coverage rate rises obviously with the increment of the number of hovering points \(N\), and it gradually converges and oscillates slightly within a certain range. The coverage rates under several scenarios with different numbers of GUs, i.e., \(M=[30,35,40,45,50]\), show that the average coverage rate is evidently above 90% when \(N=6\). Finally, the optimal hovering points \(\Omega(q_{1}{\rightarrow}q_{6},\Upsilon)\) are obtained. To evaluate the performance of the proposed trajectory planning method, Fig. 4 shows the convergence of the proposed QLUTP and QLUTP*, obtained with different action selection strategies \(\varepsilon(ep)\), compared with other methods. Note that the fastest convergence of QLUTP* reveals the superiority and effectiveness of Algorithm 2. Moreover, the trajectory losses of different algorithms for three scenarios are shown in Fig. 5. Specifically, we select four methods for trajectory planning under different HP sets. It is shown that the trajectory distance increases significantly as \(N\) increases, which indicates that the UAV swarm consumes more flight energy. In addition, the optimal trajectory planned by QLUTP* needs fewer steps with less trajectory loss.
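As referenced above, the QLUTP loop of Eqs. (21)-(23) with an episode-dependent \(\varepsilon(ep)\) can be sketched as follows. The environment interface (`reset`/`step`) and the exponential decay schedule are assumptions, since the paper specifies only \((\alpha,\gamma,episode)=(0.6,0.6,40)\).

```python
import random
from collections import defaultdict

def epsilon(ep, eps0=0.9, decay=0.99):
    """Episode-dependent exploration probability (assumed decay schedule)."""
    return eps0 * decay ** ep

def qlutp(env, n_episodes=40, alpha=0.6, gamma=0.6, n_actions=6):
    """Tabular Q-learning sketch of Algorithm 2.

    `env` is assumed to expose reset() -> state and
    step(a) -> (next_state, reward, done), with hashable states."""
    Q = defaultdict(lambda: [0.0] * n_actions)
    for ep in range(n_episodes):
        s, done = env.reset(), False
        while not done:
            # eps(ep)-greedy action selection, Eq. (23)
            if random.random() < epsilon(ep):
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r, done = env.step(a)
            # temporal-difference update, Eqs. (21)-(22)
            delta = r + gamma * max(Q[s2]) - Q[s][a]
            Q[s][a] += alpha * delta
            s = s2
    return Q
```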
## V Conclusions

In this work, we propose a hierarchical framework for the UAV swarm to handle the ACP in large-scale 3D coverage scenarios. The UAV swarm provides steady wireless communication for non-uniformly distributed GUs. Additionally, the K-means method is adopted to cluster GUs and select appropriate HPs. Further, based on the MDP model, we propose the QLUTP approach for trajectory planning. Simulations are conducted to evaluate the performance of the proposed method, and the results demonstrate that the proposed algorithms handle the ACP better than the other baseline methods.
2309.09105
Distinguishable consequence of classical gravity on quantum matter
What if gravity is classical? If true, a consistent co-existence of classical gravity and quantum matter requires that gravity exhibit irreducible classical fluctuations. These fluctuations can mediate classical correlations between the quantized motion of the gravitationally interacting matter. We use a consistent theory of quantum-classical dynamics, together with general relativity, to show that experimentally relevant observables can conclusively test the hypothesis that gravity is classical. This can be done for example by letting highly coherent source masses interact with each other gravitationally, and performing precise measurements of the cross-correlation of their motion. Theory predicts a characteristic phase response that distinguishes classical gravity from quantum gravity, and from naive sources of decoherence. Such experiments are imminently viable.
Serhii Kryhin, Vivishek Sudhir
2023-09-16T22:32:04Z
http://arxiv.org/abs/2309.09105v1
# Distinguishable consequence of classical gravity on quantum matter

###### Abstract

What if gravity is classical? If true, a consistent co-existence of classical gravity and quantum matter requires that gravity exhibit irreducible classical fluctuations. These fluctuations can mediate classical correlations between the quantized motion of the gravitationally interacting matter. We use a consistent theory of quantum-classical dynamics, together with general relativity, to show that experimentally relevant observables can conclusively test the hypothesis that gravity is classical. This can be done for example by letting highly coherent source masses interact with each other gravitationally, and performing precise measurements of the cross-correlation of their motion. Theory predicts a characteristic phase response that distinguishes classical gravity from quantum gravity, and from naive sources of decoherence. Such experiments are imminently viable.

_Introduction._ It is believed that if gravitational source masses can be prepared in quantum superpositions, then the gravitational field sourced by them has to be quantum. Feynman's argument in support relies on the expectation that a double-slit experiment with two massive particles produces an interference pattern. If the gravitational fields sourced by the particles are assumed to be classical, then they can convey which-path information that contradicts the development of the interference pattern. So for consistency, the gravitational field needs to be quantum, so that its quantum fluctuations obfuscate the which-path information. An equally viable hypothesis that achieves consistency is that the gravitational field is classical and stochastic, so that it cannot convey precise which-path information. Contrary to the hypothesis that gravity is quantum, a stochastic classical gravity will not entangle the source masses. But since the field is a dynamical entity, it can mediate classical correlations between the motion of the source masses. It is precisely this subtle detail that is missed by naively adopting the view that the only effect of classical stochastic gravity is the decoherence of the source masses. It certainly causes decoherence, but it also leaves a telltale sign distinct from quantum gravity, and from other extraneous sources of decoherence. We describe a consistent and generic low-energy theory of classical gravity interacting with quantum source masses, and produce experimentally relevant signatures of such a theory. These signatures are smoking-gun evidence for the hypothesis that gravity is classical. Crucially, these predictions are sensitive to the difference between a classical description of gravity and an effective quantum theory of gravitation, and can be tested without macroscopic masses in quantum superpositions.

_Classical-quantum theory._ Even if gravity is classical, it is necessary that when a quantized mass interacts with it, the ensuing dynamics does not prevent the assignment of a legitimate joint state.
That is, there exists a positive-definite unit-trace operator \(\hat{\rho}_{t}(z)\) in the Hilbert space of the quantum object for each classical state \(z\) in the phase space of the classical gravitational field. The (quantum) state of the mass alone is \(\hat{\rho}_{Q}=\int\mathrm{d}z\,\hat{\rho}_{t}(z)\), while the (classical) state of the field alone is \(p_{t}(z)=\mathrm{Tr}\,\hat{\rho}_{t}(z)\). The general structure of the dynamics of \(\hat{\rho}_{t}(z)\) is given by Eq. (1), a classical-quantum master equation combining Hamiltonian evolution with Lindblad-type dissipation and diffusion over the classical phase space.
Explicitly separating out a trace-free part (see Appendix B) gives rise to an additional function \(H^{\prime}\) which, together with \(H_{I}\), subsumes the classical-quantum diffusion term. In sum, Eq. (1) reduces to: \[\begin{split}\dot{\hat{\rho}}_{t}=-i[\hat{H},\hat{\rho}_{t}]+\,Q_{\alpha\beta}\left(\hat{L}_{\alpha}\hat{\rho}_{t}\hat{L}_{\beta}^{\dagger}-\frac{1}{2}[\hat{L}_{\beta}^{\dagger}\hat{L}_{\alpha},\hat{\rho}_{t}]_{+}\right)+\\ \text{Herm}\left\{H,\hat{\rho}\right\}+i\,\text{AntiHerm}\{\hat{H}^{\prime},\hat{\rho}_{t}\}+\partial_{z_{i}z_{j}}\left(C_{ij}\hat{\rho}_{t}(z)\right),\end{split} \tag{2}\] where \(\hat{H}=\hat{H}_{Q}+H_{C}+\hat{H}_{I}\), Herm/AntiHerm denote the hermitian and anti-hermitian parts, and \(H^{\prime}\) is a new function, independent of the Hamiltonian \(H\), that will turn out to describe the irreducible effect of the classical system on the quantum system. We will now describe, within the formalism above, a consistent theory of classical gravity interacting with quantum masses, such that the gravitational interaction reduces to the (experimentally relevant) Newtonian limit.

_The Hamiltonian \(H\)._ In the Newtonian limit, general relativity has no dynamical degrees of freedom: the Newtonian potential \(\Phi\) is entirely fixed by the configuration of source masses, _instantaneously_. Indeed, in this limit, the Lagrangian is singular and so passage to the Hamiltonian formalism is subtle [40; 41; 42]. Instead, we directly deal with the equations of motion of the degrees of freedom defined by the linearized metric tensor \(h_{\mu\nu}\). Writing \(h_{00}=-2\Phi,h_{0i}=w_{i},h_{ij}=2s_{ij}-2\Psi\delta_{ij}\), and exploiting the gauge freedom, \(h_{\mu\nu}\to h_{\mu\nu}+\partial_{(\mu}\xi_{\nu)}\), we can show (see Appendix C) that the Einstein equations reduce to \(\Box\Phi=4\pi G(T_{00}+T_{i}^{i})\). We verify that in the gauge chosen here, corresponding to the constraints \(2\partial_{0}\Phi+\partial_{k}w^{k}=0\) and \(\Phi=\Psi\), the dynamics of test particles in the low-energy limit is gauge independent. We then perform a Legendre transformation from the linearized Lagrangian of general relativity, with the gauge constraint imposed via Lagrange multipliers, to identify the momentum \(\Pi\) conjugate to the Newtonian potential \(\Phi\), and thus construct the Hamiltonian (see Appendix C).
This Hamiltonian, partially simplified in the adiabatic limit, is \[H_{C}(z)+\hat{H}_{I}=\int\mathrm{d}^{3}x\left[2\pi Gc^{2}\Pi^{2}+\frac{|\nabla\Phi|^{2}}{8\pi G}-\Phi\hat{f}(x)\right], \tag{3}\] where \(z=(\Pi,\Phi)\) are the phase space coordinates, and \(\hat{f}(x)=\hat{T}_{00}\) is the mass density of the quantized matter. (The other components of the stress-energy tensor do not survive the \(c\to\infty\) limit.) The first two terms in Eq. (3) are purely classical and therefore correspond to \(H_{C}\), and the last term has both quantum and classical degrees of freedom, and therefore corresponds to \(\hat{H}_{I}\). Note that, unlike semi-classical theories of quantum gravity, this construction deals with the mass density operator \(\hat{f}\), and not its expectation value -- thus eluding the pathologies of the semi-classical theory. The full Hamiltonian is \(\hat{H}(z)=\hat{H}_{Q}+H_{C}(z)+\hat{H}_{I}\), where \(\hat{H}_{Q}\) is the Hamiltonian of the quantized matter.

_The function \(H^{\prime}\) and quantum-classical diffusion._ Now we turn attention to the new function \(\hat{H}^{\prime}(z)\). Even though \(\hat{H}^{\prime}\) produces evolution of \(\hat{\rho}_{t}(z)\) (as in Eq. (2)), it turns out to have no influence on the dynamics of the classical field alone. Thus the structure of \(\hat{H}^{\prime}\) cannot be determined from purely classical arguments, and has to be part of the theory. However, its structure can be fixed from knowledge of the relevant degrees of freedom at play. For the gravitational field, as argued above, the relevant degree of freedom in the Newtonian limit is the potential \(\Phi\). For the quantized matter it is its mass density \(\hat{f}(x)\). Therefore, to the lowest order in \(1/c\), the function \(H^{\prime}\) can only be \[\hat{H}^{\prime}(z)=\epsilon\int\mathrm{d}^{3}x\;\Phi(x)\hat{f}(x), \tag{4}\] where \(\epsilon\) is a dimensionless constant that needs to be experimentally determined. The remaining terms in Eq. (2), proportional to \(Q_{\alpha\beta}\) and \(C_{ij}\), turn out to be irrelevant as far as weak gravity is concerned. As described above, in the weak-gravity regime, the relevant phase-space variables of the field are \(z=(\Phi,\Pi=\dot{\Phi})\). This means that weak-field expansions of \(Q_{\alpha\beta}\) and \(C_{ij}\) involve terms that are zeroth and second order in \(z\). The zeroth order terms are clearly independent of gravity, while the second order terms are negligible compared to the first-order effect conveyed by \(H^{\prime}\).

_Dynamics of quantum matter._ Suppose that it is only the quantized matter that is accessible to experiments. Then the above theory can only be tested vis-à-vis the properties encoded in the state \(\hat{\rho}_{Q}(t)\equiv\int\mathrm{d}z\,\hat{\rho}_{t}(z)\). In order to derive an equation of motion for this quantum state, it is necessary to eliminate the gravitational potential \(\Phi\) from Eq. (2). This is facilitated by the observation that the gravity Hamiltonian [Eq. (3)] together with the classical-quantum model implies the modified Newtonian law (see Appendix D): \[\int\mathrm{d}z\,\nabla_{x}^{2}\Phi(x)\hat{\rho}_{t}(z)=-4\pi G\left(\frac{1}{2}[\hat{f}(x),\hat{\rho}_{Q}]_{+}+i\epsilon[\hat{f}(x),\hat{\rho}_{Q}]\right), \tag{5}\] where \([\cdot,\cdot]_{+}\) is the anti-commutator. (Note that this is a consequence of the Newtonian limit of the theory, and not an additional assumption [43].) Substituting Eq. (5) in Eq.
(1) and integrating out the classical gravitational degree of freedom gives (restoring \(\hbar\)) \[\begin{split}\dot{\hat{\rho}}_{Q}=&-\frac{i}{\hbar}[\hat{H}_{\text{eff}},\hat{\rho}_{Q}]\\ &-\epsilon\frac{G}{2\hbar}\int\frac{\mathrm{d}^{3}x\,\mathrm{d}^{3}y}{|x-y|}\left[\hat{f}(x),\left[\hat{f}(y),\hat{\rho}_{Q}\right]\right]\\ &+Q^{\alpha\beta}\left(\hat{L}_{\alpha}\hat{\rho}_{Q}\hat{L}_{\beta}^{\dagger}-\frac{1}{2}\left[\hat{L}_{\alpha}^{\dagger}\hat{L}_{\beta},\hat{\rho}_{Q}\right]_{+}\right),\end{split} \tag{6}\] which is a closed Lindblad equation for \(\hat{\rho}_{Q}\). \(\hat{H}_{\text{eff}}=\hat{H}_{Q}+\hat{H}_{G}\) is the effective Hamiltonian, where \[\hat{H}_{G}=-\frac{G}{2}\int\mathrm{d}^{3}x\,\mathrm{d}^{3}x^{\prime}\;\frac{\hat{f}(x)\hat{f}(x^{\prime})}{|x-x^{\prime}|}, \tag{7}\] arises from the anti-commutator term in Eq. (5), and is \(z\)-independent. For a collection of point masses, \(\hat{f}(x)=\sum_{i}m_{i}\delta(x-\hat{x}_{i})\), and so \(\hat{H}_{G}\) describes the quantized Newtonian interaction between them. The stochastic effect of classical gravity is contained in the Lindblad term proportional to \(\epsilon\) in Eq. (6) -- its origin is the commutator term in Eq. (5), which traces back to the function \(H^{\prime}(z)\). In contrast to recent work [7; 8; 44], we _derive_ the gravity-dependent Lindbladian term from the natural structure of the dynamics of the joint state, and the consistency of that with a stochastic extension of general relativity; this Lindbladian arises from the function \(H^{\prime}\), and not from a weak-field expansion of \(Q^{\alpha\beta}\). In our approach, since \(Q^{\alpha\beta}\) is independent of \(\epsilon\) to lowest order, the corresponding Lindbladian terms simply describe additional decoherence of the quantum matter. Thus \(\epsilon=0\) corresponds to a quantum description of gravity, while \(\epsilon\neq 0\) corresponds to classical gravity, with \(\epsilon\) controlling the strength of its fluctuations.

_Form of the gravity Lindbladian revisited._ Consider the example of two localized objects of masses \(m_{1,2}\) separated by a distance \(R\). If the quantum fluctuations in the displacements of the two masses \(\hat{x}_{1,2}\) are small compared to \(R\), then the gravity Lindbladian in Eq. (6) \[\mathcal{L}_{G}[\hat{\rho}_{Q}]\equiv-\epsilon\frac{G}{2\hbar}\int\frac{\mathrm{d}^{3}x\,\mathrm{d}^{3}x^{\prime}}{|x-x^{\prime}|}\left[\hat{f}(x),\left[\hat{f}(x^{\prime}),\hat{\rho}_{Q}\right]\right]\] reduces to the simple form \[\mathcal{L}_{G}[\hat{\rho}_{Q}]\approx-\epsilon\frac{Gm_{1}m_{2}}{\hbar R^{3}}[\hat{x}_{1},[\hat{x}_{2},\hat{\rho}_{Q}]]. \tag{8}\] This form of the Lindbladian is in fact completely natural in the Newtonian limit, in the sense that it can be derived from purely dimensional arguments once the relevant degrees of freedom are known. To wit, assuming that the classical description of gravity only depends on the positions of the source masses, and that its effect is translation invariant, the Lindbladian can be taken to be of the form \[\mathcal{L}_{G}[\hat{\rho}_{Q}]=\frac{Gm_{1}m_{2}}{\hbar R}[\hat{L}_{1}(\hat{\ell}_{1}-\hat{\ell}_{2}),[\hat{L}_{2}(\hat{\ell}_{1}-\hat{\ell}_{2}),\hat{\rho}_{Q}]], \tag{9}\] where \(\hat{L}_{i}\) are dimensionless Hermitian operators that depend on the difference \(\hat{\ell}_{1}-\hat{\ell}_{2}\), with \(\hat{\ell}_{i}=\hat{x}_{i}/R\). The dimensional pre-factor is fixed since it is the only suitable combination relevant in the low-energy limit of concern here.
(Other combinations, such as \(\hbar/(Gm^{3}R),\sqrt{G/(c^{3}R^{2})}\), come up only in higher orders in \(R\), which are irrelevant in the Newtonian limit.) Since the quantum fluctuations of the source masses are small, we only consider \(\hat{L}_{i}(\hat{\ell}_{1}-\hat{\ell}_{2})=\epsilon_{i}(\hat{\ell}_{1}-\hat{\ell}_{2})\), where the proportionality constant is arbitrary (but real). Inserting this in Eq. (9) gives three terms: two of the form (no sum on \(i\)) \(\epsilon_{i}^{2}[\hat{\ell}_{i},[\hat{\ell}_{i},\hat{\rho}_{Q}]]\); and another of the form \(\epsilon_{1}\epsilon_{2}[\hat{\ell}_{1},[\hat{\ell}_{2},\hat{\rho}_{Q}]]\). The former describes gravitational decoherence of either mass independent of the other; it may thus be absorbed into the description of the thermal bath that any realistic source mass will invariably be coupled to. The latter term describes joint decoherence of the masses due to their mutual back-action mediated via the stochastic classical field. This term, \(\mathcal{L}_{G}[\hat{\rho}_{Q}]\approx(\epsilon_{1}\epsilon_{2})(Gm_{1}m_{2}/R^{3})[\hat{x}_{1},[\hat{x}_{2},\hat{\rho}_{Q}]]\), is precisely the form derived in Eq. (8).

_Experimental signature._ We now consider an experimentally relevant example where two massive quantum harmonic oscillators interact with each other via classical gravity. As we will show, their motion is correlated via the classical stochastic gravitational field mediating their interaction, and these correlations offer a qualitatively distinct prediction from one where gravity is assumed quantum (i.e. \(\epsilon=0\)). We take the masses to be one-dimensional quantum harmonic oscillators of frequency \(\omega_{1,2}\), each coupled to independent thermal baths of occupation \(\bar{N}_{1,2}\) at rates \(\gamma_{1,2}\). They are also gravitationally coupled to each other by their close proximity at distance \(R\). This scenario is described by the equation \[\dot{\rho}_{Q}(t)=-\frac{i}{\hbar}[\hat{H}_{\text{eff}},\hat{\rho}_{Q}]+\mathcal{L}\left[\hat{\rho}_{Q}\right], \tag{10}\] where \(\hat{H}_{\text{eff}}=\hat{H}_{Q}+\hat{H}_{G}\), with \(\hat{H}_{Q}=\sum_{i}\hbar\omega_{i}(\hat{a}_{i}^{\dagger}\hat{a}_{i}+\frac{1}{2})\), and \(\mathcal{L}=\mathcal{L}_{G}+\sum_{i}\mathcal{L}_{i}\), where \(\mathcal{L}_{i}[\hat{\rho}_{Q}]=\gamma_{i}(\bar{N}_{i}+1)(\hat{a}_{i}\hat{\rho}_{Q}\hat{a}_{i}^{\dagger}-\frac{1}{2}[\hat{a}_{i}^{\dagger}\hat{a}_{i},\hat{\rho}_{Q}]_{+})+\gamma_{i}\bar{N}_{i}(\hat{a}_{i}^{\dagger}\hat{\rho}_{Q}\hat{a}_{i}-\frac{1}{2}[\hat{a}_{i}\hat{a}_{i}^{\dagger},\hat{\rho}_{Q}]_{+})\) describes the realistic coupling of any experimental source mass to a thermal bath of average thermal occupation \(\bar{N}_{i}\) [45].

Figure 1: Cross-correlation of the motion of a pair of quantum oscillators mediated by stochastic classical gravity. Here the label “Quantum” corresponds to setting \(\epsilon=0\), in which case gravity is a quantum field; “Classical” corresponds to \(\epsilon=1\). The two oscillators are identical with resonance frequency \(\omega=2\pi\cdot 100\,\mathrm{Hz}\) and damping rate \(\gamma=2\pi\cdot 10^{-3}\,\mathrm{Hz}\). The red traces depict \(|S_{x_{1}x_{2}}|\) while blue depicts the phase \(\arg S_{x_{1}x_{2}}\). The effect of the gravitational Lindbladian is an increase in the noise, by a factor \(\omega/2\gamma(2\bar{N}+1)\) compared to the quantum case, and an additional \(\pi\) phase shift at a detuning \(\Delta\Omega\approx 2\gamma(2\bar{N}+1)\) from resonance.
Here the gravitational interaction Hamiltonian \(\hat{H}_{G}\approx Gm_{1}m_{2}\hat{x}_{1}\hat{x}_{2}/(2R^{3})\) is obtained by evaluating Eq. (7) with the mass density \(\hat{f}(x)=m_{1}\delta(x+\frac{R}{2}-\hat{x}_{1})+m_{2}\delta(x-\frac{R}{2}-\hat{x}_{2})\): first performing a small-distance expansion of the Coulombic denominator, and then performing the integrals. (Note that here we have dropped a singular self-interaction term whose origin is the point-mass approximation of the center-of-mass degree of freedom of the oscillator; for an oscillator that is formed of an elastic continuum, these terms will excite internal modes.) The primary effect of the gravitational interaction is the correlation developed via the gravity Lindbladian \(\mathcal{L}_{G}\). The simplest experimentally accessible observable sensitive to a non-zero value of \(\epsilon\) is the cross-correlation of the quantized displacements \(\hat{x}_{1,2}\) of the two gravitating oscillators. This can be inferred from the outcomes of continuous measurements of the displacements, say by interferometric displacement measurements. (These measurement schemes cause back-action [46] whose effect is an increased apparent temperature [47; 48], which can thus be absorbed into the bath occupations \(\bar{N}_{i}\); back-action-evading measurements may also be imagined [49; 50; 51].) The theory presented above can produce a concrete prediction for the cross-correlation spectrum of the displacement \(S_{x_{1}x_{2}}[\Omega]\). We do so by first mapping the master equation [Eq. (10)] into a partial differential equation (PDE) for the characteristic function \(\chi(\xi,\xi^{*})=\text{Tr}\!\left[\hat{\rho}_{Q}(t)\hat{D}(\xi)\right]\), where \(\hat{D}(\xi)=\prod_{i}\exp\!\left[\xi_{i}\hat{a}_{i}^{\dagger}-\xi_{i}^{*}\hat{a}_{i}\right]\) form a complete orthogonal basis for the operators on the joint Hilbert space of the two oscillators [52]. Since the resulting PDE is of second order, and the initial thermal state of the oscillators is Gaussian in \(\chi(\xi,\xi^{*},0)\), a Gaussian ansatz suffices to solve for \(\chi(\xi,\xi^{*},t)\). From this solution, we obtain equal-time correlations of the coordinates and momenta of the oscillators. Finally, employing the quantum regression theorem [45], we get the unequal-time correlations, whose Fourier transform gives the cross-correlation spectrum (see Appendix E.2 for the gory details). For identical oscillators, the result is (Appendix E.3 contains the full expressions) \[S_{x_{1}x_{2}}[\Omega]\approx-\mu\left(\frac{d}{\omega}\right)^{2}\begin{cases}\epsilon+\frac{2\bar{N}+1}{Q};&\Omega\ll\omega\\ \left(\frac{\omega}{\Omega}\right)^{4}\left(\epsilon-\frac{2\bar{N}+1}{Q}\right);&\Omega\gg\omega,\end{cases} \tag{11}\] where \(Q=\omega/(2\gamma)\) is the quality factor of the oscillator, and \(\mu=(2G/R^{3})(m/\omega)\). Importantly, gravitationally interacting oscillators with quality factor \(Q\gtrsim\bar{N}/\epsilon\) can be sensitive probes of whether \(\epsilon\) is zero or not. Figure 1 depicts the cross-correlation \(S_{x_{1}x_{2}}\) for a typical scenario of two identical mechanical oscillators coupled via gravity, under the hypothesis that gravity is quantum (\(\epsilon=0\)) or that it is a classical stochastic field, with \(\epsilon=1\) chosen arbitrarily.
In the quantum case, the correlation between the two oscillators is purely via the quantized Newtonian interaction Hamiltonian \(\hat{H}_{G}\), resulting in correlated motion across all frequencies, with most of the motion concentrated around the common resonance. In the classical stochastic case, the joint interaction mediated by \(\mathcal{L}_{G}\) apparently produces anti-correlated motion away from resonance, accompanied by a characteristic phase shift arg\(\,S_{x_{1}x_{2}}\). This phase shift persists for _all_ non-zero values of \(\epsilon\) (unlike the anti-correlated feature in \(|S_{x_{1}x_{2}}|\), which can diminish with \(\epsilon\)) and is thus a robust experimental signature that gravity is classical. Thus, careful measurements of the cross-correlation of two gravitationally coupled highly coherent mechanical oscillators can distinguish between the hypothesis that gravity is classical or quantum. _Conclusion._ We have derived a natural and pathology-free theory of two quantized bodies interacting via classical gravity, and produced an experimentally testable implication of this theory that can distinguish whether gravity is classical or not. Testing this prediction calls for an experiment where two highly coherent massive oscillators are coupled to each other via gravity. Importantly, they do not need to be in superposition states [26; 27]. Further, the experimental signatures are distinct from the effects of extraneous decoherence. Although it is only the Newtonian limit of a stochastic classical gravitational interaction that has been explicated here, the more fundamental question of producing a fully covariant generalization, or indeed, whether such a version is sensible, is outstanding. Equation (1) can conceivably be generalized to spacetimes foliated by spacelike surfaces, since the extension from the case of a fixed time coordinate to a general timelike coordinate is straightforward [53]. Any further generalization -- for example, to elucidate the correct modification of general relativity beyond the semi-classical approximation, as achieved by Eq. (5) for the Newtonian limit -- will require a conceptual leap.
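As a quick numerical illustration of the prediction in Eq. (11), the sketch below evaluates the limiting prefactors of \(S_{x_{1}x_{2}}\) under both hypotheses, using the oscillator parameters of Fig. 1; the thermal occupation \(\bar{N}\) is an assumed value, since it is not quoted here.

```python
import numpy as np

omega = 2 * np.pi * 100.0   # resonance frequency [rad/s], as in Fig. 1
gamma = 2 * np.pi * 1e-3    # damping rate [rad/s], as in Fig. 1
Nbar = 10.0                 # bath occupation (assumed)
Q = omega / (2 * gamma)     # quality factor, 5e4 here

for eps, label in [(0.0, "quantum, eps=0"), (1.0, "classical, eps=1")]:
    low = -(eps + (2 * Nbar + 1) / Q)    # Eq. (11) prefactor, Omega << omega
    high = -(eps - (2 * Nbar + 1) / Q)   # Eq. (11) prefactor for Omega >> omega
    print(f"{label}: sign below resonance {np.sign(low):+.0f}, "
          f"sign far above resonance {np.sign(high):+.0f}")
```

For these numbers \(Q\gg(2\bar{N}+1)/\epsilon\), so the relative sign of the two asymptotes differs between the two hypotheses, consistent with the distinct phase structure shown in Fig. 1.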
2305.00592
Automorphism groups of some 3-dimensional Leibniz algebras
Let $L$ be an algebra over a field $F$ with the binary operations $+$ and $[,]$. Then $L$ is called a left Leibniz algebra if it satisfies the left Leibniz identity: $[[a,b],c]=[a,[b,c]]-[b,[a,c]]$ for all elements $a,b,c\in L$. A linear transformation $f$ of $L$ is called an endomorphism of $L$, if $f([a,b])=[f(a),f(b)]$ for all elements $a,b\in L$. A bijective endomorphism of $L$ is called an automorphism of $L$. It is easy to show that the set of all automorphisms of the Leibniz algebra is a group with respect to the operation of multiplication of automorphisms. The description of the structure of the automorphism groups of Leibniz algebras is one of the natural and important problems of the general Leibniz algebra theory. The main goal of this article is to describe the structure of the automorphism group of a certain type of nilpotent three-dimensional Leibniz algebras.
L. A. Kurdachenko, O. O. Pypka, M. M. Semko
2023-04-30T22:31:44Z
http://arxiv.org/abs/2305.00592v1
# Automorphism groups of some 3-dimensional Leibniz algebras

###### Abstract

Let \(L\) be an algebra over a field \(F\) with the binary operations \(+\) and \([,]\). Then \(L\) is called a left Leibniz algebra if it satisfies the left Leibniz identity: \([[a,b],c]=[a,[b,c]]-[b,[a,c]]\) for all elements \(a,b,c\in L\). A linear transformation \(f\) of \(L\) is called an endomorphism of \(L\), if \(f([a,b])=[f(a),f(b)]\) for all elements \(a,b\in L\). A bijective endomorphism of \(L\) is called an automorphism of \(L\). It is easy to show that the set of all automorphisms of the Leibniz algebra is a group with respect to the operation of multiplication of automorphisms. The description of the structure of the automorphism groups of Leibniz algebras is one of the natural and important problems of the general Leibniz algebra theory. The main goal of this article is to describe the structure of the automorphism group of a certain type of nilpotent three-dimensional Leibniz algebras.

**Key Words:** Leibniz algebra, automorphism group.

**2020 MSC:** 17A32, 17A36.

## 1 Introduction.

Let \(L\) be an algebra over a field \(F\) with the binary operations \(+\) and \([,]\). Then \(L\) is called a _left Leibniz algebra_ if it satisfies the left Leibniz identity: \[[[a,b],c]=[a,[b,c]]-[b,[a,c]]\] for all elements \(a,b,c\in L\). Leibniz algebras first appeared in the paper of A. Blokh [2], but the term "Leibniz algebra" appears in the book of J.-L. Loday [11] and the article of J.-L. Loday [12]. In [13], J.-L. Loday and T. Pirashvili began the real study of the properties of Leibniz algebras. The theory of Leibniz algebras has developed very intensively in many different directions. Some of the results of this theory are presented in the book [1]. Note that Lie algebras are a special case of Leibniz algebras. Conversely, if \(L\) is a Leibniz algebra in which \([a,a]=0\) for every element \(a\in L\), then it is a Lie algebra. Thus, Lie algebras can be characterized as anticommutative Leibniz algebras. At the same time, there is a very significant difference between Lie algebras and Leibniz algebras (see, for example, the survey papers [3, 4, 9, 14]).

Let \(L\) be a Leibniz algebra. As usual, a linear transformation \(f\) of \(L\) is called an _endomorphism_ of \(L\) if \(f([a,b])=[f(a),f(b)]\) for all elements \(a,b\in L\). Clearly, a product of two endomorphisms of \(L\) is also an endomorphism, so that the set of all endomorphisms of \(L\) is a semigroup under multiplication. We note that the sum of two endomorphisms of \(L\) is not necessarily an endomorphism of \(L\), so that we cannot speak about the endomorphism ring of \(L\). As usual, a bijective endomorphism of \(L\) is called an _automorphism_ of \(L\). It is not hard to show that the set \(Aut_{[,]}(L)\) of all automorphisms of \(L\) is a group under multiplication (see, for example, [7]).

As for other algebraic structures, determining the structure of the automorphism groups of Leibniz algebras is one of the important problems of this theory. It should be noted that the automorphism groups of Leibniz algebras have hardly been studied, and it is natural to start with Leibniz algebras whose structure has been described quite fully. A description of the structure of the automorphism groups of infinite-dimensional cyclic Leibniz algebras was obtained in [10], and that of finite-dimensional cyclic Leibniz algebras in [7]. The question of the automorphism groups of Leibniz algebras of low dimension then arises naturally.
Unlike Lie algebras, the situation with Leibniz algebras of dimension 3 is very diverse. Leibniz algebras of dimension 3 have largely been classified; their most detailed description can be found in [6]. In [8], the description of the automorphism groups of Leibniz algebras of dimension 3 was started. This description is quite extensive: the automorphism groups of only two types of nilpotent Leibniz algebras of dimension 3 are described in [8]. This article is devoted to another type of nilpotent Leibniz algebras.

## 2 Some preliminary results and remarks.

Let \(L\) be a Leibniz algebra over a field \(F\). Then \(L\) is called _abelian_ if \([a,b]=0\) for all elements \(a,b\in L\). In particular, an abelian Leibniz algebra is a Lie algebra. If \(A,B\) are subspaces of \(L\), then \([A,B]\) will denote the subspace generated by all elements \([a,b]\) where \(a\in A\), \(b\in B\). A subspace \(A\) of \(L\) is called a _subalgebra_ of \(L\) if \([a,b]\in A\) for every \(a,b\in A\). A subalgebra \(A\) of \(L\) is called a _left_ (respectively _right_) _ideal_ of \(L\) if \([b,a]\in A\) (respectively \([a,b]\in A\)) for every \(a\in A\), \(b\in L\). A subalgebra \(A\) of \(L\) is called an _ideal_ of \(L\) (more precisely, a _two-sided ideal_) if it is both a left ideal and a right ideal. Every Leibniz algebra \(L\) possesses the following specific ideal. Denote by \(Leib(L)\) the subspace generated by the elements \([a,a]\), \(a\in L\). It is not hard to prove that \(Leib(L)\) is an ideal of \(L\), called the _Leibniz kernel_ of \(L\). We note the following important property of the Leibniz kernel: \([[a,a],x]=[a,[a,x]]-[a,[a,x]]=0\). The _left_ (respectively _right_) _center_ \(\zeta^{\rm left}(L)\) (respectively \(\zeta^{\rm right}(L)\)) of a Leibniz algebra \(L\) is defined by the rule \[\zeta^{\rm left}(L)=\{x\in L|\ [x,y]=0\ {\rm for\ each\ element}\ y\in L\}\] (respectively \[\zeta^{\rm right}(L)=\{x\in L|\ [y,x]=0\ {\rm for\ each\ element}\ y\in L\}).\] It is not hard to prove that the left center of \(L\) is an ideal, but this is not true for the right center. Moreover, \(Leib(L)\leqslant\zeta^{\rm left}(L)\), so that \(L/\zeta^{\rm left}(L)\) is a Lie algebra. The right center is a subalgebra of \(L\) and, in general, the left and right centers are different (see, for example, [5]). The _center_ \(\zeta(L)\) of \(L\) is defined by the rule \[\zeta(L)=\{x\in L|\ [x,y]=0=[y,x]\ {\rm for\ each\ element}\ y\in L\}.\] The center is an ideal of \(L\). Now we define the _upper central series_ \[\langle 0\rangle=\zeta_{0}(L)\leqslant\zeta_{1}(L)\leqslant\ldots\leqslant\zeta_{\alpha}(L)\leqslant\zeta_{\alpha+1}(L)\leqslant\ldots\leqslant\zeta_{\eta}(L)=\zeta_{\infty}(L)\] of a Leibniz algebra \(L\) by the following rule: \(\zeta_{1}(L)=\zeta(L)\) is the center of \(L\), and recursively, \(\zeta_{\alpha+1}(L)/\zeta_{\alpha}(L)=\zeta(L/\zeta_{\alpha}(L))\) for all ordinals \(\alpha\), and \(\zeta_{\lambda}(L)=\bigcup_{\mu<\lambda}\zeta_{\mu}(L)\) for limit ordinals \(\lambda\). By definition, each term of this series is an ideal of \(L\).
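To make the asymmetry between the two centers concrete, consider a hypothetical two-dimensional cyclic Leibniz algebra with basis \(\{a,b\}\) and \([a,a]=[a,b]=b\), \([b,a]=[b,b]=0\) (one checks directly that this bracket satisfies the left Leibniz identity). The following short sketch, added here purely for illustration and assuming the sympy library, computes both centers as solution sets of linear systems:

```python
import sympy as sp

# bracket table of a hypothetical 2-dimensional cyclic Leibniz algebra:
# basis (a, b) with [a,a] = b, [a,b] = b and [b,a] = [b,b] = 0
C = {(0, 0): (0, 1), (0, 1): (0, 1), (1, 0): (0, 0), (1, 1): (0, 0)}

x0, x1 = sp.symbols('x0 x1')
x = (x0, x1)

def bracket(u, v):
    # bilinear extension of the bracket table to coefficient pairs
    return tuple(sum(u[i] * v[j] * C[(i, j)][k] for i in range(2) for j in range(2))
                 for k in range(2))

basis = [(1, 0), (0, 1)]
left = sp.linsolve([c for y in basis for c in bracket(x, y)], x0, x1)
right = sp.linsolve([c for y in basis for c in bracket(y, x)], x0, x1)
print("left center: ", left)   # {(0, x1)}   -> the line F*b
print("right center:", right)  # {(-x1, x1)} -> the line F*(a - b)
```

Here the left and right centers are the distinct lines \(Fb\) and \(F(a-b)\), illustrating the remark above.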
Define the _lower central series_ of \(L\) \[L=\gamma_{1}(L)\geqslant\gamma_{2}(L)\geqslant\ldots\geqslant\gamma_{\alpha}(L)\geqslant\gamma_{\alpha+1}(L)\geqslant\ldots\geqslant\gamma_{\tau}(L)=\gamma_{\infty}(L)\] by the following rule: \(\gamma_{1}(L)=L\), \(\gamma_{2}(L)=[L,L]\), and recursively \(\gamma_{\alpha+1}(L)=[L,\gamma_{\alpha}(L)]\) for all ordinals \(\alpha\) and \(\gamma_{\lambda}(L)=\bigcap_{\mu<\lambda}\gamma_{\mu}(L)\) for limit ordinals \(\lambda\). We say that a Leibniz algebra \(L\) is _nilpotent_ if there exists a positive integer \(k\) such that \(\gamma_{k}(L)=\langle 0\rangle\). More precisely, \(L\) is said to be _nilpotent of nilpotency class_ \(c\) if \(\gamma_{c+1}(L)=\langle 0\rangle\) but \(\gamma_{c}(L)\neq\langle 0\rangle\). We denote the nilpotency class of \(L\) by \(ncl(L)\). Let \(L\) be a Leibniz algebra over a field \(F\), \(M\) be a non-empty subset of \(L\) and \(H\) be a subalgebra of \(L\). Put \[Ann^{\rm left}_{H}(M)=\{a\in H|\ [a,M]=\langle 0\rangle\},\] \[Ann^{\rm right}_{H}(M)=\{a\in H|\ [M,a]=\langle 0\rangle\}.\] The subset \(Ann^{\rm left}_{H}(M)\) is called the _left annihilator_ of \(M\) in the subalgebra \(H\), and \(Ann^{\rm right}_{H}(M)\) is called the _right annihilator_ of \(M\) in \(H\). The intersection \[Ann_{H}(M)=Ann^{\rm left}_{H}(M)\cap Ann^{\rm right}_{H}(M)=\{a\in H|\ [a,M]=\langle 0\rangle=[M,a]\}\] is called the _annihilator_ of \(M\) in the subalgebra \(H\). It is not hard to see that all of these subsets are subalgebras of \(L\). Moreover, if \(M\) is an ideal of \(L\), then \(Ann_{L}(M)\) is an ideal of \(L\) (see, for example, [4]). The first type of nilpotent Leibniz algebras of dimension 3 consists of nilpotent Leibniz algebras of nilpotency class 3. There is only one type of such algebras: \[L_{1}=Lei_{1}(3,F)=Fa_{1}\oplus Fa_{2}\oplus Fa_{3},\ {\rm where}\ [a_{1},a_{1}]=a_{2},[a_{1},a_{2}]=a_{3},\] \[[a_{1},a_{3}]=[a_{2},a_{1}]=[a_{2},a_{2}]=[a_{2},a_{3}]=[a_{3},a_{1}]=[a_{3},a_{2}]=[a_{3},a_{3}]=0.\] This is a cyclic Leibniz algebra, \(Leib(L_{1})=\zeta^{\rm left}(L_{1})=[L_{1},L_{1}]=Fa_{2}\oplus Fa_{3}\), \(\zeta^{\rm right}(L_{1})=\zeta(L_{1})=\gamma_{3}(L_{1})=Fa_{3}\). Suppose now that \(L\) is a nilpotent Leibniz algebra of nilpotency class 2. Of course, we assume that \(L\) is not a Lie algebra. Then the center \(\zeta(L)\) has dimension 2 or 1. Suppose that \(dim_{F}(\zeta(L))=2\). Since \(L\) is not a Lie algebra, there is an element \(a_{1}\) such that \([a_{1},a_{1}]=a_{3}\neq 0\). Since the factor-algebra \(L/\zeta(L)\) has dimension 1 and is therefore abelian, \([L,L]\leqslant\zeta(L)\), so \(a_{3}\in\zeta(L)\). It follows that \([a_{1},a_{3}]=[a_{3},a_{1}]=[a_{3},a_{3}]=0\). Being an abelian algebra of dimension 2, \(\zeta(L)\) has a direct decomposition \(\zeta(L)=Fa_{2}\oplus Fa_{3}\) for some element \(a_{2}\). Thus we come to the following type of nilpotent Leibniz algebra: \[L_{2}=Lei_{2}(3,F)=Fa_{1}\oplus Fa_{2}\oplus Fa_{3},\text{ where }[a_{1},a_{1}]=a_{3},[a_{1},a_{2}]=\] \[[a_{1},a_{3}]=[a_{2},a_{1}]=[a_{2},a_{2}]=[a_{2},a_{3}]=[a_{3},a_{1}]=[a_{3},a_{2}]=[a_{3},a_{3}]=0.\] In other words, \(L_{2}\) is the direct sum of the two ideals \(A=Fa_{1}\oplus Fa_{3}\) and \(B=Fa_{2}\). Moreover, \(A\) is a nilpotent cyclic Leibniz algebra of dimension 2, \(Leib(L_{2})=[L_{2},L_{2}]=Fa_{3}\), \(\zeta^{\text{left}}(L_{2})=\zeta^{\text{right}}(L_{2})=\zeta(L_{2})=Fa_{2}\oplus Fa_{3}\). We note that the structure of the automorphism groups of the Leibniz algebras \(Lei_{1}(3,F)\) and \(Lei_{2}(3,F)\) was described in [8]. A direct computational check of the defining identity for \(L_{1}\) is sketched below.
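Since the brackets of \(L_{1}\) are given explicitly by structure constants, the left Leibniz identity can be verified mechanically. The following minimal sketch (added for illustration; it assumes the numpy library) checks the identity on all basis triples of \(Lei_{1}(3,F)\):

```python
import numpy as np
from itertools import product

# structure constants of L1 = Lei_1(3, F): [a1, a1] = a2, [a1, a2] = a3,
# and all other brackets of basis elements vanish
C = np.zeros((3, 3, 3))
C[0, 0, 1] = 1   # [a1, a1] = a2
C[0, 1, 2] = 1   # [a1, a2] = a3

def bracket(x, y):
    # bilinear extension of the bracket to coefficient vectors
    return np.einsum('i,j,ijk->k', x, y, C)

e = np.eye(3)
# left Leibniz identity [[a,b],c] = [a,[b,c]] - [b,[a,c]] on all basis triples
assert all(
    np.allclose(bracket(bracket(a, b), c),
                bracket(a, bracket(b, c)) - bracket(b, bracket(a, c)))
    for a, b, c in product(e, repeat=3))
print("Lei_1(3, F) satisfies the left Leibniz identity")
```

Replacing the two nonzero entries by `C[0, 0, 2] = C[0, 1, 2] = 1` gives the algebra \(L_{3}\) considered below, and the same check goes through.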
Suppose now that \(L\) is a nilpotent Leibniz algebra, \(ncl(L)=2\) and \(dim_{F}(\zeta(L))=1\). Since \(L\) is not a Lie algebra, there is an element \(a_{1}\) such that \([a_{1},a_{1}]=a_{3}\neq 0\). Since the factor-algebra \(L/\zeta(L)\) is abelian, \(a_{3}\in\zeta(L)\). It follows that \([a_{1},a_{3}]=[a_{3},a_{1}]=[a_{3},a_{3}]=0\). Then \(\zeta(L)=Fa_{3}\). For every element \(x\in L\) we have \([a_{1},x],[x,a_{1}]\in\zeta(L)\leqslant\langle a_{1}\rangle=Fa_{1}\oplus Fa_{3}\). It follows that the subalgebra \(\langle a_{1}\rangle\) is an ideal of \(L\). Since \(dim_{F}(\langle a_{1}\rangle)=2\), \(\langle a_{1}\rangle\neq L\). Choose an element \(b\) such that \(b\not\in\langle a_{1}\rangle\). We have \([b,a_{1}]=\gamma a_{3}\) for some \(\gamma\in F\). If \(\gamma\neq 0\), then put \(b_{1}=\gamma^{-1}b-a_{1}\). Then \([b_{1},a_{1}]=0\). The choice of \(b_{1}\) shows that \(b_{1}\not\in\langle a_{1}\rangle\). It follows that the subalgebra \(Ann^{\text{left}}_{L}(a_{1})\) has dimension 2. Suppose first that \(Ann^{\text{left}}_{L}(a_{1})\) is an abelian subalgebra. Then it has a direct decomposition \(Ann^{\text{left}}_{L}(a_{1})=Fa_{3}\oplus Fb_{2}\) for some element \(b_{2}\), where \([b_{2},b_{2}]=0\). Since \(dim_{F}(\zeta(L))=1\), \(b_{2}\not\in\zeta(L)\). Then \([a_{1},b_{2}]=\lambda a_{3}\) where \(0\neq\lambda\in F\). Put now \(a_{2}=\lambda^{-1}b_{2}\). Thus, we come to the following type of nilpotent Leibniz algebra: \[L_{3}=Lei_{3}(3,F)=Fa_{1}\oplus Fa_{2}\oplus Fa_{3},\text{ where }[a_{1},a_{1}]=[a_{1},a_{2}]=a_{3},\] \[[a_{1},a_{3}]=[a_{2},a_{1}]=[a_{2},a_{2}]=[a_{2},a_{3}]=[a_{3},a_{1}]=[a_{3},a_{2}]=[a_{3},a_{3}]=0.\] In other words, \(L_{3}\) is the direct sum of the ideal \(A=Fa_{1}\oplus Fa_{3}\) and the subalgebra \(B=Fa_{2}\). Moreover, \(A\) is a nilpotent cyclic Leibniz algebra of dimension 2, \(Leib(L_{3})=[L_{3},L_{3}]=\zeta^{\text{right}}(L_{3})=\zeta(L_{3})=Fa_{3}\), \(\zeta^{\text{left}}(L_{3})=Fa_{2}\oplus Fa_{3}\). This article is devoted to the description of this type of nilpotent Leibniz algebras. Here are some general useful properties of automorphism groups of Leibniz algebras. Their proofs are given in the article [8]. **Lemma 2.1**.: _Let \(L\) be a Leibniz algebra over a field \(F\) and \(f\) be an automorphism of \(L\). Then \(f(\zeta^{\text{left}}(L))=\zeta^{\text{left}}(L)\), \(f(\zeta^{\text{right}}(L))=\zeta^{\text{right}}(L)\), \(f(\zeta(L))=\zeta(L)\), \(f([L,L])=[L,L]\)._ **Lemma 2.2**.: _Let \(L\) be a Leibniz algebra over a field \(F\) and \(f\) be an automorphism of \(L\). Then \(f(\zeta_{\alpha}(L))=\zeta_{\alpha}(L)\), \(f(\gamma_{\alpha}(L))=\gamma_{\alpha}(L)\) for all ordinals \(\alpha\). In particular, \(f(\zeta_{\infty}(L))=\zeta_{\infty}(L)\) and \(f(\gamma_{\infty}(L))=\gamma_{\infty}(L)\)._ **Lemma 2.3**.: _Let \(L\) be a Leibniz algebra over a field \(F\) and \(f\) be an endomorphism of \(L\). Then \(f(\gamma_{\alpha}(L))\leqslant\gamma_{\alpha}(L)\) for all ordinals \(\alpha\). In particular, \(f(\gamma_{\infty}(L))\leqslant\gamma_{\infty}(L)\)._ Let \(L\) be a Leibniz algebra over a field \(F\), \(A\) be a subalgebra of \(L\), \(G=Aut_{[,]}(L)\). Put \[C_{G}(A)=\{\alpha\in G|\ \alpha(x)=x\ \text{for every element}\ x\in A\}.\] If \(A\) is an ideal of \(L\), then put \[C_{G}(L/A)=\{\alpha\in G|\ \alpha(x+A)=x+A\ \text{for every element}\ x\in L\}.\] **Lemma 2.4**.: _Let \(L\) be a Leibniz algebra over a field \(F\) and \(G=Aut_{[,]}(L)\).
If \(A\) is a \(G\)-invariant subalgebra, then \(C_{G}(A)\) and \(C_{G}(L/A)\) are normal subgroups of \(G\)._

## 3 Main result.

**Theorem 3.1**.: _Let \(G\) be the automorphism group of the Leibniz algebra \(L_{3}\). Then \(G\) is isomorphic to the subgroup of \(GL_{3}(F)\) consisting of the matrices of the following form:_ \[\left(\begin{array}{ccc}\alpha_{1}&0&0\\ \alpha_{2}&\alpha_{1}+\alpha_{2}&0\\ \alpha_{3}&\beta_{3}&\alpha_{1}^{2}+\alpha_{1}\alpha_{2}\end{array}\right)\] _where \(\alpha_{1}\neq 0\), \(\alpha_{1}+\alpha_{2}\neq 0\). This subgroup is a semidirect product of a normal subgroup \(S_{3}(L,F)\), which is isomorphic to the subgroup of \(GL_{3}(F)\) consisting of the matrices of the form_ \[\left(\begin{array}{ccc}1&0&0\\ \alpha_{2}&1+\alpha_{2}&0\\ \alpha_{3}&\beta_{3}&1+\alpha_{2}\end{array}\right)\] _and a subgroup \(D_{3}(L,F)\), consisting of the matrices of the form_ \[\left(\begin{array}{ccc}\sigma&0&0\\ 0&\sigma&0\\ 0&0&\sigma^{2}\end{array}\right).\] _In particular, \(D_{3}(L,F)\) is isomorphic to the multiplicative group of the field \(F\). Furthermore, \(S_{3}(L,F)\) is a semidirect product of a subgroup \(T_{3}(L,F)\), which is normal in \(G\) and isomorphic to the subgroup of \(GL_{3}(F)\) consisting of the matrices of the form_ \[\left(\begin{array}{ccc}1&0&0\\ 0&1&0\\ \alpha_{3}&\beta_{3}&1\end{array}\right),\] _and a subgroup \(J_{3}(L,F)\), which is isomorphic to the subgroup of \(GL_{3}(F)\) consisting of the matrices of the form_ \[\left(\begin{array}{ccc}1&0&0\\ \lambda&1+\lambda&0\\ 0&0&1+\lambda\end{array}\right).\] _The subgroup \(T_{3}(L,F)\) is isomorphic to the direct product of two copies of the additive group of the field \(F\), and the subgroup \(J_{3}(L,F)\) is isomorphic to the multiplicative group of the field \(F\)._

Proof. Let \(L=Lei_{3}(3,F)\), \(f\in Aut_{[,]}(L)\). By Lemma 2.1, \(f(Fa_{3})=Fa_{3}\), \(f(Fa_{2}\oplus Fa_{3})=Fa_{2}\oplus Fa_{3}\), so that \[f(a_{1})=\alpha_{1}a_{1}+\alpha_{2}a_{2}+\alpha_{3}a_{3},\] \[f(a_{2})=\beta_{2}a_{2}+\beta_{3}a_{3}.\] Then \[f(a_{3})=f([a_{1},a_{1}])=[f(a_{1}),f(a_{1})]=\] \[[\alpha_{1}a_{1}+\alpha_{2}a_{2}+\alpha_{3}a_{3},\alpha_{1}a_{1}+\alpha_{2}a_{2}+\alpha_{3}a_{3}]=\] \[\alpha_{1}^{2}[a_{1},a_{1}]+\alpha_{1}\alpha_{2}[a_{1},a_{2}]=\alpha_{1}^{2}a_{3}+\alpha_{1}\alpha_{2}a_{3}=(\alpha_{1}^{2}+\alpha_{1}\alpha_{2})a_{3};\] \[f(a_{3})=f([a_{1},a_{2}])=[f(a_{1}),f(a_{2})]=[\alpha_{1}a_{1}+\alpha_{2}a_{2}+\alpha_{3}a_{3},\beta_{2}a_{2}+\beta_{3}a_{3}]=\] \[\alpha_{1}\beta_{2}[a_{1},a_{2}]=\alpha_{1}\beta_{2}a_{3}.\] Thus, we obtain the equality \(\alpha_{1}(\alpha_{1}+\alpha_{2})=\alpha_{1}\beta_{2}\). Being an automorphism, \(f\) is a non-degenerate linear transformation, so that \(\alpha_{1}\neq 0\), and it follows that \(\alpha_{1}+\alpha_{2}=\beta_{2}\). Non-degeneracy also forces \(\alpha_{1}+\alpha_{2}\neq 0\). Thus, an automorphism \(f\) has, in the basis \(\{a_{1},a_{2},a_{3}\}\), the following matrix: \[\left(\begin{array}{ccc}\alpha_{1}&0&0\\ \alpha_{2}&\alpha_{1}+\alpha_{2}&0\\ \alpha_{3}&\beta_{3}&\alpha_{1}^{2}+\alpha_{1}\alpha_{2}\end{array}\right)\] Denote by \(\Xi\) the canonical monomorphism of \(End_{[,]}(L)\) into \(M_{3}(F)\). Put \[S=\{f|\ f\in End(L),f(a_{1})\in a_{1}+\zeta^{\rm left}(L)\}=C_{End(L)}(L/\zeta^{\rm left}(L)).\] If \(f\in S\cap Aut_{[,]}(L)\), then \(f(a_{1})=a_{1}+\alpha_{2}a_{2}+\alpha_{3}a_{3}\), \(f(a_{2})=(1+\alpha_{2})a_{2}+\beta_{3}a_{3}\), \(f(a_{3})=(1+\alpha_{2})a_{3}\).
If \(x\) is an arbitrary element of \(L\), \(x=\xi_{1}a_{1}+\xi_{2}a_{2}+\xi_{3}a_{3}\), then \[f(x)=\xi_{1}f(a_{1})+\xi_{2}f(a_{2})+\xi_{3}f(a_{3})=\] \[\xi_{1}a_{1}+\xi_{1}\alpha_{2}a_{2}+\xi_{1}\alpha_{3}a_{3}+\xi_{2}((1+\alpha_{2})a_{2}+\beta_{3}a_{3})+\xi_{3}(1+\alpha_{2})a_{3}=\] \[\xi_{1}a_{1}+(\xi_{1}\alpha_{2}+\xi_{2}+\xi_{2}\alpha_{2})a_{2}+(\xi_{1}\alpha_{3}+\xi_{2}\beta_{3}+\xi_{3}(1+\alpha_{2}))a_{3}.\] Conversely, let \(\lambda,\mu,\nu\) be elements of \(F\) and let \(v_{\lambda,\mu,\nu}\) be the linear transformation of \(L\) defined by the rule: if \(x=\xi_{1}a_{1}+\xi_{2}a_{2}+\xi_{3}a_{3}\), then \[v_{\lambda,\mu,\nu}(x)=\xi_{1}a_{1}+(\xi_{1}\lambda+\xi_{2}+\xi_{2}\lambda)a_{2}+(\xi_{1}\mu+\xi_{2}\nu+\xi_{3}(1+\lambda))a_{3}.\] Let \(x,y\) be arbitrary elements of \(L\), \(x=\xi_{1}a_{1}+\xi_{2}a_{2}+\xi_{3}a_{3}\), \(y=\eta_{1}a_{1}+\eta_{2}a_{2}+\eta_{3}a_{3}\), where \(\xi_{1},\xi_{2},\xi_{3},\eta_{1},\eta_{2},\eta_{3}\in F\). Then \[[x,y]=[\xi_{1}a_{1}+\xi_{2}a_{2}+\xi_{3}a_{3},\eta_{1}a_{1}+\eta_{2}a_{2}+\eta_{3}a_{3}]=\] \[\xi_{1}\eta_{1}[a_{1},a_{1}]+\xi_{1}\eta_{2}[a_{1},a_{2}]=\xi_{1}(\eta_{1}+\eta_{2})a_{3};\] \[v_{\lambda,\mu,\nu}([x,y])=v_{\lambda,\mu,\nu}(\xi_{1}(\eta_{1}+\eta_{2})a_{3})=\xi_{1}(\eta_{1}+\eta_{2})v_{\lambda,\mu,\nu}(a_{3})=\] \[\xi_{1}(\eta_{1}+\eta_{2})(1+\lambda)a_{3};\] \[[v_{\lambda,\mu,\nu}(x),v_{\lambda,\mu,\nu}(y)]=\] \[[\xi_{1}a_{1}+(\xi_{1}\lambda+\xi_{2}+\xi_{2}\lambda)a_{2}+(\xi_{1}\mu+\xi_{2}\nu+\xi_{3}(1+\lambda))a_{3},\] \[\eta_{1}a_{1}+(\eta_{1}\lambda+\eta_{2}+\eta_{2}\lambda)a_{2}+(\eta_{1}\mu+\eta_{2}\nu+\eta_{3}(1+\lambda))a_{3}]=\] \[\xi_{1}\eta_{1}[a_{1},a_{1}]+\xi_{1}(\eta_{1}\lambda+\eta_{2}+\eta_{2}\lambda)[a_{1},a_{2}]=(\xi_{1}\eta_{1}+\xi_{1}(\eta_{1}\lambda+\eta_{2}+\eta_{2}\lambda))a_{3}=\] \[\xi_{1}(\eta_{1}+\eta_{1}\lambda+\eta_{2}+\eta_{2}\lambda)a_{3}=\xi_{1}(\eta_{1}+\eta_{2})(1+\lambda)a_{3},\] so that \(v_{\lambda,\mu,\nu}([x,y])=[v_{\lambda,\mu,\nu}(x),v_{\lambda,\mu,\nu}(y)]\). Thus every \(v_{\lambda,\mu,\nu}\) is an endomorphism of \(L\), and it is an automorphism whenever \(1+\lambda\neq 0\), since its matrix has determinant \((1+\lambda)^{2}\). This shows that \(S\leqslant Aut_{[,]}(L)\). Moreover, by Lemma 2.4, \(S\) is a normal subgroup of \(Aut_{[,]}(L)\). Furthermore, put \(S_{3}(L,F)=\Xi(S)\). Then \(S_{3}(L,F)\) is a subgroup of the group \(T_{3}(F)\), which consists of the matrices of the following form: \[\left(\begin{array}{ccc}1&0&0\\ \alpha_{2}&1+\alpha_{2}&0\\ \alpha_{3}&\beta_{3}&1+\alpha_{2}\end{array}\right).\] Let \[T=\{f|\ f\in End(L),f(a_{1})\in a_{1}+[L,L],f(a_{2})\in a_{2}+[L,L]\}=\] \[C_{End(L)}(L/[L,L]).\] If \(f\in T\cap Aut_{[,]}(L)\), then \(f(a_{1})=a_{1}+\alpha_{3}a_{3}\), \(f(a_{2})=a_{2}+\beta_{3}a_{3}\), \(f(a_{3})=a_{3}\). If \(x\) is an arbitrary element of \(L\), \(x=\xi_{1}a_{1}+\xi_{2}a_{2}+\xi_{3}a_{3}\), then \[f(x)=\xi_{1}f(a_{1})+\xi_{2}f(a_{2})+\xi_{3}f(a_{3})=\] \[\xi_{1}a_{1}+\xi_{1}\alpha_{3}a_{3}+\xi_{2}a_{2}+\xi_{2}\beta_{3}a_{3}+\xi_{3}a_{3}=\] \[\xi_{1}a_{1}+\xi_{2}a_{2}+(\xi_{1}\alpha_{3}+\xi_{2}\beta_{3}+\xi_{3})a_{3}.\] Conversely, let \(\lambda,\mu\) be elements of \(F\) and let \(z_{\lambda,\mu}\) be the linear transformation of \(L\) defined by the rule: if \(x=\xi_{1}a_{1}+\xi_{2}a_{2}+\xi_{3}a_{3}\), then \[z_{\lambda,\mu}(x)=\xi_{1}a_{1}+\xi_{2}a_{2}+(\xi_{1}\lambda+\xi_{2}\mu+\xi_{3})a_{3}.\] Let \(x,y\) be arbitrary elements of \(L\), \(x=\xi_{1}a_{1}+\xi_{2}a_{2}+\xi_{3}a_{3}\), \(y=\eta_{1}a_{1}+\eta_{2}a_{2}+\eta_{3}a_{3}\), where \(\xi_{1},\xi_{2},\xi_{3},\eta_{1},\eta_{2},\eta_{3}\in F\).
Then \[[x,y]=[\xi_{1}a_{1}+\xi_{2}a_{2}+\xi_{3}a_{3},\eta_{1}a_{1}+\eta_{2}a_{2}+\eta_{3}a_{3}]=\xi_{1}(\eta_{1}+\eta_{2})a_{3};\] \[z_{\lambda,\mu}([x,y])=z_{\lambda,\mu}(\xi_{1}(\eta_{1}+\eta_{2})a_{3})=\xi_{1}(\eta_{1}+\eta_{2})z_{\lambda,\mu}(a_{3})=\xi_{1}(\eta_{1}+\eta_{2})a_{3};\] \[[z_{\lambda,\mu}(x),z_{\lambda,\mu}(y)]=\] \[[\xi_{1}a_{1}+\xi_{2}a_{2}+(\xi_{1}\lambda+\xi_{2}\mu+\xi_{3})a_{3},\eta_{1}a_{1}+\eta_{2}a_{2}+(\eta_{1}\lambda+\eta_{2}\mu+\eta_{3})a_{3}]=\] \[\xi_{1}\eta_{1}[a_{1},a_{1}]+\xi_{1}\eta_{2}[a_{1},a_{2}]=\xi_{1}(\eta_{1}+\eta_{2})a_{3},\] so that \(z_{\lambda,\mu}([x,y])=[z_{\lambda,\mu}(x),z_{\lambda,\mu}(y)]\). This shows that \(T\) is a subgroup of \(Aut_{[,]}(L)\). Furthermore, put \(T_{3}(L,F)=\Xi(T)\). Then \(T_{3}(L,F)\) is a subgroup of the group \(UT_{3}(F)\) of all unitriangular matrices over the field \(F\), consisting of the matrices of the following form: \[\left(\begin{array}{ccc}1&0&0\\ 0&1&0\\ \alpha_{3}&\beta_{3}&1\end{array}\right).\] It is not hard to see that \(T_{3}(L,F)\) is abelian and isomorphic to the direct product of two copies of the additive group of the field \(F\). Clearly \(\Xi(Aut_{[,]}(L))\cap UT_{3}(F)=T_{3}(L,F)\), and it follows that the subgroup \(T\) is normal in \(Aut_{[,]}(L)\). Let \[J=\{f|\:f\in S,f(a_{1})=a_{1}+\lambda a_{2},f(a_{2})=(1+\lambda)a_{2},f(a_{3})=(1+\lambda)a_{3},\lambda\in F\}.\] Put \(J_{3}(L,F)=\Xi(J)\). Then \(J_{3}(L,F)\) is the subset of \(T_{3}(F)\) consisting of the matrices of the following form: \[\left(\begin{array}{ccc}1&0&0\\ \lambda&1+\lambda&0\\ 0&0&1+\lambda\end{array}\right).\] The equality \[\left(\begin{array}{ccc}1&0&0\\ \lambda&1+\lambda&0\\ 0&0&1+\lambda\end{array}\right)\left(\begin{array}{ccc}1&0&0\\ \mu&1+\mu&0\\ 0&0&1+\mu\end{array}\right)=\] \[\left(\begin{array}{ccc}1&0&0\\ (1+\lambda)(1+\mu)-1&(1+\lambda)(1+\mu)&0\\ 0&0&(1+\lambda)(1+\mu)\end{array}\right)\] shows that \(J_{3}(L,F)\) is a subgroup of \(S_{3}(L,F)\). Moreover, it is not hard to see that \(J_{3}(L,F)\) is isomorphic to the multiplicative group of the field \(F\). Furthermore, the matrix equation \[\left(\begin{array}{ccc}1&0&0\\ \alpha_{2}&1+\alpha_{2}&0\\ \alpha_{3}&\beta_{3}&1+\alpha_{2}\end{array}\right)=\left(\begin{array}{ccc}1&0&0\\ 0&1&0\\ x&y&1\end{array}\right)\left(\begin{array}{ccc}1&0&0\\ z&1+z&0\\ 0&0&1+z\end{array}\right)=\] \[\left(\begin{array}{ccc}1&0&0\\ z&1+z&0\\ x+yz&y+yz&1+z\end{array}\right)\] has solutions, namely \(z=\alpha_{2}\), \(y=\beta_{3}(1+\alpha_{2})^{-1}\) and \(x=\alpha_{3}-yz\) (recall that \(1+\alpha_{2}\neq 0\)). This proves that \(S_{3}(L,F)=T_{3}(L,F)J_{3}(L,F)\) and therefore \(S=TJ\). Clearly the intersection \(T\cap J\) is trivial. Let \[D=\{f|\ f\in Aut_{[,]}(L),f(a_{1})=\sigma a_{1},f(a_{2})=\sigma a_{2},\sigma\in F\}.\] By what was proved above, \(f(a_{3})=\sigma^{2}a_{3}\). Put \(D_{3}(L,F)=\Xi(D)\). Then \(D_{3}(L,F)\) is the subset of \(T_{3}(F)\) consisting of the matrices of the following form: \[\left(\begin{array}{ccc}\sigma&0&0\\ 0&\sigma&0\\ 0&0&\sigma^{2}\end{array}\right).\] The equality \[\left(\begin{array}{ccc}\sigma&0&0\\ 0&\sigma&0\\ 0&0&\sigma^{2}\end{array}\right)\left(\begin{array}{ccc}\nu&0&0\\ 0&\nu&0\\ 0&0&\nu^{2}\end{array}\right)=\left(\begin{array}{ccc}\sigma\nu&0&0\\ 0&\sigma\nu&0\\ 0&0&\sigma^{2}\nu^{2}\end{array}\right)\] shows that \(D_{3}(L,F)\) is a subgroup of \(\Xi(Aut_{[,]}(L))\). Moreover, it is not hard to see that \(D_{3}(L,F)\) is isomorphic to the multiplicative group of the field \(F\).
Furthermore, the equality \[\left(\begin{array}{ccc}\alpha_{1}&0&0\\ \alpha_{2}&\alpha_{1}+\alpha_{2}&0\\ \alpha_{3}&\beta_{3}&\alpha_{1}^{2}+\alpha_{1}\alpha_{2}\end{array}\right)=\] \[\left(\begin{array}{ccc}1&0&0\\ \alpha_{2}\alpha_{1}^{-1}&1+\alpha_{2}\alpha_{1}^{-1}&0\\ \alpha_{3}\alpha_{1}^{-1}&\beta_{3}\alpha_{1}^{-1}&1+\alpha_{2}\alpha_{1}^{-1}\end{array}\right)\left(\begin{array}{ccc}\alpha_{1}&0&0\\ 0&\alpha_{1}&0\\ 0&0&\alpha_{1}^{2}\end{array}\right)\] proves that \(\Xi(Aut_{[,]}(L))=S_{3}(L,F)D_{3}(L,F)\) and therefore \(Aut_{[,]}(L)=SD\). Clearly the intersection \(S\cap D\) is trivial. Thus, we obtain that \(Aut_{[,]}(L)=S\rtimes D\) with \(D\cong F^{\times}\), and \(S=T\rtimes J\), where moreover \(T\) is normal in \(Aut_{[,]}(L)\), \(T\cong F_{+}\times F_{+}\) and \(J\cong F^{\times}\). \(\square\)
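As a sanity check on Theorem 3.1, the bracket-preservation condition can be verified symbolically for a generic matrix of the displayed form. The sketch below is an illustration added here (it assumes the sympy library), not part of the original argument:

```python
import sympy as sp
from itertools import product

al1, al2, al3, be3 = sp.symbols('alpha_1 alpha_2 alpha_3 beta_3')

# structure constants of L3 = Lei_3(3, F): [a1,a1] = [a1,a2] = a3, all others zero
C = [[sp.zeros(3, 1) for _ in range(3)] for _ in range(3)]
C[0][0] = sp.Matrix([0, 0, 1])
C[0][1] = sp.Matrix([0, 0, 1])

def bracket(x, y):
    # bilinear extension of the bracket to coordinate columns
    return sum((x[i] * y[j] * C[i][j] for i in range(3) for j in range(3)),
               sp.zeros(3, 1))

# candidate automorphism of Theorem 3.1; column k holds the coordinates of f(a_{k+1})
M = sp.Matrix([[al1, 0, 0],
               [al2, al1 + al2, 0],
               [al3, be3, al1**2 + al1 * al2]])

e = [sp.eye(3)[:, k] for k in range(3)]
for x, y in product(e, repeat=2):
    assert (M * bracket(x, y) - bracket(M * x, M * y)).expand() == sp.zeros(3, 1)
print("f([x, y]) = [f(x), f(y)] holds for all basis pairs")
```

Invertibility of \(M\) then requires \(\det M=\alpha_{1}^{2}(\alpha_{1}+\alpha_{2})^{2}\neq 0\), which reproduces the constraints \(\alpha_{1}\neq 0\) and \(\alpha_{1}+\alpha_{2}\neq 0\) in the statement of the theorem.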
2302.14540
Casimir effect, loop corrections and topological mass generation for interacting real and complex scalar fields in Minkowski spacetime with different conditions
In this paper the Casimir energy density, loop corrections, and the generation of topological mass are investigated for a system consisting of two interacting scalar fields, one real and one complex. The interaction considered is the quartic interaction in the form of a product of the modulus square of the complex field and the square of the real field. In addition, the self-interaction associated with each field is also considered. In this theory, the real scalar field is constrained to obey a periodic condition, while the complex field obeys in one case a quasiperiodic condition and in the other case mixed boundary conditions. The Casimir energy density, loop corrections, and topological mass are evaluated analytically for the massive and massless scalar fields considered. An analysis of the possible different stable vacuum states and the corresponding stability condition is also provided. To better illustrate our investigation, some graphs are also presented. The formalism we use to perform this investigation is the effective potential, written as a loop expansion via the path integral in quantum field theory.
A. J. D. Farias Junior, Herondy F. Santana Mota
2023-02-28T12:54:23Z
http://arxiv.org/abs/2302.14540v3
Casimir effect, loop corrections and topological mass generation for interacting real and complex scalar fields in Minkowski spacetime with different conditions

###### Abstract

In this paper the Casimir energy density, loop corrections, and the generation of topological mass are investigated for a system consisting of two interacting scalar fields, one real and one complex. The interaction considered is the quartic interaction in the form of a product of the modulus square of the complex field and the square of the real field. In addition, the self-interaction associated with each field is also considered. In this theory, the real scalar field is constrained to obey a periodic condition, while the complex field obeys in one case a quasiperiodic condition and in the other case mixed boundary conditions. The Casimir energy density, loop corrections, and topological mass are evaluated analytically for the massive and massless scalar fields considered. An analysis of the possible different stable vacuum states and the corresponding stability condition is also provided. To better illustrate our investigation, some graphs are also presented. The formalism we use to perform this investigation is the effective potential, written as a loop expansion via the path integral in quantum field theory.

## I Introduction

Since its prediction in 1948, the Casimir effect has been considered one of the most interesting physical phenomena. This effect, which is of a purely quantum nature, was predicted by H. Casimir [1]. In its standard form, the Casimir effect consists of an attractive force that arises between two neutral, parallel, perfectly conducting plates placed close to each other in the classical vacuum. This force of attraction is described in the framework of quantum field theory and is due to modifications in the vacuum fluctuations associated with the quantized electromagnetic field, as a consequence of the imposition of Dirichlet boundary conditions on the plates. The phenomenon of the Casimir effect, in the case of the electromagnetic field, has been confirmed by several high-accuracy experiments [2; 3; 4; 5; 6; 7; 8]. It is now known that not only the electromagnetic field but also other fields, such as scalar and fermion fields, exhibit the Casimir effect. Additionally, the Casimir effect may also arise from different boundary conditions. For instance, the Casimir effect associated with a real scalar field subjected to a helix boundary condition with temperature corrections is considered in Ref. [9], subjected to Robin boundary conditions in Ref. [10], and in Ref. [11] the Casimir energy for a real scalar field and the Elko neutral spinor field in a field theory at a Lifshitz fixed point is obtained. Moreover, in Ref. [12] a complex scalar field theory has been considered and the corresponding Casimir energy density in compact spacetimes investigated. A review of the Casimir effect can be found in Ref. [13] (see also [14; 15]). As we have mentioned, boundary conditions play a crucial role in the investigation of the Casimir effect. A very interesting condition is the quasiperiodic one. In Ref. [16] a scalar field under a quasiperiodic condition, inspired by nanotubes, is considered in order to investigate the corresponding Casimir effect. It is found that the Casimir force can be attractive or repulsive depending on the value of the phase related to the quasiperiodic condition.
Another interesting boundary condition that modifies the quantum vacuum fluctuations of a field is the one known as the mixed (Dirichlet-Neumann) boundary condition. The Casimir energy arising as a consequence of the imposition of mixed boundary conditions on a real self-interacting scalar field has been considered in Ref. [17] and in Ref. [18], where a Lorentz-violation scenario is also taken into account. Also, Ref. [19] studies a scalar field with a quartic self-interaction restricted to obey a helix boundary condition. The investigation of interacting quantum fields is important since fields found in nature are always interacting. The interaction mostly considered in the previously mentioned works is a quartic self-interaction. In the framework of two interacting quantum fields, using the effective potential approach, Toms in Ref. [20] has considered two real scalar fields, one twisted and the other untwisted, interacting via the so-called quartic interaction, i.e., the product of the squares of the two fields, in a Euclidean spacetime, in order to investigate symmetry breaking and mass generation as a consequence of the nontrivial topology produced by the periodic and antiperiodic conditions used. The quartic interaction between two real fields is also considered in Ref. [21] for the study of the phenomenon of particle production from oscillating scalar backgrounds in a Friedmann-Lemaitre-Robertson-Walker universe using non-equilibrium quantum field theory. In this type of calculation the renormalization procedure is of particular importance. In this sense, Ref. [22] presents a detailed discussion of the renormalization scheme for a quantum field theory comprising two interacting scalar fields. In order to investigate the Casimir energy density, loop corrections and the generation of topological mass, in the present paper we consider a system consisting of real and complex scalar fields interacting with each other via a quartic interaction, in addition to the self-interactions which are normally present. The real field is always subject to a periodic condition, while the complex field is restricted to obey either a quasiperiodic condition or mixed boundary conditions imposed on two identical, perfectly reflecting parallel planes separated by a distance \(L\). Hence, we shall analyze two scenarios separately: one where a periodic real scalar field interacts with a complex scalar field obeying a quasiperiodic condition, and the other where a periodic real scalar field interacts with a complex scalar field obeying mixed boundary conditions. These two scenarios generalize previous results found in the literature, where self-interacting real scalar fields have been considered under periodic [23; 24] and quasiperiodic [25] conditions, and also under mixed boundary conditions [17; 18]. In this regard, we extend the analysis performed in Ref. [20] to the complex scalar field and to other conditions. The choice of boundary conditions has to be mathematically consistent, and mixed boundary conditions arise naturally, for instance, in quantum gravity, spinor field theory and supergravity [26]. Furthermore, the quasiperiodic condition plays an important role when one considers nanotubes or nanoloops for a quantum field [16]. For example, if the phase angle is zero (the periodic case), the corresponding system describes metallic nanotubes, while the values \(\pm\frac{2\pi}{3}\) for the phase angle correspond to semiconductor nanotubes.
In our investigation, we shall use the path integral formalism and construct the effective potential in terms of loop expansions. This formalism was developed by Jackiw [27] and allows us to obtain the Casimir energy density and loop corrections. The formalism also allows us, in principle, to calculate loop corrections to the mass of the fields as a consequence of the nontrivial topology of the spacetime. In our case, we consider only the one-loop correction to the mass, which is enough to see the generation of topological mass at first order. This paper is organized as follows: in Sec.II we review the main aspects of the path integral formalism used to obtain the effective potential in the case of two interacting quantum fields, one real and the other complex. The interaction considered is the quartic interaction, that is, a product of the modulus square of the complex field and the square of the real field. In addition, we also consider self-interaction contributions for each field. In Sec.III, we consider the real and complex fields interacting with each other, where the real scalar field is subjected to a periodic condition, while the components of the complex field obey a quasiperiodic condition. By using the Riemann zeta function technique, we evaluate the effective potential of the system, the Casimir energy density and the one-loop correction to the mass. We also discuss the conditions for the stability of the vacuum states, which lead to conditions for a positive topological mass. In Sec.IV the components of the complex field are instead subjected to mixed boundary conditions; the Casimir energy density, topological mass and vacuum stability are also investigated there. In Sec.V we present our conclusions. Throughout this paper we use natural units in which both the Planck constant and the speed of light are set to one, \(\hbar=c=1\).

## II Effective potential for interacting real and complex scalar fields

In this section we consider a real scalar field, \(\psi\), interacting with a complex scalar field, \(\phi\). The interaction term has the form of a product of the modulus square of the complex field and the square of \(\psi\). This choice of interaction satisfies the discrete symmetry \(\psi\rightarrow-\psi\), as well as the global symmetry \(\phi\to e^{i\alpha}\phi\). The existence of symmetries in particle physics is crucial for the predictability of a given model and for a better fit with experimental data [28], making it important to consider this type of interaction in the investigation of the Casimir effect, for instance. Note that the quartic self-interaction of each field is also considered. In the path integral approach for the evaluation of the effective potential, it is usual to work with Euclidean spacetime coordinates, with imaginary time [29].
Moreover, the complex field \(\phi\) can be decomposed in terms of its real components as \[\phi=\frac{1}{\sqrt{2}}\left(\varphi_{1}+i\varphi_{2}\right),\qquad\qquad\phi^{*}=\frac{1}{\sqrt{2}}\left(\varphi_{1}-i\varphi_{2}\right).\] The model associated with the system described above, in Euclidean coordinates, is given by the following action: \[S_{E}\left[\psi,\varphi_{i}\right]=\frac{1}{2}\int d^{4}x\left[\psi\Box\psi+\sum_{i=1}^{2}\varphi_{i}\Box\varphi_{i}\right]-\int d^{4}xU\left(\psi,\varphi\right),\] \[U\left(\psi,\varphi\right)=\frac{m^{2}+C_{2}}{2}\psi^{2}+\frac{\mu^{2}}{2}\varphi^{2}+\frac{g}{2}\varphi^{2}\psi^{2}+\frac{\lambda_{\varphi}}{4!}\varphi^{4}+\frac{\lambda_{\psi}+C_{1}}{4!}\psi^{4}+C_{3}, \tag{1}\] where \(m\) is the mass of the real field \(\psi\), \(\mu\) is the mass of the complex field \(\phi\), \(\lambda_{\psi}\) and \(\lambda_{\varphi}\) are the self-interaction coupling constants of the real and complex fields, respectively, and \(g\) is the coupling constant of the interaction between the fields. The parameters \(C_{i}\) are the renormalization constants, and their explicit form will be obtained in the renormalization of the effective potential for each case considered in the next sections. We also make use of the notation \(\varphi^{2}=\varphi_{1}^{2}+\varphi_{2}^{2}\). Furthermore, the d'Alembertian operator, \(\Box\), is written in Euclidean spacetime coordinates as \[\Box=\left(\partial_{\tau}^{2}+\nabla^{2}\right), \tag{2}\] where \(\tau=it\) is the imaginary time. The construction of the effective potential using the path integral approach is described in detail in Refs. [23; 30] (see also [18; 25]). Here we present only the main steps necessary for our purposes. Thus, the action in Eq. (1) is expanded about a fixed background \(\Psi\) and \(\Phi_{i}\), that is, \(\psi=\Psi+\chi\), \(\varphi_{i}=\Phi_{i}+\varrho_{i}\), with \(\chi\) and \(\varrho_{i}\) representing quantum fluctuations. Since we are interested in the real field, we seek the effective potential as a function only of \(\Psi\), i.e., \(V_{\rm eff}\left(\Psi\right)\). It is then unnecessary to shift the components of the complex field \(\phi\), which amounts to setting \(\Phi_{i}=0\)[27]. Hence, we do not need to include counterterms proportional to powers of \(\varphi\) in Eq. (1). A note is in order here: if we try to carry out the shift in \(\phi\), i.e., set \(\Phi_{i}\neq 0\), a cross term appears in the exponential on the r.h.s. of Eq. (5), which makes the calculations needed for the analysis extremely cumbersome. For this reason the simplification \(\Phi_{i}=0\) is justified in the calculations. In the next sections we will in fact impose on the components of the complex field a quasiperiodic condition as well as mixed boundary conditions. This requires that we set \(\Phi_{i}=0\), the only possible choice of a constant field compatible with the imposed conditions. In these cases, the use of Eq. (5) without cross terms becomes more accurate. Therefore, in the next sections, we shall be interested in analyzing the influence of the complex scalar field, subjected to quasiperiodic and mixed conditions, on both the Casimir energy density and the topological mass arising due to the real scalar field subjected to a periodic condition.
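As a quick consistency check on the symmetries mentioned at the beginning of this section, the potential \(U\) of Eq. (1) is invariant under \(\psi\rightarrow-\psi\) and under a global \(SO(2)\) rotation of \((\varphi_{1},\varphi_{2})\), which realizes \(\phi\rightarrow e^{i\alpha}\phi\). The minimal sketch below (added for illustration; it assumes the sympy library and drops the renormalization constants \(C_{i}\)) verifies both invariances symbolically:

```python
import sympy as sp

psi, p1, p2, alpha = sp.symbols('psi varphi_1 varphi_2 alpha', real=True)
m, mu, g, lam_psi, lam_phi = sp.symbols('m mu g lambda_psi lambda_varphi', positive=True)

def U(psi, p1, p2):
    # classical potential of Eq. (1) with the renormalization constants C_i dropped
    ph2 = p1**2 + p2**2                      # varphi^2 = varphi_1^2 + varphi_2^2
    return (m**2 / 2) * psi**2 + (mu**2 / 2) * ph2 + (g / 2) * ph2 * psi**2 \
        + (lam_phi / 24) * ph2**2 + (lam_psi / 24) * psi**4

# discrete symmetry psi -> -psi
assert sp.simplify(U(-psi, p1, p2) - U(psi, p1, p2)) == 0

# global phase phi -> exp(i*alpha) phi, i.e. an SO(2) rotation of (varphi_1, varphi_2)
r1 = sp.cos(alpha) * p1 - sp.sin(alpha) * p2
r2 = sp.sin(alpha) * p1 + sp.cos(alpha) * p2
assert sp.simplify(U(psi, r1, r2) - U(psi, p1, p2)) == 0
print("U is invariant under both symmetries")
```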
The expansion of the effective potential in powers of \(\hbar\), up to order \(\hbar^{2}\), can be written as \[V_{\rm eff}\left(\Psi\right)=V^{\left(0\right)}\left(\Psi\right)+V^{\left(1\right)}\left(\Psi\right)+V^{\left(2\right)}\left(\Psi\right). \tag{3}\] The zeroth-order term, \(V^{\left(0\right)}\left(\Psi\right)\), is the classical potential, i.e., the tree-level contribution, \[V^{\left(0\right)}\left(\Psi\right)=U\left(\Psi\right)=\frac{m^{2}+C_{2}}{2}\Psi^{2}+\frac{\lambda_{\psi}+C_{1}}{4!}\Psi^{4}+C_{3}. \tag{4}\] The next term, \(V^{\left(1\right)}\left(\Psi\right)\), is the one-loop correction to the classical potential and, in the path integral approach, takes the following form [20]: \[V^{\left(1\right)}\left(\Psi\right)=-\frac{1}{\Omega_{4}}\ln\int\mathcal{D}\psi\mathcal{D}\varphi_{1}\mathcal{D}\varphi_{2}\exp\left\{-\frac{1}{2}\left(\psi,\hat{A}\psi\right)-\frac{1}{2}\left(\varphi_{1},\hat{B}\varphi_{1}\right)-\frac{1}{2}\left(\varphi_{2},\hat{B}\varphi_{2}\right)\right\}, \tag{5}\] where \(\Omega_{4}\) is the 4-dimensional volume of the Euclidean spacetime, which depends on the conditions imposed on the fields. Note that we have introduced the notation \[\left(\psi,\hat{A}\psi\right)=\int d^{4}x\psi\left(x\right)\hat{A}\psi\left(x\right), \tag{6}\] with the self-adjoint operators \(\hat{A}\) and \(\hat{B}\) defined as \[\hat{A}=-\Box+m^{2}+\frac{\lambda_{\psi}}{2}\Psi^{2},\qquad\qquad\hat{B}=-\Box+\mu^{2}+g\Psi^{2}. \tag{7}\] The one-loop correction to the effective potential can be written in terms of the eigenvalues of the operators \(\hat{A}\) and \(\hat{B}\), using the generalized zeta function [9; 23]. Let us denote by \(\alpha_{\sigma}\) and \(\beta_{\rho}\) the eigenvalues of the operators \(\hat{A}\) and \(\hat{B}\), respectively. Then, one can construct the generalized zeta functions \[\zeta_{\alpha}\left(s\right)=\sum_{\sigma}\alpha_{\sigma}^{-s},\qquad\qquad\zeta_{\beta}\left(s\right)=\sum_{\rho}\beta_{\rho}^{-s}, \tag{8}\] where \(\sigma\) and \(\rho\) stand for the sets of quantum numbers associated with the eigenfunctions of the operators \(\hat{A}\) and \(\hat{B}\), respectively. The summation symbol denotes summation or integration over the quantum numbers, depending on whether they are discrete or continuous. It is possible to show that the one-loop correction, (5), can be written in terms of the generalized zeta functions, (8), as [23; 31] \[V^{\left(1\right)}\left(\Psi\right)=V_{\alpha}^{\left(1\right)}\left(\Psi\right)+V_{\beta}^{\left(1\right)}\left(\Psi\right), \tag{9}\] where \[V_{\alpha}^{\left(1\right)}=-\frac{1}{2\Omega_{4}}\left[\zeta_{\alpha}^{\prime}\left(0\right)+\zeta_{\alpha}\left(0\right)\ln\nu^{2}\right],\qquad\qquad V_{\beta}^{\left(1\right)}=-\frac{1}{\Omega_{4}}\left[\zeta_{\beta}^{\prime}\left(0\right)+\zeta_{\beta}\left(0\right)\ln\nu^{2}\right]. \tag{10}\] In the above expressions, \(\zeta_{\alpha,\beta}\left(0\right)\) and \(\zeta_{\alpha,\beta}^{\prime}\left(0\right)\) denote the generalized zeta functions and their derivatives with respect to \(s\), evaluated at \(s=0\). Note that the parameter \(\nu\) stands for an integration measure in the functional space and is to be removed via renormalization of the effective potential [23]. In addition, for practical reasons, the two-loop correction, \(V^{\left(2\right)}\left(\Psi\right)\), of the effective potential is calculated from the two-loop graphs.
This correction can also be written in terms of the generalized zeta function if one is interested in calculating the vacuum contribution [18; 19; 25]. We postpone the explicit form of \(V^{\left(2\right)}\left(\Psi\right)\) until later, when we investigate it. After one obtains the explicit form of the effective potential with its corrections, it is necessary to renormalize it. The renormalization is achieved by means of a set of renormalization conditions. The first one is written in analogy with the Coleman-Weinberg condition and allows us to fix the constant \(C_{1}\) in Eq. (1) as well as the coupling constant \(\lambda_{\psi}\) [32]. This condition is expressed as \[\left.\frac{d^{4}V_{\text{eff}}\left(\Psi\right)}{d\Psi^{4}}\right|_{\Psi=M}=\lambda_{\psi}, \tag{11}\] where \(M\) is a parameter with dimension of mass which, in the case the model is massive, can be taken to be zero [18; 19; 25]. The next renormalization condition, which fixes the constant \(C_{2}\) in Eq. (1), is written as follows: \[\left.\frac{d^{2}V_{\text{eff}}\left(\Psi\right)}{d\Psi^{2}}\right|_{\Psi=v}=m^{2}, \tag{12}\] where \(v\) is the value that minimizes the effective potential. It is pertinent to point out that the above expression also provides the topological mass when we use the renormalized effective potential instead of \(V_{\text{eff}}\left(\Psi\right)\). Note that \(\Psi=v\) in Eq. (12) is the value of the field that minimizes the potential as long as the extremum condition \[\left.\frac{dV_{\text{eff}}\left(\Psi\right)}{d\Psi}\right|_{\Psi=v}=0 \tag{13}\] is obeyed. In Sec.III.3 we discuss the vacuum stability and present the values of the field which satisfy the condition above. The last condition one should use to renormalize the effective potential, fixing the constant \(C_{3}\), is written in the form [18] \[V_{\text{eff}}\left(\Psi\right)|_{\Psi=0}=0, \tag{14}\] which is relevant only if the model is massive [18; 19; 25]. It should be clear that the conditions presented in Eqs. (11), (12) and (14) are taken in the limit of Minkowski spacetime. We are now ready to study the loop expansion of the effective potential of the real scalar field and the generation of topological mass, imposing a periodic condition on the real field and, in turn, a quasiperiodic condition and mixed boundary conditions on the components of the complex field. We shall consider the components of the complex field, \(\varphi_{1}\) and \(\varphi_{2}\), obeying the same boundary conditions.

## III Periodic and quasiperiodic conditions

We are considering a real field \(\psi\) interacting with a complex field \(\phi\) via a quartic interaction. The action of the system is presented in Eq. (1). Note that the system takes into consideration the quartic self-interaction terms as well. In this section, the real field obeys the periodic condition and the components of the complex field obey the quasiperiodic condition. The real field being subjected to the periodic condition means that it must satisfy the relation \[\psi\left(\tau,x,y,z+L\right)=\psi\left(\tau,x,y,z\right), \tag{15}\] where \(L\) is the periodicity parameter. In fact, the condition above leads to the compactification of the \(z\)-coordinate into a length \(L\), as the illustration in Fig.1 shows. Hence, the spectrum of the operator \(\hat{A}\), presented in Eq.
(7), is well known in the literature and is written as \[\alpha_{\sigma}=k_{\tau}^{2}+k_{x}^{2}+k_{y}^{2}+\frac{4\pi^{2}}{L^{2}}n^{2}+M_{\lambda}^{2},\qquad\qquad M_{\lambda}^{2}=m^{2}+\frac{\lambda_{\psi}}{2}\Psi^{2}, \tag{16}\] where \(n=0,\pm 1,\pm 2,...\), and the subscript \(\sigma\) stands for the set of quantum numbers \(\left(k_{\tau},k_{x},k_{y},n\right)\). For the components of the complex field, \(\varphi_{i}\), we apply the quasiperiodic condition [16; 25], i.e., \[\varphi_{i}\left(\tau,x,y,z+L\right)=e^{2i\pi\theta}\varphi_{i}\left(\tau,x,y,z\right). \tag{17}\] This condition also compactifies the \(z\)-coordinate into a length \(L\), but now there is the influence of the phase \(\theta\), that is, the quasiperiodic parameter, which assumes values in the range \(0\leq\theta<1\). In this sense, the quasiperiodic condition recovers the periodic one for \(\theta=0\), while the case \(\theta=1/2\) recovers the well-known antiperiodic condition. Thereby, under the quasiperiodic condition, the eigenvalues of the operator \(\hat{B}\), presented in Eq. (7), take the form \[\beta_{\rho}=p_{\tau}^{2}+p_{x}^{2}+p_{y}^{2}+\frac{4\pi^{2}}{L^{2}}\left(n+\theta\right)^{2}+M_{g}^{2},\qquad\qquad M_{g}^{2}=\mu^{2}+g\Psi^{2}, \tag{18}\] where \(n=0,\pm 1,\pm 2,...\), and \(\rho\) stands for the set of quantum numbers \(\left(p_{\tau},p_{x},p_{y},n\right)\). The values \(\theta=0\) and \(\theta=1/2\) are also known as the untwisted and twisted scalar field cases, respectively. Knowing the explicit form of the eigenvalues \(\alpha_{\sigma}\) and \(\beta_{\rho}\), given in Eqs. (16) and (18), respectively, one can construct the generalized zeta function from Eq. (8) and obtain a practical expression for the first-order correction to the effective potential in Eq. (10). We shall do that next, also obtaining the topological mass.

### One-loop correction

Starting from the eigenvalues presented in Eq. (16), which are associated with the real scalar field \(\psi\), we construct the generalized zeta function from Eq. (8) as \[\zeta_{\alpha}\left(s\right)=\frac{\Omega_{3}}{\left(2\pi\right)^{3}}\int dk_{\tau}dk_{x}dk_{y}\sum_{n=-\infty}^{+\infty}\left\{k_{\tau}^{2}+k_{x}^{2}+k_{y}^{2}+\left(\frac{2\pi n}{L}\right)^{2}+M_{\lambda}^{2}\right\}^{-s}, \tag{19}\] where \(\Omega_{3}\) stands for the 3-dimensional volume associated with the Euclidean spacetime coordinates \(\tau,x,y\), necessary to make the integrals dimensionless. In order to obtain an expression for the generalized zeta function (19), we shall follow steps similar to the ones presented in [19; 25]; we keep most of the calculation for the convenience of the reader. Thus, by using the identity \[w^{-s}=\frac{2}{\Gamma\left(s\right)}\int_{0}^{\infty}d\tau\ \tau^{2s-1}e^{-w\tau^{2}} \tag{20}\] in Eq. (19) and performing the resulting Gaussian integrals in \(k_{\tau}\), \(k_{x}\) and \(k_{y}\), one obtains the generalized zeta function in the form \[\zeta_{\alpha}\left(s\right)=\frac{\Omega_{3}}{\left(2\pi\right)^{2}}\frac{\pi^{\frac{1}{2}}}{\Gamma\left(s\right)}\sum_{n=-\infty}^{+\infty}\int_{0}^{\infty}d\tau\ \tau^{2s-4}\exp\left\{-\tau^{2}\left[\left(\frac{2\pi n}{L}\right)^{2}+M_{\lambda}^{2}\right]\right\}. \tag{21}\]

Figure 1: Illustrative representation of a four-dimensional spacetime with a compactified spatial dimension. The spacetime is composed of the compactified spatial dimension \(z\), \(S^{1}\), and a three-dimensional space \(R^{3}\) of coordinates \(t,x,y\).

The expression obtained in Eq.
(21) is suited to the use of the well-known integral representation of the gamma function \(\Gamma\left(z\right)\)[33], \[\Gamma\left(z\right)=2\int_{0}^{\infty}d\mu\ \mu^{2z-1}e^{-\mu^{2}}, \tag{22}\] which allows us to rewrite the generalized zeta function (21) in terms only of the summation in \(n\), i.e., \[\zeta_{\alpha}\left(s\right)=\frac{\Omega_{4}\pi^{\frac{3}{2}-2s}}{2^{2s}L^{4-2s}}\frac{\Gamma\left(s-\frac{3}{2}\right)}{\Gamma\left(s\right)}\sum_{n=-\infty}^{+\infty}\left[n^{2}+\left(\frac{M_{\lambda}L}{2\pi}\right)^{2}\right]^{\frac{3}{2}-s}. \tag{23}\] The quantity \(\Omega_{4}\) stands for the 4-dimensional volume in Euclidean spacetime, which takes into account the spacetime topology \(S^{1}\times R^{3}\) arising as a consequence of the periodic condition imposed on the field. In the case under consideration, the 4-dimensional volume is written as \(\Omega_{4}=\Omega_{3}L\). In order to perform the sum in Eq. (23) we use the following analytic continuation of the inhomogeneous generalized Epstein function [16; 19; 34]: \[\sum_{n=-\infty}^{+\infty}\left[\left(n+\vartheta\right)^{2}+\kappa^{2}\right]^{-z}=\frac{\pi^{\frac{1}{2}}\kappa^{1-2z}}{\Gamma\left(z\right)}\left\{\Gamma\left(z-\frac{1}{2}\right)+4(\pi\kappa)^{z-\frac{1}{2}}\sum_{j=1}^{\infty}j^{z-\frac{1}{2}}\cos\left(2\pi j\vartheta\right)K_{\left(\frac{1}{2}-z\right)}\left(2\pi j\kappa\right)\right\}, \tag{24}\] where \(K_{\gamma}(x)\) is the modified Bessel function of the second kind or, as it is also known, the Macdonald function [33]. After the use of Eq. (24), the generalized zeta function in Eq. (23) takes the form \[\zeta_{\alpha}\left(s\right)=\frac{\Omega_{4}M_{\lambda}^{4-2s}}{2^{4}\pi^{2}\Gamma\left(s\right)}\left\{\Gamma\left(s-2\right)+2^{4-s}\sum_{j=1}^{\infty}f_{\left(2-s\right)}\left(jM_{\lambda}L\right)\right\}, \tag{25}\] where we have defined the function \(f_{\gamma}\left(x\right)\) as \[f_{\gamma}\left(x\right)=\frac{K_{\gamma}\left(x\right)}{x^{\gamma}}. \tag{26}\] By evaluating the generalized zeta function in Eq. (25) and its derivative with respect to \(s\), in the limit \(s\to 0\), one finds from Eq. (10) the one-loop contribution to the effective potential, \[V_{\alpha}^{\left(1\right)}\left(\Psi\right)=\frac{M_{\lambda}^{4}}{64\pi^{2}}\left[\ln\left(\frac{M_{\lambda}^{2}}{\nu^{2}}\right)-\frac{3}{2}\right]-\frac{M_{\lambda}^{4}}{2\pi^{2}}\sum_{j=1}^{\infty}f_{2}\left(jM_{\lambda}L\right). \tag{27}\] The expression above is the first-order, i.e., one-loop, correction to the effective potential due to the periodic condition. It remains to evaluate the contribution of the complex field to the one-loop correction, which we analyze below. For the components of the complex scalar field, the eigenvalues are presented in Eq. (18), allowing us to construct the associated generalized zeta function in the form \[\zeta_{\beta}\left(s\right)=\frac{\Omega_{3}}{\left(2\pi\right)^{3}}\int dp_{\tau}dp_{x}dp_{y}\sum_{n=-\infty}^{+\infty}\left[p_{\tau}^{2}+p_{x}^{2}+p_{y}^{2}+\frac{4\pi^{2}}{L^{2}}\left(n+\theta\right)^{2}+M_{g}^{2}\right]^{-s}. \tag{28}\] The expression for the generalized zeta function \(\zeta_{\beta}\left(s\right)\) arising from a real scalar field has been obtained in detail in [25] and, of course, is very similar in our case, since the components of the complex field considered are real.
Therefore, we present only the final result, i.e., \[\zeta_{\beta}\left(s\right)=\frac{\Omega_{4}M_{g}^{4-2s}}{16\pi^{2}\Gamma\left(s\right)}\left\{\Gamma\left(s-2\right)+2^{4-s}\sum_{j=1}^{\infty}\cos\left(2\pi j\theta\right)f_{\left(2-s\right)}\left(jM_{g}L\right)\right\}. \tag{29}\] From Eq. (10) and using the generalized zeta function presented above, one is able to write the first-order correction to the effective potential due to the complex field as \[V_{\beta}^{\left(1\right)}\left(\Psi\right)=\frac{M_{g}^{4}}{32\pi^{2}}\left[\ln\left(\frac{M_{g}^{2}}{\nu^{2}}\right)-\frac{3}{2}\right]-\frac{M_{g}^{4}}{\pi^{2}}\sum_{j=1}^{\infty}\cos\left(2\pi j\theta\right)f_{2}\left(jM_{g}L\right). \tag{30}\] Collecting the results presented in Eqs. (27) and (30), we can write the first-order correction to the effective potential associated with the interacting real and complex scalar fields in the following form: \[V^{\left(1\right)}\left(\Psi\right)=\frac{M_{\lambda}^{4}}{64\pi^{2}}\left[\ln\left(\frac{M_{\lambda}^{2}}{\nu^{2}}\right)-\frac{3}{2}\right]+\frac{M_{g}^{4}}{32\pi^{2}}\left[\ln\left(\frac{M_{g}^{2}}{\nu^{2}}\right)-\frac{3}{2}\right]+\] \[-\frac{M_{\lambda}^{4}}{2\pi^{2}}\sum_{j=1}^{\infty}f_{2}\left(jM_{\lambda}L\right)-\frac{M_{g}^{4}}{\pi^{2}}\sum_{j=1}^{\infty}\cos\left(2\pi j\theta\right)f_{2}\left(jM_{g}L\right). \tag{31}\] Therefore, from Eq. (9), the nonrenormalized effective potential up to the one-loop correction reads \[V_{\rm eff}\left(\Psi\right)=\frac{m^{2}+C_{2}}{2}\Psi^{2}+\frac{\lambda_{\psi}+C_{1}}{4!}\Psi^{4}+C_{3}+\] \[+\frac{M_{\lambda}^{4}}{64\pi^{2}}\left[\ln\left(\frac{M_{\lambda}^{2}}{\nu^{2}}\right)-\frac{3}{2}\right]+\frac{M_{g}^{4}}{32\pi^{2}}\left[\ln\left(\frac{M_{g}^{2}}{\nu^{2}}\right)-\frac{3}{2}\right]+\] \[-\frac{M_{\lambda}^{4}}{2\pi^{2}}\sum_{j=1}^{\infty}f_{2}\left(jM_{\lambda}L\right)-\frac{M_{g}^{4}}{\pi^{2}}\sum_{j=1}^{\infty}\cos\left(2\pi j\theta\right)f_{2}\left(jM_{g}L\right). \tag{32}\] It is clear that the one-loop corrections in Eqs. (27) and (30), for the real and complex fields respectively, differ by a factor of two when \(\theta=0\), the particular case of the periodic condition. This is justified since the complex field has two components. Note that the masses are also, in general, different: the mass of the real scalar field is \(m\), while that of the complex scalar field is \(\mu\). Once we obtain the effective potential \(V_{\rm eff}\left(\Psi\right)\) in Eq. (32), our task is to renormalize it. Hence, by following the renormalization procedure, from the conditions presented in Eqs. (11), (12) and (14), in the limit of Minkowski spacetime \(L\rightarrow\infty\), one obtains the renormalization constants \[C_{1}=\frac{3\lambda_{\psi}^{2}}{32\pi^{2}}\ln\left(\frac{\nu^{2}}{m^{2}}\right)+\frac{3g^{2}}{4\pi^{2}}\ln\left(\frac{\nu^{2}}{\mu^{2}}\right),\] \[C_{2}=\frac{\lambda_{\psi}m^{2}}{32\pi^{2}}\left[\ln\left(\frac{\nu^{2}}{m^{2}}\right)+1\right]+\frac{g\mu^{2}}{8\pi^{2}}\left[\ln\left(\frac{\nu^{2}}{\mu^{2}}\right)+1\right],\] \[C_{3}=\frac{m^{4}}{64\pi^{2}}\left[\ln\left(\frac{\nu^{2}}{m^{2}}\right)+\frac{3}{2}\right]+\frac{\mu^{4}}{32\pi^{2}}\left[\ln\left(\frac{\nu^{2}}{\mu^{2}}\right)+\frac{3}{2}\right]. \tag{33}\] Furthermore, by substituting the renormalization constants above into the nonrenormalized effective potential given in Eq.
(32), we are able to write the renormalized effective potential in the following form: \[V_{\rm eff}^{R}\left(\Psi\right)=\frac{m^{2}}{2}\Psi^{2}+\frac{\lambda_{\psi}}{4!}\Psi^{4}+\frac{\mu^{4}}{32\pi^{2}}\ln\left(\frac{M_{g}^{2}}{\mu^{2}}\right)+\frac{m^{4}}{64\pi^{2}}\ln\left(\frac{M_{\lambda}^{2}}{m^{2}}\right)+\] \[+\frac{g\mu^{2}\Psi^{2}}{16\pi^{2}}\left[\ln\left(\frac{M_{g}^{2}}{\mu^{2}}\right)-\frac{1}{2}\right]+\frac{\lambda_{\psi}^{2}\Psi^{4}}{256\pi^{2}}\left[\ln\left(\frac{M_{\lambda}^{2}}{m^{2}}\right)-\frac{3}{2}\right]+\] \[+\frac{\lambda_{\psi}m^{2}\Psi^{2}}{64\pi^{2}}\left[\ln\left(\frac{M_{\lambda}^{2}}{m^{2}}\right)-\frac{1}{2}\right]+\frac{g^{2}\Psi^{4}}{32\pi^{2}}\left[\ln\left(\frac{M_{g}^{2}}{\mu^{2}}\right)-\frac{3}{2}\right]+\] \[-\frac{M_{\lambda}^{4}}{2\pi^{2}}\sum_{j=1}^{\infty}f_{2}\left(jM_{\lambda}L\right)-\frac{M_{g}^{4}}{\pi^{2}}\sum_{j=1}^{\infty}\cos\left(2\pi j\theta\right)f_{2}\left(jM_{g}L\right). \tag{34}\] The explicit form of the renormalized effective potential presented in Eq. (34) makes it possible to evaluate the Casimir energy density and also the topological mass, up to first-order corrections. In order to proceed and calculate the vacuum energy density, let us consider \(\Psi=0\) as the stable vacuum state of the theory, although there are other possible stable vacuum states, as analyzed in Sec.III.3. Thus, from the renormalized effective potential \(V_{\rm eff}^{R}\left(\Psi\right)\) in Eq. (34) one can evaluate the Casimir energy density straightforwardly, since the vacuum state is obtained by setting \(\Psi=0\). Hence, the Casimir energy density is found to be \[\mathcal{E}_{\rm C}=V_{\rm eff}^{R}\left(\Psi\right)\bigr{|}_{\Psi=0}=-\frac{m^{4}}{2\pi^{2}}\sum_{n=1}^{\infty}f_{2}\left(nmL\right)-\frac{\mu^{4}}{\pi^{2}}\sum_{j=1}^{\infty}\cos\left(2\pi j\theta\right)f_{2}\left(j\mu L\right). \tag{35}\] Note that the first term on the r.h.s. of Eq. (35) is the contribution to the Casimir energy density from the real scalar field \(\psi\) subjected to a periodic condition, while the second term is the contribution from the complex field subjected to a quasiperiodic condition [25]. In the particular case \(\theta=0\), this contribution is twice the one from the real scalar field if the masses are equal. In order to show the influence of the complex field, under a quasiperiodic condition, on the Casimir energy density of the real field, we have plotted the expression in Eq. (35) as a function of \(mL\); this is shown on the left side of Fig.2 for different values of \(\theta\), taking \(\mu=m\). The black solid line is the Casimir energy density free of the interaction with the complex field, with only the effect of the real-field self-interaction. It is clear that, depending on the value of the quasiperiodic parameter \(\theta\), the Casimir energy density can be larger or smaller than in the free case, and can assume positive or negative values. Note that the curves tend to repeat their behavior for values such that \(\theta>0.5\); for instance, the curve represented by the green dot-dashed line for \(\theta=0.3\) is the same as the one for \(\theta=0.7\), and so on. Furthermore, all the curves end at their corresponding massless-field constant values at \(mL=0\), as can be checked from Eq. (38). Also, in the regime \(mL\gg 1\), the Casimir energy density in Eq. (35) goes to zero for all curves, as revealed by the plot on the left side of Fig.2.
This is a consequence of the exponentially suppressed behavior of the Macdonald function for large arguments [33]. It is interesting to consider the case of massless scalar fields, i.e., the limit \(m,\mu\to 0\) of Eq. (35). In the massless case, we can make use of the small-argument limit of the Macdonald function, i.e., \(K_{\gamma}\left(x\right)\simeq\frac{\Gamma\left(\gamma\right)}{2}\left(\frac{2}{x}\right)^{\gamma}\)[33]. Hence, from Eq. (35), we obtain the Casimir energy density for interacting massless scalar fields as \[\mathcal{E}_{\rm C}=-\frac{\pi^{2}}{90L^{4}}-\frac{2}{\pi^{2}L^{4}}\sum_{j=1}^{\infty}j^{-4}\cos\left(2\pi j\theta\right), \tag{36}\] where we have used the result \(\zeta\left(4\right)=\frac{\pi^{4}}{90}\) for the Riemann zeta function [34; 35] in the first term on the r.h.s. of Eq. (36). This term is the well-known Casimir result for a free massless scalar field subjected to a periodic condition [23]. In addition, the second term on the r.h.s. of Eq. (36) can be rewritten in terms of the well-known Bernoulli polynomials, \[B_{2k}\left(\theta\right)=\frac{\left(-1\right)^{k-1}2\left(2k\right)!}{\left(2\pi\right)^{2k}}\sum_{n=1}^{\infty}\frac{\cos\left(2\pi n\theta\right)}{n^{2k}}. \tag{37}\]

Figure 2: Plot of the dimensionless Casimir energy density, \(E(mL)=2\pi^{2}L^{4}\mathcal{E}_{\rm C}\), defined from Eq. (35), as a function of \(mL\), is shown on the left. The plot on the right shows the dimensionless two-loop contribution to the Casimir energy density, \(E^{c}(mL)=32\pi^{4}L^{4}\Delta\mathcal{E}_{\rm C}\), defined from Eq. (48), as a function of \(mL\) and considering \(\lambda_{\psi}=10^{-2}\), \(\lambda_{\varphi}=10^{-2}\) and \(g=10^{-3}\). For both cases we have taken \(\mu=m\) and different values of \(\theta\).

Hence, the expression for the Casimir energy density in the massless case is found to be \[\mathcal{E}_{\rm C}=-\frac{\pi^{2}}{90L^{4}}+2\frac{\pi^{2}}{3L^{4}}\left(\theta^{4}-2\theta^{3}+\theta^{2}-\frac{1}{30}\right), \tag{38}\] where we have made use of the fourth-order Bernoulli polynomial, \(B_{4}\left(\theta\right)=\theta^{4}-2\theta^{3}+\theta^{2}-\frac{1}{30}\). As one should expect, the first term on the r.h.s. of Eq. (38) is consistent with the result found in [23], while the second term is consistent with the results found in [16; 25] (taking into account the two components of the complex field). The latter also provides the right expressions for the periodic (\(\theta=0\)) and antiperiodic (\(\theta=\frac{1}{2}\)) cases, also known as the untwisted and twisted cases, respectively. We now wish to investigate the influence of the conditions on the mass \(m\) of the real scalar field, i.e., the generation of topological mass at the one-loop level. From the condition presented in Eq. (12) and making use of the renormalized effective potential (34), one obtains the following expression for the topological mass of the real scalar field \(\psi\): \[m_{T}^{2}=m^{2}\left[1+\frac{\lambda_{\psi}}{4\pi^{2}}\sum_{n=1}^{\infty}f_{1}\left(nmL\right)+\frac{\mu^{2}}{m^{2}}\frac{g}{\pi^{2}}\sum_{j=1}^{\infty}\cos\left(2\pi j\theta\right)f_{1}\left(j\mu L\right)\right]. \tag{39}\] Note that the topological mass \(m_{T}^{2}\) does not present any divergences, making it possible to consider the massless field limit, \(m,\mu\to 0\). Hence, by using the same approximation for the Macdonald function as the one applied to obtain Eq. (36) from Eq.
(35), we find the topological mass as \[m_{T}^{2}=\frac{\lambda_{\psi}}{24L^{2}}+\frac{g}{L^{2}}\left(\theta^{2}-\theta+\frac{1}{6}\right). \tag{40}\] In the expression above we have used the Riemann zeta function \(\zeta\left(2\right)=\frac{\pi^{2}}{6}\) [34; 35] and also the Bernoulli polynomials presented in Eq. (37), with \(B_{2}\left(\theta\right)=\theta^{2}-\theta+\frac{1}{6}\). As one can see, the mass correction comes from the self-interaction term, which is proportional to \(\lambda_{\psi}\), and also from the interaction between the fields, which is proportional to the coupling constant \(g\). Note that the first term on the r.h.s. of Eq. (40) has been previously obtained in Ref. [23] in a real scalar field theory with only self-interaction. Another interesting aspect associated with the topological mass is that if \(\lambda_{\psi}<g\), Eq. (40) may become negative depending on the value of \(\theta\), which would in principle indicate vacuum instability. Had we, for instance, considered a complex scalar field theory with only self-interaction (no interaction between the fields), this would be a problem, since it does not make sense to consider a constant complex field, \(\Phi_{i}\neq 0\), compatible with the quasiperiodic condition for \(\theta\neq 0\). The case \(\theta=0\) is not problematic in this regard, and that is why we have made the real scalar field obey the periodic condition. Nevertheless, this problem is solved by taking into account an interacting theory such as the one considered here (see also [20]). Within this theory it is possible to study the vacuum stability, which is in fact done in Sec. III.3 by considering, for simplicity, a massless scalar field theory. The analysis indicates that the vacuum \(\Psi=0\) is stable only if \(\lambda_{\psi}>-24gB_{2}(\theta)\); otherwise it is necessary to consider the two other possible stable vacuum states, \(\Psi_{\pm}\), in Eq. (55). In Fig. 3 we have plotted the dimensionless mass squared, \(M^{2}(mL)=m_{T}^{2}L^{2}\), defined from Eq. (39), as a function of \(mL\), for different values of \(\theta\) and taking \(\mu=m\). On the left of Fig. 3, the plot shows the curves for \(\lambda_{\psi}=10^{-2}\) and \(g=10^{-3}\), which satisfy the condition \(\lambda_{\psi}>-24gB_{2}(\theta)\) in order for \(\Psi=0\) to be a stable vacuum. In this case, the values in Eq. (40) are positive for all values of the quasiperiodic parameter \(\theta\), as we should expect. In contrast, the plot on the right shows the curves for \(\lambda_{\psi}=10^{-3}\) and \(g=10^{-2}\), which satisfy the condition \(\lambda_{\psi}<-24gB_{2}(\theta)\) for some values of \(\theta\). In this case, Eq. (40) takes negative values for \(\theta=0.3\) and \(\theta=0.5\), showing that \(\Psi=0\) is in fact an unstable vacuum state. However, in a massive scalar field theory, \(\Psi=0\) becomes stable for larger values of \(mL\) even if \(\lambda_{\psi}<g\), as indicated by the plot on the right. Note that all curves, at \(mL=0\), end at their corresponding constant massless-field values of the topological mass in Eq. (40). Figure 3: Plot of the dimensionless topological mass squared, \(M^{2}(mL)=m_{T}^{2}L^{2}\), defined from Eq. (39), as a function of \(mL\), for different values of \(\theta\) and taking \(\mu=m\). On the left, the plot shows the curves for \(\lambda_{\psi}=10^{-2}\) and \(g=10^{-3}\), while on the right the plot shows the curves for \(\lambda_{\psi}=10^{-3}\) and \(g=10^{-2}\).
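The same machinery gives the topological mass curves of Fig. 3. The sketch below, again assuming the convention \(f_{1}(x)=K_{1}(x)/x\) (and with the helper name `M2` being ours), evaluates the dimensionless \(M^{2}(mL)=m_{T}^{2}L^{2}\) from Eq. (39) and checks its \(mL\to 0\) limit against the massless closed form, Eq. (40).

```python
# Sketch of M^2(mL) = m_T^2 * L^2 from Eq. (39), with mu = m and L = 1;
# f_1(x) = K_1(x)/x is assumed for the Macdonald-function sums.
import numpy as np
from scipy.special import kv

def f1(x):
    return kv(1, x) / x

def M2(mL, theta, lam_psi, g, nmax=5000):
    n = np.arange(1, nmax + 1)
    self_term = lam_psi/(4*np.pi**2) * np.sum(f1(n * mL))                # real-field self-interaction
    int_term  = g/np.pi**2 * np.sum(np.cos(2*np.pi*n*theta) * f1(n * mL))  # field-field interaction
    return mL**2 * (1 + self_term + int_term)

B2 = lambda t: t**2 - t + 1/6
lam_psi, g = 1e-2, 1e-3          # the stable-vacuum parameter choice of Fig. 3 (left)
for theta in (0.0, 0.3, 0.5):
    print(theta, M2(1e-3, theta, lam_psi, g),
          lam_psi/24 + g*B2(theta))   # massless closed form, Eq. (40)
```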
For large values of \(mL\), the Macdonald function is exponentially suppressed and the curves in Fig. 3 are dominated by the first term on the r.h.s. of Eq. (39). Note also that the curves tend to repeat themselves for \(\theta>0.5\). The one-loop correction analysis is now done, so one can proceed to the two-loop correction contribution, still considering \(\Psi=0\) as the stable vacuum state. As we now know, this means that we have to consider the restriction \(\lambda_{\psi}>-24gB_{2}(\theta)\). ### Two-loop correction We now want to analyze the loop correction to the Casimir energy density obtained in Eqs. (35) and (36). This can be done by considering the second-order correction to the effective potential, which can be obtained from the two-loop Feynman graphs. Since we have more than one contribution, we evaluate all the two-loop contributions from each Feynman graph separately. Hence, we write \[V^{(2)}\left(\Psi\right)=V^{(2)}_{\lambda_{\psi}}\left(\Psi\right)+V^{(2)}_{\lambda_{\varphi}}\left(\Psi\right)+V^{(2)}_{g}\left(\Psi\right)+V^{(2)}_{2\lambda_{\varphi}}\left(\Psi\right), \tag{41}\] where \(V^{(2)}_{\lambda_{\psi}}\) is the contribution from the self-interaction term, \(\frac{\lambda_{\psi}}{4!}\psi^{4}\), of the real field, \(V^{(2)}_{\lambda_{\varphi}}\) is the contribution from the self-interaction of the complex field, that is, \(\frac{\lambda_{\varphi}}{4!}\varphi^{4}_{1}\) and \(\frac{\lambda_{\varphi}}{4!}\varphi^{4}_{2}\), \(V^{(2)}_{g}\) is associated with the interaction between the real and complex fields, \(\frac{g}{2}\varphi^{2}_{1}\psi^{2}\) and \(\frac{g}{2}\varphi^{2}_{2}\psi^{2}\), and, finally, \(V^{(2)}_{2\lambda_{\varphi}}\) is associated with the cross terms of the components of the complex field, \(\frac{\lambda_{\varphi}}{4!}2\varphi^{2}_{1}\varphi^{2}_{2}\). Let us first consider the contribution from the self-interaction term associated with the real field, \(V^{(2)}_{\lambda_{\psi}}\left(\Psi\right)\). Since we are interested in the vacuum state where \(\Psi=0\), the only nonvanishing contribution comes from the graph exhibited in Fig. 4. With the help of this Feynman graph, one can write the two-loop contribution in terms of the generalized zeta function presented in Eq. (25) in the following form [19; 25]: \[V^{(2)}_{\lambda_{\psi}}\left(0\right)=\frac{\lambda_{\psi}}{8}\left.\left[\frac{\zeta^{R}_{\alpha}\left(1\right)}{\Omega_{4}}\right]^{2}\right|_{\Psi=0}. \tag{42}\] The zeta function \(\zeta^{R}_{\alpha}\left(s\right)\) is defined as the non-divergent part of the generalized zeta function given by Eq. (25), at \(s=1\) [19; 25], i.e., \[\zeta^{R}_{\alpha}\left(s\right)=\zeta_{\alpha}\left(s\right)-\frac{\Omega_{4}M^{4-2s}_{\lambda}}{16\pi^{2}}\frac{\Gamma\left(s-2\right)}{\Gamma\left(s\right)}. \tag{43}\] Figure 4: Feynman graph representing the only non-vanishing self-interaction contribution to the two-loop correction calculated at \(\Psi=0\). Note that the term being subtracted in the above equation, when divided by \(\Omega_{4}\), is independent of the parameter \(L\) characterizing the conditions and, as usual, should be dropped. Explicitly, one obtains the following result for the two-loop contribution due to the self-interaction term of the real field: \[V_{\lambda_{\psi}}^{\left(2\right)}\left(0\right)=\frac{\lambda_{\psi}m^{4}}{32\pi^{4}}\left[\sum_{j=1}^{\infty}f_{1}\left(jmL\right)\right]^{2}.
\tag{44}\] The above result shows that the two-loop contribution is proportional to the coupling constant \(\lambda_{\psi}\), as it should be. The second contribution, \(V_{\lambda_{\varphi}}^{\left(2\right)}\left(\Psi\right)\), to the total two-loop correction in Eq. (41) can be read from the same graph as the one in Fig. 4. We can construct the function \(\zeta_{\beta}^{R}\left(s\right)\) from the generalized zeta function (29), subtracting the divergent part at \(s=1\), and then obtain the contribution from the self-interaction of the complex field, that is, \[V_{\lambda_{\varphi}}^{\left(2\right)}\left(0\right)=2\frac{\lambda_{\varphi}\mu^{4}}{32\pi^{4}}\left[\sum_{j=1}^{\infty}\cos\left(2\pi j\theta\right)f_{1}\left(j\mu L\right)\right]^{2}, \tag{45}\] which is proportional to \(\lambda_{\varphi}\). The factor of two accounts for the two components of the complex field, that is, \(\varphi_{1}\) and \(\varphi_{2}\), which give rise to equal contributions. Next we analyze the contribution from the interaction, \(V_{g}^{\left(2\right)}\left(\Psi\right)\), between the fields. This correction can be read from the graph shown in Fig. 5 and calculated, at \(s=1\), by using the non-divergent part of the zeta functions in Eqs. (25) and (29). Hence, one finds the contribution from the interaction between the fields as \[V_{g}^{\left(2\right)}\left(0\right)=2\frac{gm^{2}\mu^{2}}{8\pi^{4}}\left[\sum_{n=1}^{\infty}f_{1}\left(nmL\right)\right]\left[\sum_{j=1}^{\infty}\cos\left(2\pi j\theta\right)f_{1}\left(j\mu L\right)\right]. \tag{46}\] Since the expression (46) comes from the interaction term, it is proportional to the coupling constant \(g\). Finally, the last contribution, \(V_{2\lambda_{\varphi}}^{\left(2\right)}\left(\Psi\right)\), comes from the interaction of the components of the complex field (also a self-interaction). The contribution from this term is obtained from the graph in Fig. 5, considering the solid line as representing the propagator associated with the field \(\varphi_{1}\) and the dashed one as associated with the field \(\varphi_{2}\). Then, by using again the non-divergent part of the zeta function (29), calculated at \(s=1\), the result is written as \[V_{2\lambda_{\varphi}}^{\left(2\right)}\left(0\right)=\frac{\lambda_{\varphi}\mu^{4}}{48\pi^{4}}\left[\sum_{j=1}^{\infty}\cos\left(2\pi j\theta\right)f_{1}\left(j\mu L\right)\right]^{2}. \tag{47}\] Note that, like the result presented in Eq. (45), the above expression is proportional to \(\lambda_{\varphi}\). Figure 5: Feynman graph representing the only non-vanishing contribution to the two-loop correction, calculated at \(\Psi=0\), due to the interaction between the real and complex fields. This graph also provides the only non-vanishing contribution due to the interaction of the components of the complex field. In this case, the solid line represents the propagator associated with the component \(\varphi_{1}\) while the dashed line represents the propagator associated with the component \(\varphi_{2}\). Collecting all the results obtained in Eqs. (44), (45), (46) and (47), one can write the total two-loop correction to the effective potential in Eq.
(41), at the vacuum state \(\Psi=0\), as \[\Delta\mathcal{E}_{\mathrm{C}} = V^{\left(2\right)}\left(\Psi\right)\Big{|}_{\Psi=0} \tag{48}\] \[= \frac{\lambda_{\psi}m^{4}}{32\pi^{4}}\left[\sum_{j=1}^{\infty}f_{1}\left(jmL\right)\right]^{2}+\frac{\lambda_{\varphi}\mu^{4}}{12\pi^{4}}\left[\sum_{j=1}^{\infty}\cos\left(2\pi j\theta\right)f_{1}\left(j\mu L\right)\right]^{2}+\] \[+\frac{gm^{2}\mu^{2}}{4\pi^{4}}\left[\sum_{n=1}^{\infty}f_{1}\left(nmL\right)\right]\left[\sum_{j=1}^{\infty}\cos\left(2\pi j\theta\right)f_{1}\left(j\mu L\right)\right].\] Therefore, combining the results presented in Eqs. (35) and (48), one obtains a correction to the Casimir energy density in Eq. (35) which is first order in all coupling constants of the theory. As we can notice, while the first-order correction to the effective potential gives the Casimir energy density associated with a theory of free real and complex scalar fields, the second-order correction to the effective potential in Eq. (48) provides a contribution to the Casimir energy density that is linearly proportional to all coupling constants. Moreover, one can also consider the massless scalar fields limit of Eq. (48), that is, \(\mu,m\to 0\). This gives \[\Delta\mathcal{E}_{\mathrm{C}}=\frac{\lambda_{\psi}}{1152L^{4}}+\frac{\lambda_{\varphi}}{12L^{4}}\left(\theta^{2}-\theta+\frac{1}{6}\right)^{2}+\frac{g}{24L^{4}}\left(\theta^{2}-\theta+\frac{1}{6}\right). \tag{49}\] Hence, the expression above is the first-order correction, in all coupling constants of the theory, to the massless Casimir energy density in Eq. (38). Note that the first term on the r.h.s. of Eq. (49) has been obtained in Ref. [23], whilst the second term is consistent with the result obtained in Ref. [25] in a real scalar field theory considering only self-interaction. In contrast, the third term is a new one, arising from the interaction between the fields. In Fig. 2, the plot on the right shows the influence of the complex field, under a quasiperiodic condition, on the correction (48) to the Casimir energy density. Thus, the expression in Eq. (48) has been plotted as a function of \(mL\), for different values of \(\theta\) and taking \(\mu=m\). We have also considered \(\lambda_{\psi}=10^{-2}\), \(\lambda_{\varphi}=10^{-2}\) and \(g=10^{-3}\). The black solid line is the correction free of interaction with the complex field, including only the effect of the real field self-interaction. It is clear that the curves for \(\theta\neq 0\) increase the correction when compared to the black solid line. Note that the curves here also tend to repeat their behavior for values such that \(\theta>0.5\). For instance, the curve represented by the orange dot-dashed line for \(\theta=0.3\) is the same as the one for \(\theta=0.7\). Furthermore, all the curves tend to their corresponding constant massless-field values at \(mL=0\), as can be checked from Eq. (49). Also, in the regime \(mL\gg 1\), the correction in Eq. (48) goes to zero for all curves, as revealed by the plot on the right side of Fig. 2. This is a consequence of the exponentially suppressed behavior of the Macdonald function for large arguments [33]. Next, we shall analyze the vacuum stability of the theory, since the state \(\Psi=0\) is not the only possible vacuum state, as we have already anticipated.
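Before moving on, the two-loop correction is just as easy to evaluate numerically. The sketch below, with the same assumed convention \(f_{1}(x)=K_{1}(x)/x\) and an illustrative helper name `Ec` of our own, computes the dimensionless combination \(E^{c}(mL)=32\pi^{4}L^{4}\Delta\mathcal{E}_{\rm C}\) used in the right panel of Fig. 2 from Eq. (48), and checks the \(mL\to 0\) value against Eq. (49).

```python
# Sketch of the two-loop correction, Eq. (48), as E^c = 32*pi^4*L^4*Delta(E_C),
# with mu = m and L = 1; f_1(x) = K_1(x)/x is assumed.
import numpy as np
from scipy.special import kv

f1 = lambda x: kv(1, x) / x

def Ec(mL, theta, lam_psi, lam_phi, g, nmax=5000):
    n = np.arange(1, nmax + 1)
    Sr = np.sum(f1(n * mL))                            # real-field sum in Eq. (48)
    Sc = np.sum(np.cos(2*np.pi*n*theta) * f1(n * mL))  # complex-field sum in Eq. (48)
    # 32*pi^4 times Eq. (48): coefficients become lam_psi, (8/3)*lam_phi and 8*g
    return mL**4 * (lam_psi*Sr**2 + (8/3)*lam_phi*Sc**2 + 8*g*Sr*Sc)

B2 = lambda t: t**2 - t + 1/6
theta, lam_psi, lam_phi, g = 0.3, 1e-2, 1e-2, 1e-3     # parameters of Fig. 2 (right)
print(Ec(1e-3, theta, lam_psi, lam_phi, g))
print(32*np.pi**4 * (lam_psi/1152 + lam_phi*B2(theta)**2/12 + g*B2(theta)/24))  # Eq. (49)
```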
### Vacuum stability We want to analyze here the stability of the possible vacuum states associated with the effective potential, up to first-order loop correction, of the theory described by the action in Eq. (1). For simplicity, we consider the case where the fields are massless, i.e., \(m,\mu\to 0\). It is important to point out again that, for the complex scalar field obeying the condition in Eq. (17), the only constant field that can satisfy such a condition is the zero field; hence, we set \(\Phi_{i}=0\). This fact also turns the approximation discussed below Eq. (2) into an exact expression, namely, the one in Eq. (5), which does not consider cross terms. By following the same steps as the ones used to obtain Eq. (32), the nonrenormalized effective potential for the massless scalar fields case is written as \[V_{\mathrm{eff}}\left(\Psi\right)=\frac{\lambda_{\psi}+C}{4!}\Psi^{4}+\frac{\lambda_{\psi}^{2}\Psi^{4}}{256\pi^{2}}\left[\ln\left(\frac{\lambda_{\psi}\Psi^{2}}{2\nu^{2}}\right)-\frac{3}{2}\right]+\frac{g^{2}\Psi^{4}}{32\pi^{2}}\left[\ln\left(\frac{g\Psi^{2}}{\nu^{2}}\right)-\frac{3}{2}\right]+\] \[-\frac{\lambda_{\psi}^{2}\Psi^{4}}{8\pi^{2}}\sum_{j=1}^{\infty}f_{2}\left(j\sqrt{\frac{\lambda_{\psi}}{2}\Psi^{2}}L\right)-\frac{g^{2}\Psi^{4}}{\pi^{2}}\sum_{j=1}^{\infty}\cos\left(2\pi j\theta\right)f_{2}\left(j\sqrt{g\Psi^{2}}L\right). \tag{50}\] Furthermore, the condition that determines the renormalization constant \(C\) is given by Eq. (11). Thereby, by applying this condition to the effective potential of Eq. (50), in the Minkowski limit \(L\rightarrow\infty\), one finds that the constant \(C\) is given by \[C=\frac{3\lambda_{\psi}^{2}}{32\pi^{2}}\ln\left(\frac{2\nu^{2}}{\lambda_{\psi}M^{2}}\right)+\frac{3g^{2}}{4\pi^{2}}\ln\left(\frac{\nu^{2}}{gM^{2}}\right)-\frac{\lambda_{\psi}^{2}}{4\pi^{2}}-\frac{2g^{2}}{\pi^{2}}. \tag{51}\] Next, by substituting \(C\) in the effective potential (50), we obtain the renormalized effective potential for the massless scalar fields theory, i.e., \[V_{\rm eff}^{R}\left(\Psi\right)=\frac{\lambda_{\psi}}{4!}\Psi^{4}+\left[\frac{\lambda_{\psi}^{2}}{8}+g^{2}\right]\frac{\Psi^{4}}{32\pi^{2}}\ln\left(\frac{\Psi^{2}}{M^{2}}\right)-\left[\frac{\lambda_{\psi}^{2}}{8}+g^{2}\right]\frac{25\Psi^{4}}{192\pi^{2}}+\] \[-\frac{\lambda_{\psi}^{2}\Psi^{4}}{8\pi^{2}}\sum_{j=1}^{\infty}f_{2}\left(j\sqrt{\frac{\lambda_{\psi}}{2}\Psi^{2}}L\right)-\frac{g^{2}\Psi^{4}}{\pi^{2}}\sum_{j=1}^{\infty}\cos\left(2\pi j\theta\right)f_{2}\left(j\sqrt{g\Psi^{2}}L\right). \tag{52}\] Let us now investigate the possible vacuum states of the above renormalized effective potential, up to first order in the coupling constants \(\lambda_{\psi}\) and \(g\), which is more than enough since we have considered corrections to the Casimir energy density, as well as to the mass of the scalar field, only up to first order in the coupling constants. Thus, expanding the renormalized effective potential given by Eq. (52) in powers of \(\lambda_{\psi}\) and \(g\) [20], up to first order, results in the following expression: \[V_{\rm eff}^{R}\left(\Psi\right)\simeq-\frac{\pi^{2}}{90L^{4}}+\frac{2\pi^{2}}{3L^{4}}B_{4}\left(\theta\right)+\frac{\lambda_{\psi}}{4!}\Psi^{4}+\frac{\Psi^{2}}{48L^{2}}\left[\lambda_{\psi}+24gB_{2}\left(\theta\right)\right], \tag{53}\] where \(B_{4}\left(\theta\right)\) and \(B_{2}\left(\theta\right)\) are the Bernoulli polynomials defined in Eq. (37). The minimum of the potential, which corresponds to the vacuum state, is obtained as usual by taking its derivative and equating the resulting expression to zero, that is, \[\frac{\lambda_{\psi}}{6}\Psi^{3}+\frac{\Psi}{24L^{2}}\left[\lambda_{\psi}+24gB_{2}\left(\theta\right)\right]=0. \tag{54}\] The roots of Eq.
(54) represent possible vacuum states and are given by \[\Psi=0,\qquad\qquad\Psi_{\pm}=\pm\sqrt{-\frac{1}{4\lambda_{\psi}L^{2}}\left[\lambda_{\psi}+24gB_{2}\left(\theta\right)\right]}. \tag{55}\] In order to know which of the above solutions may be a physical vacuum state, one has to analyze the stability of the effective potential (53). This is achieved by means of its second derivative, i.e., \[\frac{d^{2}V_{\rm eff}^{R}\left(\Psi\right)}{d\Psi^{2}}=\frac{\lambda_{\psi}}{2}\Psi^{2}+\frac{1}{24L^{2}}\left[\lambda_{\psi}+24gB_{2}\left(\theta\right)\right]. \tag{56}\] For the vacuum state to be stable, the second derivative of the potential, evaluated at (55), must be greater than zero. In this sense, we investigate for which values of the parameter \(\theta\) of the quasiperiodic condition and of the coupling constants the stability is achieved. Let us then first consider the vacuum state \(\Psi=0\), which is the case considered previously in the analysis of the Casimir energy density, its loop correction and the topological mass. Hence, from Eq. (56), one sees that \(\Psi=0\) is stable only if the following condition is satisfied: \[\lambda_{\psi}>-24gB_{2}\left(\theta\right). \tag{57}\] As we can see, the parameter \(\theta\) plays a crucial role in determining whether or not this vacuum state is stable. Additionally, the coupling constants also have a great influence on the vacuum stability. Thus, by taking the coupling constants \(\lambda_{\psi}\) and \(g\) to be positive, and for \(B_{2}\left(\theta\right)\) also positive, the condition above is always satisfied. However, if \(B_{2}\left(\theta\right)\) is negative, the condition in Eq. (57) may be violated if \(\lambda_{\psi}<g\). In Fig. 6 we have plotted the Bernoulli polynomial \(B_{2}\left(\theta\right)\), from which we can see its positive and negative values. By using the explicit form of the Bernoulli polynomial, that is, \(B_{2}\left(\theta\right)=\theta^{2}-\theta+\frac{1}{6}\), one finds that it takes negative values for \(\theta\) in the interval \[\frac{1}{2}-\frac{\sqrt{3}}{6}<\theta<\frac{1}{2}+\frac{\sqrt{3}}{6}. \tag{58}\] Of course, the values of \(\theta\) for which \(B_{2}(\theta)\) is positive lie outside the above interval. As an example, let us consider the particular case where \(\theta=0.5\). Hence, the stability condition written in Eq. (57) becomes \[\lambda_{\psi}>2g. \tag{59}\] The above condition is in agreement with the one found in [20]. Note, however, that we are considering a complex scalar field. The topological mass, in this case, takes the following form: \[m_{T}^{2}=\frac{\left(\lambda_{\psi}-2g\right)}{24L^{2}}, \tag{60}\] which agrees with the result obtained in Eq. (40) for \(\theta=0.5\). Note also that if the interaction between the fields is not present, that is, \(g=0\), we recover the previously known result found in Refs. [23; 24]. According to Eq. (55), it is also possible to consider \(\Psi=\Psi_{\pm}\) as the vacuum states, instead of \(\Psi=0\). In this case, evaluating the second derivative of the potential in Eq. (56) at \(\Psi=\Psi_{\pm}\), and setting the result to be greater than zero, one obtains the vacuum stability condition as \[\lambda_{\psi}<-24gB_{2}\left(\theta\right). \tag{61}\] For positive coupling constants, the above condition is satisfied only if the Bernoulli polynomial \(B_{2}\left(\theta\right)\) is negative. This is in fact possible for values of \(\theta\) in the interval (58).
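The content of Eqs. (55)-(58) can be condensed into a few lines of code: given \(\lambda_{\psi}\), \(g\) and \(\theta\), the sign of the combination \(\lambda_{\psi}+24gB_{2}(\theta)\) decides which vacuum is stable. The sketch below is illustrative only, with function names of our own choosing.

```python
# Sketch of the stability logic of Eqs. (55)-(58), with L = 1.
import numpy as np

B2 = lambda t: t**2 - t + 1/6

def stable_vacuum(lam_psi, g, theta):
    A = lam_psi + 24*g*B2(theta)        # combination entering Eqs. (55)-(57)
    if A > 0:
        return 0.0                      # Eq. (57) holds: Psi = 0 is stable
    return np.sqrt(-A/(4*lam_psi))      # Eq. (55): Psi_+ (Psi_- = -Psi_+)

# B2 changes sign at theta = 1/2 -+ sqrt(3)/6, cf. Eq. (58):
print(0.5 - np.sqrt(3)/6, 0.5 + np.sqrt(3)/6)   # ~0.2113 and ~0.7887
print(stable_vacuum(1e-2, 1e-3, 0.0))           # Eq. (57) holds -> 0.0
print(stable_vacuum(1e-3, 1e-2, 0.5))           # Eq. (61) holds -> nonzero root
```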
By considering the same example as before, that is, \(\theta=0.5\), we obtain the stability condition for a twisted scalar field as \[\lambda_{\psi}<2g. \tag{62}\] Consequently, the topological mass for this case reads \[m_{T}^{2}=\frac{2g-\lambda_{\psi}}{12L^{2}}. \tag{63}\] Note that the above result for the topological mass differs from the one presented in Eq. (40) for \(\theta=0.5\). This difference arises from the fact that the stable vacuum state considered for Eq. (63), that is, \(\Psi=\Psi_{\pm}\), is not the same as the one in Eq. (40), that is, \(\Psi=0\). For the latter to be stable, as we have seen above, it is necessary to consider the restriction in Eq. (57). The result in Eq. (63) is in agreement with the one found in Ref. [23]. The Casimir energy density can also be obtained by taking \(\Psi=\Psi_{\pm}\) as the stable vacuum state. Thus, from Eq. (55), the effective potential given by Eq. (53) provides the following expression for the Casimir energy density: \[\mathcal{E}_{C} = V_{\text{eff}}^{R}\left(\Psi\right)\bigr{|}_{\Psi_{\pm}} \tag{64}\] \[\simeq -\frac{\pi^{2}}{90L^{4}}+2\frac{\pi^{2}}{3L^{4}}B_{4}\left(\theta\right)-\frac{1}{384\lambda_{\psi}L^{4}}\left[\lambda_{\psi}+24gB_{2}\left(\theta\right)\right]^{2}.\] Note that the first two terms on the r.h.s. of Eq. (64) are in agreement with the Casimir energy density presented in Eq. (38). However, the third term presents a dependence on the coupling constants \(\lambda_{\psi}\) and \(g\), which does not appear in the case where the stable vacuum state is \(\Psi=0\). Figure 6: Bernoulli polynomials \(B_{2}(\theta)\), solid line, and \(B_{4}(\theta)\), dashed line, as functions of the quasiperiodic parameter \(\theta\). It is important to point out that Eq. (64) is only an approximation, up to first order in the coupling constants, since we are taking into account the expansion of the effective potential presented in Eq. (53). The first and third terms are always negative, while the second term can be positive or negative, depending on the value of the Bernoulli polynomial \(B_{4}\left(\theta\right)\), shown in Fig. 6. In order to calculate the two-loop correction contribution to the Casimir energy density in Eq. (64), it would be necessary to consider additional Feynman graphs beyond the ones shown in Figs. 4 and 5. These additional Feynman graphs come from the second term on the r.h.s. of Eq. (24) in Ref. [23], which vanishes in the case where \(\Psi=0\) is the stable vacuum state. The consideration of the two-loop contribution would, of course, make our problem extremely difficult, so we restrict our analysis to the one-loop correction that provides the Casimir energy density in Eq. (64). From Eqs. (57) and (61), we can conclude that the stability of the vacuum states is determined by the values of the coupling constants \(\lambda_{\psi}\) and \(g\), as well as by the value of the parameter \(\theta\) of the quasiperiodic condition for the complex field. However, there is no dependence on the parameter \(L\). In the next section we consider the same system as the one considered in this section, but with the complex field now subjected to mixed boundary conditions. ## IV Periodic condition and mixed boundary conditions In this section we consider the real scalar field obeying a periodic condition as before, but now the complex scalar field is subject to mixed boundary conditions. In practice, the first-order loop correction to the effective potential associated with the real scalar field is the same as in Eq.
(27); the complex field, on the other hand, yields a different contribution since it obeys a different condition. In this case, it is sufficient to evaluate only the correction associated with the complex field. We will also assume that \(\Psi=0\) is the stable vacuum state for the analysis below, although a discussion of other possible stable vacuum states is given in Sec. IV.3. The real components of the complex field are subject to the following mixed boundary conditions, applied on the planes shown in Fig. 7 [17; 18; 36]: \[\left.\varphi_{i}\left(w\right)\right|_{z=0}=\left.\frac{\partial\varphi_{i}\left(w\right)}{\partial z}\right|_{z=L},\qquad\qquad\left.\frac{\partial\varphi_{i}\left(w\right)}{\partial z}\right|_{z=0}=\left.\varphi_{i}\left(w\right)\right|_{z=L}, \tag{65}\] where \(w=\left(\tau,x,y,z\right)\). By taking into account the boundary conditions above, the eigenvalues of the operator \(\hat{B}\) given in Eq. (7) take the form [17; 18] \[\beta_{\rho}=k_{\tau}^{2}+k_{x}^{2}+k_{y}^{2}+\left(n+\frac{1}{2}\right)^{2}\frac{\pi^{2}}{L^{2}}+M_{g}^{2},\qquad\qquad M_{g}^{2}=\mu^{2}+g\Psi^{2}, \tag{66}\] where \(n=0,1,2,...\), and the subscript \(\rho\) stands for the set of quantum numbers \(\left(k_{\tau},k_{x},k_{y},n\right)\). It is worth pointing out that, from Eq. (65), two configurations are possible on the parallel planes in Fig. 7. For the plane at \(z=0\) we can have a Dirichlet boundary condition while for the plane at \(z=L\) we can have the Neumann one. Conversely, for the plane at \(z=0\) we can have a Neumann boundary condition while for the plane at \(z=L\) we can have the Dirichlet one. However, both configurations provide the same eigenvalues in Eq. (66). Having obtained the eigenvalues in Eq. (66), we can now proceed to the investigation of the first-order correction, that is, the one-loop correction to the effective potential associated with the complex scalar field subjected to mixed boundary conditions on the planes shown in Fig. 7. ### One-loop correction The steps required to obtain the generalized zeta function for the case under consideration are similar to those presented in the previous sections and also in [18]. Therefore, we present only the main steps for the reader's convenience. Constructing the generalized zeta function with the eigenvalues presented in (66) requires the identity in Eq. (20); after the integration of the momenta, one can use the integral representation of the gamma function, Eq. (22), to find the expression \[\zeta_{\beta}\left(s\right)=\frac{\Omega_{4}\pi^{\frac{3}{2}-2s}}{8L^{4-2s}}\frac{\Gamma\left(s-\frac{3}{2}\right)}{\Gamma\left(s\right)}\sum_{n=0}^{+\infty}\left[\left(n+\frac{1}{2}\right)^{2}+\left(\frac{M_{g}L}{\pi}\right)^{2}\right]^{\frac{3}{2}-s}, \tag{67}\] where \(\Omega_{4}\) is the 4-dimensional volume written as \(\Omega_{4}=\Omega_{3}L\), with \(\Omega_{3}\) being the 3-dimensional volume associated with the Euclidean spacetime coordinates \(\tau,x,y\). In order to perform the sum in Eq. (67), we write it as a sum of two terms [13; 18], i.e., \[\sum_{n=0}^{+\infty}\left[\left(n+\frac{1}{2}\right)^{2}+\vartheta^{2}\right]^{\frac{3}{2}-s}=\frac{1}{2^{3-2s}}\left\{\sum_{n=1}^{\infty}\left[n^{2}+\left(2\vartheta\right)^{2}\right]^{\frac{3}{2}-s}-2^{3-2s}\sum_{n=1}^{\infty}\left[n^{2}+\vartheta^{2}\right]^{\frac{3}{2}-s}\right\}. \tag{68}\] Each sum on the r.h.s. of Eq.
(68) can be written in terms of the Epstein-Hurwitz zeta function [37] \[\zeta_{EH}\left(z,\kappa\right) = \sum_{n=1}^{+\infty}\left(n^{2}+\kappa^{2}\right)^{-z} \tag{69}\] \[= -\frac{\kappa^{-2z}}{2}+\frac{\pi^{\frac{1}{2}}}{2}\frac{\Gamma\left(z-\frac{1}{2}\right)}{\Gamma\left(z\right)}\kappa^{1-2z}+\frac{2^{1-z}\left(2\pi\right)^{2z-\frac{1}{2}}}{\Gamma\left(z\right)}\sum_{n=1}^{\infty}n^{2z-1}f_{\left(z-\frac{1}{2}\right)}\left(2\pi n\kappa\right).\] Hence, with the help of Eq. (69), one obtains the generalized zeta function as \[\zeta_{\beta}\left(s\right)=\frac{\Omega_{4}}{16\pi^{2}\Gamma\left(s\right)}\left\{M_{g}^{4-2s}\Gamma\left(s-2\right)+\frac{2^{s}}{L^{4-2s}}\sum_{n=1}^{\infty}n^{2s-4}\left[2^{2s-3}f_{\left(s-2\right)}\left(4nM_{g}L\right)-f_{\left(s-2\right)}\left(2nM_{g}L\right)\right]\right\}. \tag{70}\] Evaluating the above expression and its derivative in the limit \(s\to 0\), one finds the complex field contribution to the first-order loop correction to the effective potential from Eq. (10), i.e., \[V_{\beta}^{\left(1\right)}\left(\Psi\right)=\frac{M_{g}^{4}}{32\pi^{2}}\left[\ln\left(\frac{M_{g}^{2}}{\nu^{2}}\right)-\frac{3}{2}\right]-\frac{M_{g}^{4}}{\pi^{2}}\sum_{n=1}^{\infty}\left[2f_{2}\left(4nM_{g}L\right)-f_{2}\left(2nM_{g}L\right)\right]. \tag{71}\] Taking into consideration the contribution from the real field, Eq. (27), along with the above contribution from the complex field, the effective potential, up to the one-loop correction, takes the form \[V_{\rm eff}\left(\Psi\right) = \frac{m^{2}+C_{2}}{2}\Psi^{2}+\frac{\lambda_{\psi}+C_{1}}{4!}\Psi^{4}+C_{3}+ \tag{72}\] \[+\frac{M_{\lambda}^{4}}{64\pi^{2}}\left[\ln\left(\frac{M_{\lambda}^{2}}{\nu^{2}}\right)-\frac{3}{2}\right]+\frac{M_{g}^{4}}{32\pi^{2}}\left[\ln\left(\frac{M_{g}^{2}}{\nu^{2}}\right)-\frac{3}{2}\right]+\] \[-\frac{M_{\lambda}^{4}}{2\pi^{2}}\sum_{j=1}^{\infty}f_{2}\left(jM_{\lambda}L\right)-\frac{M_{g}^{4}}{\pi^{2}}\sum_{n=1}^{\infty}\left[2f_{2}\left(4nM_{g}L\right)-f_{2}\left(2nM_{g}L\right)\right].\] Knowing the effective potential expressed in Eq. (72), one needs to renormalize it. Hence, by applying the renormalization conditions given by Eqs. (11), (12) and (14), we find that the renormalization constants \(C_{i}\) are the same as the ones obtained in Eq. (33), as it should be. Figure 7: Two identical and perfectly reflecting parallel planes placed at \(z=0\) and \(z=L\), confining the field modes of a complex scalar field. On the planes, the mixed boundary conditions in Eq. (65) are applied.
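As a consistency check, the Epstein-Hurwitz resummation in Eq. (69) can be verified numerically: for \(z\) large enough that the direct sum converges, both representations must agree. A sketch, assuming again \(f_{\nu}(x)=K_{\nu}(x)/x^{\nu}\) and with illustrative function names of our own:

```python
# Numerical check of the Epstein-Hurwitz resummation, Eq. (69).
import numpy as np
from scipy.special import kv, gamma

def zeta_EH_direct(z, kappa, nmax=100000):
    n = np.arange(1, nmax + 1)
    return np.sum((n**2 + kappa**2)**(-z))

def zeta_EH_resummed(z, kappa, nmax=50):
    n = np.arange(1, nmax + 1)
    x = 2*np.pi*n*kappa
    tail = np.sum(n**(2*z - 1) * kv(z - 0.5, x) / x**(z - 0.5))  # f_{z-1/2}(2*pi*n*kappa)
    return (-kappa**(-2*z)/2
            + np.sqrt(np.pi)/2 * gamma(z - 0.5)/gamma(z) * kappa**(1 - 2*z)
            + 2**(1 - z) * (2*np.pi)**(2*z - 0.5) / gamma(z) * tail)

print(zeta_EH_direct(2.0, 0.7))     # ~0.518
print(zeta_EH_resummed(2.0, 0.7))   # same value, from far fewer terms
```

The resummed form converges exponentially fast, which is precisely why it is the useful representation for the effective potential.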
After substituting these constants \(C_{i}\) into the effective potential (72), one can write the renormalized effective potential as \[V_{\rm eff}^{R}\left(\Psi\right) = \frac{m^{2}}{2}\Psi^{2}+\frac{\lambda_{\psi}}{4!}\Psi^{4}+\frac{\mu^{4}}{32\pi^{2}}\ln\left(\frac{M_{g}^{2}}{\mu^{2}}\right)+\frac{m^{4}}{64\pi^{2}}\ln\left(\frac{M_{\lambda}^{2}}{m^{2}}\right)+ \tag{73}\] \[+\frac{g\mu^{2}\Psi^{2}}{16\pi^{2}}\left[\ln\left(\frac{M_{g}^{2}}{\mu^{2}}\right)-\frac{1}{2}\right]+\frac{\lambda_{\psi}^{2}\Psi^{4}}{256\pi^{2}}\left[\ln\left(\frac{M_{\lambda}^{2}}{m^{2}}\right)-\frac{3}{2}\right]+\] \[+\frac{\lambda_{\psi}m^{2}\Psi^{2}}{64\pi^{2}}\left[\ln\left(\frac{M_{\lambda}^{2}}{m^{2}}\right)-\frac{1}{2}\right]+\frac{g^{2}\Psi^{4}}{32\pi^{2}}\left[\ln\left(\frac{M_{g}^{2}}{\mu^{2}}\right)-\frac{3}{2}\right]+\] \[-\frac{M_{\lambda}^{4}}{2\pi^{2}}\sum_{j=1}^{\infty}f_{2}\left(jM_{\lambda}L\right)-\frac{M_{g}^{4}}{\pi^{2}}\sum_{n=1}^{\infty}\left[2f_{2}\left(4nM_{g}L\right)-f_{2}\left(2nM_{g}L\right)\right].\] Once the renormalized effective potential in Eq. (73) has been obtained, the Casimir energy density follows in a straightforward way by setting \(\Psi=0\), i.e., \[\mathcal{E}_{\rm C}=\left.V_{\rm eff}^{R}\left(\Psi\right)\right|_{\Psi=0}=-\frac{m^{4}}{2\pi^{2}}\sum_{j=1}^{\infty}f_{2}\left(jmL\right)-\frac{\mu^{4}}{\pi^{2}}\sum_{n=1}^{\infty}\left[2f_{2}\left(4n\mu L\right)-f_{2}\left(2n\mu L\right)\right]. \tag{74}\] The first term on the r.h.s. of Eq. (74) is the contribution from the real field, which is equal to the one in Eq. (35), as it should be, since the condition applied to the real field is the same. However, the second term on the r.h.s. of Eq. (74) is the contribution from the complex field and differs from the quasiperiodic case presented in Eq. (35). This contribution is consistent with the result shown in Ref. [17], where the authors considered a self-interacting real scalar field. The massless scalar field case is obtained by taking the limit for small arguments of the Macdonald function [33]. This yields the following Casimir energy density: \[\mathcal{E}_{\rm C} = -\frac{\pi^{2}}{90L^{4}}\left[1-\frac{7}{64}\right] \tag{75}\] \[= \frac{57}{64}\times\left(-\frac{\pi^{2}}{90L^{4}}\right),\] where we can see that the effect of the interaction with the complex field subjected to mixed boundary conditions is to increase the Casimir energy density of the real scalar field under a periodic condition. On the left side of Fig. 8 we have plotted the Casimir energy density in Eq. (74) as a function of \(mL\), taking \(m=\mu\). The plot shows how the curve for the free real scalar field (black solid line) differs from the curve obtained when the influence of the interaction is considered (blue dotted line). In fact, the interaction increases the value of the Casimir energy density, as shown by the curves. This plot also shows that the Casimir energy density goes to zero for large values of \(mL\). This is a consequence of the exponentially suppressed behavior of the Macdonald function for large arguments [33]. Also, the two curves approach their corresponding constant massless-field values at \(mL=0\), as can be checked from Eq. (75). Let us now investigate how the topological mass associated with the real field changes under the influence of mixed boundary conditions imposed on the complex field. Thus, applying the renormalization condition (12), with the renormalized effective potential in Eq.
(73), provides the topological mass written as \[m_{T}^{2}=m^{2}\left\{1+\frac{\lambda_{\psi}}{4\pi^{2}}\sum_{j=1}^{\infty}f_{1}\left(jmL\right)+\frac{\mu^{2}}{m^{2}}\frac{g}{\pi^{2}}\sum_{n=1}^{\infty}\left[2f_{1}\left(4n\mu L\right)-f_{1}\left(2n\mu L\right)\right]\right\}. \tag{76}\] Of course, the difference between the above result and the topological mass found in Eq. (39) lies in the third term on the r.h.s. of Eq. (76). Furthermore, by considering the massless scalar fields case, \(m,\mu\to 0\), one obtains the topological mass as follows: \[m_{T}^{2}=\frac{2\lambda_{\psi}-g}{48L^{2}}. \tag{77}\] Note that the topological mass above coincides with the particular cases of the quasiperiodic condition in Eq. (40) for \(\theta=1/4,3/4\), which are in the range of Eq. (58). Similarly to the discussion presented in the previous section, here the topological mass squared can also become negative, depending on whether \(\lambda_{\psi}\) is bigger or smaller than \(g\). For instance, if \(2\lambda_{\psi}<g\), Eq. (77) becomes negative, indicating vacuum instability. Again, had we considered a complex scalar field theory with only self-interaction (no interaction between the fields), this would be a problem, since it does not make sense to consider a constant complex field, \(\Phi_{i}\neq 0\), compatible with mixed boundary conditions. This problem is solved by taking into account an interacting theory such as the one considered in the present section (see also [20]). Within this theory it is possible to study the vacuum stability, which here is done in Sec. IV.3 for massless scalar fields. The analysis indicates that the vacuum \(\Psi=0\) is stable only if \(2\lambda_{\psi}>g\); otherwise it is necessary to consider the two other possible vacuum states, \(\Psi_{\pm}\), in Eq. (86). In Fig. 9 we have plotted the dimensionless mass squared, \(M^{2}(mL)=m_{T}^{2}L^{2}\), defined from Eq. (76), as a function of \(mL\), taking \(\mu=m\). On the left of Fig. 9, the plot shows the curves for \(\lambda_{\psi}=10^{-2}\) and \(g=10^{-3}\), which satisfy the condition \(2\lambda_{\psi}>g\) in order for \(\Psi=0\) to be a stable vacuum. In contrast, the plot on the right shows the curves for \(\lambda_{\psi}=10^{-3}\) and \(g=10^{-2}\), which satisfy the condition \(2\lambda_{\psi}<g\). In this case, Eq. (77) becomes negative, showing that \(\Psi=0\) is in fact an unstable vacuum state. Note that in an interacting massive scalar field theory \(\Psi=0\) may still be a stable vacuum state for large values of \(mL\), even if \(2\lambda_{\psi}<g\), as shown in the plot on the right side. Note also that each curve, at \(mL=0\), ends at its corresponding constant massless-field value of the topological mass in Eq. (77). For large values of \(mL\), the Macdonald function is exponentially suppressed and the curves are dominated by the first term on the r.h.s. of Eq. (76). The one-loop correction analysis is now done, so one can proceed to the two-loop correction contribution, still considering \(\Psi=0\) as the stable vacuum state. As we now know, this means that we have to consider the restriction \(2\lambda_{\psi}>g\). We postpone the vacuum stability analysis for the present case until Sec. IV.3. Figure 9: Plot of the dimensionless topological mass squared, \(M^{2}(mL)=m_{T}^{2}L^{2}\), defined from Eq. (76), as a function of \(mL\), taking \(\mu=m\).
On the left, the plot shows the curves for \(\lambda_{\psi}=10^{-2}\) and \(g=10^{-3}\), while on the right the plot shows the curves for \(\lambda_{\psi}=10^{-3}\) and \(g=10^{-2}\). ### Two-loop correction We now wish to evaluate the two-loop correction to the effective potential. We use the same graphs as the ones used in the case of the quasiperiodic condition, in Figs. 4 and 5, and also a notation similar to the one used in Sec. III.2. The first correction comes from the self-interaction term of the real scalar field, that is, \(\frac{\lambda_{\psi}}{4!}\psi^{4}\). Since we are interested in the vacuum state \(\Psi=0\), the only non-vanishing contribution is the same as the one obtained in Eq. (44). The second contribution comes from the self-interaction of the complex scalar field, i.e., \(\frac{\lambda_{\varphi}}{4!}\varphi_{i}^{4}\). For the case under consideration, this contribution reads \[V_{\lambda_{\varphi}}^{(2)}\left(0\right)=2\frac{\lambda_{\varphi}}{8}\left[\frac{\zeta_{\beta}^{R}\left(1\right)}{\Omega_{4}}\right]^{2}\Bigg{|}_{\Psi=0}=2\frac{\lambda_{\varphi}\mu^{4}}{32\pi^{4}}\left\{\sum_{n=1}^{\infty}\left[2f_{1}\left(4n\mu L\right)-f_{1}\left(2n\mu L\right)\right]\right\}^{2}. \tag{78}\] Note that, in the above expression, \(\zeta_{\beta}^{R}\left(1\right)\) stands for the non-divergent part of \(\zeta_{\beta}\left(s\right)\), given by Eq. (70), at \(s=1\), and the factor of two in front of the constant \(\lambda_{\varphi}\) is to remind ourselves that we are taking into account the two components of the complex field. Next we obtain the contribution from the interaction between the fields, that is, from the term \(\frac{g}{2}\varphi_{i}^{2}\psi^{2}\). Hence, this term yields the following correction to the effective potential: \[V_{g}^{(2)}\left(0\right)=2\frac{gm^{2}\mu^{2}}{8\pi^{4}}\left[\sum_{j=1}^{\infty}f_{1}\left(jmL\right)\right]\left\{\sum_{n=1}^{\infty}\left[2f_{1}\left(4n\mu L\right)-f_{1}\left(2n\mu L\right)\right]\right\}. \tag{79}\] The last correction to the effective potential comes from the interaction between the real components of the complex field (also a self-interaction), that is, from the term \(\frac{\lambda_{\varphi}}{4!}2\varphi_{1}^{2}\varphi_{2}^{2}\). Thus, one is able to write it as \[V_{2\lambda_{\varphi}}^{(2)}\left(0\right)=\frac{\lambda_{\varphi}\mu^{4}}{48\pi^{4}}\left\{\sum_{n=1}^{\infty}\left[2f_{1}\left(4n\mu L\right)-f_{1}\left(2n\mu L\right)\right]\right\}^{2}. \tag{80}\] Therefore, from the results obtained in Eqs. (44), (78), (79) and (80), we may write the correction to the Casimir energy density, up to second order, that is, up to the two-loop correction, as follows: \[\Delta\mathcal{E}_{\mathrm{C}} = V^{(2)}\left(\Psi\right)\biggr{|}_{\Psi=0} \tag{81}\] \[= \frac{\lambda_{\psi}m^{4}}{32\pi^{4}}\left[\sum_{j=1}^{\infty}f_{1}\left(jmL\right)\right]^{2}+\frac{\lambda_{\varphi}\mu^{4}}{12\pi^{4}}\left\{\sum_{n=1}^{\infty}\left[2f_{1}\left(4n\mu L\right)-f_{1}\left(2n\mu L\right)\right]\right\}^{2}+\] \[+\frac{gm^{2}\mu^{2}}{4\pi^{4}}\left[\sum_{j=1}^{\infty}f_{1}\left(jmL\right)\right]\left\{\sum_{n=1}^{\infty}\left[2f_{1}\left(4n\mu L\right)-f_{1}\left(2n\mu L\right)\right]\right\}.\] Note that the expression above is proportional to the coupling constants \(\lambda_{\psi}\), \(\lambda_{\varphi}\) and \(g\), representing the self-interaction of each field and also the interaction between the fields. Moreover, from the correction to the Casimir energy density presented in Eq.
(81), one can consider the massless scalar fields limit, \(m,\mu\to 0\). Recalling the limit of small arguments for the Macdonald function, i.e., \(K_{\mu}\left(x\right)\simeq\frac{\Gamma\left(\mu\right)}{2}\left(\frac{2}{x}\right)^{\mu}\) [33], one finds the correction in Eq. (81) for the massless fields in the form \[\Delta\mathcal{E}_{\mathrm{C}}=\frac{\lambda_{\psi}}{1152L^{4}}+\frac{\lambda_{\varphi}}{27648L^{4}}-\frac{g}{1152L^{4}}. \tag{82}\] As we can see, the corrections proportional to the coupling constants \(\lambda_{\psi}\) and \(\lambda_{\varphi}\), which come from the self-interaction of the fields, increase the Casimir energy density in Eq. (75), while the term coming from the interaction between the fields, encoded by the coupling constant \(g\), has the effect of decreasing the Casimir energy density. Note that the contribution proportional to \(\lambda_{\varphi}\) present in Eq. (82) is not the same as the one obtained in Ref. [17] for the self-interacting real scalar field. In fact, our result for the second term on the r.h.s. of Eq. (82) is \(8/3\) times bigger than the one obtained in Ref. [17]. This is due to the fact that, besides the contribution in Eq. (78), we also have an additional contribution proportional to \(\lambda_{\varphi}\), coming from the interaction between the components of the complex field in Eq. (80). The same is valid for the massive contribution in the second term on the r.h.s. of Eq. (81). Note also that, in order to compare our results with the ones presented in Ref. [17], we need to define the Casimir energy correction, \(\Delta E_{C}\), per unit area, \(A\), of the planes as \(\frac{\Delta E_{C}}{A}=L\Delta\mathcal{E}_{C}\). In Fig. 8, the plot on the right shows the influence of the complex field, under mixed boundary conditions, on the correction (81) to the Casimir energy density of a massive real scalar field. The expression in Eq. (81) has been plotted as a function of \(mL\), taking \(\mu=m\). We have also considered \(\lambda_{\psi}=10^{-2}\), \(\lambda_{\varphi}=10^{-2}\) and \(g=10^{-3}\). The black solid line is the correction free of interaction with the complex field, including only the effect of the real field self-interaction, while the blue dotted line is the correction (81) taking into account the interaction with the complex field subjected to mixed boundary conditions. The effect of the latter is to increase the correction, as revealed by the plot in Fig. 8. Note that the two curves tend to their corresponding constant massless-field values at \(mL=0\), as can be checked from Eq. (82). Also, in the regime \(mL\gg 1\), the correction in Eq. (81) goes to zero. This is once again a consequence of the exponentially suppressed behavior of the Macdonald function for large arguments [33]. Next, we shall analyze the vacuum stability of the theory, since the state \(\Psi=0\) is not the only possible vacuum state, as we have already mentioned. For simplicity, we shall consider a massless scalar field theory.
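The closed-form massless results of this section are easy to confirm numerically from the truncated Macdonald sums, with a small \(mL\) playing the role of the massless limit. A sketch, again assuming \(f_{\nu}(x)=K_{\nu}(x)/x^{\nu}\), with \(\mu=m\) and \(L=1\), checking the \(57/64\) factor of Eq. (75) and the topological mass of Eq. (77):

```python
# Sketch: massless limits of Eqs. (74) and (76) vs the closed forms (75), (77).
import numpy as np
from scipy.special import kv

f = lambda nu, x: kv(nu, x) / x**nu
n = np.arange(1, 5001)
mL = 1e-3

# Eq. (74), in units of -pi^2/(90 L^4):
S_per   = np.sum(f(2, n*mL))                         # periodic real field
S_mixed = np.sum(2*f(2, 4*n*mL) - f(2, 2*n*mL))      # mixed-BC complex field
ratio = (mL**4/(2*np.pi**2)*S_per + mL**4/np.pi**2*S_mixed) / (np.pi**2/90)
print(ratio, 57/64)                                  # cf. Eq. (75)

# Eq. (76) with lambda_psi = 1e-2, g = 1e-3:
lam_psi, g = 1e-2, 1e-3
M2 = mL**2 * (1 + lam_psi/(4*np.pi**2)*np.sum(f(1, n*mL))
              + g/np.pi**2*np.sum(2*f(1, 4*n*mL) - f(1, 2*n*mL)))
print(M2, (2*lam_psi - g)/48)                        # cf. Eq. (77)
```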
### Vacuum stability Let us analyze here the stability of the possible vacuum states associated with the effective potential, up to first-order loop correction, of the theory described by the action in Eq. (1). For simplicity, we consider the case where the fields are massless, i.e., \(\mu,m\to 0\). It is important to point out again that, for the complex scalar field obeying the boundary conditions in Eq. (65), the only constant field that can satisfy such conditions is the zero field; hence, we set \(\Phi_{i}=0\). As mentioned before, this fact also turns the approximation discussed below Eq. (2) into an exact expression, namely, the one in Eq. (5), which does not consider cross terms. By following the same steps as the ones used to obtain Eq. (32), the nonrenormalized effective potential for the massless scalar fields case is written as \[V_{\text{eff}}\left(\Psi\right)=\frac{\lambda_{\psi}+C}{4!}\Psi^{4}+\frac{\lambda_{\psi}^{2}\Psi^{4}}{256\pi^{2}}\left[\ln\left(\frac{\lambda_{\psi}\Psi^{2}}{2\nu^{2}}\right)-\frac{3}{2}\right]+\frac{g^{2}\Psi^{4}}{32\pi^{2}}\left[\ln\left(\frac{g\Psi^{2}}{\nu^{2}}\right)-\frac{3}{2}\right]+\] \[-\left(\frac{\lambda_{\psi}}{2}\right)^{2}\frac{\Psi^{4}}{2\pi^{2}}\sum_{j=1}^{\infty}f_{2}\left(j\sqrt{\frac{\lambda_{\psi}}{2}\Psi^{2}}L\right)-\frac{g^{2}\Psi^{4}}{\pi^{2}}\sum_{n=1}^{\infty}\left[2f_{2}\left(4n\sqrt{g\Psi^{2}}L\right)-f_{2}\left(2n\sqrt{g\Psi^{2}}L\right)\right]. \tag{83}\] Now it is required to renormalize the effective potential in Eq. (83). In this sense, by applying the renormalization condition given by Eq. (11), one finds that the renormalization constant \(C\) is the same as the one in Eq. (51). Thus, substituting this renormalization constant into the effective potential presented in Eq. (83) yields the renormalized effective potential, i.e., \[V_{\text{eff}}^{R}\left(\Psi\right)=\frac{\lambda_{\psi}}{4!}\Psi^{4}-\left[\frac{\lambda_{\psi}^{2}}{8}+g^{2}\right]\frac{25\Psi^{4}}{192\pi^{2}}+\left[\frac{\lambda_{\psi}^{2}}{8}+g^{2}\right]\frac{\Psi^{4}}{32\pi^{2}}\ln\left(\frac{\Psi^{2}}{M^{2}}\right)+\] \[-\left(\frac{\lambda_{\psi}}{2}\right)^{2}\frac{\Psi^{4}}{2\pi^{2}}\sum_{j=1}^{\infty}f_{2}\left(j\sqrt{\frac{\lambda_{\psi}}{2}\Psi^{2}}L\right)-\frac{g^{2}\Psi^{4}}{\pi^{2}}\sum_{n=1}^{\infty}\left[2f_{2}\left(4n\sqrt{g\Psi^{2}}L\right)-f_{2}\left(2n\sqrt{g\Psi^{2}}L\right)\right]. \tag{84}\] In order to analyze the vacuum stability, the renormalized effective potential (84) can be expanded in terms of the coupling constants \(\lambda_{\psi}\) and \(g\), keeping terms only up to first order. This results in the following expression: \[V_{\text{eff}}^{R}\left(\Psi\right)\simeq-\frac{19\pi^{2}}{1920L^{4}}+\frac{\lambda_{\psi}\Psi^{4}}{4!}+\frac{\lambda_{\psi}\Psi^{2}}{48L^{2}}-\frac{g\Psi^{2}}{96L^{2}}, \tag{85}\] where the constant term is the massless Casimir energy density of Eq. (75), \(-\frac{19\pi^{2}}{1920L^{4}}=\frac{57}{64}\times\left(-\frac{\pi^{2}}{90L^{4}}\right)\). The possible vacuum states are obtained as the values of \(\Psi\) corresponding to the minima of the expanded effective potential in Eq. (85). Therefore, taking the derivative of the effective potential in Eq. (85) with respect to \(\Psi\) and equating it to zero gives the following values of \(\Psi\), which correspond to the possible vacuum states: \[\Psi=0,\qquad\qquad\Psi_{\pm}=\pm\sqrt{\frac{g-2\lambda_{\psi}}{8\lambda_{\psi}L^{2}}}. \tag{86}\] Whether the vacuum states presented in Eq. (86) are stable or not is decided by the second derivative of the expanded effective potential given in Eq. (85). Let us first consider the vacuum state \(\Psi=0\). Then, by taking the second derivative of the expanded potential in Eq. (85), evaluated at \(\Psi=0\), one finds that the condition for vacuum stability reads \[2\lambda_{\psi}>g. \tag{87}\] For this vacuum state the Casimir energy density is given by the same expression as the one in Eq. (75). Moreover, it is straightforward to see that the topological mass also does not change, that is, it gives the same result as the one in Eq. (77). From Eq. (86), on the other hand, one can also consider the vacuum state as \(\Psi=\Psi_{\pm}\).
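For the nontrivial vacuum, a direct numerical minimization of the expanded potential, Eq. (85), reproduces the closed-form root \(\Psi_{+}\) of Eq. (86). A small illustrative sketch (the function name `V` is ours):

```python
# Sketch: minimize Eq. (85) (constant term dropped) and compare with Eq. (86).
import numpy as np
from scipy.optimize import minimize_scalar

def V(Psi, lam_psi, g, L=1.0):
    # Psi-dependent part of Eq. (85); note 4! = 24
    return lam_psi*Psi**4/24 + lam_psi*Psi**2/(48*L**2) - g*Psi**2/(96*L**2)

lam_psi, g = 1e-3, 1e-2                       # g > 2*lam_psi, cf. Eq. (88)
res = minimize_scalar(V, bounds=(0, 10), args=(lam_psi, g), method='bounded')
print(res.x)                                  # numerical minimum, ~1.0 here
print(np.sqrt((g - 2*lam_psi)/(8*lam_psi)))   # Psi_+ from Eq. (86), = 1.0 here
```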
By evaluating the second derivative of the expanded effective potential at the vacuum states \(\Psi=\Psi_{\pm}\), we learn that the stability condition reads \[g>2\lambda_{\psi}. \tag{88}\] Hence, the topological mass for the case under consideration takes the following form: \[m_{T}^{2}=\frac{g-2\lambda_{\psi}}{24L^{2}}, \tag{89}\] which is also a consistent quantity, since it is strictly positive, in accordance with Eq. (88). Besides, the Casimir energy density, considering \(\Psi=\Psi_{\pm}\) as the vacuum states, is obtained by substituting \(\Psi_{\pm}\) into the expanded effective potential in Eq. (85), which provides the Casimir energy density \[\mathcal{E}_{C} = V_{\rm eff}^{R}\left(\Psi\right)\bigr{|}_{\Psi_{\pm}} \tag{90}\] \[\simeq -\frac{19\pi^{2}}{1920L^{4}}-\frac{\left(g-2\lambda_{\psi}\right)^{2}}{1536\lambda_{\psi}L^{4}}.\] We emphasize that the above result is an approximation, since we are considering the expansion of the effective potential in Eq. (85) up to first order in the coupling constants. Note that the first term on the r.h.s. of Eq. (90) is the Casimir energy density obtained in Eq. (75), while the second term brings coupling-constant corrections. The discussion presented at the end of Sec. III.3 also applies here. That is, the calculation of the two-loop correction contribution to the Casimir energy density in Eq. (90) requires additional Feynman graphs beyond the ones shown in Figs. 4 and 5. The consideration of the two-loop contribution would, of course, make our problem extremely difficult, so we restrict our analysis to the one-loop correction that provides the Casimir energy density in Eq. (90). From the results presented in Eqs. (87) and (88), one can conclude that the stable vacuum state is determined by the values of the coupling constants \(\lambda_{\psi}\) and \(g\), and not by the value of the parameter \(L\) characterizing the boundary condition. ## V Concluding remarks Loop corrections to the Casimir effect and the generation of topological mass have been investigated. Both the Casimir energy density and the topological mass arise from the nontrivial topology of the Minkowski spacetime, which takes place in the form of periodic and quasiperiodic conditions. These physical quantities also arise from the mixed boundary conditions considered. More specifically, the system that has been taken into consideration consists of real and complex scalar fields interacting by means of a quartic interaction, in addition to the self-interactions of the fields. The real scalar field has been subjected to a periodic boundary condition, while the complex scalar field has been assumed to satisfy a quasiperiodic condition, and also mixed boundary conditions. The Casimir energy density, up to the one-loop correction to the effective potential, has been obtained in Eq. (35) for massive fields and in Eq. (36) for massless fields, considering the case where the complex field obeys a quasiperiodic condition. In this context, the topological mass has also been obtained, in Eqs. (39) and (40) for the massive and massless cases, respectively. The two-loop correction contribution to the Casimir energy density, considering both the massive and massless field cases, has been presented, respectively, in Eqs. (48) and (49), and turns out to be proportional to the coupling constants \(\lambda_{\psi}\), \(\lambda_{\varphi}\) and \(g\). Moreover, the possible stable vacuum states and the stability conditions for such states have also been investigated.
These vacuum states have been presented in Eq. (55) and the corresponding stability conditions expressed in Eqs. (57) and (61), which depend on the values of the coupling constants \(\lambda_{\psi}\), \(g\) and on the parameter \(\theta\) of the quasiperiodic condition. Furthermore, by assuming that the complex field satisfies mixed boundary conditions, the Casimir energy density, up to the one-loop correction to the effective potential, for both the massive and massless field cases, has been presented in Eqs. (74) and (75), respectively. The topological mass analysis for such a system has been performed as well, and the results are given by Eqs. (76) and (77) for massive and massless fields, respectively. In this case, the two-loop correction contribution to the Casimir energy density has also been presented, in Eqs. (81) and (82) for the massive and massless cases, respectively. The investigation of vacuum stability has determined the possible vacuum states and the condition to achieve stability in each case. From the results presented in Eqs. (87) and (88), we can conclude that the stable vacuum is determined by the values of the coupling constants \(\lambda_{\psi}\) and \(g\), and not by the value of the parameter \(L\) of the boundary condition. Therefore, by extending the analysis performed in Ref. [20] to the complex field and considering other conditions, we have also generalized the results found in Refs. [23; 24; 17; 25; 18] for a self-interacting real scalar field theory. ###### Acknowledgements. A.J.D.F.J would like to thank the Brazilian agency Coordination for the Improvement of Higher Education Personnel (CAPES) for financial support. The author H.F.S.M. is partially supported by the Brazilian agency National Council for Scientific and Technological Development (CNPq) under grant No 311031/2020-0.
2309.15504
Embedding simply connected 2-complexes in 3-space
Firstly, we characterise the embeddability of simply connected locally 3-connected 2-dimensional simplicial complexes in 3-space in a way analogous to Kuratowski's characterisation of graph planarity, by nine excluded minors. This answers questions of Lov\'asz, Pardon and U. Wagner. The excluded minors are the cones over $K_5$ and $K_{3,3}$, five related constructions, and the remaining two are obtained from triangulations of the M\"obius strip by attaching a disc at its central cycle. Secondly, we extend the above theorem to all simply connected 2-dimensional simplicial complexes.
Johannes Carmesin
2023-09-27T09:11:08Z
http://arxiv.org/abs/2309.15504v1
# Embedding simply connected 2-complexes in 3-space ###### Abstract Firstly, we characterise the embeddability of simply connected locally 3-connected 2-dimensional simplicial complexes in 3-space in a way analogous to Kuratowski's characterisation of graph planarity, by nine excluded minors. This answers questions of Lovasz, Pardon and U. Wagner. The excluded minors are the cones over \(K_{5}\) and \(K_{3,3}\), five related constructions, and the remaining two are obtained from triangulations of the Mobius strip by attaching a disc at its central cycle. Secondly, we extend the above theorem to all simply connected 2-dimensional simplicial complexes. ## Introduction In 1930, Kuratowski proved that a graph can be embedded in the plane if and only if it has none of the two non-planar graphs \(K_{5}\) or \(K_{3,3}\) as a minor1[21]. The main result of this paper may be regarded as a 3-dimensional analogue of this theorem. Footnote 1: A _minor_ of a graph is obtained by deleting or contracting edges. Kuratowski’s theorem is stated in terms of subdivisions but here we refer to it in the modern version in terms of minors; both statements are trivially equivalent. The perspective in terms of the minor operation is due to K. Wagner, who used it to prove the equivalence between the 4 Colour Theorem and Hadwiger’s Conjecture [16] for 4 colours; and thus founded graph minor theory [37]. Kuratowski's theorem gives a way how embeddings in the plane could be understood through the minor relation. A far reaching extension of Kuratowski's theorem is the Robertson-Seymour theorem [34]. Any minor-closed class of graphs is characterised by the list of minor-minimal graphs not in the class. This theorem says that this list always must be finite. The methods developed to prove this theorem are nowadays used in many results in the area of structural graph theory [11] - and beyond; recently Geelen, Gerards and Whittle extended the Robertson-Seymour theorem to representable matroids by proving Rota's conjecture [13]. Very roughly, the Robertson-Seymour structure theorem establishes a correspondence between minor closed classes of graphs and classes of graphs almost embeddable in 2-dimensional surfaces.
2309.05717
Role of ionizing background on the non-thermal broadening inferred for the aligned absorbers
Using cosmological hydrodynamical simulations at $z\sim0.5$, we measure the thermal ($b_{t}$) and non-thermal ($b_{nt}$) contribution to the line broadening for the intergalactic absorbers having \OVI\ and \HI\ absorption well aligned in the velocity space. We find that the inferred temperature based on $b_{t}$ correlates strongly with the optical depth-weighted kinetic temperature of the absorbing gas, albeit with a large scatter. We show this scatter comes from the spread in the kinetic temperature of the gas contributing to the absorption and hence depends on the feedback processes and the ionizing UV background (UVB) used in the simulations. We show the distribution of $b_{nt}$ is also affected by both feedback processes and the ionizing UVB. Therefore, $b_{nt}$ derived using aligned absorbers may not be a good probe of sub-grid turbulence, nor a good discriminator between the effect of microscopic turbulence and the UVB. Instead, the distribution of $b_{t}$ and $b_{nt}$ together with the frequency of occurrence of the aligned absorbers can be used to place additional constraints on the parameters of the simulation for a given assumed UVB.
Sukanya Mallik, Raghunathan Srianand
2023-09-11T18:00:08Z
http://arxiv.org/abs/2309.05717v2
# Role of ionizing background on the non-thermal broadening inferred for the aligned absorbers

###### Abstract

Using cosmological hydrodynamical simulations at \(z\sim 0.5\), we identify the aligned absorbers and measure the thermal (\(b_{t}\)) and non-thermal (\(b_{nt}\)) contribution to the line broadening using O vi and H i absorption lines. We find that the inferred temperature based on \(b_{t}\) correlates strongly with the optical depth-weighted kinetic temperature of the absorbing gas, albeit with a large scatter. We show this scatter comes from the spread in the kinetic temperature of the gas contributing to the absorption and hence depends on the feedback processes and the ionizing UV background (UVB) used in the simulations. We show the distribution of \(b_{nt}\) is also affected by both feedback processes and the ionizing UVB. Therefore, \(b_{nt}\) derived using aligned absorbers may not be a good probe of sub-grid turbulence. Instead, the distribution of \(b_{t}\) and \(b_{nt}\) together with the frequency of occurrence of the aligned absorbers can be used to place additional constraints on the parameters of the simulation for a given assumed UVB.

keywords: Cosmology: large-scale structure of Universe - Cosmology: diffuse radiation - Galaxies: intergalactic medium - Galaxies: quasars : absorption lines

## 1 Introduction

The Ly\(\alpha\) and metal absorption lines detected in the spectra of distant quasars are frequently used to probe the physics of the intervening medium. Heavier elements (metals) are believed to originate in stars and are transported into the low-density inter-galactic and circum-galactic medium (IGM and CGM, respectively) via different feedback processes. The detectability of the metals depends on their covering fraction and the thermal and ionization state of the absorbing medium, which are governed by the spectra of the ionizing Ultra-Violet background (UVB). Different feedback mechanisms, such as kinetic and thermal contributions due to stellar and AGN feedback, affect the metal distribution and the physical conditions in the intervening medium. Many cosmological hydrodynamical simulations incorporating a varied range of feedback prescriptions (see for example, Oppenheimer & Dave, 2009; Tepper-Garcia et al., 2011; Oppenheimer et al., 2012; Tepper-Garcia et al., 2013; Rahmani et al., 2016; Nelson et al., 2018; Bradley et al., 2022; Mallik et al., 2023; Khaire et al., 2023) are used to understand the influence of feedback processes on statistics of the Ly\(\alpha\) and metal absorbers, like the column density distribution function (hereafter CDDF), Doppler parameter distribution, line-width distribution, and the association of metal absorbers with Ly\(\alpha\). The spectra of the UVB at high redshifts cannot be directly observed and are computed using the spectral energy distribution (SED) and volume densities of quasars and galaxies after taking into account cosmological radiative transport (see Haardt & Madau, 1996; Faucher-Giguere et al., 2009; Haardt & Madau, 2012; Khaire & Srianand, 2019; Faucher-Giguere, 2020). The UVB spectra computed in the literature are uncertain in the extreme UV to soft X-ray energies because of the absence of direct measurements of the quasar SED due to attenuation by the Milky Way's interstellar medium.
The effect of uncertainties in the UVB on the estimated physical parameters like density, temperature, metallicity, elemental abundance, and ionization state of the absorbers has been demonstrated in many previous works (see for example, Schaye et al., 2003; Simcoe et al., 2004; Aguirre et al., 2004, 2008; Howk et al., 2009; Simcoe, 2011; Fechner, 2011; Hussain et al., 2017; Haislmaier et al., 2021; Acharya & Khaire, 2022). Very few works, e.g., Oppenheimer & Dave (2009) and more recently Appleby et al. (2021) and Mallik et al. (2023), discuss the effect of uncertainties in the UVB on the statistics of observed and simulated metal absorbers. Mallik et al. (2023) (hereafter Paper I) have shown that the CDDF, the cumulative distribution function (CDF) of the Doppler parameter (b) and system width (\(\Delta V_{90}\)), and the metal association statistics of Ly\(\alpha\) are affected by the feedback models as well as the UVB. More importantly, it is found that the effect of uncertainty in the UVB on different absorber statistics depends on the feedback used in the simulation, indicating a possible degeneracy in constraining the feedback mechanisms using these statistics.

The kinetic feedback due to SNe explosions not only influences the metal enrichment in lower-density regions but is also a major source of turbulence in the interstellar and intergalactic medium (MacLow, 2004; Evoli & Ferrara, 2011). While stellar feedback (star formation, stellar winds, and SNe explosions) dominates the turbulent energy input at the scale of supernova remnants (10-100 pc) (see for example McKee & Ostriker, 1977; Norman & Ferrara, 1996; Dib et al., 2006; de Avillez & Breitschwerdt, 2007; Joung et al., 2009; Lu et al., 2020), intermediate and large-scale turbulence also originates due to gravitational (Schaye, 2004; Fensch et al., 2023; Forbes et al., 2023) and magneto-rotational instabilities (Fleck, 1981; Sellwood & Balbus, 1999; Beck, 2015). At small scales (\(<\)1 pc), plasma instabilities driven by cosmic rays lead to cosmic ray acceleration and magnetic field amplification. One of the manifestations of the resulting turbulent motion is an additional source of line broadening apart from the thermal contribution (Armstrong et al., 1995; Chepurnov & Lazarian, 2010; Lazarian et al., 2023). Such small-scale turbulence is usually not captured well in cosmological simulations due to insufficient spatial resolution. A hint of a turbulent energy contribution in the form of non-thermal line broadening comes from the fact that the median Doppler parameter found for the low-\(z\) Ly\(\alpha\) forest in simulations is much lower than that found in the observations (Meiksin et al., 2001; Viel et al., 2017; Nasir et al., 2017). This is sometimes compensated by introducing turbulent broadening as subgrid physics in simulations (Oppenheimer & Dave, 2009; Turner et al., 2016; Gaikwad et al., 2017; Maitra et al., 2020; Bolton et al., 2022). Non-thermal line broadening is also introduced by the Hubble flow, local kinematics, the approximation of blended lines with single/multiple Voigt profiles, etc. The Hubble flow may not be a dominant source, as the length of the absorbing medium is, at most, a few hundred kpc (as discussed for low redshift absorbers in Richter et al., 2006; Prause et al., 2007; Hussain et al., 2017; Mohapatra et al., 2021). The higher non-thermal broadening in the multiphase systems reported in Tripp et al.
(2008) indicates that fluctuations in the density-temperature-peculiar velocity fields of the absorbing media can create such effects. Hence, simulations need to either incorporate appropriate subgrid physics to include the effects of turbulence or introduce some form of non-thermal line broadening to match the observed Doppler parameter distribution. Therefore, the measurement of non-thermal broadening in aligned absorbers could also provide some useful constraints on various parameters of the cosmological simulations. This is what we explore in this work.

It is a common practice to find the non-thermal contribution to the line broadening by separating the thermal and non-thermal parts in the Doppler parameters of the aligned absorption of two different species having widely different masses. For high redshift absorbers, Rauch et al. (1996) (using aligned C iv - Si iv lines) and Muzahid et al. (2012) (using aligned O vi - Ly\(\alpha\) lines) have reported low values of the non-thermal Doppler parameters (with median value \(<\) 10 km s\({}^{-1}\)). In contrast, Tripp et al. (2008) and Savage et al. (2014) have reported much higher values of non-thermal broadening, with median values of 20-26 km s\({}^{-1}\), for low-redshift observations. In photoionized gas (i.e., T of a few 10\({}^{4}\)K), such non-thermal b-values would indicate supersonic turbulence. However, the relative contribution of thermal and non-thermal broadening varies in different samples. In the low redshift Broad Lyman Alpha (BLA) sample, Danforth et al. (2010) consider only a 10% non-thermal contribution to the line broadening, as also found by Richter et al. (2006) in simulated BLA absorbers, although a more recent work by Pessa et al. (2018) finds the non-thermal contribution for BLAs in the inter-cluster medium to be as high as 90%. For the aligned absorbers (not classified as BLA or non-BLA) in the samples reported in Tripp et al. (2008) and Savage et al. (2014), a more than 80% non-thermal contribution is found in 25% and 33% of absorbers, respectively. Some previous works (see for example, Cen & Chisari, 2011; Tepper-Garcia et al., 2012; Pessa et al., 2018) find a trend of a higher non-thermal contribution in the high H i column density absorbers.

In simulations, absorption at a given redshift can originate from spatially well-separated regions (Peeples et al., 2019; Marra et al., 2021; Mallik et al., 2023). The gradients in the density-temperature field and local kinematics, and the decomposition of the absorption profile into single/multiple Voigt profiles, can introduce non-thermal line broadening in the simulated absorbers. Mallik et al. (2023) found that variation in the UVB alters the region contributing to the absorption, and that the effect depends on the feedback models incorporated in the simulation. This motivates us to explore the effect of uncertainty in the UVB on the thermal and non-thermal broadening of the aligned O vi and Ly\(\alpha\) absorbers in two sets of cosmological hydrodynamical simulations with different implementations of feedback mechanisms.

This paper is arranged as follows. In section 2, we present a summary of the simulations and different ionizing backgrounds used in this work, along with an overview of the simulated spectra generation and the Voigt profile fitting of the simulated spectra. We also provide details about the method followed to identify the aligned Ly\(\alpha\) and O vi absorbers.
We study various statistical distributions of thermal and non-thermal line broadening and their dependence on other physical parameters in section 3. We discuss our results in section 4.

## 2 Details of simulations used and different ionizing backgrounds

In this work, we use the Sherwood simulation suite (Bolton et al., 2017) based on the parallel Tree-PM SPH code P-Gadget-3, incorporating two different feedback models. The comparison between these two simulations is discussed in detail in Paper I (see section 2 in Paper I). Briefly, both simulations have a box size of 80 h\({}^{-1}\)cMpc containing 2\(\times\)512\({}^{3}\) dark matter and baryonic particles and use cosmological parameters (\(\Omega_{m}\), \(\Omega_{b}\), \(\Omega_{\Lambda}\), \(\sigma_{8}\), \(n_{s}\), \(h\)) = (0.308, 0.0482, 0.692, 0.829, 0.961, 0.678) from Planck Collaboration et al. (2014). The initial conditions were generated on a regular grid using the N-GENIC code (Springel, 2005) and transfer functions generated by CAMB (Lewis et al., 2000) at \(z=99\). Initial particle masses in the Sherwood simulations are \(2.75\times 10^{8}h^{-1}M_{\odot}(\mathrm{DM})\) and \(5.1\times 10^{7}h^{-1}M_{\odot}(\mathrm{baryon})\), with the gravitational softening length set to \(1/25^{th}\) of the mean inter-particle spacing. We use two models of the Sherwood simulation suite: the Sherwood "WIND" model, which has only stellar wind feedback, and the Sherwood "WIND+AGN" model, which includes both the wind feedback and AGN feedback. These two models were initiated with the same seed, and all other parameters were the same except for the feedback prescriptions. The feedback parameters implemented in the Sherwood simulations were chosen such that galaxy properties like the galaxy stellar mass function and the redshift evolution of the star formation rate density are consistent with the observations (see sections 3.1-3.3 in Puchwein & Springel, 2013). The wind and AGN feedback models used here broadly agree with other recent simulations like EAGLE (Schaye et al., 2015; Crain et al., 2015) and Illustris TNG (Pillepich et al., 2018; Weinberger et al., 2017), but are less severe than the AGN feedback used in the SIMBA simulation (Dave et al., 2019). Puchwein & Springel (2013) found that convergence for the wind feedback-only models is well established, but the models including both wind and AGN feedback may have some convergence issues for their lowest-resolution model (with mass resolution 2.75 times higher than the model used in this work). This may not be a major issue for our study, as we aim to probe how well the temperature derived from the aligned absorbers traces the underlying gas temperature. The global metallicity of the gas particles is retained, and we assume solar relative abundances of different elements. The UVB given by Haardt & Madau (2012) was used in these simulation runs. We generate sightlines for both simulation boxes for a given UVB at \(z=0.5\). The volume as well as the resolution of the simulation box and the implemented feedback prescriptions are comparable to the state-of-the-art simulations used to study metal absorbers in the IGM (for example, Oppenheimer & Dave, 2009; Oppenheimer et al., 2012; Tepper-Garcia et al., 2011; Tepper-Garcia et al., 2013; Nelson et al., 2018). As discussed in Paper I, variation in the UVB strongly influences neither the baryon fraction in the diffuse gas phase nor the parameters of the temperature-density relation. Following Mallik et al.
(2023), we assume the effect of variation in the UVB on the gas temperature is negligible, and we will mainly study its effect on the ionization state of the absorbing gas. Our fiducial UVB is given by Khaire & Srianand (2019) with a spectral index of -1.8 (denoted as "ks19q18" hereafter). However, other UVB spectra are reported in the literature. A detailed comparison among them can be found in section 2.1 of Paper I. Our fiducial UVB and the UVB given by Faucher-Giguere et al. (2009) (the 2011 updated version of it, hereafter referred to as "fg11") span the range of UVBs in the literature in the energy range relevant for O vi ionization. We use the "ks19q18" and "fg11" UVBs to study the effect of uncertainty in the UVB on absorber statistics.

Figure 1: An example of the simulated spectra (black) of an O vi and Ly\(\alpha\) absorber aligned in redshift space from the Sherwood “WIND+AGN” simulation. The left and right columns show the spectra obtained using the “ks19q18” and “fg11” UVBs, respectively. The Voigt profile fits of the aligned O vi - Ly\(\alpha\) absorbers are performed simultaneously using vpfit (green). The thermal and non-thermal Doppler parameters and the column density (in log units) obtained from the simultaneous fit to O vi and H i are indicated in panels (a) and (f) for O vi and in panels (c) and (d) for Ly\(\alpha\). The vertical dashed lines in these panels indicate the region used to calculate \(\Delta V_{90}\). We mention different properties of the absorbing gas, e.g., average density (\(\overline{n_{H}}\)), temperature (\(\overline{T}\)), metallicity (\(\overline{Z/Z_{\odot}}\)), and the length of the contributing region \(\overline{\Delta x}\) in these panels.

### Generating simulated metal spectra

We shoot lines-of-sight along the simulation box and generate transmitted flux spectra. We calculate the number density of a given ion species from the density and metallicity of the SPH particles, considering the ionization correction obtained from the photoionization code cloudy for a given UVB, with the assumption that the absorbing gas is optically thin. This is a valid assumption, as we find later that most of our Ly\(\alpha\) absorbers have H i column densities lower than the threshold H i column density (given by Rahmati et al., 2013) where self-shielding becomes essential. We divide the length of the sightline (80 h\({}^{-1}\)cMpc) into 1024 equispaced grids (\(\sim\)7 km s\({}^{-1}\)), and at each grid point, we calculate the SPH smoothed ion number density (n\({}_{XI}\)), temperature (T), and peculiar velocity (v) fields of the SPH particles within the smoothing length from the sightline. We use the SPH smoothed quantities in the grid points along the line of sight to calculate the optical depth of a given ion species (\(\tau_{XI}\)), following the sightline generation procedure described in section 2.3 in Paper I. We convolve the transmitted flux (F\({}_{XI}\)= exp(\(-\tau_{XI}\))) with a Gaussian profile with a FWHM of 17 km s\({}^{-1}\), mimicking the instrumental resolution of the HST/COS spectrograph, and add Gaussian noise corresponding to SNR=10 per pixel. The spectral resolution of the simulated spectra is sufficient to compare the results with HST/COS observations.

### Selection criteria for aligned absorbers

We have generated 10000 sightlines following the procedure described above. The initial identification of absorption lines and the Voigt profile fitting were performed using the automated Voigt profile fitting code viper (see Gaikwad et al., 2017, for details).
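The final steps of the spectrum-generation pipeline described above (flux from optical depth, instrumental smoothing, and noise) can be summarized in a short sketch. This is not the authors' code: only the grid spacing (\(\sim\)7 km s\({}^{-1}\)), the COS-like FWHM of 17 km s\({}^{-1}\), and SNR = 10 per pixel are taken from the text, and the Gaussian optical-depth profile at the end is a hypothetical stand-in.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(42)
PIX_KMS, FWHM_KMS, SNR = 7.0, 17.0, 10.0   # grid spacing, LSF FWHM, per-pixel SNR

def make_spectrum(tau):
    """Optical depth -> normalized flux, convolved with a Gaussian LSF,
    with Gaussian noise added at the quoted per-pixel SNR."""
    flux = np.exp(-tau)
    sigma_pix = FWHM_KMS / (2.0*np.sqrt(2.0*np.log(2.0))) / PIX_KMS
    flux = gaussian_filter1d(flux, sigma_pix)
    return flux + rng.normal(0.0, 1.0/SNR, flux.size)

# hypothetical single-component optical-depth profile on a 1024-pixel grid
v = (np.arange(1024) - 512) * PIX_KMS          # velocity grid in km/s
tau = 0.8*np.exp(-0.5*(v/30.0)**2)             # Gaussian tau, b ~ 42 km/s
spec = make_spectrum(tau)
```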
Currently, viper can automatically identify absorption systems above a chosen significance level (i.e., rigorous significance level, RSL \(\geq\) 4, in our case) and fit multiple Voigt profile components by minimizing \(\chi^{2}\) and the Akaike information criterion (AIC). However, simultaneous fitting of several absorption lines (which is required to decompose the thermal and non-thermal contributions to the line broadening) is at present not possible with viper. Therefore, we considered two steps in identifying and fitting the absorption lines to optimize the process. We initially use viper to independently identify and fit the Ly\(\alpha\) and O vi\(\lambda\)1032 lines along the 10000 spectra generated from our simulations. Then we identify the components that produce significant absorption lines (i.e., with RSL\(\geq\) 4) in both Ly\(\alpha\) and O vi absorption. Next, we identify the Ly\(\alpha\) and O vi absorption components having a redshift difference corresponding to a velocity offset within 10 km s\({}^{-1}\) (we refer to them as aligned absorbers). Our definition of aligned absorbers is slightly conservative compared to previous work, both in observations (see for example Rauch et al., 1996; Tripp et al., 2008; Muzahid et al., 2012; Savage et al., 2014) and simulations (such as Oppenheimer & Dave, 2009; Tepper-Garcia et al., 2012; Pessa et al., 2018), which consider O vi and Ly\(\alpha\) components having either the same redshift or a velocity offset within the error of the velocity-centroid measurement as aligned absorbers. We then performed simultaneous multi-component Voigt profile fitting of all the detectable Lyman-series absorption lines and O vi doublet absorption in the aligned absorbers using vpfit (Carswell & Webb, version 10.4, 2014). During the fit, we constrained the redshifts to be identical (i.e., complete alignment for the H i and O vi components). We obtained the column density (\(N_{\rm OVI}\) and \(N_{\rm HI}\)) and the Doppler parameters due to thermal (b\({}_{t}\)) and non-thermal (b\({}_{nt}\)) broadening of the line for individual components.

Figure 2: _Upper panels_: Comparison of the temperature estimated from the thermal part of the Doppler parameter (b\({}_{t}\)) and the optical depth weighted temperature for models using the “ks19q18” (left panel) and “fg11” (right panel) UVBs. The dashed line is the equality line. The absorbers in “set1” and “set2” (as defined in section 3.1) are shown in filled and open symbols, respectively. _Lower panels_: The deviation between the temperatures of the absorbers estimated from the two different methods is shown for the “ks19q18” (left panel) and “fg11” (right panel) UVBs. The RMS in the fractional deviation of the measured temperature with respect to \(T_{\tau}\) for the “WIND” and the “WIND+AGN” models are indicated in light and dark-yellow shaded regions, respectively.

As an example, in Figure 1, we show the simultaneous Voigt profile fitting results for H i and O vi absorption in one of the aligned absorbers from the "WIND+AGN" simulation when we consider the "ks19q18" (left panels) and "fg11" (right panels) UVBs. The mean density, temperature, metallicity, and the length of the gas contributing to the absorption are influenced by the UVB, as indicated by the legends provided in Figure 1.

## 3 Results

It has been shown in Paper I that the b-distribution is significantly affected by the feedback processes (see their Figure 8).
Here, when we decompose b into thermal (b\({}_{t}\)) and non-thermal (b\({}_{nt}\)) parts, we notice that the feedback effects are minimal for the thermal broadening but significant for the non-thermal line broadening. We also study how these results are affected by the assumed UVB.

### Temperature estimation of the absorbers

The width (i.e., b) of an absorption line is contributed by both thermal and non-thermal broadening. If we assume a Gaussian line shape for the non-thermal broadening, then the total Doppler parameter is given by b = (b\({}_{t}^{2}\) + b\({}_{nt}^{2}\))\({}^{1/2}\). As in observations, the aligned absorbers are assumed to trace gas with the same temperature, ionizing conditions, and identical non-thermal broadening. The temperature, T(b\({}_{t}\)), of the aligned absorbers is deduced from b\({}_{t}\), which is inversely proportional to the square root of the mass of the species. In simulations, the average temperature of the gas traced by the aligned absorption component can also be estimated from the optical-depth-weighted temperature (\(T_{\tau}\)). Let \(\tau_{ij}\) be the optical depth contribution at the \(i^{th}\) pixel from the gas at the \(j^{th}\) pixel having a temperature \(T_{j}\). Then the optical depth weighted temperature at the \(i^{th}\) pixel, \(T_{\tau}(i)\), is given by \(T_{\tau}(i)=\sum_{j=1}^{N}T_{j}\tau_{ij}/\sum_{j=1}^{N}\tau_{ij}\). As the Ly\(\alpha\) profile often shows multiple components and not all of them produce detectable O vi absorption, we used the O vi optical depth for calculating \(T_{\tau}\). We have assigned the median value of \(T_{\tau}(i)\) for pixels within the system-width of the absorption to be the optical-depth-weighted temperature (\(T_{\tau}\)) of the absorber.

In the left panel of Figure 2, we compare the kinetic temperature [\(T(b_{t})\)] derived from \(b_{t}\) for the aligned absorption components with \(T_{\tau}\) for the Sherwood "WIND" and "WIND+AGN" simulations that use the "ks19q18" UVB. Firstly, we notice a strong correlation between \(T_{\tau}\) and \(T(b_{t})\) for all the absorbers. This suggests that T(\(b_{t}\)) is a good representation of the optical depth-weighted temperature of the regions contributing to the absorption. However, there is a large scatter around the equality line. The scatter seems to be higher for the "WIND+AGN" simulation compared to the WIND-only simulation. To quantify this, in the lower panel, we show the fractional deviation between these two estimates of temperature (defined as \((T(b_{t})-T_{\tau})/T_{\tau}\)) for both simulations. We find the RMS of the scatter to be 0.37 and 0.65 for the "WIND" and "WIND+AGN" simulations, respectively. To explore this further, we have divided the absorbers into two sets. The first set ("set1") contains absorbers where both the O vi and Ly\(\alpha\) absorption are fitted with a single Voigt component. The second set ("set2") contains absorbers with Ly\(\alpha\) absorption fitted with multiple components, where only one or a few of these components are aligned with the O vi absorption component(s). In this case, it is possible that some of the derived parameters of the Ly\(\alpha\) absorption may be influenced by the uncertainty in the Voigt profile decomposition. The "WIND" and "WIND+AGN" simulations have 26-29% of absorbers in "set1" for the "ks19q18" UVB. The absorbers in "set1" show less scatter in the fractional deviation in temperature with respect to the unity line compared to the absorbers in "set2".
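For concreteness, the decomposition just described can be written in a few lines: with \(b^{2}=2k_{B}T/m+b_{nt}^{2}\) for each species and a common T and \(b_{nt}\), an aligned H i-O vi pair determines both unknowns. The sketch below is illustrative only (the input b-values are hypothetical, not fitted values from this work), and a simple helper for the optical-depth-weighted temperature is included.

```python
import numpy as np
from scipy.constants import k as k_B, atomic_mass as amu

m_H, m_O = 1.008*amu, 15.999*amu   # atomic masses of H and O in kg

def decompose(b_HI, b_OVI):
    """Split aligned H I / O VI b-values (km/s) into T (K) and b_nt (km/s),
    assuming b^2 = 2 k_B T / m + b_nt^2 with a common T and b_nt."""
    bH, bO = b_HI*1e3, b_OVI*1e3                           # km/s -> m/s
    T = (bH**2 - bO**2) / (2.0*k_B*(1.0/m_H - 1.0/m_O))
    b_nt2 = bH**2 - 2.0*k_B*T/m_H
    return T, np.sqrt(max(b_nt2, 0.0))/1e3

def tau_weighted_T(T_pix, tau_pix):
    """T_tau(i) = sum_j T_j tau_ij / sum_j tau_ij at one pixel."""
    return np.sum(T_pix*tau_pix)/np.sum(tau_pix)

# hypothetical aligned pair: b(HI) = 35 km/s, b(OVI) = 14 km/s
T, b_nt = decompose(35.0, 14.0)    # ~6.7e4 K and ~11 km/s
```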
In the case of the WIND-only simulation, the RMS of the fractional deviation is 0.36 and 0.38 for "set1" and "set2", respectively. The corresponding values for the "WIND+AGN" simulation are 0.52 and 0.70. While measuring the temperature from b\({}_{t}\), we assume the absorption originates from a cloud with a single temperature. In reality, a given absorption component is contributed by gas with a range of densities and temperatures. Thus, the difference between the optical depth-weighted temperature and T(b\({}_{t}\)) originates from the diverse regions contributing to the absorption. This explains why there is a large scatter in the case of the "WIND+AGN" simulation. We have also shown that some deviation can originate from the uncertainties involved in the Voigt profile decomposition in the case of Ly\(\alpha\) absorbers with multiple components.

Next, we explore the influence of the UVB on the inferred temperatures. In the right panel of Figure 2, we show the results considering the "fg11" UVB. As in the case of the "ks19q18" UVB, the inferred T(b\({}_{t}\)) in the case of the "fg11" UVB is broadly consistent with \(T_{\tau}\). However, we do see that the scatter around the unity line increases in the case of "fg11", i.e., the RMS increases to 0.49 and 0.75 for the "WIND" and "WIND+AGN" models, respectively (shown in the yellow shaded regions in the right-lower panel), when we consider the full sample. Similar to the case of the "ks19q18" UVB, around 30% of the absorbers in both simulations are in "set1" for the "fg11" UVB. The multi-component absorbers ("set2") in the case of "fg11" also show a larger scatter from the equality line compared to the single-component absorbers ("set1"). The RMS of the fractional deviation for absorbers in "set1" and "set2" in the "WIND" simulation is 0.31 and 0.55, respectively. The scatter is considerably higher in the case of the "WIND+AGN" simulation, with the RMS of the fractional deviation being 0.70 and 0.77, respectively. This is easy to understand, as more low-density regions contribute to the absorption when a softer UVB like "fg11" is used. This increases the range of density (temperature) of the gas contributing to a given absorption, and hence we expect T(b\({}_{t}\)) to deviate more from the optical depth weighted value, as reported here.

To explore this scatter further, in the case of "set2" we regenerated a modified profile of the Ly\(\alpha\) absorbers, considering only the pixels that contribute to the O vi absorption of the aligned component. This eliminates the possible source of scatter originating from the uncertainty in assigning a particular Ly\(\alpha\) component as an aligned component in a multi-component blended Ly\(\alpha\) profile. We find that the scatter in \(T(b_{t})\) around the optical depth weighted temperature in the "WIND+AGN" model reduces by 20 - 25% compared to the full sample of absorbers. However, the scatter (RMS \(\sim 0.52\) - 0.63) is comparable to or slightly less than the scatter found for absorbers in "set1".

_Overall, the temperature estimated from b\({}_{t}\) roughly agrees with the optical-depth-weighted value for both the "WIND" and "WIND+AGN" simulations for spectra obtained using both the "ks19q18" and "fg11" UVBs. In individual cases, the estimated temperature is expected to have a scatter introduced by our assumption of the absorption originating from gas having a single temperature.
Therefore, the scatter also depends on the nature of the feedback used in the simulations._

### Temperature distribution of the absorbers

In the left panel of Figure 3, we show the distribution of T(b\({}_{t}\)) for simulated spectra using the "ks19q18" UVB for the "WIND" and "WIND+AGN" models. We compare the simulation results with the allowed range in observations, reported by Tripp et al. (2008) and Savage et al. (2014), shown as the black and red lines in Figure 3. We generated 500 sets of mock observation values by Gaussian random number selection within the error range of each observed temperature value and calculated the CDF for each set. The mean value of these CDFs is plotted in red/black symbols, and the error in the CDF is obtained from the standard deviation. We notice that both simulations produce O vi absorption from a much hotter medium compared to the observations reported in Tripp et al. (2008) and Savage et al. (2014). This is related to shortcomings of the Sherwood simulations discussed in Mallik et al. (2023). The median temperatures in the two simulations (\(10^{5.35}\)K and \(10^{5.43}\)K for the "WIND" and "WIND+AGN" models, respectively) differ by 20%. However, the temperature distributions of the aligned absorbers in these two simulations are not significantly different, having median T(b\({}_{t}\)) close to the temperature of the peak fraction in collisionally ionized O vi (T\(\sim 3\times 10^{5}\)K). This is also confirmed by the KS test, resulting in a p-value of 0.20. The temperature distribution of absorbers also does not differ significantly between "set1" and "set2" in both simulations, although the KS-test, in this case, may be affected by the small number of data points. For completeness, the observed median values of \(T(b_{t})\) are \(10^{4.5}\)K and \(10^{4.6}\)K for Tripp et al. (2008) and Savage et al. (2014), respectively.

Figure 3: The cumulative distribution function for the temperature estimated from b\({}_{t}\) with simulated spectra obtained from the Sherwood “WIND” and “WIND+AGN” models using the UVB “ks19q18” is shown in the left panel. Following the procedure described in section 3.2, the observed \(T(b_{t})\)-CDFs and corresponding errors from Tripp et al. (2008) (black) and Savage et al. (2014) (red) are shown with dotted lines and shaded regions, respectively. The comparison of \(T(b_{t})\) between spectra obtained using the UVBs “ks19q18” and “fg11” is shown for the “WIND” model in the middle panel and for the “WIND+AGN” model in the right panel. The vertical dotted lines indicating the median \(T(b_{t})\) for the “WIND” (grey) and the “WIND+AGN” (black) models are close to the temperature of the peak fraction in collisionally ionized O vi (T\(\sim 3\times 10^{5}\)K), shown as the orange dashed line.

Figure 4: The optical depth weighted temperature (\(T_{\tau}\)) and density (\(n_{H}(\tau)\)) of the aligned absorbers in the “WIND” and “WIND+AGN” simulations for the “ks19q18” UVB are shown in the left panel. The absorbers in “set1” and “set2” (as defined in section 3.1) are shown in filled and open symbols. The vertical and horizontal lines (solid line for “set1” and dashed line for “set2”) indicate the median density and temperature, respectively. A similar plot for the “fg11” UVB is shown in the right panel.

We compare the T(b\({}_{t}\))-CDF obtained for simulated spectra generated using the "ks19q18" UVB with that for "fg11" in the middle (for the "WIND" model) and right (for the "WIND+AGN" model) panels. Both these panels show that the T(b\({}_{t}\))-CDF for the "fg11" UVB also does not agree with the allowed range from observations.
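The mock-observation CDFs used above can be reproduced schematically as follows; the logT values and errors below are hypothetical placeholders for the tabulated measurements of Tripp et al. (2008) and Savage et al. (2014).

```python
import numpy as np

rng = np.random.default_rng(0)

def mock_cdf(values, errors, grid, n_sets=500):
    """Mean CDF and 1-sigma spread from Gaussian resampling of the
    observed values within their errors (the procedure of section 3.2)."""
    draws = rng.normal(values, errors, size=(n_sets, values.size))
    cdfs = (draws[:, :, None] <= grid[None, None, :]).mean(axis=1)
    return cdfs.mean(axis=0), cdfs.std(axis=0)

# hypothetical log T(b_t) measurements with errors
logT   = np.array([4.3, 4.5, 4.6, 4.8, 5.0])
e_logT = np.array([0.10, 0.20, 0.10, 0.15, 0.20])
grid = np.linspace(4.0, 5.5, 100)
mean_cdf, err_cdf = mock_cdf(logT, e_logT, grid)
```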
The median value in "fg11" increases by 20% for the "WIND" model but decreases by 10% for the "WIND+AGN" models. The p-values in the KS-test for both the feedback models are \(>0.05\), which implies T(b\({}_{r}\)) distribution does not change significantly with variation in UVB. This is expected as the temperature of individual SPH particles is not modified when we use different UVBs. To further investigate the physical conditions of the aligned absorbers, we compared the absorbers in optical depth weighted temperature-density plane in Figure 4. The left panel shows the absorbers in "ks19q18" UVB. Although the absorbers in both simulations trace temperatures in the collisional ionization range, the majority of absorbers in the WIND-only model trace higher density medium compared to the absorbers in the "WIND+AGN" simulation. This behaviour is commensurate with the finding in Paper-I that the absorbers found in the "WIND" model trace higher-density regions compared to those from the "WIND+AGN" model. The absorbers for "fg11" UVB in two simulations also show a similar trend, as shown in the right panel of Figure 4. These absorbers also trace lower-density regions compared to "ks19q18" UVB, again confirm Figure 5: Same as that of Figure 3, but for the non-thermal part of Doppler parameter (b\({}_{nt}\)). The observed data are taken from Tripp et al. (2008) and Savage et al. (2014) are shown in black and red markers, respectively. Figure 6: The distribution of parameter \(\eta\) in the Sherwood “WIND” and the “WIND+AGN” models for “ks19q18” UVB are shown in the left panel. Both simulations show higher thermal contribution than the observed data from Tripp et al. (2008) and Savage et al. (2014) indicated in black and red filled histograms, respectively. The effect of using “fg11” UVB instead of “ks19q18” is shown in using hatched histograms in the middle and the right panels for the “WIND” and “WIND+AGN” models, respectively. ing the conclusion in Paper-I that more absorbers can originate in lower-density regions for softer UVB. ### Distribution of non-thermal b-parameters We show the Cumulative distribution of b\({}_{nt}\) for aligned b(O vi)-b(H i) pairs in Figure 5. For completeness, we compare our results with the observations reported in Tripp et al. (2008) and Savage et al. (2014), plotted in black and red points. The b\({}_{nt}\)-CDF and corresponding errors from the observation data are calculated in a similar procedure as T(b\({}_{tr}\))-CDF of observation, as described in section 3.2. The error in b\({}_{nt}\) is found by propagating the errors in \(b\) and \(T\). The b\({}_{nt}\)-CDF for the "WIND+AGN" model roughly agrees with the observed range, while the "WIND" only model has lower b\({}_{nt}\) values compared to the observations. The KS-test between b\({}_{nt}\) of "WIND" and "WIND+AGN" models for "ks19q18" UVB tells us these two distributions are significantly different, as also evident from the left panel of Figure 5. The median b\({}_{nt}\) of the "WIND+AGN" model (=16.06 km s\({}^{-1}\)) is 1.8 times higher than the "WIND" model (=9.09 km s\({}^{-1}\)) for "ks19q18" UVB. Also, the KS-test p-value (\(<10^{-4}\)) shows b\({}_{nt}\)-CDF values in the "WIND" and "WIND+AGN" models can not be tracing the same distribution. The observed data from Tripp et al. (2008) and Savage et al. (2014) has median b\({}_{nt}\) values of 20 km s\({}^{-1}\) and 26 km s\({}^{-1}\), respectively, which is higher than both the simulations in "ks19q18" UVB. 
The middle and right panels show that using a softer UVB like "fg11" statistically results in a higher non-thermal broadening of the lines for both the "WIND" and the "WIND+AGN" models. For the "fg11" UVB, the median b\({}_{nt}\) value for the "WIND" model and the "WIND+AGN" model increases to 15.84 km s\({}^{-1}\) and 21.83 km s\({}^{-1}\), respectively. The KS-test shows that the b\({}_{nt}\) distributions for both models differ between "ks19q18" and "fg11" with more than 90% probability. The median b\({}_{nt}\) values for both simulations are lower than the observed data in Savage et al. (2014).

### Thermal vs. non-thermal contribution in line-broadening

We characterize the nature of the line broadening (i.e., whether thermal or non-thermal broadening dominates) of aligned absorbers using a parameter \(\eta\equiv\sqrt{\frac{2kT}{m_{\rm{ion}}b^{2}}}\), as previously done in Cen & Chisari (2011). Note that the value of \(\eta\) varies between 0 and 1 depending on the relative importance of thermal and non-thermal broadening in the Doppler parameter; a purely thermally broadened absorber has \(\eta=1\), and a non-thermal-broadening-dominated absorber has \(\eta=0\). We show the histogram of \(\eta\) for the "WIND" and "WIND+AGN" models for the "ks19q18" UVB in the left panel of Figure 6, along with the observations from Tripp et al. (2008) and Savage et al. (2014). As expected based on the discussion presented till now, most of the absorbers in the "WIND" model have \(\eta\) values near 1, implying a mostly thermally dominated Doppler width, whereas the "WIND+AGN" model has relatively lower \(\eta\) values, implying a higher importance of non-thermal broadening. This suggests that the distribution of \(\eta\) can provide interesting constraints on the feedback models used in the simulations. As expected based on the discussions above, both models have lower non-thermal contributions in the line broadening compared to the observations, which is also influenced by the fact that the inferred temperatures are high for the simulations used here. The \(\eta\) histograms in the middle and right panels show that the non-thermal contribution increases when the "fg11" UVB is used, for the "WIND" and the "WIND+AGN" simulations, respectively. We believe the effect of the UVB on the \(\eta\) distribution would be larger if the gas temperature were lower than what we find in the Sherwood simulations. This is because the contribution of photoionization is expected to be higher when the gas temperature is lower.

Some previous works (see for example, Cen & Chisari, 2011; Tepper-Garcia et al., 2012; Pessa et al., 2018) reported that \(\eta\) could depend on the Ly\(\alpha\) column density, with high column density absorbers having a larger non-thermal contribution. The observed data from Savage et al. (2014) show a weak anti-correlation, with a Pearson correlation coefficient equal to -0.37 (p-value = 0.01). However, we do not find any significant correlation in the observed data from Tripp et al. (2008) or for the simulations used here for either UVB, as confirmed by a low value of the Pearson correlation coefficient (\(<0.30\)) and/or high p-values (\(>0.05\)).

## 4 Summary and discussions

Absorption lines produced by different ion species of the aligned absorbers are often used to probe the thermal and non-thermal contributions to the absorption line-broadening. Such measurements are used to infer the kinetic temperature of the absorbing gas and the non-thermal contribution to the velocity spread.
The latter is broadly interpreted as turbulence in the literature. Recently, we have shown that the distribution of Doppler parameters of the simulated O vi absorbers is influenced by the feedback model and the choice of UVB used in the hydrodynamical simulation (Mallik et al. 2023). This study also emphasized the importance of the simultaneous usage of several statistical distributions to break the degeneracies between the effect of feedback and the assumed UVB. We have also shown that the line of sight two-point correlation function at different length scales can provide non-degenerate constraints.

Here, using cosmological simulations, we probe how well the temperature derived using the aligned absorbers traces the underlying gas temperature. For this, we consider two different simulations (viz., the Sherwood "WIND" model and the Sherwood "WIND+AGN" model) having varied feedback. We also explore the effects of the background radiation field, using two different UVB models, viz. "ks19q18" and "fg11". Similar to the method followed in observations, we identify the Ly\(\alpha\) absorbers aligned to an O vi absorber and simultaneously fit these aligned absorbers using vpfit to find the thermal and non-thermal contributions to the line-broadening.

We show that the temperature of the absorbers estimated from the thermal contribution to the line-broadening correlates strongly with the optical depth weighted temperature of the absorbing gas, albeit showing an RMS scatter of 0.37-0.49 and 0.65-0.75 (in the fractional deviation from zero) for the "WIND" and the "WIND+AGN" models, respectively, considering the two different UVBs. The single-component isolated O vi-Ly\(\alpha\) aligned absorbers have much less scatter compared to the aligned absorbers in a multi-component blended system. We show that the scatter primarily comes from the spread in the kinetic temperature of the gas contributing to the absorption and hence depends on the feedback processes and the UVB. The uncertainty in the Voigt-profile decomposition further increases the scatter in the case of blended systems.

We find the median temperature (\(T(b_{t})\)) in the "WIND" model (\(\sim 10^{5.35}\)K) is 20% lower than that of the "WIND+AGN" model (\(\sim 10^{5.43}\)K) for the "ks19q18" UVB. For the "fg11" UVB, the median temperature in the "WIND" model increases by 20% and decreases by 10% in the "WIND+AGN" model. However, the temperature distributions of the absorbers in both simulations are similar. This is expected, as we have ignored any difference in the radiative heating of the gas when we use different UVBs. We find that the median temperature values in the simulations used here are higher than the observed median temperatures (\(<10^{4.6}\) K) from Tripp et al. (2008) and Savage et al. (2014). This confirms some of the shortcomings of the Sherwood simulations found in Mallik et al. (2023).

The median value of the non-thermal line broadening of the aligned absorbers in the "WIND+AGN" model (= 16.06 km s\({}^{-1}\)) is 1.8 times higher than that in the "WIND" model (= 9.09 km s\({}^{-1}\)). The non-thermal contribution increases in the case of the "fg11" UVB, but the median non-thermal broadening values in such simulations are lower than the observed values of Savage et al. (2014). The relative contribution of thermal broadening to the Doppler parameter of an absorber is estimated using the parameter \(\eta\) (as defined in section 3.4).
The absorbers in the "WIND" model have \(\eta\) values mostly near 1, whereas the absorbers in the "WIND+AGN" model have relatively lower \(\eta\) values. The histogram of \(\eta\) for simulated absorbers does not agree with the observed distribution, which is partly because of higher temperature estimation in the simulations used here. The main caveats in this work are that we assumed the change in UVB only alters the ionization state and not the temperature of the gas and also assumed the gas to be optically thin. We plan to address this issue by using different UVBs during the simulation run in our future work. It will also be an interesting exercise to explore the effects of the feedback model variations on the non-thermal line broadening using a wide range of feedback models in a higher resolution simulation box like Illustris TNG, as varied feedback results in large dispersion in O vi CDDF (see Figure 6 in Nelson et al., 2018). In conclusion, we find that the derived temperature using the aligned absorber roughly provides the optical depth-weighted kinetic temperature of the gas. The non-thermal broadening inferred arises due to the inhomogeneities in the temperature-density fields and from the uncertainties associated with the multiple-component Voigt profile decomposition. Therefore, the derived non-thermal line-broadening using aligned absorbers may not be probing the sub-grid turbulence arising due to stellar feedback or gravitational instabilities. Together with flux statistics (flux probability distribution function and two-point correlation function) as well as column density and Doppler parameter distribution, the number of metal absorption aligned with a Ly\(\alpha\) component and distribution of thermal and non-thermal line broadening can be used to place additional constraints on the feedback models used in the simulation for a given UVB. ## Acknowledgments We acknowledge the use of High-performance computing facilities PERSEUS and PEGASUS at IUCAA. We are grateful to K. Subramanian, S. Muzahid, Vikram Khaire and A. Mohapatra for insightful discussions regarding this work. The Sherwood simulations were performed using the Curie supercomputer at the Tre Grand Centre de Calcul (TGCC), and the DiRAC Data Analytic system at the University of Cambridge, operated by the University of Cambridge High Performance Computing Service on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). This was funded by BIS National E-infrastructure capital grant (ST/K001590/1), STFC capital grants ST/H008861/1 and ST/H00887X/1, and STFC DiRAC Operations grant ST/K00333X/1. DiRAC is part of the National E-Infrastructure. ## Data availability The data underlying this article are available in the article and in its online supplementary material.
2309.08233
Quantum Hall effect in topological Dirac semimetals modulated by the Lifshitz transition of the Fermi arc surface states
We investigate the magnetotransport of topological Dirac semimetals (DSMs) by taking into account the Lifshitz transition of the Fermi arc surface states. We demonstrate that a bulk momentum-dependent gap term, which is usually neglected in study of the bulk energy-band topology, can cause the Lifshitz transition by developing an additional Dirac cone for the surface to prevent the Fermi arcs from connecting the bulk Dirac points. As a result, the Weyl orbits can be turned off by the surface Dirac cone without destroying the bulk Dirac points. In response to the surface Lifshitz transition, the Weyl-orbit mechanism for the 3D quantum Hall effect (QHE) in topological DSMs will break down. The resulting quantized Hall plateaus can be thickness-dependent, similar to the Weyl-orbit mechanism, but their widths and quantized values become irregular. Accordingly, we propose that apart from the bulk Weyl nodes and Fermi arcs, the surface Lifshitz transition is also crucial for realizing stable Weyl orbits and 3D QHE in real materials.
Tao-Rui Qin, Zhuo-Hua Chen, Tian-Xing Liu, Fu-Yang Chen, Hou-Jian Duan, Ming-Xun Deng, Rui-Qiang Wang
2023-09-15T08:09:43Z
http://arxiv.org/abs/2309.08233v1
Quantum Hall effect in topological Dirac semimetals modulated by the Lifshitz transition of the Fermi arc surface states

###### Abstract

We investigate the magnetotransport of topological Dirac semimetals (DSMs) by taking into account the Lifshitz transition of the Fermi arc surface states. We demonstrate that a bulk momentum-dependent gap term, which is usually neglected in studies of the bulk energy-band topology, can cause the Lifshitz transition by developing an additional Dirac cone for the surface to prevent the Fermi arcs from connecting the bulk Dirac points. As a result, the Weyl orbits can be turned off by the surface Dirac cone without destroying the bulk Dirac points. In response to the surface Lifshitz transition, the Weyl-orbit mechanism for the 3D quantum Hall effect (QHE) in topological DSMs will break down. The resulting quantized Hall plateaus can be thickness-dependent, similar to the Weyl-orbit mechanism, but their widths and quantized values become irregular. Accordingly, we propose that apart from the bulk Weyl nodes and Fermi arcs, the surface Lifshitz transition is also crucial for realizing stable Weyl orbits and 3D QHE in real materials.

## I Introduction

Topological semimetals are novel quantum states of matter, in which the conduction and valence bands cross near the Fermi level at certain discrete momentum points or lines[1; 2; 3; 4; 5; 6; 7; 8; 9; 10]. The gap-closing points or lines are protected either by crystalline symmetry or by topological invariants[11; 12]. A topological Dirac semimetal (DSM) hosts paired gap-closing points, referred to as the Dirac points, which are stabilized by the time-reversal, spatial-inversion and crystalline symmetries. By breaking the time-reversal or spatial-inversion symmetry, a single Dirac point can split into a pair of Weyl nodes of opposite chiralities[13; 14], leading to the topological transition from a Dirac to a Weyl semimetal[15; 16; 17; 18; 19; 20]. Accompanying the bulk topological transition, topological states which are protected by the quantized Chern flux will emerge on the surface to connect the split Weyl nodes, known as the Fermi-arc surface states[7]. In topological DSMs, such as A\({}_{3}\)Bi (A = Na,K,Rb)[4; 5] and Cd\({}_{3}\)As\({}_{2}\)[2; 3], the Weyl nodes at the same Dirac point, belonging to different irreducible representations, cannot be coupled and have to seek a partner from the other Dirac point. As a consequence, the two Dirac points comprising two pairs of Weyl nodes are connected by two spin-polarized Fermi arcs[3; 4; 5; 6; 7].

The Fermi-arc surface states are the most distinctive observable spectroscopic feature of topological semimetals. However, their observation is sometimes limited by spectroscopic resolution. There has therefore been a search for alternative smoking-gun features of topological semimetals, for example by means of transport phenomena[21; 22; 23]. Many interesting transport properties have been revealed in topological semimetals, for example, chiral anomaly induced negative magnetoresistance[24; 25; 26; 27], Weyl-orbit related quantum oscillations[28; 29], Berry's phase \(\pi\) related Aharonov-Bohm effect[30; 31], bulk-surface interference induced Fano effect[32], topological pumping effect[33], etc. Recently, a 3D quantum Hall effect (QHE) based on the Fermi arcs was proposed in Weyl semimetals[10] and has led to an explosion of theoretical[34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44] and experimental[45; 46; 47; 48; 49] activities in the field of condensed matter physics.
In a Weyl semimetal slab, the Weyl orbit, which consists of Fermi arcs from opposite surfaces, can support the electron cyclotron orbit for the QHE, making the 3D QHE possible. The 3D QHE has been observed experimentally in topological DSMs[45; 46; 47; 48; 49], and the one from the Weyl-orbit mechanism is demonstrated to be thickness-dependent [47]. However, for the topological DSMs, a single surface with two Fermi arcs can also support a complete Fermi loop required by the QHE, which can compete with the Weyl-orbit mechanism. The same-surface Fermi loop is not stable and can be deformed by bulk perturbations[12]. In real materials, the bulk perturbations are inevitable. As we will show, when a bulk momentum-dependent gap term is included, a Lifshitz transition can happen for the Fermi arc surface states, in which the double Fermi arcs on the DSM surface can be continuously deformed into a closed Fermi loop and separate from the bulk Dirac points. The Lifshitz transition involves a change of the Fermi surface topology that is not connected with a change in the symmetry of the lattice[50; 51; 52; 53; 54]. Therefore, the Lifshitz transition can take place without destroying the topology of the bulk energy band. A natural question in this regard is how the deformation of the Fermi arcs influences the 3D QHE of topological DSMs, especially when the Fermi arcs, as key ingredients for the Weyl orbits, break free from the bulk Dirac points.

In this paper, we investigate the QHE in topological DSMs by taking into account the surface Lifshitz transition, which can be modulated by a bulk momentum-dependent gap term. It is demonstrated that while the bulk Dirac points are robust against the momentum-dependent gap term, the surface can develop an additional 2D Dirac cone, which deforms the surface Fermi arcs from a curve to some discrete points and further to a Fermi loop coexisting with the bulk Dirac points. During this process, the bulk topological properties do not change, but the Weyl orbits can be turned off. The joint effect of the Weyl orbits and the surface Lifshitz transition can make the QHE quite complicated. We find that when the Weyl orbits are broken by the surface Dirac cone, the bulk and surface states can form the Landau levels (LLs) and contribute to the QHE independently. The resulting Hall plateaus are sensitive to the thickness of the sample, but their widths and quantized values are irregularly distributed.

The rest of this paper is organized as follows. In Sec. II, we introduce the model Hamiltonian and bulk spectrum. The Lifshitz transition of the Fermi arcs and the LLs are analyzed in Sec. III and Sec. IV, respectively. The QHE is studied in Sec. V and the last section contains a short summary.

## II Hamiltonian and bulk spectrum

We begin with a low-energy effective Hamiltonian for the topological DSMs \[\mathcal{H}(\mathbf{k})=\varepsilon_{\mathbf{k}}+\lambda(k_{x}\sigma_{z}\tau_{x}-k_{y}\tau_{y})+m_{\mathbf{k}}\tau_{z}+\Lambda(\mathbf{k}) \tag{1}\] with \(m_{\mathbf{k}}=m_{0}-m_{1}k_{z}^{2}-m_{2}k_{\parallel}^{2}\) and \(\varepsilon_{\mathbf{k}}=c_{0}+c_{1}k_{z}^{2}+c_{2}k_{\parallel}^{2}\), where \(k_{\parallel}=\sqrt{k_{x}^{2}+k_{y}^{2}}\) and \(\sigma_{x,y,z}\) (\(\tau_{x,y,z}\)) is the Pauli matrix acting on the spin (orbital parity) degree of freedom. This model has been widely adopted to capture the topological properties of the topological DSMs Cd\({}_{3}\)As\({}_{2}\)[3] and A\({}_{3}\)Bi (A = Na, K, Rb)[4].
In the absence of \(\Lambda(\mathbf{k})\), \([\sigma_{z},\mathcal{H}(\mathbf{k})]=0\) and the topological DSMs characterized by Hamiltonian (1) can be viewed as two superposed copies of a Weyl semimetal with two Weyl nodes, which possesses two sets of surface Fermi arcs in the surface Brillouin zone, as illustrated by Figs. 1(a)-(b) and (e)-(f). \(\Lambda(\mathbf{k})\) mixes the eigenstates of opposite spins away from the Dirac points and plays the role of a momentum-dependent gap term, whose form is determined by the crystal symmetries. Specifically, for a DSM with fourfold rotational symmetry, such as Cd\({}_{3}\)As\({}_{2}\), the momentum-dependent gap term can take the form \(\Lambda(\mathbf{k})=\alpha k_{z}\left(k_{-}^{2}\sigma_{-}+k_{+}^{2}\sigma_{+}\right)\tau_{x}/2\) with \(k_{\pm}=k_{x}\pm ik_{y}\) and \(\sigma_{\pm}=\sigma_{x}\pm i\sigma_{y}\). Diagonalizing Hamiltonian (1) yields the continuum bulk spectrum \[E_{\pm}(\mathbf{k})=\varepsilon_{\mathbf{k}}\pm\sqrt{\lambda^{2}k_{\parallel}^{2}+m_{\mathbf{k}}^{2}+\alpha^{2}k_{\parallel}^{4}k_{z}^{2}}, \tag{2}\] from which we can determine the energy location \(E_{D}=c_{0}+c_{1}k_{w}^{2}\) and momentum locations \(\mathbf{K}_{\pm}=(0,0,\pm k_{w})\) of the bulk Dirac points, with \(k_{w}=\sqrt{m_{0}/m_{1}}\). As \(\varepsilon_{\mathbf{k}}\) possesses the symmetries of \(m_{\mathbf{k}}\), it does not qualitatively change the Dirac spectrum in the bulk, but it introduces an asymmetry between the positive (electron) and negative (hole) energy branches and, consequently, breaks the particle-hole symmetry. The electron-hole asymmetry will curve the Fermi arcs, which was demonstrated to be crucial for the LLs around the Weyl nodes[10]. While \(\Lambda(\mathbf{k})\) can profoundly change the spectrum of quasiparticles for sufficiently large \(k_{\parallel}\), it preserves all the symmetries of the crystal structure and vanishes at the Dirac points. Therefore, the bulk Dirac points are robust against \(\Lambda(\mathbf{k})\), as seen from Eq. (2): both the momentum and energy locations of the Dirac points are independent of \(\alpha\). For this reason, \(\Lambda(\mathbf{k})\) is usually treated as a bulk perturbation, which does not destroy the bulk topology. For states close to the bulk Dirac points, the quasiparticles can be described by linearizing Hamiltonian (1) around \(\mathbf{K}_{\pm}\), such that \(\Lambda(\mathbf{k})\) can be neglected. However, for a topological DSM slab, the Fermi arcs, which connect the bulk Dirac points separated far away in momentum space, can extend to large \(\mathbf{k}\), where the spectrum can be dramatically modified by the momentum-dependent gap term. In the following, we elucidate that, in response to the momentum-dependent gap term, the surface states of topological DSMs will experience a Lifshitz transition, during which the Fermi arcs can exist without connecting the bulk Dirac points.

Figure 1: (a)-(d) The dispersion (red) by diagonalizing Hamiltonian (12) for (a) \(\alpha=0\), \(c_{1}=c_{2}=0\), (b) \(\alpha=0\), \(c_{1}=3c_{2}=0.15\), (c) \(\alpha=0.5\), \(c_{1}=c_{2}=0\), and (d) \(\alpha=0.5\), \(c_{1}=3c_{2}=0.15\), with \(k_{x}=0\) and \(E_{D}=0\) denoted by the blue-dashed lines. The green dotted curves are determined from Eq. (9), self-consistently. (e)-(h) The \(\mathbf{k}\)-resolved DOSs corresponding to (a)-(d) at \(E_{F}=E_{D}\). The rest of the parameters are set as \(\lambda=0.5\), \(k_{w}=\pi/4\), \(m_{0}=\pi^{2}/32\), \(m_{2}=0.5\) and \(N_{y}=50\).
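As a quick numerical check of Eq. (2), the sketch below evaluates the bulk bands and verifies that the gap closes at \(\mathbf{K}_{\pm}=(0,0,\pm k_{w})\) at energy \(E_{D}\) for any \(\alpha\). Parameter values follow the Figure 1 caption (with \(m_{1}=m_{0}/k_{w}^{2}\) so that \(k_{w}=\pi/4\)); the choice \(c_{0}=-c_{1}k_{w}^{2}\) sets \(E_{D}=0\).

```python
import numpy as np

lam, alpha = 0.5, 0.5
m0, m2 = np.pi**2/32, 0.5
kw = np.pi/4
m1 = m0/kw**2                    # = 0.5, places the Dirac points at k_z = ±k_w
c1, c2 = 0.15, 0.05              # c1 = 3*c2 = 0.15, as in Fig. 1(d)
c0 = -c1*kw**2                   # sets E_D = 0

def bands(kx, ky, kz):
    """Continuum bulk bands E_pm(k) of Eq. (2)."""
    kp2 = kx**2 + ky**2
    eps = c0 + c1*kz**2 + c2*kp2
    m_k = m0 - m1*kz**2 - m2*kp2
    root = np.sqrt(lam**2*kp2 + m_k**2 + alpha**2*kp2**2*kz**2)
    return eps + root, eps - root

Ep, Em = bands(0.0, 0.0, kw)     # at the Dirac point K_+
assert abs(Ep - Em) < 1e-12 and abs(Ep) < 1e-12   # gapless, at E_D = 0
```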
## III Lifshitz transition of the Fermi arcs

To better understand how the Fermi arc surface states evolve with the momentum-dependent gap term, we consider a topological DSM slab with open boundaries at \(y=\pm L/2\) and derive the surface states for the \(x\)-\(z\) plane. For such a finite-size system, \(k_{y}\) in Eq. (1) is no longer a good quantum number and should be replaced with the operator \(-i\partial_{y}\). In the spirit of perturbation theory, we construct an unperturbed surface basis from the surface wavefunctions without \(\Lambda(\mathbf{k})\), and the effective Hamiltonian for the surface states can be obtained by projecting Eq. (1) onto this unperturbed surface basis. For the convenience of discussion, we assume \(m_{2}>|c_{2}|\) and select \(E_{D}\) as the potential energy zero, i.e., \(c_{0}=-c_{1}k_{w}^{2}\). By setting \(\alpha=0\) and solving the differential equation \(\mathcal{H}(k_{x},-i\partial_{y},k_{z})\Psi_{k_{x},k_{z}}(y)=E\Psi_{k_{x},k_{z}}(y)\) with the open boundary conditions \(\Psi_{k_{x},k_{z}}(\pm L/2)=0\), we can evaluate the unperturbed surface wavefunctions as \(\Psi_{k_{x},k_{z}}^{\beta,s}(y)=\left(\frac{1+s}{2},\frac{1-s}{2}\right)^{T}\otimes\Psi_{\beta}(y)\), which are spin-resolved, with \(s=\pm 1\) being the eigenvalue of \(\sigma_{z}\) and \[\Psi_{\beta}(y)=\left(\begin{array}{c}\beta\cos\frac{\vartheta}{2}\\ -\sin\frac{\vartheta}{2}\end{array}\right)\frac{e^{\kappa_{-}(\beta y-\frac{L}{2})}-e^{\kappa_{+}(\beta y-\frac{L}{2})}}{\sqrt{N_{\beta}}}. \tag{3}\] Here, \(\beta=\pm\) corresponds to the surface at \(y=\pm L/2\), \(N_{\beta}=\int_{-\frac{L}{2}}^{\frac{L}{2}}|e^{\kappa_{-}(\beta y-\frac{L}{2})}-e^{\kappa_{+}(\beta y-\frac{L}{2})}|^{2}dy\) is the normalization coefficient and \[\vartheta=\tan^{-1}\left[\frac{2\lambda\left(m_{2}-c_{2}\right)\left(\kappa_{+}+\kappa_{-}\right)}{\lambda^{2}-\left(m_{2}-c_{2}\right)^{2}\left(\kappa_{+}+\kappa_{-}\right)^{2}}\right], \tag{4}\] in which \(\kappa_{\pm}\) is a solution to \(E_{\pm}(k_{x},-i\kappa,k_{z})=E\), reading as \[\kappa_{\pm}=\sqrt{k_{x}^{2}+\frac{\zeta_{\lambda}\pm\left(\zeta_{\lambda}^{2}-\zeta_{+}\zeta_{-}\right)^{1/2}}{m_{2}^{2}-c_{2}^{2}}} \tag{5}\] with \(\zeta_{\lambda}=\frac{\lambda^{2}-\zeta_{+}+\zeta_{-}}{2}\) and \[\zeta_{\pm}=\left(m_{2}\pm c_{2}\right)\left[\left(m_{1}\mp c_{1}\right)\left(k_{w}^{2}-k_{z}^{2}\right)\mp E\right]. \tag{6}\] The surface states are confined within the region defined by \(\mathrm{Re}(\kappa_{\pm})>0\). Subsequently, by performing the projection operation \[\mathcal{H}_{\mathrm{surf}}^{\beta,ss^{\prime}}=\langle\Psi_{k_{x},k_{z}}^{\beta,s}(y)|\mathcal{H}(k_{x},-i\partial_{y},k_{z})|\Psi_{k_{x},k_{z}}^{\beta,s^{\prime}}(y)\rangle, \tag{7}\] we can obtain the effective surface Hamiltonian \[\mathcal{H}_{\mathrm{surf}}^{\beta}\left(k_{x},k_{z}\right)=\tilde{\varepsilon}_{\mathbf{k}}-\beta\sin\vartheta\left(\lambda k_{x}\sigma_{z}+\tilde{\alpha}k_{z}\sigma_{x}\right), \tag{8}\] where \(\tilde{\alpha}=\alpha\left(k_{x}^{2}-\kappa_{+}\kappa_{-}\right)\), \(\tilde{\varepsilon}_{\mathbf{k}}=\tilde{c}_{1}\left(k_{z}^{2}-k_{w}^{2}\right)+\tilde{c}_{2}\left(k_{x}^{2}+\kappa_{+}\kappa_{-}\right)\) and \(\tilde{c}_{l}=c_{l}-m_{l}\cos\vartheta\) with \(l=1,2\). As shown by Eq. (8), the surface Hamiltonian exhibits a 2D Dirac structure with spin-momentum locking, which resembles the surface states of 3D topological insulators.
However, unlike in 3D topological insulators, the bulk spectrum here is also gapless; namely, a surface Dirac point can coexist with the bulk Dirac points. By diagonalizing Eq. (8), we arrive at \[E_{\eta}^{\beta}\left(k_{x},k_{z}\right)=\tilde{\varepsilon}_{\mathbf{k}}-\eta\beta\sin\vartheta\sqrt{\lambda^{2}k_{x}^{2}+\tilde{\alpha}^{2}k_{z}^{2}}, \tag{9}\] where \(\eta=\pm 1\) labels the conduction/valence band. It should be noted that, because \(\kappa_{\pm}\) is energy dependent, Eq. (9) is only a formal solution for the surface spectrum. The exact surface dispersion involves a self-consistent calculation of Eq. (9) via replacing \(E\to E_{\eta}^{\beta}\left(k_{x},k_{z}\right)\) in Eq. (6). The numerical results for the surface dispersion are presented as the green dotted curves in Figs. 1(a)-(d). In order to show the bulk states and surface Fermi arcs simultaneously, we evaluate the \(\mathbf{k}\)-resolved density of states (DOS) for a given energy \(E\) through \[\rho\left(E,k_{x},k_{z}\right)=-\frac{1}{\pi}\operatorname{Im}\operatorname{Tr}\left(\frac{1}{E+i0^{+}-H}\right), \tag{10}\] in which \(H=\sum_{\mathbf{k}}c_{\mathbf{k}}^{\dagger}\mathcal{H}_{\mathbf{k}}^{\mathrm{tb}}c_{\mathbf{k}}\) is the tight-binding Hamiltonian corresponding to Eq. (1). Here, \(c_{\mathbf{k}}^{\dagger}\) (\(c_{\mathbf{k}}\)) is the fermion creation (annihilation) operator and \[\mathcal{H}_{\mathbf{k}}^{\mathrm{tb}} =M_{\mathbf{k}}+\lambda\left(\sin k_{x}\sigma_{z}\tau_{x}-\sin k_{y}\tau_{y}\right)-2\alpha\sin k_{z}\] \[\times\left[\left(\cos k_{x}-\cos k_{y}\right)\sigma_{x}-\sin k_{x}\sin k_{y}\sigma_{y}\right]\tau_{x} \tag{11}\] is the single-particle Hamiltonian obtained from Eq. (1) via the transforms \(k_{a}\to\sin k_{a}\) and \(k_{a}^{2}\to 2-2\cos k_{a}\), with \(a=x,y,z\), \(M_{\mathbf{k}}=f_{1}\left(\cos k_{z}-\cos k_{w}\right)+f_{2}\left(\cos k_{x}+\cos k_{y}-2\right)\) and \(f_{l}=2(m_{l}\tau_{z}-c_{l})\). By performing the Fourier transform \(c_{\mathbf{k}}\to\sum_{n}c_{n}e^{-ik_{y}y_{n}}\), we can discretize the tight-binding Hamiltonian as \[H=\sum_{n}c_{n}^{\dagger}h_{0}c_{n}+\sum_{n}\left(c_{n}^{\dagger}h_{y}c_{n+1}+h.c.\right), \tag{12}\] where the hopping matrices are given by \[h_{0} =f_{0}+f_{1}\cos k_{z}+f_{2}\cos k_{x}+f_{x}\sin k_{x}\] \[-\alpha\sigma_{x}\tau_{x}\left[\sin\left(k_{z}+k_{x}\right)+\sin\left(k_{z}-k_{x}\right)\right] \tag{13}\] \[h_{y} =\frac{f_{2}+if_{y}}{2}+\alpha\sigma_{x}\tau_{x}\sin k_{z}\] \[+i\alpha\sigma_{y}\tau_{x}\frac{\cos\left(k_{z}+k_{x}\right)-\cos\left(k_{z}-k_{x}\right)}{2} \tag{14}\] with \(f_{0}=-2f_{2}-f_{1}\cos k_{w}\) and \(f_{x(y)}=\lambda\sigma_{z(0)}\tau_{x(y)}\). In Fig. 1, we plot the numerical spectrum and \(\mathbf{k}\)-resolved DOSs for Eq. (12). As shown by the green dotted curves in Figs. 1(a)-(d), the self-consistent calculation of Eq. (9) coincides with the results of the numerical diagonalization of Eq. (12). As illustrated by Figs. 1(a) and (e), the surface spectra without \(\varepsilon_{\mathbf{k}}\) and \(\Lambda(\mathbf{k})\) are \(k_{z}\)-independent and cross at \(E_{D}\) along a quadruply degenerate line connecting the two Dirac points. This flat Fermi line can be easily modified by perturbations, which explains why the particle-hole symmetry and the momentum-dependent gap term are important to the surface states. A nonzero \(\varepsilon_{\mathbf{k}}\) cannot gap the surface spectrum, but it bends the Fermi lines into parabolas, as shown by Fig. 1(b).
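For concreteness, the following minimal sketch assembles the block-tridiagonal slab Hamiltonian from \(h_{0}\) and \(h_{y}\), Eqs. (13)-(14), and evaluates the \(\mathbf{k}\)-resolved DOS of Eq. (10) through the retarded Green's function. The \(\sigma\otimes\tau\) Kronecker ordering and the finite broadening \(0^{+}\to 10^{-3}\) are conventions of this sketch, not of the paper.

```python
import numpy as np

# Pauli matrices, used for both the sigma (spin) and tau (orbital) sectors.
s0 = np.eye(2, dtype=complex); sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]]); sz = np.diag([1.0, -1.0]).astype(complex)
op = lambda sig, tau: np.kron(sig, tau)          # convention: sigma (x) tau

lam, m1, m2, c1, c2, alpha = 0.5, 0.5, 0.5, 0.15, 0.05, 0.5
kw, Ny = np.pi / 4, 50
f1 = 2 * (m1 * op(s0, sz) - c1 * np.eye(4))
f2 = 2 * (m2 * op(s0, sz) - c2 * np.eye(4))
f0 = -2 * f2 - f1 * np.cos(kw)
fx, fy = lam * op(sz, sx), lam * op(s0, sy)

def slab_H(kx, kz):
    h0 = (f0 + f1 * np.cos(kz) + f2 * np.cos(kx) + fx * np.sin(kx)
          - alpha * op(sx, sx) * (np.sin(kz + kx) + np.sin(kz - kx)))   # Eq. (13)
    hy = ((f2 + 1j * fy) / 2 + alpha * op(sx, sx) * np.sin(kz)
          + 1j * alpha * op(sy, sx) * (np.cos(kz + kx) - np.cos(kz - kx)) / 2)  # Eq. (14)
    H = np.zeros((4 * Ny, 4 * Ny), complex)
    for n in range(Ny):
        H[4*n:4*n+4, 4*n:4*n+4] = h0
        if n + 1 < Ny:
            H[4*n:4*n+4, 4*(n+1):4*(n+1)+4] = hy
            H[4*(n+1):4*(n+1)+4, 4*n:4*n+4] = hy.conj().T
    return H

def dos(E, kx, kz, eta=1e-3):
    G = np.linalg.inv((E + 1j * eta) * np.eye(4 * Ny) - slab_H(kx, kz))
    return -np.trace(G).imag / np.pi             # Eq. (10)

print(dos(E=0.0, kx=0.0, kz=kw / 2))             # surface-arc weight between the nodes
```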
As a result, the Fermi arcs with opposite spin, due to the spin-dependent term in Eq. (8), curve in opposite directions at the Fermi level, forming a closed loop with discontinuous kinks at the Dirac points, as indicated by Fig. 1(f). By contrast, when the momentum-dependent gap term is included, the Fermi lines with opposite spin, because of the noncommutation between \(\sigma_{z}\) and \(\mathcal{H}_{\rm surf}^{\beta}\left(k_{x},k_{z}\right)\), repel each other, thereby removing the spin degeneracy. Since \(\Lambda(\mathbf{k})=0\) for \(k_{\parallel}=0\) or \(k_{z}=0\), the surface spectra still cross at \(\mathbf{k}=0,\mathbf{K}_{\pm}\), as shown by Fig. 1(c). Therefore, a 2D Dirac point develops on each surface, which coexists with the bulk Dirac points. The surface Dirac points, similar to the bulk Dirac points, are robust against \(\Lambda(\mathbf{k})\); in contrast, however, the energy location of the surface Dirac points can be modulated by the particle-hole asymmetry. In the presence of particle-hole symmetry, i.e., \(\varepsilon_{\mathbf{k}}=0\), the energy location of the surface Dirac points is identical to that of the bulk Dirac points, such that at \(E_{F}=E_{D}\) the Fermi line in Fig. 1(e) deforms into three Fermi points in Fig. 1(g). When the particle-hole symmetry is broken by a finite \(\varepsilon_{\mathbf{k}}\), the surface Dirac points shift away from \(E_{D}\), whereupon the surface Fermi point around \(\mathbf{k}=0\) turns into a Fermi loop, as demonstrated by Fig. 1(h); this prevents the Fermi arcs from connecting the bulk Dirac points. Consequently, although the surface spectra still intersect the bulk Dirac points, the surface states experience a Lifshitz transition in which the Fermi arcs change from a curve to a set of discrete gap-closing points and further to a Fermi loop coexisting with the bulk Dirac points; during this process, the Weyl orbits can be turned off without destroying the topological properties of the bulk energy band.

## IV LLs and probability density distribution

Upon application of an external magnetic field, a Peierls phase should be added to the hopping matrices as electrons jump from position \(\mathbf{r}_{n}\) to \(\mathbf{r}_{m}\), i.e., \(t_{mn}\to t_{mn}e^{-i\varphi_{mn}}\) with \(\varphi_{mn}=\frac{2\pi}{\Phi_{0}}\int_{\mathbf{r}_{m}}^{\mathbf{r}_{n}}\mathbf{A}\cdot d\mathbf{l}\) and \(\Phi_{0}=h/e\) being the unit flux quantum. For a magnetic field applied in the \(y\) direction, we can choose the Landau gauge \(\mathbf{A}=(0,0,-Bx)\). To include the magnetic field, we further discretize Eq. (12) in the \(x\) direction as \[H =\sum_{mn}c_{m,n}^{\dagger}(T_{m}c_{m,n}+T_{m,x}c_{m+1,n}+T_{m,y}c_{m,n+1}\] \[+T_{m,\alpha}c_{m+1,n-1}-T_{m,\alpha}c_{m+1,n+1})+h.c., \tag{15}\] in which the Fourier transform \(c_{n}\to\sum_{m}c_{m,n}e^{-ik_{x}x_{m}}\) has been adopted and the hopping matrices are \[T_{m} =\frac{f_{0}+f_{1}\cos\left(k_{z}-m/\ell_{B}^{2}\right)}{2}, \tag{16}\] \[T_{m,\alpha} =\frac{\alpha}{2}\sigma_{y}\tau_{x}\sin\left(k_{z}-\frac{m+1/2}{\ell_{B}^{2}}\right), \tag{17}\] \[T_{m,y} =\frac{f_{2}+if_{y}}{2}+\alpha\sigma_{x}\tau_{x}\sin\left(k_{z}-m/\ell_{B}^{2}\right), \tag{18}\] \[T_{m,x} =\frac{f_{2}-if_{x}}{2}-\alpha\sigma_{y}\tau_{x}\sin\left(k_{z}-\frac{m+1/2}{\ell_{B}^{2}}\right), \tag{19}\] with \(\ell_{B}=\sqrt{\hbar/\left(eB\right)}\) being the magnetic length. After introducing the magnetic field, we now demonstrate the relation between Eq. (11) and Eq. (15).
Figure 3: The Hall conductivity (left axis) and DOS (right axis) for a given \(k_{z}\) with (a) \(\alpha=0\), (b) \(\alpha=0.1\) and (c) \(\alpha=0.3\). (d) The Fermi surface for \(E_{F}=0\) and \(\alpha=0.3\) in the absence of a magnetic field. The parameters are the same as in Fig. 2.

By the inverse Fourier transform of Eq. (15), we arrive at \[\tilde{\mathcal{H}}_{\mathbf{k}} =f_{1}\left(\cos\kappa_{z}-\cos k_{w}\right)+f_{2}\left(\cos k_{x}+\cos k_{y}-2\right)\] \[+\lambda\left(\sin k_{x}\tau_{x}\sigma_{z}-\sin k_{y}\tau_{y}\right)-\alpha\tau_{x}[(\frac{\sin\kappa_{z}^{+}-\sin\kappa_{z}^{-}}{2}\] \[-\cos k_{y}\sin\kappa_{z})\sigma_{x}-\sin k_{y}\frac{\cos\kappa_{z}^{+}-\cos\kappa_{z}^{-}}{2}\sigma_{y}] \tag{20}\] with \(\kappa_{z}=k_{z}-x/\ell_{B}^{2}\) and \(\kappa_{z}^{\pm}=\pm\kappa_{z}+k_{x}\). Here, we have used the Baker-Campbell-Hausdorff formula, i.e., \(e^{\hat{A}+\hat{B}}=e^{\hat{A}}e^{\hat{B}}e^{-\frac{1}{2}[\hat{A},\hat{B}]}\). It is clear that \(\tilde{\mathcal{H}}_{\mathbf{k}}=\mathcal{H}_{\mathbf{k}\rightarrow\mathbf{k}+e\mathbf{A}/\hbar}^{\rm tb}\) follows from the Peierls substitution. However, for \(\alpha\neq 0\), one should be very careful when using the direct Peierls substitution, because of the cross momentum term in \(\Lambda(\mathbf{k})\). For example, when transforming Eq. (20) to the lattice form for numerical calculation, an additional phase \(\ell_{B}^{-2}/2\) emerges, as in Eqs. (17) and (19). The underlying physics of this additional phase is that, after the Peierls substitution, different momentum components can be noncommutative, e.g., \([k_{x},\kappa_{z}]=i\ell_{B}^{-2}\), so the Baker-Campbell-Hausdorff formula must be adopted. For instance, when transforming Eq. (12) to Eq. (15), one should express the trigonometric functions in exponential form and then introduce the Peierls phase via the transforms \(e^{ik_{z}}\to e^{i\kappa_{z}}\) and \(e^{i(k_{x}\pm k_{z})}\to e^{i\kappa_{z}^{\pm}}=e^{\pm i(\kappa_{z}-\ell_{B}^{-2}/2)}e^{ik_{x}}\). By diagonalizing Hamiltonian (15) numerically, we can obtain the LLs and the spatial distribution of the electron probability density, as displayed in Fig. 2. Through the spatial distribution of the probability density, we can easily tell whether or not the LLs are formed by the Weyl orbits. From Figs. 2(a) and (e), we see that the LLs can be formed from the Weyl-orbit mechanism even when there is no curvature in the Fermi arcs. In the Weyl-orbit mechanism, the surface fermions, driven by the magnetic field, move along the Fermi arcs from one Dirac valley to the other and tunnel to the opposite surface at the Dirac points via the bulk states. Therefore, the probability density exhibits a closed loop with two bright stripes crossing the bulk and connecting the surface states, as seen from Fig. 2(e). The width of the bright stripes, \(\sim 2\ell_{B}\), is related to the cyclotron radius of the bulk Dirac fermions, and the distance between the stripes encodes the momentum distance between the Dirac points. The cyclotron center \(x_{c}=\ell_{B}^{2}k_{z}\) of the fermions in the two Dirac valleys differs by \(\Delta x_{c}=2\ell_{B}^{2}k_{w}\), which is exactly the distance between the two bright stripes. A finite \(\varepsilon_{\mathbf{k}}\) that curves the Fermi arcs in the surface Brillouin zone shifts the LLs as a whole by modifying the magnetic flux enclosed by the Weyl orbits, as demonstrated by Figs. 2(b) and (f). In this case, the Weyl orbits are not destroyed, so the LLs remain evenly spaced.
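A minimal, self-contained sketch of this construction is given below: the Hamiltonian of Eq. (15) is assembled from the hopping matrices (16)-(19) and diagonalized at fixed \(k_{z}\), yielding the LLs and, from the eigenvectors, the probability density on the \((x,y)\) lattice. The lattice sizes and the value of \(\ell_{B}^{2}\) are chosen small purely for illustration.

```python
import numpy as np

s0 = np.eye(2, dtype=complex); sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]]); sz = np.diag([1.0, -1.0]).astype(complex)
op = lambda sig, tau: np.kron(sig, tau)
lam, m1, m2, c1, c2, alpha = 0.5, 0.5, 0.5, 0.15, 0.05, 0.3
kw = np.pi / 4
f1 = 2 * (m1 * op(s0, sz) - c1 * np.eye(4))
f2 = 2 * (m2 * op(s0, sz) - c2 * np.eye(4))
f0 = -2 * f2 - f1 * np.cos(kw)
fx, fy = lam * op(sz, sx), lam * op(s0, sy)
Nx, Ny, lB2 = 24, 16, 8.0          # lattice sizes and magnetic length squared

def landau_H(kz):
    H = np.zeros((4 * Nx * Ny, 4 * Nx * Ny), complex)
    idx = lambda m, n: 4 * (m * Ny + n)
    for m in range(Nx):
        Tm = (f0 + f1 * np.cos(kz - m / lB2)) / 2                      # Eq. (16)
        Ta = (alpha / 2) * op(sy, sx) * np.sin(kz - (m + 0.5) / lB2)   # Eq. (17)
        Ty = (f2 + 1j * fy) / 2 + alpha * op(sx, sx) * np.sin(kz - m / lB2)
        Tx = (f2 - 1j * fx) / 2 - alpha * op(sy, sx) * np.sin(kz - (m + 0.5) / lB2)
        for n in range(Ny):
            i = idx(m, n)
            H[i:i+4, i:i+4] += Tm
            if n + 1 < Ny:
                H[i:i+4, idx(m, n+1):idx(m, n+1)+4] += Ty
            if m + 1 < Nx:
                H[i:i+4, idx(m+1, n):idx(m+1, n)+4] += Tx
                if n - 1 >= 0:
                    H[i:i+4, idx(m+1, n-1):idx(m+1, n-1)+4] += Ta
                if n + 1 < Ny:
                    H[i:i+4, idx(m+1, n+1):idx(m+1, n+1)+4] -= Ta
    return H + H.conj().T                                              # the "+ h.c."

E, psi = np.linalg.eigh(landau_H(kz=kw))
print(np.round(E[np.abs(E) < 0.2], 4))           # LLs near the Dirac energy
k = np.argmin(np.abs(E))                         # probability density of one LL:
rho = (np.abs(psi[:, k])**2).reshape(Nx, Ny, 4).sum(axis=2)
```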
As expected, the LLs respond dramatically to \(\Lambda(\mathbf{k})\); see Figs. 2(c) and (d), where the LLs are distributed irregularly, since the Weyl-orbit mechanism can be turned off by the surface Dirac cone. The probability densities displayed in Figs. 2(g) and (h), as well as Fig. 4(c), illustrate that the bulk and surface states can support cyclotron orbits and form LLs independently. The superposition of the bulk and surface LLs results in the complicated LL structure in Figs. 2(c) and (d).

## V Evolution of the QHE with the surface Lifshitz transition

As discussed above, in the presence of the momentum-dependent gap term, the Weyl orbits near the Dirac points vanish. However, the LLs remain discrete, implying that the QHE in topological DSMs can occur without involving the Weyl-orbit mechanism. As \(k_{z}\) is a good quantum number, we can calculate the Hall conductivity from the Kubo formula [34; 35; 36; 37; 10] \[\sigma_{xz} =\frac{i\hbar e^{2}}{L_{x}L_{z}}\sum_{k_{z},\mu\neq\nu}\left(f_{\nu,k_{z}}-f_{\mu,k_{z}}\right)\] \[\times\frac{\langle\psi_{k_{z},\mu}|\hat{v}_{x}|\psi_{k_{z},\nu}\rangle\langle\psi_{k_{z},\nu}|\hat{v}_{z}|\psi_{k_{z},\mu}\rangle}{(\varepsilon_{k_{z},\mu}-\varepsilon_{k_{z},\nu})^{2}}, \tag{21}\] where \(\varepsilon_{\mu,k_{z}}\) and \(|\psi_{k_{z},\mu}\rangle\) denote, respectively, the \(\mu\)-th eigenenergy and eigenstate of Eq. (15), and the velocity operators are defined as \(\hat{v}_{x}=i\hbar^{-1}\sum_{mn}\left[H,c_{m,n}^{\dagger}x_{m}c_{m,n}\right]\) and \(\hat{v}_{z}=\partial H/(\hbar\partial k_{z})\). Here, \(f_{\mu,k_{z}}=[1+\exp(\frac{\varepsilon_{\mu,k_{z}}-E_{F}}{k_{B}T})]^{-1}\) is the Fermi-Dirac distribution function, with \(k_{B}\) and \(T\) being the Boltzmann constant and the temperature, respectively.

Figure 4: (a)-(b) Evolution of the Hall conductivity with the sample thickness, i.e., \(N_{y}=(30,40,50)\), for (a) \(\alpha=0\), with the inset showing that the LLs stem from the Weyl-orbit mechanism, and (b) \(\alpha=0.3\), with the red lines marked by \(A\)-\(D\) denoting the LLs for \(N_{y}=40\). (c) The probability density of the LLs marked in (b), which indicates that the surface and bulk states are disconnected, i.e., the Weyl orbits have been turned off. The parameters are chosen the same as in Fig. 3.

In Fig. 3, we present the numerical results for the Hall conductivity near the bulk Dirac points. As can be seen, the Hall conductivity exhibits a step-wise structure and jumps whenever the Fermi level crosses a LL. The Hall plateaus for \(\alpha=0\) are quantized as \(2ne^{2}/h\), with \(n\) an integer and the factor 2 owing to the spin degeneracy of the LLs. The spin degeneracy of the LLs can be characterized by the \(k_{z}\)-resolved DOS plotted on the right axis of Fig. 3. In the absence of the momentum-dependent gap term, the LLs are evenly spaced and, therefore, the widths of the Hall plateaus are identical, as shown by Fig. 3(a). With increasing \(\alpha\), the Weyl orbits near the bulk Dirac points are broken gradually; as a result, the LLs become complicated and the widths of the Hall plateaus become irregular, as observed in Figs. 3(b) and (c). With the Weyl orbits destroyed, the spin degeneracy of the LLs is removed, leading to the emergence of odd Hall plateaus in Figs. 3(b) and (c). The thickness dependence of the quantized Hall plateaus is one of the characteristic signals of the 3D QHE [47]. The Weyl-orbit-mediated 3D QHE is thickness-dependent, as shown by Fig. 4(a), where the Hall plateaus increase synchronously in width as \(N_{y}\) decreases.
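A zero-temperature sketch of Eq. (21) follows, reusing `landau_H`, `Nx` and `Ny` from the previous snippet. The velocity operators are formed as \(\hat{v}_{x}=i[H,\hat{x}]\) (with \(\hbar=1\)) and \(\hat{v}_{z}\) by numerical differentiation; the coarse \(k_{z}\) grid and the omitted overall \(e^{2}/h\) normalization mean the output shows the plateau structure only qualitatively.

```python
def sigma_xz(EF, kz_grid, dk=1e-5):
    # Position operator x; site-major basis ordering matches landau_H above.
    X = np.kron(np.diag(np.repeat(np.arange(Nx), Ny).astype(float)), np.eye(4))
    total = 0.0
    for kz in kz_grid:
        H = landau_H(kz)
        E, U = np.linalg.eigh(H)
        vx = 1j * (H @ X - X @ H)                          # v_x = i[H, x], hbar = 1
        vz = (landau_H(kz + dk) - landau_H(kz - dk)) / (2 * dk)
        Vx, Vz = U.conj().T @ vx @ U, U.conj().T @ vz @ U  # band-basis matrix elements
        f = (E < EF).astype(float)                         # T = 0 Fermi function
        dE = E[:, None] - E[None, :]
        np.fill_diagonal(dE, np.inf)                       # exclude mu = nu
        chi = np.sum((f[None, :] - f[:, None]) * Vx * Vz.T / dE**2)
        total += (1j * chi).real                           # Eq. (21), e = hbar = 1
    return total / (Nx * len(kz_grid))                     # e^2/h normalization omitted

# sigma_xz steps whenever E_F crosses a LL:
for EF in (-0.1, 0.0, 0.1):
    print(EF, sigma_xz(EF, np.linspace(0, np.pi, 4)))
```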
However, from Fig. 4(b), we see that even when the Weyl orbits have been turned off by the surface Dirac cone, the Hall plateaus can still be sensitive to the thickness of the sample. The thickness-dependent QHE in the absence of the Weyl orbits can be attributed to the bulk LLs, which result from the energy quantization imposed by the confinement in the \(y\) direction, as illustrated by Fig. 4(c), where the probability density exhibits a standing-wave configuration analogous to that of an infinite well. In contrast to the bulk LLs, the surface LLs are less sensitive to the thickness, as shown by the LL marked by \(C\) in Fig. 4(b). While the surface LLs are thickness-independent, the Hall plateaus close to them can still display thickness dependence, because the surface LLs are surrounded by the bulk LLs.

## VI Conclusion

In summary, we have investigated the QHE in topological DSMs, taking into account the surface Lifshitz transition induced by a bulk momentum-dependent gap term. It is found that the bulk momentum-dependent gap term, as a higher-order momentum term, can be neglected in the study of the bulk topological properties, but it dramatically deforms the Fermi arcs and leads to the surface Lifshitz transition. During the surface Lifshitz transition, a 2D Dirac cone develops in the surface states, which keeps the Fermi arcs from connecting the bulk Dirac points; as a result, the Weyl orbits can be turned off without breaking the topology of the bulk energy band. In response to the Lifshitz transition, the Weyl-orbit-mediated 3D QHE breaks down and the quantized Hall plateaus become irregular. As a bulk perturbation, the momentum-dependent gap term is quite common in real materials. Therefore, in addition to the bulk Weyl nodes and surface Fermi arcs, the Lifshitz transition of the surface states also plays an important part in the realization of stable Weyl orbits.

## VII Acknowledgements

This work was supported by the National NSF of China under Grants No. 12274146, No. 11874016, No. 12174121 and No. 12104167, the Guangdong Basic and Applied Basic Research Foundation under Grant No. 2023B1515020050, the Science and Technology Program of Guangzhou under Grant No. 2019050001 and the Guangdong Provincial Key Laboratory under Grant No. 2020B1212060066.
2308.16442
Generalized polynomial functors
We define Schur categories, $\Gamma^d \mathcal C$, associated to a $\Bbbk$-linear category $\mathcal C$, over a commutative ring $\Bbbk$. The corresponding representation categories, $\mathbf{rep}\, \Gamma^d\mathcal C$, generalize categories of strict polynomial functors. Given a $\Bbbk$-superalgebra $A$, we show that for certain categories $\mathcal{V} = \boldsymbol{\mathcal V}_A$, $\boldsymbol{\mathcal E}_A$ of $A$-supermodules, there is a Morita equivalence between $\mathbf{rep}\, \Gamma^d\mathcal{V}$ and the category of supermodules over a generalized Schur superalgebra of the form $S^A(m|n,d)$ and $S^A(n,d)$, respectively. We also describe a formulation of generalized Schur-Weyl duality from the viewpoint of the category $\mathbf{rep}\, \Gamma^d \boldsymbol{\mathcal E}_A$.
Jonathan D. Axtell
2023-08-31T04:04:23Z
http://arxiv.org/abs/2308.16442v1
# Generalized polynomial functors ###### Abstract. We define Schur categories, \(\Gamma^{d}\mathcal{C}\), associated to a \(\Bbbk\)-linear category \(\mathcal{C}\), over a commutative ring \(\Bbbk\). The corresponding representation categories, \(\operatorname{\mathbf{rep}}\Gamma^{d}\mathcal{C}\), generalize categories of strict polynomial functors. Given a \(\Bbbk\)-superalgebra \(A\), we show that for certain categories \(\mathcal{V}=\mathcal{V}_{A}\), \(\mathcal{E}_{A}\) of \(A\)-supermodules, there is a Morita equivalence between \(\operatorname{\mathbf{rep}}\Gamma^{d}\mathcal{V}\) and the category of supermodules over generalized Schur superalgebras of the form \(S^{A}(m|n,d)\) and \(S^{A}(n,d)\), respectively. We also describe a formulation of generalized Schur-Weyl duality from the viewpoint of the category \(\operatorname{\mathbf{rep}}\Gamma^{d}\mathcal{E}_{A}\). ## 1. Introduction Let \(\Bbbk\) be a commutative ring and suppose \(A=A_{\bar{0}}\oplus A_{\bar{1}}\) is a \(\Bbbk\)-superalgebra such that each \(A_{\epsilon}\) (\(\epsilon=\bar{0},\bar{1}\)) is a finitely generated, free \(\Bbbk\)-module. The generalized Schur superalgebras \(S^{A}(n,d)\) were defined by Evseev and Kleshchev [6, 7] in order to prove the Turner double conjecture. These superalgebras are related to wreath product algebras by a generalized Schur-Weyl duality established in [6]. The category \(\mathcal{P}_{d}\) of (degree \(d\), homogeneous) strict polynomial functors was defined by Friedlander and Suslin in [8] in order to prove results about the cohomology of finite group schemes. In the case where \(\Bbbk\) is a field of characteristic \(p\neq 2\), certain super-analogues of these categories were defined in [1], denoted \(\mathsf{Pol}_{d}^{(\mathrm{I})}\) and \(\mathsf{Pol}_{d}^{(\mathrm{II})}\), which are isomorphic to the categories of finitely generated supermodules over the Schur superalgebras, \(S(m|n,d)\) and \(Q(n,d)\), respectively, provided that \(m,n\geq d\). In this paper, we describe a generalization of the categories \(\mathcal{P}_{d}\) and \(\mathsf{Pol}_{d}^{(\dagger)}\) (\(\dagger=\mathrm{I},\mathrm{II}\)). Suppose first that \(\mathcal{C}\) is a \(\Bbbk\)-linear category whose Hom sets belong to the category \(\mathcal{V}_{\Bbbk}\) of finitely generated, projective \(\Bbbk\)-supermodules, and such that composition induces an even \(\Bbbk\)-linear map (see Section 2.7 for more details). In Section 3, we define the Schur categories, \(\Gamma^{d}\mathcal{C}\), whose morphisms consist of divided powers of morphisms in the category \(\mathcal{C}\). Then in Section 4, we define the category of generalized polynomial functors \(\mathcal{P}_{d,\mathcal{C}}\) to be the representation category \[\mathcal{P}_{d,\mathcal{C}}:=\operatorname{\mathbf{rep}}\Gamma^{d}\mathcal{C}\] consisting of all even, \(\Bbbk\)-linear functors from \(\Gamma^{d}\mathcal{C}\) to \(\mathcal{V}_{\Bbbk}\). If the Hom-sets of \(\mathcal{C}\) belong to the subcategory \(\mathcal{V}_{\Bbbk}\subset\boldsymbol{\mathcal{V}}_{\Bbbk}\) of ordinary (non-graded) projective \(\Bbbk\)-modules, then the morphisms of \(\Gamma^{d}\mathcal{C}\) also belong to \(\mathcal{V}_{\Bbbk}\). In this case, we also define the category of generalized polynomial functors \(\mathcal{P}_{d,\mathcal{C}}\) to be the representation category \[\mathcal{P}_{d,\mathcal{C}}:=\operatorname{rep}\Gamma^{d}\mathcal{C},\] consisting of all \(\Bbbk\)-linear functors from \(\Gamma^{d}\mathcal{C}\) to \(\mathcal{V}_{\Bbbk}\). 
Now suppose that \(A\in\boldsymbol{\mathcal{V}}_{\Bbbk}\) is a \(\Bbbk\)-superalgebra as described above. Let \(\boldsymbol{\mathcal{V}}_{A}\) denote the category of all finitely-generated, projective right \(A\)-supermodules. We then introduce _evenly-generated_ \(A\)-supermodules in Section 2.5, and we let \(\boldsymbol{\mathcal{E}}_{A}\subset\boldsymbol{\mathcal{V}}_{A}\) denote the full subcategory of evenly-generated, projective right \(A\)-supermodules. Suppose that \(m,n\geq d\) are integers. Then we also describe a generalized Schur superalgebra of the form \(S^{A}(m|n,d)\) in Section 3.4. In our main result, Theorem 4.3, we show that there are equivalences of categories \[\boldsymbol{\mathcal{P}}_{d,\boldsymbol{\mathcal{V}}_{A}}\ \simeq\ S^{A}(m|n,d)\text{-}\mathbf{mod},\qquad\boldsymbol{\mathcal{P}}_{d,\boldsymbol{\mathcal{E}}_{A}}\ \simeq\ S^{A}(n,d)\text{-}\mathbf{mod},\] between a category of generalized polynomial functors and a corresponding category of left supermodules over a Schur superalgebra. Considering the special cases where \(A=\Bbbk\) or \(A=\mathcal{C}(1)\), a rank one Clifford algebra, gives the following equivalences of categories: \[\boldsymbol{\mathcal{P}}_{d,\boldsymbol{\mathcal{V}}_{\Bbbk}}\simeq\mathsf{Pol}_{d}^{(\mathrm{I})},\qquad\boldsymbol{\mathcal{P}}_{d,\boldsymbol{\mathcal{E}}_{\mathcal{C}(1)}}\simeq\mathsf{Pol}_{d}^{(\mathrm{II})}\] respectively. Similarly, we have isomorphisms of superalgebras: \[S^{C(1)}(n,d)\cong Q(n,d),\qquad S^{\Bbbk}(m|n,d)\cong S(m|n,d).\] It follows that Theorem 4.3 generalizes Theorem 4.2 of [1]. Now consider the case of an ordinary ungraded algebra \(A\). Then we may regard \(S^{A}(n,d)\) as an ordinary algebra. In Theorem 4.4, we describe an equivalence \[\mathcal{P}_{d,\mathcal{V}_{A}}\ \simeq\ S^{A}(n,d)\text{-}\mathrm{mod}\] between \(\mathcal{P}_{d,\mathcal{V}_{A}}\) and the category of left \(S^{A}(n,d)\)-modules. In the case \(A=\Bbbk\), we have an equivalence of categories \[\mathcal{P}_{d,\mathcal{V}_{\Bbbk}}\simeq\mathcal{P}_{d,\Bbbk},\] and there is an algebra isomorphism between \(S^{\Bbbk}(n,d)\) and the classical Schur algebra \(S(n,d)\) defined in [9]. It follows that Theorem 4.4 generalizes Theorem 3.2 of [8]. Finally, in Section 5, we describe an exact functor from the category \(\boldsymbol{\mathcal{P}}_{d,\boldsymbol{\mathcal{E}}_{A}}\) to the category of finite dimensional left supermodules over the wreath product superalgebra \(A\wr\mathfrak{S}_{d}\) (Theorem 5.1). This functor may be viewed as a categorical analogue of the generalized Schur-Weyl duality described by Evseev and Kleshchev in [7].

## 2. Preliminaries

We assume throughout that \(\Bbbk\) is a commutative ring with unit. Let \(\mathbb{N}\) and \(\mathbb{N}_{0}\) denote the positive and nonnegative integers, respectively.

### Definitions

Let \(\mathcal{M}_{\Bbbk}\) denote the category of all \(\Bbbk\)-modules and \(\Bbbk\)-linear maps. Given \(M,N\in\mathcal{M}_{\Bbbk}\), we write \(M\otimes N:=M\otimes_{\Bbbk}N\), \(\operatorname{Hom}(M,N):=\operatorname{Hom}_{\Bbbk}(M,N)\) and \(\operatorname{End}(M):=\operatorname{Hom}(M,M)\). A \(\Bbbk\)-_supermodule_ is a \(\mathbb{Z}/2\)-graded \(\Bbbk\)-module, \(M=M_{\bar{0}}\oplus M_{\bar{1}}\). We write \(|M|\in\mathcal{M}_{\Bbbk}\) to denote the ordinary \(\Bbbk\)-module obtained by forgetting the \(\mathbb{Z}/2\)-grading. Given \(\varepsilon\in\mathbb{Z}/2\) and homogeneous \(v\in M_{\varepsilon}\), we also write \(\overline{v}=\varepsilon\). Elements of \(M_{\bar{0}}\) (resp.
\(M_{\bar{1}}\)) are called _even_ (resp. _odd_). Let us write \(\boldsymbol{\mathcal{M}}_{\Bbbk}\) to denote the category of all \(\Bbbk\)-supermodules and \(\Bbbk\)-linear maps. Given objects \(M,N\in\boldsymbol{\mathcal{M}}_{\Bbbk}\), both the tensor product \(M\otimes N\) and the hom-set \(\operatorname{Hom}(M,N)\) are again objects of \(\boldsymbol{\mathcal{M}}_{\Bbbk}\), with \(\mathbb{Z}/2\)-gradings defined by \[(M\otimes N)_{\epsilon}:=M_{\bar{0}}\otimes N_{\epsilon}\oplus M_{\bar{1}}\otimes N_{\bar{1}+\epsilon}\] and \[\operatorname{Hom}(M,N)_{\epsilon}:=\operatorname{Hom}(M_{\bar{0}},N_{\epsilon})\oplus\operatorname{Hom}(M_{\bar{1}},N_{\bar{1}+\epsilon}),\] respectively, for each \(\epsilon\in\mathbb{Z}/2\). We will consider \(\mathcal{M}_{\Bbbk}\) as a subcategory of \(\boldsymbol{\mathcal{M}}_{\Bbbk}\) by identifying an ordinary \(\Bbbk\)-module \(M\in\mathcal{M}_{\Bbbk}\) as a \(\Bbbk\)-supermodule concentrated in degree \(\bar{0}\).

### Superalgebras

A _superalgebra_ is a \(\Bbbk\)-supermodule \(A=A_{\bar{0}}\oplus A_{\bar{1}}\) which is a unital associative \(\Bbbk\)-algebra, such that multiplication \(m_{A}:A\otimes A\to A\) is an even \(\Bbbk\)-linear map. A superalgebra homomorphism \(\vartheta:A\to B\) is an even \(\Bbbk\)-linear map that is an algebra homomorphism in the usual sense. Given a pair of superalgebras \(A,B\), the tensor product \(A\otimes B\) is again a superalgebra in a natural way. The multiplication is defined by the usual _rule of signs_ convention \[(a_{1}\otimes b_{1})(a_{2}\otimes b_{2})=(-1)^{\overline{b_{1}}\,\overline{a_{2}}}(a_{1}a_{2})\otimes(b_{1}b_{2}) \tag{1}\] for all \(a_{1},a_{2}\in A\), \(b_{1},b_{2}\in B\), and the unit is given by \(1_{A\otimes B}:=1_{A}\otimes 1_{B}\). Expressions of the form (1) are assumed to hold for homogeneous elements and are then extended linearly to the general case.

### Supermodules

Let \(A\) be a superalgebra. A _left \(A\)-supermodule_ is a \(\Bbbk\)-supermodule \(M\in\boldsymbol{\mathcal{M}}_{\Bbbk}\) which is a left \(A\)-module in the usual sense, such that \(A_{\epsilon}M_{\epsilon^{\prime}}\subseteq M_{\epsilon+\epsilon^{\prime}}\) for \(\epsilon,\epsilon^{\prime}\in\mathbb{Z}/2\). One may define _right \(A\)-supermodules_ similarly. A _subsupermodule_ of a left (resp. right) \(A\)-supermodule \(M\) is an \(A\)-submodule \(N\subset M\) such that \(N=(N\cap M_{\bar{0}})\oplus(N\cap M_{\bar{1}})\). A _homomorphism_ \(\varphi:M\to N\) of left (resp. right) \(A\)-supermodules \(M,N\) is a (not necessarily homogeneous) \(\Bbbk\)-linear map such that \[\varphi(av)=(-1)^{\overline{\varphi}\,\overline{a}}a\varphi(v),\quad\text{resp. }\ \varphi(va)=\varphi(v)a,\] for all \(a\in A\), \(v\in M\). Considering \(A\) as a right supermodule over itself, notice that the map \[A\,\xrightarrow{\sim}\,\operatorname{End}_{A}(A):\,a\mapsto m_{A}(a\otimes-) \tag{2}\] is an isomorphism of superalgebras. Let \({}_{A}\boldsymbol{\mathcal{M}}\) (resp. \(\boldsymbol{\mathcal{M}}_{A}\)) denote the category of all left (right) \(A\)-supermodules and homomorphisms, with composition of morphisms defined by the rule of signs, as in (1). Given a pair \(M,N\) of left (right) \(A\)-supermodules, let \(\operatorname{Hom}_{A}(M,N)\) denote the set of homomorphisms, \(\varphi:M\to N\). We may also consider the superalgebra \(\operatorname{End}_{A}(M):=\operatorname{Hom}_{A}(M,M)\). We write \(A\)**-mod** (resp.
\(\operatorname{\mathbf{mod}}\)-\(A\)) to denote the full subcategory of \(A\)-supermodules which are finitely generated as ordinary \(A\)-modules. The underlying \(\Bbbk\)-module \(|A|\in\mathcal{M}_{\Bbbk}\) is an ordinary \(\Bbbk\)-algebra. In this case, we respectively write \[{}_{|A|}\mathcal{M},\quad\mathcal{M}_{|A|},\quad|A|\text{-mod},\quad\text{mod -}|A|,\] to denote the corresponding categories of ordinary left \(|A|\)-modules, etc. ### Free \(A\)-supermodules Let \(A\) be a superalgebra. Given any right \(A\)-supermodule \(M\in\boldsymbol{\mathcal{M}}_{A}\), define the _parity change_\(\Pi M\) to be the same \(A\)-module, but with opposite \(\mathbb{Z}/2\)-grading. For example, given \(m,n\in\mathbb{N}\) and considering \(A\) as a right \(A\)-supermodule, we write \(A^{n|m}:=A^{n}\oplus(\Pi A)^{m}\). We will also write \[\text{$M_{n}$}(A):=\operatorname{End}_{A}(A^{n}),\qquad\quad\text{$M_{n|m}$}(A ):=\operatorname{End}_{A}(A^{n|m})\] to denote corresponding superalgebras. We define the (right) parity change functor \(\Pi:\operatorname{\mathbf{mod}}\)-\(A\to\operatorname{\mathbf{mod}}\)-\(A\) which sends \(M\to\Pi M\). On a right \(A\)-supermodule homomorphism \(\phi:M\to N\), we set \(\Pi(\phi)=(-1)^{\bar{\phi}}\phi\) as a linear map. Suppose the free right \(A\)-supermodule \(A^{n|m}\) (resp. \(A^{n^{\prime}|m^{\prime}}\)) has homogeneous \(A\)-linear basis \(v_{1},\ldots,v_{m+n}\) (resp. \(w_{1},\ldots,w_{m^{\prime}+n^{\prime}}\)), such that \(v_{1},\ldots,v_{n}\) (resp. \(w_{1},\ldots,w_{n^{\prime}}\)) are even and \(v_{n+1},\ldots,v_{m+n}\) (resp. \(w_{n^{\prime}+1},\ldots,w_{m^{\prime}+n^{\prime}}\)) are odd. Then for a given \(a\in A\), we write \(e(j,i)\) (resp. \(e^{(a)}(j,i)\)) to denote the homomorphism in \(\operatorname{Hom}_{A}(A^{n|m},A^{n^{\prime}|m^{\prime}})\) which sends \(v_{k}\mapsto 0\) for all \(k\neq i\) and sends \(v_{i}\mapsto w_{j}\) (resp. \(v_{i}\mapsto w_{j}a\)). Notice that the parity of the element \(e(j,i)\) is given by: \[\overline{e(j,i)}=\begin{cases}\bar{0},&\text{if }\,1\leq i\leq n\,\text{ and }\,1\leq j\leq n^{\prime},\\ &\text{or if }\,n<i\leq m+n\,\text{ and }\,n^{\prime}<j\leq m^{\prime}+n^{ \prime};\\ \bar{1},&\text{else}.\end{cases}\] Similarly, given a homogeneous element \(a\in A\), we have \[\overline{e^{(a)}(j,i)}=\bar{a}+\overline{e(j,i)}. \tag{3}\] Now suppose \(A\in\boldsymbol{\mathcal{V}}_{\Bbbk}\). If \(A\) is free as a \(\Bbbk\)-supermodule, then we may identify \(\operatorname{Hom}(\Bbbk^{n|m},\Bbbk^{n^{\prime}|m^{\prime}})\) as a \(\Bbbk\)-subsupermodule of \(\operatorname{Hom}_{A}(A^{n|m},A^{n^{\prime}|m^{\prime}})\) which is generated by the elements \(e(j,i)\). The isomorphism (2) may then be generalized as follows. **Lemma 2.1**.: _Let \(A\) be a superalgebra which is finitely-generated and free as a \(\Bbbk\)-module. Considering \(A\) as a right \(A\)-supermodule, there exists for any \(m,n,m^{\prime},n^{\prime}\in\mathbb{N}\) an even isomorphism of \(\Bbbk\)-supermodules_ \[A\otimes\operatorname{Hom}(\Bbbk^{n|m},\Bbbk^{n^{\prime}|m^{\prime}})\simeq \operatorname{Hom}_{A}(A^{n|m},A^{n^{\prime}|m^{\prime}}):\quad a\otimes e(j,i) \mapsto e^{(a)}(j,i).\] _In particular, we have the following superalgebra isomorphisms:_ 1. \(A\otimes\operatorname{End}(\Bbbk^{n|m})\simeq\text{M}_{n|m}(A)\)__ 2. \(A\otimes\operatorname{End}(\Bbbk^{n})\simeq\text{M}_{n}(A)\)__ Proof.: Since \(A\) is a free \(\Bbbk\)-supermodule, we have \(A\simeq\Bbbk^{p|q}\), for some \(p,q\in\mathbb{N}_{0}\). Let \(a_{1},\dots,a_{p}\) (resp. 
\(a_{p+1},\dots,a_{p+q}\)) be a \(\Bbbk\)-basis of \(A_{\bar{0}}\) (resp. \(A_{\bar{1}}\)). Then the set \[\{e^{(r)}(j,i):=e^{(a_{r})}(j,i)\ |\ 1\leq i\leq m+n,\,1\leq j\leq m^{\prime}+n^{\prime},\,1\leq r\leq p+q\}\] is a homogeneous \(\Bbbk\)-basis of \(\operatorname{Hom}_{A}(A^{n|m},A^{n^{\prime}|m^{\prime}})\). It follows by (3) that \(\overline{e^{(r)}(j,i)}=\overline{a}_{r}+\overline{e(j,i)}\), for any \(r\). The mapping \[a\otimes e(j,i)\mapsto e^{(a)}(j,i)\] thus yields an even isomorphism. For the case \(m=m^{\prime}\) and \(n=n^{\prime}\), it is easy to check that the isomorphism \[A\otimes\operatorname{End}(\Bbbk^{n|m})\simeq\operatorname{End}_{A}(A^{n|m})\] is a superalgebra homomorphism. This shows part (i) of the lemma. Part (ii) follows by setting \(m=0\).

### Evenly generated supermodules

Notice that \(\boldsymbol{\mathcal{M}}_{A}\) and \({}_{A}\boldsymbol{\mathcal{M}}\) are not abelian categories. It will thus be convenient to consider the abelian subcategory \({}_{A}\boldsymbol{\mathcal{M}}^{\text{ev}}\subset{}_{A}\boldsymbol{\mathcal{M}}\) (resp. \(\boldsymbol{\mathcal{M}}^{\text{ev}}_{A}\subset\boldsymbol{\mathcal{M}}_{A}\)), consisting of the same objects but only even morphisms. This allows us to make use of the basic notions of homological algebra by restricting our attention to only even morphisms. For example, by a short exact sequence in \({}_{A}\boldsymbol{\mathcal{M}}\) (resp. \(\boldsymbol{\mathcal{M}}_{A}\)), we mean a sequence \[0\to L\to M\to N\to 0\] with all the maps being even. Now suppose that \(A\) is a superalgebra and that \(M\) belongs to \(A\)-**mod** (resp. **mod**-\(A\)). Then \(M\) is automatically finitely generated as an \(A\)-module. It follows that there is an exact sequence of the form \[A^{n|m}\to M\to 0.\] We will say that \(M\) is _evenly generated_ if there exists an even surjective homomorphism of the form \[A^{n}\twoheadrightarrow M.\]

### Projective supermodules

We say that a right \(A\)-supermodule \(M\) is _projective_ if \(M\) is a projective object of \(\boldsymbol{\mathcal{M}}_{A}^{\mathsf{ev}}\). Let us write \(\boldsymbol{\mathcal{V}}_{A}\) to denote the full subcategory of \(\mathbf{mod}\text{-}A\) consisting of all finitely generated, projective right \(A\)-supermodules. Let us also write \[\boldsymbol{\mathcal{E}}_{A}\subset\boldsymbol{\mathcal{V}}_{A}\] to denote the full subcategory consisting of the evenly-generated, projective right \(A\)-supermodules. **Remark 2.2**.: The categories \(\boldsymbol{\mathcal{E}}_{A}\) and \(\boldsymbol{\mathcal{V}}_{A}\) are not distinct in general. For example, let \(C(1)\) denote the rank \(1\) Clifford algebra (see [1, Example 2.6]). Then one may check that every \(C(1)\)-supermodule is evenly generated, and thus \(\boldsymbol{\mathcal{E}}_{C(1)}=\boldsymbol{\mathcal{V}}_{C(1)}\). We further write \(\mathcal{V}_{|A|}\) to denote the category of ordinary (non-graded) finitely generated projective \(|A|\)-modules. Note for example that \(\boldsymbol{\mathcal{V}}_{\Bbbk}\) (resp. \(\mathcal{V}_{\Bbbk}\)) denotes the category of all finitely generated, projective \(\Bbbk\)-supermodules (resp. \(\Bbbk\)-modules). In this case, we may identify \(\boldsymbol{\mathcal{E}}_{\Bbbk}\) with \(\mathcal{V}_{\Bbbk}\), considered as a subcategory of \(\boldsymbol{\mathcal{M}}_{\Bbbk}\).

### Enriched categories

We recall the definition of a specific type of \(\mathcal{M}\)-enriched category, where we assume that \(\mathcal{M}\subset\boldsymbol{\mathcal{M}}_{\Bbbk}^{\mathsf{ev}}\).
See [11] for more details about \(\mathcal{M}\)-enriched categories in general. First notice that \(\boldsymbol{\mathcal{M}}_{\Bbbk}=(\boldsymbol{\mathcal{M}}_{\Bbbk},\otimes,\Bbbk)\) is a symmetric monoidal category with symmetry isomorphism given by the so-called _supertwist_ map \[\tau:M\otimes N\xrightarrow{\sim}N\otimes M.\] This is the \(\Bbbk\)-linear map sending \[v\otimes w\mapsto(-1)^{\overline{v}}{}^{\overline{w}}w\otimes v\quad(v\in M,w\in N) \tag{4}\] for any \(M,N\in\boldsymbol{\mathcal{M}}_{\Bbbk}\). It follows by (4) that the supertwist \(\tau\) is even. It is thus natural to view each of the subcategories \[\mathcal{M}_{\Bbbk},\ \boldsymbol{\mathcal{M}}_{\Bbbk}^{\mathsf{ev}}\ \subset\ \boldsymbol{\mathcal{M}}_{\Bbbk}\] as a monoidal subcategory by restriction. In the remainder, we assume that \(\mathcal{M}\) denotes a full subcategory of \(\boldsymbol{\mathcal{M}}_{\Bbbk}^{\mathsf{ev}}\) such that: \[\Bbbk\in\mathcal{M}\ \ \text{and}\ \ M\otimes N\in\mathcal{M},\ \ \text{whenever}\ M,N\in\mathcal{M}. \tag{5}\] It follows that \(\mathcal{M}\) has the structure of a symmetric monoidal category given by restriction. We now recall the corresponding definition of enriched categories. **Definition 2.3** ([11]).: An _\(\mathcal{M}\)-enriched category_ (or _\(\mathcal{M}\)-category_) is a category \(\mathcal{C}\) such that the hom-set \(\operatorname{Hom}_{\mathcal{C}}(X,Y)\) is an object of \(\mathcal{M}\) for all \(X,Y\in\mathcal{C}\), the composition is \(\Bbbk\)-bilinear and the induced map \[\circ_{X,Y,Z}^{\mathcal{C}}:\,\operatorname{Hom}_{\mathcal{C}}(Y,Z)\otimes\operatorname{Hom}_{\mathcal{C}}(X,Y)\ \to\ \operatorname{Hom}_{\mathcal{C}}(X,Z)\] is an even \(\Bbbk\)-linear map (that is, a morphism in \(\mathcal{M}\)) for all \(X,Y,Z\in\mathcal{C}\). If \(\mathcal{M}\subset\mathcal{M}^{\prime}\) are both full subcategories of \(\boldsymbol{\mathcal{M}}_{\Bbbk}^{\mathsf{ev}}\) satisfying (5), then it is clear that any \(\mathcal{M}\)-category is automatically an \(\mathcal{M}^{\prime}\)-category. Notice for example that an \(\mathcal{M}_{\Bbbk}\)-enriched category is just a \(\Bbbk\)-linear category. Let us fix a category \(\mathcal{M}\subset\boldsymbol{\mathcal{M}}_{\Bbbk}^{\mathsf{ev}}\) as above. We next recall the definition of enriched functors and natural transformations. **Definition 2.4**.: Suppose \(\mathcal{C},\mathcal{D}\) are \(\mathcal{M}\)-enriched categories. An _\(\mathcal{M}\)-enriched_ functor \(F:\mathcal{C}\to\mathcal{D}\) is a functor such that the map \[F_{X,Y}:\operatorname{Hom}_{\mathcal{C}}(X,Y)\to\operatorname{Hom}_{\mathcal{D}}(F(X),F(Y))\] is an even \(\Bbbk\)-linear map for any pair of objects \(X,Y\in\mathcal{C}\). **Definition 2.5**.: Next suppose that \(F,G:\mathcal{C}\to\mathcal{D}\) is a pair of \(\mathcal{M}\)-enriched functors. We first define an _even_ (resp.
_odd_) \(\mathcal{M}\)_-enriched natural transformation_, \(\eta:F\to G\), to be a collection of even (odd) linear maps \[\eta(X)\in\operatorname{Hom}_{\mathcal{D}}(F(X),G(X))\qquad({}^{\forall}X\in\mathcal{C})\] such that for a given \(\varphi\in\operatorname{Hom}_{\mathcal{C}}(X,Y)\) we have \[G(\varphi)\circ\eta(X)=(-1)^{\overline{\varphi}\,\overline{\eta(X)}}\eta(Y)\circ F(\varphi).\] In general, an _\(\mathcal{M}\)-enriched natural transformation_, \(\eta:F\to G\), is then defined to be a collection of linear maps, \(\eta(X)=\eta_{\bar{0}}(X)\oplus\eta_{\bar{1}}(X)\) (\(X\in\mathcal{C}\)), such that \[\eta_{\epsilon}(X)\in\operatorname{Hom}_{\mathcal{D}}(F(X),G(X))_{\epsilon}\qquad(\epsilon\in\mathbb{Z}/2)\] and \(\eta_{\bar{0}}\) (resp. \(\eta_{\bar{1}}\)): \(F\to G\) is an even (resp. odd) \(\mathcal{M}\)-enriched natural transformation. We also recall the corresponding category of enriched functors and natural transformations. **Definition 2.6**.: Given a pair \(\mathcal{C},\mathcal{D}\) of \(\mathcal{M}\)-enriched categories, we write \(\operatorname{Fun}_{\Bbbk}(\mathcal{C},\mathcal{D})\) to denote the \(\mathcal{M}\)-enriched category consisting of all even \(\Bbbk\)-linear functors \[X:\mathcal{C}\to\mathcal{D}\] and whose morphisms are \(\mathcal{M}\)-enriched natural transformations. Let \(\mathcal{C}\) be an \(\mathcal{M}\)-enriched category. We write \(X\cong Y\), if \(X,Y\) are isomorphic in \(\mathcal{C}\). If there is an even isomorphism \(\varphi:X\cong Y\) (i.e., \(\varphi\in\operatorname{Hom}_{\mathcal{C}}(X,Y)_{\bar{0}}\)), then we use the notation \(X\simeq Y\). Let \(\mathcal{C}^{\mathrm{ev}}\) denote the subcategory of \(\mathcal{C}\) consisting of the same objects but only even morphisms. Since \(\mathcal{M}\)-enriched functors send even morphisms to even morphisms, they give rise to the corresponding functors between the underlying even subcategories. Notice that \(\mathcal{V}_{\Bbbk}\) and \(\boldsymbol{\mathcal{V}}_{\Bbbk}^{\mathsf{ev}}\) are both full subcategories of \(\boldsymbol{\mathcal{M}}_{\Bbbk}^{\mathsf{ev}}\) which satisfy (5). In the remainder, we will mostly consider enriched categories where \(\mathcal{M}=\mathcal{V}_{\Bbbk}\) or \(\boldsymbol{\mathcal{V}}_{\Bbbk}^{\mathsf{ev}}\). Given a superalgebra \(A\), we may view \(A\)-\(\mathbf{mod}\) and \(\mathbf{mod}\)-\(A\) as \(\boldsymbol{\mathcal{V}}_{\Bbbk}^{\mathsf{ev}}\)-enriched categories, while \({}_{A}\boldsymbol{\mathcal{M}}\) and \(\boldsymbol{\mathcal{M}}_{A}\) are \(\boldsymbol{\mathcal{M}}_{\Bbbk}^{\mathsf{ev}}\)-enriched categories. Furthermore, the categories \({}_{A}\boldsymbol{\mathcal{M}}^{\mathsf{ev}}\) and \(\boldsymbol{\mathcal{M}}_{A}^{\mathsf{ev}}\) are abelian, while the respective subcategories \((A\)-\(\mathbf{mod})^{\mathsf{ev}}\), \((\mathbf{mod}\)-\(A)^{\mathsf{ev}}\), \({}_{A}\boldsymbol{\mathcal{V}}^{\mathsf{ev}}\), and \(\boldsymbol{\mathcal{V}}_{A}^{\mathsf{ev}}\) are exact categories in the sense of Quillen (see [5, 10]).

### Tensor Products of \(\mathcal{M}\)-categories

Suppose \(\mathcal{C},\mathcal{D}\) are \(\mathcal{M}\)-enriched categories.
Then define \(\mathcal{C}\otimes\mathcal{D}\) to be the \(\mathcal{M}\)-enriched category whose objects are pairs \((X,X^{\prime})\) with \(X\in\mathcal{C}\), \(X^{\prime}\in\mathcal{D}\) and with morphisms defined by: \[\operatorname{Hom}_{\mathcal{C}\otimes\mathcal{D}}((X,X^{\prime}),(Y,Y^{\prime})):=\operatorname{Hom}_{\mathcal{C}}(X,Y)\otimes\operatorname{Hom}_{\mathcal{D}}(X^{\prime},Y^{\prime})\] for all \(X,Y\in\mathcal{C}\) and \(X^{\prime},Y^{\prime}\in\mathcal{D}\). The composition of morphisms is defined similarly to (1).

## 3. Categories of Divided Powers

After recalling some basic properties of divided powers, we define the categories \(\Gamma^{d}\mathcal{C}\) associated to any \(\boldsymbol{\mathcal{V}}_{\Bbbk}^{\mathsf{ev}}\)-category \(\mathcal{C}\).

### Divided powers

Suppose \(M\in\boldsymbol{\mathcal{V}}_{\Bbbk}\) and \(d\in\mathbb{Z}_{\geq 1}\). There is a unique (even) right action of the symmetric group \(\mathfrak{S}_{d}\) on the tensor power \(M^{\otimes d}\) such that each transposition \((i\ i+1)\) for \(1\leq i\leq d-1\) acts by the supertwist: \[(v_{1}\otimes\cdots\otimes v_{d}).(i\ i+1)\ :=\ (-1)^{\overline{v}_{i}\overline{v}_{i+1}}v_{1}\otimes\cdots\otimes v_{i+1}\otimes v_{i}\otimes\cdots\otimes v_{d},\] for any \(v_{1},\ldots,v_{d}\in M\), with \(v_{i},v_{i+1}\) being \(\mathbb{Z}/2\)-homogeneous. We define the _\(d\)-th divided power_ of \(M\) to be the invariant subsupermodule \[\Gamma^{d}M:=(M^{\otimes d})^{\mathfrak{S}_{d}}\] and set \(\Gamma^{0}M:=\Bbbk\). **Remark 3.1**.: We have used a definition equivalent to the one given in [1, 2]. See [13, Example 2.1.1] for an analogous statement regarding divided powers of ordinary \(\Bbbk\)-modules. For any \(M,M^{\prime},N,N^{\prime}\in\boldsymbol{\mathcal{V}}_{\Bbbk}\), there is an even isomorphism \[\operatorname{Hom}(M,M^{\prime})\otimes\operatorname{Hom}(N,N^{\prime})\simeq\operatorname{Hom}(M\otimes N,M^{\prime}\otimes N^{\prime}) \tag{6}\] sending \(\phi\otimes\psi\mapsto\phi\boxtimes\psi\), where \[(\phi\boxtimes\psi)(v\otimes w):=(-1)^{\overline{\psi}\,\overline{v}}\phi(v)\otimes\psi(w).\] Notice that the isomorphism (6) is natural with respect to composition. Next, recall the functor \(\otimes^{d}:\boldsymbol{\mathcal{V}}_{\Bbbk}\to\boldsymbol{\mathcal{V}}_{\Bbbk}\) sending \(M\mapsto M^{\otimes d}\), whose action on morphisms is defined by \[\otimes_{M,N}^{d}(\varphi):=\,\varphi\boxtimes\cdots\boxtimes\varphi:M^{\otimes d}\to N^{\otimes d}\] for any \(\varphi\in\operatorname{Hom}(M,N)\). Given \(d,e\in\mathbb{N}\), there is an embedding \[\Gamma^{d}M\otimes\Gamma^{e}N\,\hookrightarrow\,\Gamma^{d+e}(M\oplus N):\,x\otimes y\mapsto x\cdot y, \tag{7}\] where \[x\cdot y\,:=\sum_{\sigma\in\mathfrak{S}_{d+e}/\mathfrak{S}_{d,e}}(x\otimes y).\sigma\,\in\,(M\oplus N)^{\otimes(d+e)}\] for \(x\in M^{\otimes d}\), \(y\in N^{\otimes e}\), where \(\mathfrak{S}_{d,e}\) denotes the image of \(\mathfrak{S}_{d}\times\mathfrak{S}_{e}\hookrightarrow\mathfrak{S}_{d+e}\). Then as in [1, Eq.(26)], there is a corresponding decomposition \[\Gamma^{d}(M\oplus N)\cong\bigoplus_{0\leq c\leq d}\Gamma^{c}M\otimes\Gamma^{d-c}N \tag{8}\] where we identify \(\Gamma^{c}M\otimes\Gamma^{d-c}N\) as a subsupermodule of \(\Gamma^{d}(M\oplus N)\) under the embedding (7). It follows easily from (8) that the divided power \(\Gamma^{d}M\) of a projective \(\Bbbk\)-supermodule \(M\in\boldsymbol{\mathcal{V}}_{\Bbbk}\) is again finitely-generated and projective.
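In small cases, the divided power \(\Gamma^{d}M\) can be computed directly from this definition as the fixed space of the signed \(\mathfrak{S}_{d}\)-action. The sketch below works over \(\Bbbk=\mathbb{Q}\) with a free supermodule \(M=\Bbbk^{\mu|\nu}\) and checks the resulting dimension against the decomposition into ordinary divided and exterior powers used in the proof of Lemma 3.2 below; the choice of \(\mu=\nu=2\), \(d=3\) is purely illustrative.

```python
import numpy as np
from math import comb

mu, nu, d = 2, 2, 3
dim = mu + nu
par = [0] * mu + [1] * nu                      # parities of the basis of M

basis = [()]                                   # basis of M^{(x)d}: index tuples
for _ in range(d):
    basis = [t + (i,) for t in basis for i in range(dim)]
pos = {t: r for r, t in enumerate(basis)}

def transposition_matrix(i):
    """Matrix of the right action of (i, i+1) on M^{(x)d}, with supersign."""
    P = np.zeros((len(basis), len(basis)))
    for t in basis:
        s = t[:i] + (t[i+1], t[i]) + t[i+2:]
        sign = -1.0 if par[t[i]] and par[t[i+1]] else 1.0
        P[pos[s], pos[t]] = sign
    return P

# Gamma^d M = common fixed space of the adjacent transpositions:
A = np.vstack([transposition_matrix(i) - np.eye(len(basis)) for i in range(d - 1)])
nullity = len(basis) - np.linalg.matrix_rank(A)
# Expected: sum_{k+l=d} dim D^k(M_0) * dim Lambda^l(M_1), cf. Eq. (9) below.
expected = sum(comb(mu + k - 1, k) * comb(nu, d - k) for k in range(d + 1))
print(nullity, expected)                       # both 12 for mu = nu = 2, d = 3
```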
This construction yields a functor \(\Gamma^{d}:\boldsymbol{\mathcal{V}}_{\Bbbk}\to\boldsymbol{\mathcal{V}}_{\Bbbk}\) which is a subfunctor of \(\otimes^{d}\). In particular, the action of \(\Gamma^{d}\) on morphisms is defined by restriction \[\Gamma^{d}_{M,N}(\varphi):=(\varphi^{\otimes d})|_{\Gamma^{d}M}:\Gamma^{d}M\to\Gamma^{d}N\] for any \(\varphi\in\operatorname{Hom}(M,N)\). It is then clear that the (non-linear) maps \(\Gamma^{d}_{M,N}\) are even for all \(M,N\in\boldsymbol{\mathcal{V}}_{\Bbbk}\), so that \(\Gamma^{d}\) restricts to a functor \(\Gamma^{d}:\boldsymbol{\mathcal{V}}_{\Bbbk}^{\mathsf{ev}}\to\boldsymbol{\mathcal{V}}_{\Bbbk}^{\mathsf{ev}}\). Suppose that \(I=(d_{1},\ldots,d_{s})\) is a tuple of positive integers, and let \(\mathfrak{S}_{I}\) denote the subgroup \(\mathfrak{S}_{d_{1}}\times\cdots\times\mathfrak{S}_{d_{s}}\subseteq\mathfrak{S}_{|I|}\), where \(|I|:=\sum d_{i}\). Given \(M\in\boldsymbol{\mathcal{V}}_{\Bbbk}\) and distinct nonzero elements \(v_{1},\ldots,v_{s}\in M_{\bar{0}}\), we define the new element \[(v_{1},\ldots,v_{s};I)_{0}:=\sum_{\sigma\in\mathfrak{S}_{|I|}/\mathfrak{S}_{I}}(v_{1}^{\otimes d_{1}}\otimes\cdots\otimes v_{s}^{\otimes d_{s}}).\sigma,\] which belongs to \((M_{\bar{0}}^{\otimes|I|})^{\mathfrak{S}_{|I|}}\). Similarly, if \(v_{1}^{\prime},\ldots,v_{t}^{\prime}\in M_{\bar{1}}\), we define the (possibly zero) element \[(v_{1}^{\prime},\ldots,v_{t}^{\prime})_{1}:=\sum_{\sigma\in\mathfrak{S}_{t}}(v_{1}^{\prime}\otimes\cdots\otimes v_{t}^{\prime}).\sigma,\] which belongs to \((M_{\bar{1}}^{\otimes t})^{\mathfrak{S}_{t}}\). **Lemma 3.2**.: _Suppose that \(M\in\boldsymbol{\mathcal{V}}_{\Bbbk}\) is a free \(\Bbbk\)-supermodule, and that \(\{v_{1},\ldots,v_{\mu}\}\)_ (_resp. \(\{v_{\mu+1},\ldots,v_{\mu+\nu}\}\)_) _is an ordered basis of \(M_{\bar{0}}\)_ (_resp. \(M_{\bar{1}}\)_)_. Then a basis for \(\Gamma^{d}M\) is given by the set of all elements of the form_ \[(v_{i_{1}},\ldots,v_{i_{s}};I)_{0}\cdot(v_{j_{1}},\ldots,v_{j_{t}})_{1},\] _such that \(i_{1},\ldots,i_{s}\)_ (_resp. \(j_{1},\ldots,j_{t}\)_) _are distinct elements of the set \(\{1,\ldots,\mu\}\)_ (_resp. \(\{\mu+1,\ldots,\mu+\nu\}\)_) _and \(|I|+t=d\)._ Proof.: For any \(\Bbbk\)-module \(V\in\mathcal{V}_{\Bbbk}\), let \(D^{d}V\) and \(\Lambda^{d}V\) denote the ordinary \(d\)-th divided and exterior powers of \(V\), respectively. Then it follows, as in [1, (8)], that we have the following isomorphisms of \(\Bbbk\)-modules \[\Gamma^{d}M\ \simeq\ \bigoplus_{k+l=d}D^{k}|M_{\bar{0}}|\otimes\Lambda^{l}|M_{\bar{1}}|\ \simeq\bigoplus_{k+l=d}\Gamma^{k}M_{\bar{0}}\otimes\Gamma^{l}M_{\bar{1}} \tag{9}\] for each \(d\geq 0\). One may check by comparison with Proposition 4 of [4, Ch. IV, §5] that the set \[\{(v_{i_{1}},\ldots,v_{i_{s}};I)_{0};\ |I|=k\text{ and }1\leq i_{1}<\cdots<i_{s}\leq\mu\}\] is a basis of \(\Gamma^{k}M_{\bar{0}}\). It is also not difficult to verify that \[\{(v_{j_{1}},\ldots,v_{j_{l}})_{1};\ 1\leq j_{1}<\cdots<j_{l}\leq\nu\}\] is a basis of \(\Gamma^{l}M_{\bar{1}}\). The lemma then follows from (9).

### The category \(\Gamma^{d}\mathcal{C}\)

Suppose \(M,N\in\boldsymbol{\mathcal{V}}_{\Bbbk}\). Then define \(\psi^{d}=\psi^{d}(M,N)\) to be the unique map which makes the following square commute: \[\begin{array}{ccc}\Gamma^{d}M\otimes\Gamma^{d}N&\xrightarrow{\ \psi^{d}\ }&\Gamma^{d}(M\otimes N)\\ \downarrow&&\downarrow\\ M^{\otimes d}\otimes N^{\otimes d}&\xrightarrow{\ \sim\ }&(M\otimes N)^{\otimes d}\end{array} \tag{10}\] where the vertical arrows are the canonical inclusions and the bottom map is the signed shuffle isomorphism of tensor factors induced by the supertwist (4). We are now ready to define divided powers of any given \(\boldsymbol{\mathcal{V}}_{\Bbbk}^{\text{ev}}\)-enriched category. **Definition 3.3**.: Suppose that \(\mathcal{C}\) is a \(\boldsymbol{\mathcal{V}}_{\Bbbk}^{\text{ev}}\)-enriched category.
Then we define a new \(\boldsymbol{\mathcal{V}}_{\Bbbk}^{\text{ev}}\)-category, \(\Gamma^{d}\mathcal{C}\), which has the same objects as \(\mathcal{C}\), with morphisms given by \[\operatorname{Hom}_{\Gamma^{d}\mathcal{C}}(X,Y):=\Gamma^{d}\operatorname{Hom}_{\mathcal{C}}(X,Y)\] for all \(X,Y\in\mathcal{C}\). The composition of morphisms \[\circ_{X,Y,Z}^{\Gamma^{d}\mathcal{C}}:\ \operatorname{Hom}_{\Gamma^{d}\mathcal{C}}(Y,Z)\otimes\operatorname{Hom}_{\Gamma^{d}\mathcal{C}}(X,Y)\,\to\,\operatorname{Hom}_{\Gamma^{d}\mathcal{C}}(X,Z)\] is defined by the following composite \[\Gamma^{d}\operatorname{Hom}_{\mathcal{C}}(Y,Z)\otimes\Gamma^{d}\operatorname{Hom}_{\mathcal{C}}(X,Y)\ \xrightarrow{\ \psi^{d}\ }\ \Gamma^{d}\big{(}\operatorname{Hom}_{\mathcal{C}}(Y,Z)\otimes\operatorname{Hom}_{\mathcal{C}}(X,Y)\big{)}\] \[\xrightarrow{\ \Gamma^{d}(\circ_{X,Y,Z}^{\mathcal{C}})\ }\ \Gamma^{d}\operatorname{Hom}_{\mathcal{C}}(X,Z)\] for all \(X,Y,Z\in\mathcal{C}\).

### The algebra \(\Gamma^{d}A\)

Suppose that \(A\in\boldsymbol{\mathcal{V}}_{\Bbbk}\) is a superalgebra. Then we may identify \(A\) as a \(\boldsymbol{\mathcal{V}}_{\Bbbk}^{\text{ev}}\)-enriched category with one object, \(o\), and hom-set \(\operatorname{Hom}_{A}(o,o):=A\). Hence, the divided power \(\Gamma^{d}A\) of a superalgebra \(A\) is also a superalgebra. The multiplication \(m_{\Gamma^{d}A}\) is induced by the composition \[\Gamma^{d}A\otimes\Gamma^{d}A\xrightarrow{\ \psi^{d}\ }\Gamma^{d}(A\otimes A)\xrightarrow{\ \Gamma^{d}(m_{A})\ }\Gamma^{d}A.\] In particular, \(\Gamma^{d}A\) is a subsuperalgebra of \(A^{\otimes d}\).

### Generalized Schur algebras

Fix \(m,n,d\in\mathbb{N}\), and let \(A\in\boldsymbol{\mathcal{V}}_{\Bbbk}\) be a superalgebra. Then we may identify \(\mathit{M}_{n}(A)\) with the superalgebra of \(n\times n\) matrices over \(A\). We may also identify \(\mathit{M}_{n}(A)\) with the tensor product \(A\otimes\mathit{M}_{n}(\Bbbk)\) of superalgebras. The generalized Schur algebra \(S^{A}(n,d)\) defined in [6] may then be identified as the superalgebra \[S^{A}(n,d)=\Gamma^{d}\mathit{M}_{n}(A).\] The classical Schur algebra, \(S(n,d)\), is obtained as the special case: \(S^{\Bbbk}(n,d)=\Gamma^{d}\mathit{M}_{n}(\Bbbk)\). As mentioned in [13, Example 2.1.1], this definition is equivalent to the original definition given by Green in [9]. We next introduce another type of generalized Schur superalgebra which will be needed later. Given nonnegative integers \(m,n\), let \[S^{A}(m|n,d):=\Gamma^{d}\mathit{M}_{m|n}(A).\] Notice that \(S^{\Bbbk}(m|n,d)\) is isomorphic to the Schur superalgebra \(S(m|n,d)\) introduced by Muir in [12].

## 4. Generalized Polynomial Functors

We now define a generalization of both the ordinary categories of strict polynomial functors, defined by Friedlander and Suslin in [8], and the superanalogues of these categories defined in [1].

### Representations of a category

Suppose that \(\mathcal{C}\) is a \(\boldsymbol{\mathcal{V}}_{\Bbbk}^{\mathsf{ev}}\)-enriched category. Then write \[\operatorname{\mathbf{rep}}\mathcal{C}:=\operatorname{Fun}_{\Bbbk}(\mathcal{C},\boldsymbol{\mathcal{V}}_{\Bbbk})\] to denote the category of all even \(\Bbbk\)-linear functors, \(F:\mathcal{C}\to\boldsymbol{\mathcal{V}}_{\Bbbk}\). Next suppose that \(X\in\mathcal{C}\), and consider the superalgebra \(E:=\operatorname{End}_{\mathcal{C}}(X)\).
The categories \(\operatorname{\mathbf{rep}}\mathcal{C}\) and \(E\text{-}\mathbf{mod}\) are both \(\boldsymbol{\mathcal{V}}_{\Bbbk}^{\mathsf{ev}}\)-enriched categories. Recall that \((E\text{-}\mathbf{mod})^{\mathsf{ev}}\) and \(\boldsymbol{\mathcal{V}}_{\Bbbk}^{\mathsf{ev}}\) are both exact categories. Now since direct sums, products, kernels and cokernels can be computed objectwise in the target category \(\boldsymbol{\mathcal{V}}_{\Bbbk}\), we see that \((\operatorname{\mathbf{rep}}\mathcal{C})^{\mathsf{ev}}\) is also an exact category. The relationship between \(\operatorname{\mathbf{rep}}\mathcal{C}\) and \(E\text{-}\mathbf{mod}\) is given by evaluation on \(X\). If \(F\in\operatorname{\mathbf{rep}}\mathcal{C}\), the (even) functoriality of \(F\) makes the \(\Bbbk\)-supermodule \(F(X)\) into a left \(E\)-supermodule. We thus have the evaluation functor: \[\operatorname{\mathbf{rep}}\mathcal{C}\to\ E\text{-}\mathbf{mod}:\,F\mapsto F(X).\] This evaluation functor also has the following interpretation. Since the covariant hom-functor \(h^{X}:=\operatorname{Hom}_{\mathcal{C}}(X,-)\) is an even \(\Bbbk\)-linear functor, it must belong to \(\operatorname{\mathbf{rep}}\mathcal{C}\). In this situation, Yoneda's lemma takes the form of an even isomorphism \[\operatorname{Hom}_{\operatorname{\mathbf{rep}}\mathcal{C}}(h^{X},F)\ \simeq\ F(X) \tag{11}\] for any \(F\in\operatorname{\mathbf{rep}}\mathcal{C}\). In particular, \[E\ =\ h^{X}(X)\ \simeq\ \operatorname{End}_{\operatorname{\mathbf{rep}}\mathcal{C}}(h^{X}).\] Hence, Yoneda's lemma allows us to interpret "evaluation at \(X\)" as the functor \(\operatorname{Hom}_{\mathbf{rep}\,\mathcal{C}}(h^{X},-):\mathbf{rep}\,\mathcal{C}\to E\text{-}\mathbf{mod}\). Notice that the parity change functor, \(\Pi:\boldsymbol{\mathcal{V}}_{\Bbbk}\to\boldsymbol{\mathcal{V}}_{\Bbbk}\), induces by composition a functor \(\Pi\circ-:\mathbf{rep}\,\mathcal{C}\to\mathbf{rep}\,\mathcal{C}\). We now describe a condition on \(X\) which ensures that evaluation is in fact an equivalence of categories. The following lemma is a generalization of [14, Proposition A.1]. It was stated in [1, Proposition A.1] assuming that \(\Bbbk\) is a field. However, the generalization to the case of a commutative ring \(\Bbbk\) is easily obtained. **Lemma 4.1** ([14, 1]).: _Let \(\mathcal{C}\) be a \(\boldsymbol{\mathcal{V}}_{\Bbbk}^{\text{ev}}\)-enriched category. Assume that there exists an object \(P\in\mathcal{C}\) such that for all \(X,Y\in\mathcal{C}\), the composition induces a surjective map_ \[\operatorname{Hom}_{\mathcal{C}}(P,Y)\otimes\operatorname{Hom}_{\mathcal{C}}(X,P)\twoheadrightarrow\operatorname{Hom}_{\mathcal{C}}(X,Y).\] _Then the following hold._ 1. _For all_ \(F\in\mathbf{rep}\,\mathcal{C}\) _and all_ \(Y\in\mathcal{C}\)_, the canonical map_ \(\operatorname{Hom}_{\mathcal{C}}(P,Y)\otimes F(P)\to F(Y)\) _is surjective._ 2. _The set_ \(\{h^{P},\Pi h^{P}\}\) _is a projective generator of_ \((\mathbf{rep}\,\mathcal{C})^{\text{ev}}\)_, where_ \(h^{P}=\operatorname{Hom}_{\mathcal{C}}(P,-)\) _as above._ 3.
_If_ \(E=\operatorname{End}_{\mathcal{C}}(P)\)_, then evaluation on_ \(P\) _induces an equivalence of categories,_ \(\mathbf{rep}\,\mathcal{C}\simeq E\text{-}\mathbf{mod}\)_._ We note that for a given \(\mathcal{V}_{\Bbbk}\)-category \(\mathcal{C}\), we may also consider the category \[\operatorname{rep}\,\mathcal{C}:=\operatorname{Fun}_{\Bbbk}(\mathcal{C},\mathcal{V}_{\Bbbk})\] consisting of ordinary \(\Bbbk\)-linear functors and natural transformations.

### Generalized polynomial functors

Given a \(\boldsymbol{\mathcal{V}}_{\Bbbk}^{\text{ev}}\)-category \(\mathcal{C}\) and \(d\in\mathbb{N}_{0}\), we associate the corresponding category \[\boldsymbol{\mathcal{P}}_{d,\mathcal{C}}:=\mathbf{rep}\,\Gamma^{d}\mathcal{C}\] which we call the category of (_homogeneous, degree \(d\)_) _generalized polynomial functors_ associated to the category \(\mathcal{C}\). Suppose now that \(\mathcal{C}\) is a \(\mathcal{V}_{\Bbbk}\)-enriched category. Then we may also consider the category \[\mathcal{P}_{d,\mathcal{C}}:=\operatorname{rep}\,\Gamma^{d}\mathcal{C}\] whose objects we again call generalized polynomial functors. **Example 4.2**.: Suppose that \(\Bbbk\) is a field. Then we have the following special cases: 1. \(\mathcal{P}_{d,\mathcal{V}_{\Bbbk}}\) is equivalent to the category \(\mathcal{P}_{d,\Bbbk}\) of homogeneous strict polynomial functors, defined by Friedlander and Suslin in [8]. 2. \(\boldsymbol{\mathcal{P}}_{d,\boldsymbol{\mathcal{V}}_{\Bbbk}}\) is equivalent to the category \(\mathsf{Pol}_{d}^{\text{I}}\) defined in [1, Section 3]. 3. Recall from Remark 2.2 that \(C(1)\) denotes the rank \(1\) Clifford algebra. Then \(\boldsymbol{\mathcal{P}}_{d,\boldsymbol{\mathcal{E}}_{C(1)}}=\boldsymbol{\mathcal{P}}_{d,\boldsymbol{\mathcal{V}}_{C(1)}}\) is equivalent to the category \(\mathsf{Pol}_{d}^{\rm II}\) of _spin polynomial functors_ defined in [1, Section 3]. We are now ready to state our main result. **Theorem 4.3**.: _Suppose \(A\in\boldsymbol{\mathcal{V}}_{\Bbbk}\) is a superalgebra which is free as a \(\Bbbk\)-module. Then for \(m,n,d\in\mathbb{N}\), with \(m,n\geq d\), we have the following equivalences of categories:_ 1. \(\boldsymbol{\mathcal{P}}_{d,\boldsymbol{\mathcal{V}}_{A}}\ \stackrel{{\sim}}{{\longrightarrow}}\ S^{A}(m|n,d)\text{-}\mathbf{mod};\) 2. \(\boldsymbol{\mathcal{P}}_{d,\boldsymbol{\mathcal{E}}_{A}}\ \stackrel{{\sim}}{{\longrightarrow}}\ S^{A}(n,d)\text{-}\mathbf{mod}\)_,_ _given by evaluation on \(A^{m|n}\) and \(A^{n}\), respectively._ Proof.: We prove part (i). Since \(S^{A}(m|n,d)=\operatorname{End}_{\Gamma^{d}\boldsymbol{\mathcal{V}}_{A}}(A^{m|n})\), it suffices by Lemma 4.1 to show that the composition map \[\operatorname{Hom}_{\Gamma^{d}\boldsymbol{\mathcal{V}}_{A}}(A^{m|n},Y)\otimes\operatorname{Hom}_{\Gamma^{d}\boldsymbol{\mathcal{V}}_{A}}(X,A^{m|n})\to\operatorname{Hom}_{\Gamma^{d}\boldsymbol{\mathcal{V}}_{A}}(X,Y) \tag{12}\] is surjective, for all \(X,Y\in\boldsymbol{\mathcal{V}}_{A}\). By naturality, one may reduce to the case where \(X\) is a free (right) \(A\)-supermodule. Since \(Y\) is projective and thus a direct summand of a free \(A\)-supermodule, we may also assume that \(Y\) is free. We may thus assume that \(X=A^{p|q}\), \(Y=A^{s|t}\), for some \(p,q,s,t\in\mathbb{N}_{0}\). Let us identify \(\operatorname{Hom}(\Bbbk^{p|q},\Bbbk^{m|n})\) as a subset of \(\operatorname{Hom}_{A}(X,A^{m|n})\) via the embedding of Lemma 2.1.(i).
It will then suffice to show that the following map, \[\Gamma^{d}\!\operatorname{Hom}_{A}(A^{m|n},Y)\otimes\Gamma^{d}\!\operatorname{Hom}_{\Bbbk}(\Bbbk^{p|q},\Bbbk^{m|n})\to\Gamma^{d}\!\operatorname{Hom}_{A}(X,Y), \tag{13}\] obtained by restricting composition, is surjective. Suppose that \(a_{1},\dots,a_{\mu}\) (resp. \(a_{\mu+1},\dots,a_{\mu+\nu}\)) is a \(\Bbbk\)-basis of \(A_{\bar{0}}\) (resp. \(A_{\bar{1}}\)). Then, using the notation in the proof of Lemma 2.1, it follows that there exist (homogeneous) bases \((e(j,i))\), \((e^{(r)}(k,j))\) and \((e^{(r)}(k,i))\) of \(\operatorname{Hom}(\Bbbk^{p|q},\Bbbk^{m|n})\), \(\operatorname{Hom}_{A}(A^{m|n},Y)\) and \(\operatorname{Hom}_{A}(X,Y)\), respectively, such that: \[e^{(r)}(k,j_{1})\circ e(j_{2},i)\:=\:e^{(r)}(k,i)\cdot\delta_{j_{1},j_{2}},\] where \(\delta_{j_{1},j_{2}}\) is the Kronecker delta. Suppose that \(r_{1},\dots,r_{a},r^{\prime}_{1},\dots,r^{\prime}_{b}\in\{1,\dots,\mu+\nu\}\), for some \(1\leq a,b\leq d\), and assume the basis elements \[e^{(r_{1})}(k_{1},i_{1}),\dots,e^{(r_{a})}(k_{a},i_{a})\quad(\text{resp. }e^{(r^{\prime}_{1})}(k^{\prime}_{1},i^{\prime}_{1}),\dots,e^{(r^{\prime}_{b})}(k^{\prime}_{b},i^{\prime}_{b}))\] are even (resp. odd). Then to prove surjectivity, it suffices to show that each element of the form \[(e^{(r_{1})}(k_{1},i_{1}),\dots,e^{(r_{a})}(k_{a},i_{a});I)_{0}\cdot(e^{(r^{\prime}_{1})}(k^{\prime}_{1},i^{\prime}_{1}),\dots,e^{(r^{\prime}_{b})}(k^{\prime}_{b},i^{\prime}_{b}))_{1} \tag{14}\] belongs to the image of (13), since \(\Gamma^{d}\!\operatorname{Hom}_{A}(X,Y)\) is spanned by such elements according to Lemma 3.2. Now suppose \(I=(d_{1},\dots,d_{a})\in(\mathbb{Z}_{>0})^{a}\). Then, since \(m,n\geq d=|I|+b\geq a+b\), we may choose _distinct_ indices \(j_{1},\dots,j_{a},j^{\prime}_{1},\dots,j^{\prime}_{b}\) such that \(e(j_{1},i_{1}),\ldots,e(j_{a},i_{a}),\ e(j^{\prime}_{1},i^{\prime}_{1}),\ldots,e(j^{\prime}_{b},i^{\prime}_{b})\) are all even elements. We are thus able to form the element \[(e^{(r_{1})}(k_{1},j_{1}),\ldots,e^{(r_{a})}(k_{a},j_{a});I)_{0}\cdot(e^{(r^{\prime}_{1})}(k^{\prime}_{1},j^{\prime}_{1}),\ldots,e^{(r^{\prime}_{b})}(k^{\prime}_{b},j^{\prime}_{b}))_{1}\] \[\quad\otimes(e(j_{1},i_{1}),\ldots,e(j_{a},i_{a}),\ e(j^{\prime}_{1},i^{\prime}_{1}),\ldots,e(j^{\prime}_{b},i^{\prime}_{b});J)_{0},\] where \(J=(d_{1},\ldots,d_{a},1,\ldots,1)\in(\mathbb{Z}_{>0})^{a+b}\), which is sent to the element (14) under the map induced by composition in \(\Gamma^{d}\boldsymbol{\mathcal{V}}_{A}\). The proof of equivalence (ii) is similar. We also have the following non-graded analogue of Theorem 4.3. **Theorem 4.4**.: _Let \(A\in\mathcal{V}_{\Bbbk}\) be an ordinary \(\Bbbk\)-algebra which is free as a \(\Bbbk\)-module, and suppose \(n,d\in\mathbb{N}\). Then there is an equivalence of categories:_ \[\mathcal{P}_{d,\mathcal{V}_{A}}\ \stackrel{{\sim}}{{\longrightarrow}}\ S^{A}(n,d)\operatorname{-mod},\] _provided \(n\geq d\)._ Proof.: The proof is analogous to that of Theorem 4.3, using [14, Prop. A.1] in place of Lemma 4.1. It follows by Section 3.4 and Example 4.2 that Theorem 4.3 generalizes Theorem 4.2 of [1], while Theorem 4.4 generalizes Theorem 3.2 of [8]. ## 5. Generalized Schur-Weyl Duality Throughout this section, we fix a superalgebra \(A\in\boldsymbol{\mathcal{V}}_{\Bbbk}\). ### Wreath products Consider the group algebra of the symmetric group, \(\Bbbk\mathfrak{S}_{d}\), as a superalgebra concentrated in degree zero.
Then the _wreath product superalgebra_, \(A\wr\mathfrak{S}_{d}\), is the \(\Bbbk\)-supermodule \(A^{\otimes d}\otimes\Bbbk\mathfrak{S}_{d}\), with multiplication defined by \[(x\otimes\rho)\cdot(y\otimes\sigma):=x(y\rho^{-1})\otimes\rho\sigma \tag{15}\] for all \(x,y\in A^{\otimes d}\) and \(\rho,\sigma\in\mathfrak{S}_{d}\). If \(G\) is a finite group, then note for example that \((\Bbbk G)\wr\mathfrak{S}_{d}\) is isomorphic to the group algebra of the classical wreath product, \(G\wr\mathfrak{S}_{d}:=G^{d}\rtimes\mathfrak{S}_{d}\). Assume in the remainder that \(A\) is free as a \(\Bbbk\)-module. We then identify the tensor power \(A^{\otimes d}\) and the group algebra \(\Bbbk\mathfrak{S}_{d}\) as subalgebras of \(A\wr\mathfrak{S}_{d}\) by setting \[A^{\otimes d}=A^{\otimes d}\otimes 1_{\mathfrak{S}_{d}},\quad\Bbbk\mathfrak{S}_{d}=1_{A^{\otimes d}}\otimes\Bbbk\mathfrak{S}_{d},\] respectively. ### Tensor products Given nonnegative integers \(d\) and \(e\), we have an embedding \(\mathfrak{S}_{d}\times\mathfrak{S}_{e}\hookrightarrow\mathfrak{S}_{d+e}\). This induces an embedding \[\Gamma^{d+e}M\hookrightarrow\Gamma^{d}M\otimes\Gamma^{e}M \tag{16}\] for any \(M\in\boldsymbol{\mathcal{V}}_{\Bbbk}\), given by the composition of the following maps: \[\Gamma^{d+e}M=(M^{\otimes d+e})^{\mathfrak{S}_{d+e}}\hookrightarrow(M^{\otimes d+e})^{\mathfrak{S}_{d}\times\mathfrak{S}_{e}}\simeq(M^{\otimes d})^{\mathfrak{S}_{d}}\otimes(M^{\otimes e})^{\mathfrak{S}_{e}}=\Gamma^{d}M\otimes\Gamma^{e}M.\] We then have a corresponding diagonal embedding \[\delta:\Gamma^{d+e}\mathcal{C}\hookrightarrow\Gamma^{d}\mathcal{C}\otimes\Gamma^{e}\mathcal{C}, \tag{17}\] which sends \(X\mapsto(X,X)\) and which acts on morphisms via the embedding induced by (16). Given \(F\in\boldsymbol{\mathcal{P}}_{d,\mathcal{C}}\), \(G\in\boldsymbol{\mathcal{P}}_{e,\mathcal{C}}\), let \(F\boxtimes G\in\mathbf{rep}(\Gamma^{d}\mathcal{C}\otimes\Gamma^{e}\mathcal{C})\) denote the functor which sends \[(X_{i},Y_{i})\mapsto F(X_{i})\otimes G(Y_{i})\ \ \text{and}\ \ \varphi_{1}\otimes\varphi_{2}\mapsto F(\varphi_{1})\boxtimes G(\varphi_{2}),\] for all \(X_{i},Y_{i}\in\mathcal{C}\) and \(\varphi_{i}\in\operatorname{Hom}_{\mathcal{C}}(X_{i},Y_{i})\), for \(i=1,2\), respectively. We then define the tensor product \[-\otimes-:\boldsymbol{\mathcal{P}}_{d,\mathcal{C}}\times\boldsymbol{\mathcal{P}}_{e,\mathcal{C}}\to\boldsymbol{\mathcal{P}}_{d+e,\mathcal{C}} \tag{18}\] as the composition of functors \[\boldsymbol{\mathcal{P}}_{d,\mathcal{C}}\times\boldsymbol{\mathcal{P}}_{e,\mathcal{C}}\xrightarrow{-\boxtimes-}\mathbf{rep}(\Gamma^{d}\mathcal{C}\otimes\Gamma^{e}\mathcal{C})\xrightarrow{\delta_{*}}\boldsymbol{\mathcal{P}}_{d+e,\mathcal{C}},\] where \(\delta_{*}\) is the functor induced by (17). ### Generalized Schur-Weyl duality Let us write \(\boldsymbol{\mathcal{P}}_{d}^{A}:=\boldsymbol{\mathcal{P}}_{d,\boldsymbol{\mathcal{E}}_{A}}\). Then the (external) tensor product defined above induces a bifunctor \[-\otimes-:\boldsymbol{\mathcal{P}}_{d}^{A}\times\boldsymbol{\mathcal{P}}_{e}^{A}\to\boldsymbol{\mathcal{P}}_{d+e}^{A}.\] This gives \(\boldsymbol{\mathcal{P}}^{A}:=\bigoplus_{d\geq 0}\boldsymbol{\mathcal{P}}_{d}^{A}\) the structure of a monoidal category. Given any object \(X\in\boldsymbol{\mathcal{E}}_{A}\), we associate the corresponding object \[\Gamma^{d,X}:=\operatorname{Hom}_{\Gamma^{d}\boldsymbol{\mathcal{E}}_{A}}(X,-)\] in the category \(\boldsymbol{\mathcal{P}}_{d}^{A}\).
It follows by Yoneda's lemma (11) that \(\Gamma^{d,X}\) is a projective object. Let us write \(\Gamma^{d,n}_{A}:=\Gamma^{d,A^{n}}\). Then it follows from (7) that we have a decomposition \[\Gamma^{d,m+n}_{A}\simeq\bigoplus_{i+j=d}\Gamma^{i,m}_{A}\otimes\Gamma^{j,n}_{A} \tag{19}\] of strict polynomial functors. Now let \(\Lambda(n,d)\) denote the set of all tuples \(\lambda=(\lambda_{1},\dots,\lambda_{n})\in(\mathbb{Z}_{\geq 0})^{n}\) such that \(\sum\lambda_{i}=d\). Given \(\lambda\in\Lambda(n,d)\), we will write \(\Gamma^{\lambda}_{A}:=\Gamma^{\lambda_{1},1}_{A}\otimes\dots\otimes\Gamma^{\lambda_{n},1}_{A}\). By (19) and induction, we have a canonical isomorphism \[\Gamma^{d,n}_{A}\simeq\bigoplus_{\lambda\in\Lambda(n,d)}\Gamma^{\lambda}_{A}. \tag{20}\] It follows that the objects \(\Gamma^{\lambda}_{A}\) are projective in \(\boldsymbol{\mathcal{P}}^{A}_{d}\). Let \(\omega=(1,\dots,1)\in\Lambda(d,d)\). Notice that \(\Gamma^{\omega}(X)\simeq X^{\otimes d}\) for any \(X\in\boldsymbol{\mathcal{E}}_{A}\), since \(\operatorname{Hom}_{A}(A,X)\simeq X\). It follows that \(\otimes^{d}:=\Gamma^{\omega}\) is a projective object of \(\boldsymbol{\mathcal{P}}^{A}_{d}\). We then have the following analogue of the generalized Schur-Weyl duality described in [6]. **Theorem 5.1**.: _Assume \(n\geq d\), and write \(V=A^{n}\)._ 1. _The left_ \(S^{A}(n,d)\)_-supermodule_ \(V^{\otimes d}\simeq\operatorname{Hom}_{\boldsymbol{\mathcal{P}}^{A}_{d}}(\Gamma^{d,n},\otimes^{d})\) _is a projective object of_ \(S^{A}(n,d)\text{-}\mathbf{mod}\)_._ 2. _There is a canonical isomorphism of superalgebras:_ \[\operatorname{End}_{\boldsymbol{\mathcal{P}}^{A}_{d}}(\otimes^{d})\cong A\wr\mathfrak{S}_{d}.\] 3. _We have an exact functor_ \[\operatorname{Hom}_{\boldsymbol{\mathcal{P}}^{A}_{d}}(\otimes^{d},-):\boldsymbol{\mathcal{P}}^{A}_{d}\to A\wr\mathfrak{S}_{d}\text{-}\mathbf{mod}.\] Proof.: Part (i) follows from Theorem 4.3.(i) and the fact that \(\otimes^{d}\) is a projective object of \(\boldsymbol{\mathcal{P}}^{A}_{d}\). Recall from [6, §5] that there is an even idempotent \(\xi_{\omega}\in S^{A}(n,d)\) such that \(V^{\otimes d}\simeq S^{A}(n,d)\xi_{\omega}\), as left supermodules. Since \(\xi_{\omega}\) is an idempotent, it follows that the superalgebras \(\xi_{\omega}S^{A}(n,d)\xi_{\omega}\) and \(\operatorname{End}_{S^{A}(n,d)}(S^{A}(n,d)\xi_{\omega})\) are isomorphic. Part (ii) thus follows from Theorem 4.3.(i), together with [6, Lemma 5.15] and [6, Proposition 5.17]. Finally, (iii) is a direct consequence of (i) and (ii). **Remark 5.2**.: One may refer to the functor in Theorem 5.1.(iii) as the _generalized Schur-Weyl duality functor_. A similar functor related to classical Schur-Weyl duality was studied in [15] in the context of \(\mathfrak{g}\)-categorification.
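To make the multiplication rule (15) concrete, the following minimal sketch (ours, not from the paper; all names are illustrative) realises the classical wreath product \(G\wr\mathfrak{S}_{d}=G^{d}\rtimes\mathfrak{S}_{d}\) mentioned in Section 5.1 for \(G=\mathbb{Z}/2\) and \(d=3\), where the permutation acts on the tensor (here, Cartesian) factors, and spot-checks that the product is associative.

```python
from itertools import permutations, product

# A sketch of the wreath product G wr S_d = G^d x| S_d for G = Z/2, d = 3.
# Elements are pairs (x, rho) with x in G^d and rho a permutation of
# {0, ..., d-1}; rho acts on x by permuting coordinates, cf. equation (15).
d = 3
G = (0, 1)  # Z/2, written additively

def act(rho, x):
    """Place x_i in slot rho(i): (rho . x)[rho[i]] = x[i]."""
    y = [None] * d
    for i in range(d):
        y[rho[i]] = x[i]
    return tuple(y)

def mul(a, b):
    """(x, rho) * (y, sigma) = (x + rho.y, rho o sigma)."""
    (x, rho), (y, sigma) = a, b
    z = tuple((xi + yi) % 2 for xi, yi in zip(x, act(rho, y)))
    comp = tuple(rho[sigma[i]] for i in range(d))  # rho after sigma
    return (z, comp)

elements = [(x, rho) for x in product(G, repeat=d)
                     for rho in permutations(range(d))]
assert len(elements) == 2**d * 6  # |G|^d * d! = 48

# Spot-check associativity on a sample of triples.
for a in elements[:4]:
    for b in elements[::7][:4]:
        for c in elements[::11][:4]:
            assert mul(mul(a, b), c) == mul(a, mul(b, c))
print("wreath product multiplication is associative on the sample")
```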
2309.06578
Can Large Language Models Discern Evidence for Scientific Hypotheses? Case Studies in the Social Sciences
Hypothesis formulation and testing are central to empirical research. A strong hypothesis is a best guess based on existing evidence and informed by a comprehensive view of relevant literature. However, with the exponential increase in the number of scientific articles published annually, manual aggregation and synthesis of evidence related to a given hypothesis is a challenge. Our work explores the ability of current large language models (LLMs) to discern evidence in support or refutation of specific hypotheses based on the text of scientific abstracts. We share a novel dataset for the task of scientific hypothesis evidencing using community-driven annotations of studies in the social sciences. We compare the performance of LLMs to several state-of-the-art benchmarks and highlight opportunities for future research in this area. The dataset is available at https://github.com/Sai90000/ScientificHypothesisEvidencing.git
Sai Koneru, Jian Wu, Sarah Rajtmajer
2023-09-07T04:15:17Z
http://arxiv.org/abs/2309.06578v3
Can Large Language Models Discern Evidence for Scientific Hypotheses? Case Studies in the Social Sciences ###### Abstract Hypothesis formulation and testing are central to empirical research. A strong hypothesis is a best guess based on existing evidence and informed by a comprehensive view of relevant literature. However, with the exponential increase in the number of scientific articles published annually, manual aggregation and synthesis of evidence related to a given hypothesis is a challenge. Our work explores the ability of current large language models (LLMs) to discern evidence in support or refutation of specific hypotheses based on the text of scientific abstracts. We share a novel dataset for the task of _scientific hypothesis evidencing_ using community-driven annotations of studies in the social sciences. We compare the performance of LLMs to several state-of-the-art methods and highlight opportunities for future research in this area. Our dataset is shared with the research community: [https://github.com/Sai90000/ScientificHypothesisEvidence.git](https://github.com/Sai90000/ScientificHypothesisEvidence.git) Large Language Models, Natural Language Understanding, Scientific Hypothesis Evidencing ## 1 Introduction Translating scholarly research findings into actionable, evidence-based impacts relies on iterative refinement for robust understanding of a given phenomenon across multiple studies, contexts, etc. The sequential approach to scientific interrogation is also at the heart of null hypothesis significance testing. Namely, a hypothesis is an informed theory, or an _educated guess_, based on available information and prior findings [22]. As such, synthesis and understanding of current literature is essential to study planning and to efficient research more broadly. Yet, scholarly databases fail to aggregate, compare, contrast, and contextualize existing studies in a way that allows comprehensive review of the relevant literature in service to a targeted research question. In part, this is because the sheer volume of published work is difficult to navigate and the narrative format through which most empirical work is reported was not envisioned with machine readability in mind. Work in the areas of natural language processing (NLP) and natural language understanding (NLU) has emerged to address various challenges related to synthesizing scientific findings. Automated approaches for _fact-checking_ [1], for example, have received significant attention in the context of misinformation and disinformation. This task aims to assess the accuracy of a factual claim based on the literature [23]. What remains a gap, however, are methods to determine whether a research question is addressed within a paper based on its abstract, and if so, whether the corresponding hypothesis is supported or refuted by the work. In this work, we refer to this task as _scientific hypothesis evidencing_ (SHE). Notably, grassroots efforts to usefully assemble the literature have popped up, e.g., in the form of shared Google docs, contributed to by authors of related work and socialized primarily via Twitter [1]. In these documents, authors synthesize existing work that tests closely-related hypotheses or a similar research question, e.g., _Does social media cause political polarization?_ Chapters within these collaborative documents add additional structure, highlighting studies with similar outcomes or similar experimental settings.
In this work, we study whether and to what extent state-of-the-art NLU and large language models can supplant manual expert-driven collaborative meta-analyses, or parts thereof, in service to discerning hypotheses and primary findings from scientific abstracts. We focus on the social sciences due to the availability of high-quality datasets annotated by domain experts. Our work makes the following primary contributions: 1. We propose the SHE task: the identification of evidence from the abstract of a scientific publication in support or refutation of a hypothesis; 2. We build and share a benchmark dataset for SHE using expert-annotated collaborative literature reviews; 3. Using this dataset, we test three types of models for the SHE task based on: word embeddings; transfer learning; and large language models (LLMs). An example hypothesis-abstract pair with its label is shown in Table 1. Our findings suggest that this task is challenging for current NLU and that LLMs do not seem to perform better than traditional language models and transfer learning models. We offer perspectives and suggestions for the path forward. ## 2 Related Work The task of _scientific claim verification_ is treated either (1) as a natural language inference (NLI) problem using deep neural networks trained on human-annotated datasets Khot et al. (2018); Wadden et al. (2022); or (2) as a classification problem using a joint claim-evidence representation Ohshikawa et al. (2020). In support of these efforts, multiple _claim verification_ datasets have been proposed as benchmarks for the community, e.g., for topics in biomedical sciences Wadden et al. (2020, 2022), public health Kotonya and Toni (2020); Sarrouti et al. (2021); Saakyan et al. (2021) and the environment Diggelmann et al. (2020). Examples of the NLI datasets include the Stanford Natural Language Inference (SNLI) dataset Bowman et al. (2015) and Allen AI's SciTail dataset Khot et al. (2018).
SNLI contains about 550,000 premise-hypothesis pairs. The premises were derived from image captions and the hypotheses were created by crowdworkers. SNLI was the first NLI corpus to see encouraging results from neural networks. The SciTail dataset contains 27,000 premise-hypothesis pairs created from multiple-choice science exams and web sentences. Examples of the claim-evidence representations include the SciFact dataset Wadden et al. (2020) and its extension, SciFact-Open Wadden et al. (2022). SciFact contains about 1.4K scientific claims and a search corpus of about 5K abstracts that provide either supporting or refuting evidence for each claim. The claims in SciFact-Open were extracted from the citation context of papers in biomedical sciences, including 279 claims verified against a search corpus of 500K abstracts. LLMs are trained on large datasets sourced from the internet, representing a wide spectrum of both general and domain knowledge. They have shown remarkable performance across a range of NLU tasks such as reading comprehension and question answering Liang et al. (2022). The SHE task offers a distinctive opportunity to assess these models in a context requiring scientific research domain expertise, thereby enabling comparison of their reasoning abilities to those of human experts. The SHE problem formulation is distinct from the hypothesis and premise pairs encountered in conventional NLI tasks Bowman et al. (2015). The language used in scientific publications contains domain-specific terminology which is different from the premise-hypothesis pairs in general-domain datasets (e.g., SNLI Bowman et al. (2015)). Furthermore, abstracts from scientific articles contain numerical data that is often not present in traditional NLP datasets. Figure 1: Exemplar collaborative review document structure for one question. \begin{table} \begin{tabular}{|p{227.6pt}|} \hline **Research question (from the review).** Is there an association between social media use and bad mental health outcomes? \\ \hline **Abstract.** Although studies have shown that increases in the frequency of social media use may be associated with increases in depressive symptoms of individuals with depression, the current study aimed to identify specific social media behaviors related to major depressive disorder (MDD). Millennials (N = 504) who actively use Facebook, Twitter, Instagram, and/or Snapchat participated in an online survey assessing major depression and specific social media behaviors. Univariate and multivariate analyses were conducted to identify specific social media behaviors associated with the presence of MDD. The results identified five key social media factors associated with MDD. Individuals who were more likely to compare themselves to others better off than they were (p = 0.005), those who indicated that they would be more bothered by being tagged in unflattering pictures (p = 0.011), and those less likely to post pictures of themselves along with other people (p = 0.015) were more likely to meet the criteria for MDD. Participants following 300 + Twitter accounts were less likely to have MDD (p = 0.041), and those with higher scores on the Social Media Addiction scale were significantly more likely to meet the criteria for MDD (p = 0.031). Participating in negative social media behaviors is associated with a higher likelihood of having MDD. Research and clinical implications are considered. \\ \hline **Hypothesis (declarative).** There is an association between social media use and bad mental health outcomes. \\ \hline **Label.** Entail \\ \hline \end{tabular} \end{table} Table 1: Example research question, abstract, declarative hypothesis, and label from the CoRe dataset.
## 3 Problem Definition Scientific hypothesis evidencing (SHE) is defined as the identification of the association between a given declarative hypothesis and a relevant abstract. This association can be labeled _entailment_, _contradiction_, or _inconclusive_. The complexity of the task arises from contextual reasoning. For example, in Table 1, identifying the relationship between the abstract and hypothesis provided requires the model to reason that _depressive disorder_ is a _bad mental health outcome_, that _usage of Twitter, Facebook, Instagram, or Snapchat_ constitutes _use of social media_, and that such use leads to a _higher likelihood of major depressive disorder_; hence the relationship is _entailment_. In SHE, the hypotheses or research questions are typically expressed at a higher level of abstraction than the evidence provided within the abstract. In this work, we assume that the hypothesis in each hypothesis-abstract pair is addressed by the paper in question and focus on identification of relations between hypotheses and abstracts. Identifying evidence about an arbitrary hypothesis from a literature database is a bigger challenge, usually involving an information retrieval component. This would require a larger corpus of labeled documents as ground truth (Wadden et al., 2022; Pradeep et al., 2021). ## 4 Dataset Our Collaborative Reviews (CoRe) dataset is built from 12 different open-source collaborative literature reviews actively curated and maintained by domain experts and focused on specific questions in the social and behavioral sciences (Haidt and Twenge, 2019, 2023; Haidt and Bail, 2022; Haidt and Zach, Ongoing(a); Haidt et al., Ongoing(b); Haidt and Zach, Ongoing(a), 2023b,a; Haidt and Zach, Ongoing(c), 2019, 2023). The majority of these reviews were started in 2019 to map important studies within the social and behavioral sciences and were maintained using Google docs. These documents are openly available for public viewing, and academic researchers in relevant domains can request edit access to make changes. Each review categorizes articles based on a set of research questions related to the topic and the outcomes of each study. Any discrepancies in the classification are resolved by the lead authors alongside a domain expert.1 Footnote 1: Further detail about these reviews can be found at [https://jonathanhaidt.com/reviews/](https://jonathanhaidt.com/reviews/) Figure 1 gives a schematic illustration of a block of reviews about social media and mental health (Haidt and Twenge, 2023). Most articles are peer-reviewed scientific papers, but several reviews also contain blog posts, news articles, books, and other reports. Our CoRe dataset includes only scientific publications. Raw data were compiled using all reviews available on July 1, 2023. Research questions, labels, and Digital Object Identifiers (DOIs) were extracted from the reviews through automatic parsing of the document text. DOIs not readily available within the reviews were manually extracted from the publication links provided in the review. We then queried Semantic Scholar (S2) using DOIs to collect article titles and abstracts. In cases where S2 did not have coverage for certain articles, we queried CrossRef (CR). For articles outside the coverage of both S2 and CR, titles and abstracts were collected manually from the publication webpage. Research questions were converted to declarative statements in order to match the structure of hypotheses in the NLI task.
Study outputs were manually mapped into one of three classes: _entailment_, _contradiction_, or _inconclusive_. The curated dataset contains _(hypothesis, abstract, label)_ triplets where the _abstract_ contains the evidence required to test the _hypothesis_ and predict the _label_. Table 2 provides a list of topics covered by the 12 collaborative reviews and an overview of key statistics. The dataset contains 69 distinct hypotheses tested across 602 scientific articles, with findings aligned to our 3 labels. In total, the dataset contains 638 triplets because a fraction of articles address multiple hypotheses. The _entailment_ class has the greatest representation within the dataset; 61.6% of triplets represent articles that contain evidence in support of the corresponding hypothesis. The _contradiction_ class makes up 25.7% of the triplets. The remaining 12.7% are in the _inconclusive_ class with mixed evidence. The distribution of articles across topics is imbalanced, with some reviews containing substantially more literature than others. \begin{table} \begin{tabular}{l|c c c c c c|c c c} \hline \hline **Topic** & **Hyp.** & **Art.** & **Tri.** & **Ent.** & **Cont.** & **Inc.** & **Train** & **Dev.** & **Test** \\ \hline Adolescent mood disorders & 4 & 34 & 37 & 36 & 0 & 1 & 27 & 10 & 0 \\ Adolescent mental illness crisis & 8 & 40 & 40 & 35 & 0 & 5 & 25 & 8 & 7 \\ Changes in cognitive ability & 1 & 12 & 13 & 11 & 0 & 2 & 10 & 3 & 0 \\ Digital gambling and mental health & 1 & 3 & 3 & 0 & 3 & 0 & 2 & 1 & 0 \\ Free play and mental health & 5 & 36 & 37 & 23 & 13 & 1 & 17 & 8 & 12 \\ Online communities and adolescent health & 2 & 3 & 3 & 1 & 0 & 2 & 1 & 1 & 1 \\ Phone free schools & 5 & 37 & 38 & 8 & 26 & 4 & 21 & 8 & 9 \\ Porn use and adolescent health & 6 & 47 & 47 & 14 & 24 & 9 & 30 & 10 & 7 \\ Social media and mental health & 14 & 222 & 232 & 178 & 48 & 6 & 142 & 38 & 52 \\ Social media and political dysfunction & 9 & 144 & 152 & 67 & 40 & 45 & 82 & 35 & 35 \\ Video game use and adolescent health & 9 & 30 & 32 & 18 & 9 & 5 & 23 & 4 & 5 \\ Gen Z Phone-Based Childhood & 2 & 3 & 3 & 2 & 0 & 1 & 2 & 1 & 0 \\ \hline **Total** & 69 & 602 & 637 & 393 & 163 & 81 & & & \\ **Train** & 59 & 370 & 382 & 243 & 92 & 47 & & & \\ **Dev.** & 46 & 127 & 127 & 79 & 28 & 20 & & & \\ **Test** & 35 & 126 & 128 & 71 & 43 & 14 & & & \\ \hline \hline \end{tabular} \end{table} Table 2: Statistical overview of the CoRe dataset showing number of hypotheses, articles, and triplets, along with the distribution of labels across various topics within the dataset. Columns _Train_, _Dev._, _Test_ correspond to the number of triplets within each respective split. _Hyp.=Hypotheses; Art.=Articles; Tri.=Triplets; Ent.=Entail; Cont.=Contradict; Inc.=Inconclusive; Dev.=Development_ Table 3 presents a comparison between the CoRe, SNLI, and SciFact datasets. Notably, as the CoRe and SciFact datasets use abstracts of scientific publications as premises, their premises are longer than those in the SNLI dataset. The mean hypothesis and premise lengths in the CoRe dataset are similar to those in the SciFact dataset; however, their domains are different. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \hline **Dataset** & **Hyp.** & **Pre.** & **Size** & **Domain** \\ \hline _CoRe_ & 10 & 194 & 637 & Social Sciences \\ _SciFact_ & 12 & 194 & 1,409 & Medicine/Biology \\ _SNLI_ & 7 & 12 & 570,152 & Non-scientific \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of average number of words in hypotheses, premises, instance counts, and domains. _Hyp.=Hypotheses; Pre.=Premises_ For training and evaluating the models, we shuffled the dataset and split it into training (70%), development (15%), and held-out test (15%) datasets. ## 5 Methods We evaluate three families of methods on the SHE task using the CoRe dataset: supervised classifiers based on pre-trained embeddings; transfer learning models; and zero- and few-shot LLMs. ### Supervised classification based on pre-trained embedding models To investigate embedding models' performance on SHE, we adopt the sentence pair classification framework outlined in Bowman et al. (2015). Concatenated hypothesis and abstract embeddings are used as input to the model, which contains three successive fully-connected layers followed by a three-way softmax layer (see Figure 2). We evaluate the performance of two pre-trained embedding models: _longformer_ Beltagy et al. (2020) and _text-embedding-ada-002_ Greene et al. (2022). Longformer is a transformer-based text encoder model developed to process information at the document level, therefore eliminating the need for chunking long input text sequences. It uses a combination of local windowed attention and global attention to create a sparse attention matrix (vs. a full attention matrix), making attention more efficient. Longformer supports sequences of length up to 4,096 and produces embeddings of size 768. Text-embedding-ada-002 is a transformer decoder language model developed by OpenAI and, at the time of its release (December 2022), was shown to achieve state-of-the-art performance on tasks such as text search and sentence similarity Greene et al. (2022). It is capable of embedding sequences of length up to 8,192 and generates 1,536-dimensional embedding vectors. Figure 2: Supervised classification for concatenated hypothesis-abstract pairs.
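As a concrete illustration of the classifier just described, here is a minimal PyTorch sketch; the paper specifies three fully-connected layers with ReLU activations and a three-way softmax over concatenated hypothesis and abstract embeddings, but the hidden widths below are our own assumption.

```python
import torch
import torch.nn as nn

# Sketch of the Section 5.1 classifier head; hidden sizes are assumptions.
class SHEClassifier(nn.Module):
    def __init__(self, embed_dim=1536, hidden=512, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * embed_dim, hidden), nn.ReLU(),  # concatenated pair
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),                 # logits
        )

    def forward(self, hyp_emb, abs_emb):
        x = torch.cat([hyp_emb, abs_emb], dim=-1)
        return torch.softmax(self.net(x), dim=-1)

model = SHEClassifier()  # dims match text-embedding-ada-002 (1,536 each)
probs = model(torch.randn(4, 1536), torch.randn(4, 1536))
print(probs.shape)  # torch.Size([4, 3]): entail / contradict / inconclusive
```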
### Transfer learning models In this approach, we treat our task as an NLI task. Specifically, we use an abstract as the premise and determine whether it entails a given hypothesis. Among models proposed for the NLI task, we evaluate the Enhanced Sequential Inference Model (ESIM) [3] and the Multi-Task Deep Neural Network (MT-DNN) [14]. ESIM is a supervised learning model that uses bidirectional Long Short-Term Memory (biLSTM) layers to encode the hypothesis and premise for inference [3]. It has achieved high performance on NLI tasks, with a reported accuracy of 88.6% on the SNLI dataset. The model uses an 840B-token version of GloVe embeddings [10] for word representations. MT-DNN is a model aimed at learning robust representations across NLU tasks, such as text summarization, NLI, and question answering [14]. MT-DNN achieved new state-of-the-art performance on the SNLI and SciTail datasets. We use the MT-DNN model built on the pre-trained _bert-base-uncased_ model [1], fine-tuned over 5 epochs for our task.
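To make the NLI casting used in this subsection concrete, the small snippet below maps a CoRe triplet to a premise-hypothesis pair; the field names are ours, for illustration, and are not from the released dataset.

```python
# Illustrative only: casting a CoRe triplet as an NLI example.
core_triplet = {
    "hypothesis": "There is an association between social media use "
                  "and bad mental health outcomes.",
    "abstract": "...",   # the full abstract text serves as the premise
    "label": "entail",   # one of: entail / contradict / inconclusive
}

nli_example = {
    "premise": core_triplet["abstract"],
    "hypothesis": core_triplet["hypothesis"],
    "label": {"entail": 0, "contradict": 1, "inconclusive": 2}[core_triplet["label"]],
}
```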
### Large language models We tested two LLMs, ChatGPT and PaLM 2 [14], on our test split. For the ChatGPT model, we used the API version of _gpt-3.5-turbo_, which is faster and significantly less expensive than OpenAI's other GPT-3.5 and GPT-4 models. From PaLM 2, we used the generative model _text-bison-001_ [12], an LLM fine-tuned to follow natural language instructions on a variety of language tasks, e.g., information extraction, problem solving, text editing, and data extraction [12]. We explored these models' performance in zero-shot and few-shot settings. Models were prompted with the abstract and the hypothesis embedded into predefined templates. Prompts contained specific instructions to generate a single output label. **Prompt engineering.** Prompt engineering refers to the task of finding the best prompt for an LLM in support of a given task [14]. We experiment with five prompts used in prior work. All are _prefix_ prompts, i.e., the prompt text comes entirely before model-generated text. Prompt templates and their sources are summarized in Table 5. Depending on the prompt template, we requested the LLMs return one of three sets of labels: _(true, false, neutral)_; _(yes, no, maybe)_; _(entail, contradict, neutral)_. Table 4 maps each label to our canonicalized label set. Because prompts were queried without providing any training data, we refer to this method as zero-shot learning. **Prompt ensembling.** In the context of LLMs, prompt ensembling refers to using several individual prompts at inference [14]. Ensembling has shown better performance than fine-tuned models by harnessing the complementary strengths of individual prompts [14]. Here, we use a majority voting strategy to ensemble the outputs of our five individual prompts. ### Few Shot Learning In the few-shot learning (FSL) setting, LLMs are provided with several examples that demonstrate how the model should respond to the prompt [1]. Studies show that the examples chosen for FSL have a significant impact on the performance of LLMs [11]. We used a semantic search method to select nine samples from the training dataset to provide examples for each hypothesis-abstract pair in the held-out dataset.2 Footnote 2: The number of training samples was constrained by LLM prompt length limitations. To do so, we incorporated a pre-trained transformer encoder model, specifically the Huggingface implementation of _multi-qa-mpnet-base-dot-v1_, which was designed for semantic search. As shown in Figure 3, we first split the training set into three subsets, each having a different label. For each instance in the held-out set, we calculate the cosine similarity between its concatenated hypothesis-abstract vector and each concatenated hypothesis-abstract vector in the training set, and select the top three pairs in each subset. This results in the 9 hypothesis-abstract pairs used for FSL. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **CoRe** & **SNLI** & **P1** & **P2, P3, P5** & **P4** \\ \hline _Entail_ & Entail & true & Yes & e \\ _Contradict_ & Contradict & false & No & c \\ _Inconclusive_ & Neutral & neutral & Maybe & n \\ \hline \end{tabular} \end{table} Table 4: Label map across datasets and prompts. Figure 3: Semantic search-based sample selection for few-shot learning.
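The label canonicalization of Table 4 and the majority-vote prompt ensembling described above can be sketched as follows; this is illustrative only, and since the paper does not specify tie-breaking, `Counter.most_common` here simply returns one of the tied labels.

```python
from collections import Counter

# Sketch of Table 4's label map plus majority-vote ensembling over the
# five prompts; the LLM querying itself is elided.
CANON = {
    "true": "entail", "yes": "entail", "e": "entail",
    "false": "contradict", "no": "contradict", "c": "contradict",
    "neutral": "inconclusive", "maybe": "inconclusive", "n": "inconclusive",
}

def ensemble(raw_responses):
    """Canonicalize each prompt's raw output, then take a majority vote."""
    votes = [CANON[r.strip().lower()] for r in raw_responses]
    return Counter(votes).most_common(1)[0][0]

print(ensemble(["Yes", "true", "maybe", "no", "e"]))  # entail (3 of 5 votes)
```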
## 6 Experiments We evaluate each model's ability to discern the relationship between a given abstract and a hypothesis, written as a declarative statement in the CoRe dataset. Our approach aligns with methodologies widely used in the literature to evaluate performance on NLI datasets, e.g., SNLI [1] and MNLI [13], where models are presented with a premise and a declarative hypothesis and are asked to classify their relationship. Performance of all models is measured by macro-F1-score, calculated as the average of F1-scores over all three class labels, and accuracy, calculated as the fraction of correct predictions. For supervised classification based on embeddings, all layers utilize the ReLU activation function. Hyperparameters, such as the learning rate and regularization parameter, were set based on Bayesian hyperparameter tuning with an objective to maximize the macro-F1-score on the test data [1]. For training ESIM and fine-tuning MT-DNN on the SNLI dataset, we adopted default hyperparameters, as recommended in the respective papers. We evaluate LLMs in zero-shot and few-shot settings. The temperature parameter controls the creativity of the text generated by the LLMs. Lower temperatures result in more consistent output, and higher temperatures result in more creative, diverse responses. We compare the performance of LLMs with temperature settings from 0 to 1, in increments of 0.25. We query each LLM using the same prompt 5 times in each temperature setting. To test prompt ensembling, we query each LLM using the set of five prompts and determine the final classification label by majority voting aggregation. Prompt ensembling was tested in both zero-shot and few-shot settings across different temperatures. Similarly to the single-prompt evaluation, we tested prompt ensembling under five independent runs for each temperature configuration.
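For reference, the two metrics just defined can be computed directly from predictions; the self-contained sketch below is ours, not the authors' evaluation code.

```python
# Macro-F1 (per-class F1 averaged over the three labels) and accuracy.
def macro_f1_and_accuracy(y_true, y_pred,
                          labels=("entail", "contradict", "inconclusive")):
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return sum(f1s) / len(f1s), acc

f1, acc = macro_f1_and_accuracy(
    ["entail", "contradict", "inconclusive", "entail"],
    ["entail", "contradict", "entail", "entail"],
)
print(round(f1, 3), acc)  # 0.6 0.75
```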
## 7 Results Table 6 summarizes model performance on the test set. Here, we focus on comparing different types of models, so we report metrics averaged across all settings. The observation that all models achieve macro-F1-scores less than 0.65 demonstrates that SHE is a challenging task. The supervised classifier using _text-embedding-ada-002_ embeddings yielded the best performance, achieving a macro-F1-score of 0.615, followed by the pre-trained gpt-3.5-turbo model with prompt ensembling in the few-shot setting. ### Transfer learning As anticipated, the ESIM and MT-DNN models, when trained or fine-tuned on the SNLI dataset, exhibited significantly lower performance compared to when trained or fine-tuned on the CoRe dataset. This can be attributed to the differences in the characteristics of hypotheses and premises in the two datasets. \begin{table} \begin{tabular}{|p{42.7pt}|p{142.3pt}|p{142.3pt}|} \hline **Id** & **Source** & **Template** \\ \hline P1 & [1] & You are given a pair of texts. Say about this pair: given Text 1, is Text 2 true, false or neutral (you can’t tell if it’s true or false)? Reply in one word. Text 1: Abstract Text 2: Hypothesis \\ \hline P2 & [10]* & Decide if the following summary is consistent with the corresponding article. Note that consistency means all information in the summary is supported by the article. Article: Abstract Summary: Hypothesis Answer (yes or no or maybe): \\ \hline P3 & [1] & Here is a premise: “Abstract.” Here is a hypothesis: “Hypothesis.” Is it possible to conclude that if the premise is true, then so is the hypothesis? Yes, No, or Maybe? \\ \hline P4 & [11] & Instructions: You will be presented with a premise and a hypothesis about that premise. You need to decide whether the hypothesis is entailed by the premise by choosing one of the following answers: ‘e’: The hypothesis follows logically from the information contained in the premise. ‘c’: The hypothesis is logically false from the information contained in the premise. ‘n’: It is not possible to determine whether the hypothesis is true or false without further information. Read the passage of information thoroughly and select the correct answer from the three answer labels. Read the premise thoroughly to ensure you know what the premise entails. Premise: Abstract Hypothesis: Hypothesis \\ \hline P5 & [1]* & The task is to identify whether the premise entails the hypothesis. Please respond with “yes” or “no” or “maybe” Premise: Abstract Hypothesis: Hypothesis Answer: \\ \hline \end{tabular} * Added a third label _maybe_ \end{table} Table 5: Overview of the different prompts used for testing LLMs and their sources. Since prompts P2, P5 have only two labels _yes_, _no_, a third label _maybe_ was added. For instance, in CoRe, each hypothesis has an average length of 10 words, compared to 7 in the case of SNLI. The average context length is 194 words in CoRe, compared to a 13-word premise in SNLI. Furthermore, the levels of abstraction of hypotheses within the datasets vary. In CoRe, the evidence required to identify the abstract-hypothesis relationship is latent within the premise. Additionally, around 2,500 words from the CoRe dataset vocabulary are missing from the SNLI dataset. This underscores the need for domain-specific datasets for fine-tuning. ### Performance of LLMs Figure 4 summarizes the results for LLMs tested under different settings. Although not directly trained on the CoRe dataset, LLMs were able to comprehend the evidence within scientific abstracts and relate it to hypotheses. This can be attributed to their large-scale training. In the zero-shot setting, LLMs generally achieved a macro-F1 of about 0.5, which is comparable with transfer learning models fine-tuned on the CoRe dataset. Additionally, in the zero-shot setting across all prompt styles, PaLM 2 consistently outperformed ChatGPT. We observed that when conducting FSL with PaLM 2, the model may return _null_ outputs, and these occurrences are unpredictable across temperatures and iterations. The number of such instances ranged from 48 (37.5%) to 64 (50%) for an evaluation under a certain setting; overall, 20 (15.6%) of responses yielded _null_ outputs consistently across all prompt styles. We will investigate this anomaly in the future. While variations in performance were observed across individual runs, for a given prompt, the temperature setting did not have a major influence on the average performance of LLMs. This observation remained consistent across the zero-shot, few-shot, and prompt ensembling settings. ### Effect of prompt template As expected, the choice of prompt style had a clear influence on LLM performance. Performance metrics for models tested with different prompts are summarized in Table 7. In the zero-shot setting, ChatGPT recorded the lowest mean macro-F1-score of 0.337 when prompted with P4, whereas it achieved the highest mean macro-F1-score of 0.479 when using prompt P3. A similar trend is observed in the few-shot setting, where the lowest performance was recorded when using prompt P3, with a mean macro-F1-score of 0.466, while the highest performance was reported for P4, with a mean macro-F1-score of 0.609. No single prompt had consistently high performance across all temperatures, models, and settings. This indicates that ensembling over multiple prompt templates is more reliable than using a single prompt. ### Unequal performance gains with FSL FSL in general enhances the performance of LLMs across prompt styles, although performance gains are unequal.
In the case of ChatGPT, prompt P4 showed the greatest improvement, with the mean macro-F1-score increasing from 0.337 in the zero-shot setting to 0.609 in the few-shot setting. Conversely, performance slightly declined with P3, from 0.479 in the zero-shot setting to 0.466 in the few-shot setting. Prompt ensembling with ChatGPT in the zero-shot setting achieves performance comparable to the few-shot setting. Figure 4: Average macro-F1 of LLMs with different prompt templates and temperature settings. ## 8 Conclusion We have introduced the Scientific Hypothesis Evidencing task and a novel dataset for this task. Our goal is to determine whether a paper, based on its abstract, offers evidence in support or refutation of a given hypothesis. This goal broadly underlies all of meta-analysis. It supports efforts to highlight inconsistencies and gaps in existing literature, motivate next studies, and support evidence-based decision-making and policy. Given the wide availability of abstracts, e.g., in scholarly data repositories, methods which can successfully operate on abstracts as opposed to full text are preferable. Our findings suggest that hypothesis evidencing is a challenging task for current NLU models, including state-of-the-art LLMs trained on a diverse set of data. Notably, supervised learning using embedding-based models outperforms LLMs in our experiments. Yet, supervised models are known to be less generalizable than LLMs. Looking ahead, OpenAI has recently introduced fine-tuning functionality for gpt-3.5-turbo. Future work should investigate the performance of LLMs following fine-tuning on the data. This will indicate whether the higher performance of embedding-based models is a result of exposure to the complete training dataset vs. the fewer examples provided in the FSL setting. In addition, for a more robust assessment, future work should explore five-fold cross-validation. And, given the limitations associated with human-generated discrete prompts, future work should explore automatic prompt tuning. We expect that, in providing the research community with an initial benchmark dataset, our work will catalyze some of these next steps. Future work should continue to build out datasets like ours, keeping in mind ways to ameliorate class imbalance and diversify represented topics. Notably, the assembly of these datasets serves multiple aims: bringing together domain experts around important questions in their own area, highlighting robustness and reliability amongst claims, and of course moving forward NLP and NLU.
\begin{table} \begin{tabular}{|l|l|l|l|l|} \hline **Type** & **Model** & **Setting** & **Accuracy** & **macro F1** \\ \hline \multirow{2}{*}{Supervised model on embeddings} & Longformer & Supervised on CoRe & 65.60\% & 0.558 \\ & text-embedding-ada-002 & Supervised on CoRe & **70.31\%** & **0.615** \\ \hline \multirow{4}{*}{Transfer learning} & MT-DNN & Fine-tuned on CoRe & 67.97\% & 0.523 \\ & MT-DNN & Fine-tuned on SNLI & 42.97\% & 0.342 \\ \cline{2-5} & ESIM & Supervised on CoRe & 64.84\% & 0.489 \\ & ESIM & Supervised on SNLI & 39.84\% & 0.335 \\ \hline \multirow{8}{*}{LLM} & \multirow{4}{*}{ChatGPT} & Zero-shot w/o ensemble & 47.22\%\({}^{*}\) & 0.414\({}^{*}\) \\ & & Few-shot w/o ensemble & 59.85\%\({}^{*}\) & 0.517\({}^{*}\) \\ & & Zero-shot with ensemble & 53.94\% & 0.500 \\ & & Few-shot with ensemble & 66.57\% & 0.576 \\ \cline{2-5} & \multirow{4}{*}{PaLM 2} & Zero-shot w/o ensemble & 59.78\%\({}^{*}\) & 0.504\({}^{*}\) \\ & & Few-shot w/o ensemble & 69.78\%\({}^{*\dagger}\) & 0.583\({}^{\dagger}\) \\ & & Zero-shot with ensemble & 62.87\% & 0.536 \\ & & Few-shot with ensemble & 76.40\% & 0.678\({}^{\dagger}\) \\ \hline \multicolumn{5}{l}{\({}^{*}\) Mean of responses across all temperatures, prompt templates, and iterations} \\ \multicolumn{5}{l}{\({}^{\dagger}\) Incomplete responses} \\ \end{tabular} \end{table} Table 6: Results summarizing the performance of models on the held-out set under different settings. \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline **Model** & **Prompt** & **F1** & **P** & **R** \\ \hline \multirow{5}{*}{PaLM 2 Zero-shot} & _P1_ & 0.56(0.01) & 0.60(0.01) & 0.59(0.01) \\ & _P2_ & 0.46(0.02) & 0.50(0.03) & 0.51(0.01) \\ & _P3_ & 0.46(0.02) & 0.50(0.03) & 0.51(0.01) \\ & _P4_ & 0.52(0.02) & 0.60(0.06) & 0.59(0.04) \\ & _P5_ & 0.51(0.01) & 0.53(0.01) & 0.53(0.01) \\ \hline \multirow{5}{*}{PaLM 2 Few-shot\({}^{*}\)} & _P1_ & 0.60(0.04) & 0.65(0.03) & 0.68(0.06) \\ & _P2_ & 0.57(0.01) & 0.58(0.04) & 0.58(0.01) \\ & _P3_ & 0.58(0.02) & 0.61(0.02) & 0.64(0.03) \\ & _P4_ & 0.58(0.04) & 0.58(0.04) & 0.59(0.04) \\ & _P5_ & 0.59(0.04) & 0.63(0.07) & 0.59(0.03) \\ \hline \multirow{5}{*}{ChatGPT Zero-shot} & _P1_ & 0.41(0.03) & 0.54(0.03) & 0.50(0.02) \\ & _P2_ & 0.41(0.01) & 0.44(0.03) & 0.47(0.00) \\ & _P3_ & 0.47(0.01) & 0.58(0.02) & 0.53(0.02) \\ & _P4_ & 0.34(0.01) & 0.61(0.03) & 0.44(0.01) \\ & _P5_ & 0.45(0.01) & 0.51(0.01) & 0.50(0.01) \\ \hline \multicolumn{5}{l}{\({}^{*}\) Partial results due to _null_ responses} \\ \end{tabular} \end{table} Table 7: Comparison of performance metrics of LLMs across various prompt templates on the CoRe dataset. The metrics are averaged over different temperature settings across all runs. Parentheses indicate standard deviation over 5 runs. _Note: The macro-averaged precision and recall metrics are skewed due to class imbalance_. ## Ethics Statement The data we compiled were originally contributed by volunteering social scientists. Although we did not seek their consent, we cite each review as an individual reference and acknowledge all contributors for their efforts on this open list, which not only benefits social scientists but also computer and information scientists. In addition, because the edit request needed to be approved, we assumed the contributors were all qualified scientists or researchers, so the quality of the data was ensured.
2309.09893
Fault-Tolerant One-Bit Addition with the Smallest Interesting Colour Code
Fault-tolerant operations based on stabilizer codes are the state of the art in suppressing error rates in quantum computations. Most such codes do not permit a straightforward implementation of non-Clifford logical operations, which are necessary to define a universal gate set. As a result, implementations of these operations must either use error-correcting codes with more complicated error correction procedures or gate teleportation and magic states, which are prepared at the logical level, increasing overhead to a degree that precludes near-term implementation. In this work, we implement a small quantum algorithm, one-qubit addition, fault-tolerantly on the Quantinuum H1-1 quantum computer, using the [[8,3,2]] colour code. By removing unnecessary error-correction circuits and using low-overhead techniques for fault-tolerant preparation and measurement, we reduce the number of error-prone two-qubit gates and measurements to 36. We observe arithmetic errors with a rate of $\sim 1.1 \times 10^{-3}$ for the fault-tolerant circuit and $\sim 9.5 \times 10^{-3}$ for the unencoded circuit.
Yang Wang, Selwyn Simsek, Thomas M. Gatterman, Justin A. Gerber, Kevin Gilmore, Dan Gresh, Nathan Hewitt, Chandler V. Horst, Mitchell Matheny, Tanner Mengle, Brian Neyenhuis, Ben Criger
2023-09-18T15:56:14Z
http://arxiv.org/abs/2309.09893v1
# Fault-Tolerant One-Bit Addition with the Smallest Interesting Colour Code ###### Abstract Fault-tolerant operations based on stabilizer codes are the state of the art in suppressing error rates in quantum computations. Most such codes do not permit a straightforward implementation of non-Clifford logical operations, which are necessary to define a universal gate set. As a result, implementations of these operations must either use error-correcting codes with more complicated error correction procedures or gate teleportation and magic states, which are prepared at the logical level, increasing overhead to a degree that precludes near-term implementation. In this work, we implement a small quantum algorithm, one-qubit addition, fault-tolerantly on the Quantinuum H1-1 quantum computer, using the \(\llbracket 8,\,3,\,2\rrbracket\) colour code. By removing unnecessary error-correction circuits and using low-overhead techniques for fault-tolerant preparation and measurement, we reduce the number of error-prone two-qubit gates and measurements to 36. We observe arithmetic errors with a rate of \(\sim 1.1\times 10^{-3}\) for the fault-tolerant circuit and \(\sim 9.5\times 10^{-3}\) for the unencoded circuit. ## I Introduction Quantum computers have a large and growing number of potential applications [1], and quantum computers of increasing size are being constructed [2; 3]. Owing to the effects of noise and physical imperfections, these devices continue to have large physical error rates, on the order of \(2\times 10^{-3}\)[4] for a typical two-qubit entangling gate or measurement, preventing the direct implementation of large-scale algorithms with physical qubits. Therefore, it is necessary to carry out a quantum computation fault-tolerantly to lessen the effective error rate. One straightforward way to construct a fault-tolerant circuit is to select a quantum error-correcting code together with a corresponding set of fault-tolerant operations, and replace the individual operations of a given non-fault-tolerant circuit with their fault-tolerant counterparts [5]. Error-correction gadgets designed for the code in question are then inserted between these fault-tolerant operations to prevent the effect of errors building up over time. Indeed, proofs of the threshold theorem often provide an explicit construction of such a procedure [6]. However, this procedure often involves a large overhead in terms of both gate count and number of qubits required, which limits the suitability of such circuits for implementation on experimental hardware. One consequence of this overhead is that although the ingredients of fault-tolerant computation, including memory gadgets [7], entangling gates [8] and preparation of magic states [9], have been demonstrated individually, they have generally not yet been implemented at the same time. Reducing the size of a fault-tolerant circuit (in terms of gate and qubit count) can make implementation easier. Smaller circuits are also preferable as there is reason to expect that they result in lower logical error rates. Consider two fault-tolerant circuits \(C_{s}\) and \(C_{l}\) that act identically on all inputs, where the circuit \(C_{s}\) is shorter than the circuit \(C_{l}\), and both circuits can detect or correct \(t-1\) faults. \(C_{l}\) contains more locations at which faults may occur and, therefore, has a higher likelihood of experiencing at least \(t\) faults, which may be undetectable/uncorrectable and contribute to the logical error rate, provided all other factors are equal. This is another reason why smaller fault-tolerant circuits are to be preferred over larger ones. Much previous work employs a conventional approach to implementing fault-tolerant algorithms. One starts with a code in which it is straightforward to implement Clifford gates directly. For instance, this can be done with surface code patches via lattice surgery [10], and we note that an instance of Grover's algorithm not containing non-Clifford gates has already been implemented fault-tolerantly on two qubits [11] using the \(\llbracket 4,\,2,\,2\rrbracket\) code. Then, to obtain a universal gate set, a single non-Clifford gate such as the t gate is implemented by gate teleportation [12], which may require magic state distillation [13]. A recent proposal [14] turns this scenario on its head. One may instead start with codes which possess transversal (and hence fault-tolerant) non-Clifford gates. The Clifford gates are then implemented by gate teleportation with Pauli eigenstates as the ancillary inputs, requiring no expensive distillation to prepare. In this paper, we realise a small instance of this proposal. We implement a one-qubit addition circuit using the \(\llbracket 8,\,3,\,2\rrbracket\) colour code, which allows a transversal non-Clifford ccz gate. Although this algorithm is simple, it computes the answer to a mathematical problem and contains both Clifford and non-Clifford gates. The one-bit adder circuit is also a simple application of the Toffoli gate, which is a universal gate for reversible classical computing. It can be used to construct oracles for Grover's algorithm [11; 15] as well as the modular exponentiation circuits employed within Shor's algorithm. As a result, the fault-tolerant realisation of a one-bit adder circuit has practical implications for the realisation of more substantial quantum algorithms, as well as pushing forward the state of the art in the realisation of fault-tolerant algorithms on contemporary quantum computers. ## II One-bit addition At the logical level, adding two bits (potentially in superposition) may be carried out using the circuit in Figure 1. Note that a classical one-bit addition requires only two bits of memory, as two bits are sufficient to store both the input bits and the output, which may be a two-bit number. However, the resulting circuit is not reversible, as the input bits cannot be recovered from their sum. To implement the one-bit adder on a quantum computer, a reversible classical circuit is needed. The one-bit adder can be made reversible at the cost of requiring three bits or qubits. Implementing the circuit in Figure 1 with computational basis states as inputs would not be a good demonstration of quantum computing, as the state would not exhibit superposition or entanglement throughout the computation. In order to remedy this, we input the state \(\ket{+}\ket{+}\ket{+}\), thus preparing the equal superposition of all four two-bit sums, and obtaining the result of one of them at measurement time. We also replace the Toffoli gate with a ccz gate conjugated by Hadamards, which, after simplification, gives the circuit of Figure 2, which is the logical circuit we use in the remainder of this work. Figure 1: One-bit addition circuit. The result of \(a+b\) is stored in the two-bit number \(s=s_{0}s_{1}\). Figure 2: One-bit addition with the uniform superposition as input, preparing the ‘superposition of valid sums’ and measuring it destructively. In the following section, we review the error-detecting code selected for this work (the \(\llbracket 8,\,3,\,2\rrbracket\) colour code). We express the logical circuit in terms of simple, low-overhead fault-tolerant operations in Section IV. We then implement the resulting circuit in an ion-trap quantum computer, comparing the arithmetic error rates achieved using fault-tolerant and non-fault-tolerant circuits, as well as simulations, in Section V. We discuss the resource overhead required for a variety of implementations of one-bit addition in Section VI, and conclude in Section VII.
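As a sanity check on the logic of Figure 1, the following sketch assumes the standard half-adder decomposition (a Toffoli writing the carry \(s_{0}\) into an ancilla, followed by a cnot that overwrites \(b\) with the sum bit \(s_{1}\)) and verifies the truth table classically; the decomposition details are our assumption, since only the figure specifies the exact circuit.

```python
# Classical check of a reversible one-bit adder, assuming the standard
# half-adder decomposition: Toffoli writes the carry s0 into an ancilla,
# then a cnot overwrites b with the sum bit s1 = a XOR b.
def one_bit_adder(a, b):
    anc = 0
    anc ^= a & b      # Toffoli: controls a, b; target ancilla (carry s0)
    b ^= a            # cnot: control a; target b (sum bit s1)
    return anc, b     # s = s0 s1 as a two-bit number

for a in (0, 1):
    for b in (0, 1):
        s0, s1 = one_bit_adder(a, b)
        assert 2 * s0 + s1 == a + b
        print(f"{a} + {b} = {2 * s0 + s1:02b}")
```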
This is another reason why smaller fault-tolerant circuits are to be preferred over larger ones. Much previous work employs a conventional approach to implementing fault-tolerant algorithms. One starts with a code in which it is straightforward to implement Clifford gates directly. For instance, this can be done with surface code patches via lattice surgery [10], and we note that an instance of Grover's algorithm not containing non-Clifford gates has already been implemented fault-tolerantly on two qubits [11] using the [4; 2; 2] code. Then, to obtain a universal gate set, a single non-Clifford gate such as the t gate is implemented by gate teleportation [12], which may require magic state distillation [13]. A recent proposal [14] turns this scenario on its head. One may instead start with codes which possess transversal (and hence fault-tolerant) non-Clifford gates. The Clifford gates are then implemented by gate teleportation with Pauli eigenstates as the ancillary inputs, requiring no expensive distillation to prepare. In this paper, we realise a small instance of this proposal. We implement a one-qubit addition circuit using the [8; 3; 2] colour code, which allows a transversal non-Clifford ccz gate. Although this algorithm is simple, it computes the answer to a mathematical problem and contains both Clifford and non-Clifford gates. The one-bit adder circuit is also a simple application of the Toffoli gate, which is a universal gate for reversible classical computing. It can be used to construct oracles for Grover's algorithm [11; 15] as well as the modular exponentiation circuits employed within Shor's algorithm. As a result, the fault-tolerant realisation of a one-bit adder circuit has practical implications for the realisation of more substantial quantum algorithms, as well as pushing forward the state-of-the-art in the realisation of fault tolerant algorithms on contemporary quantum computers. ## II One-bit addition At the logical level, adding two bits (potentially in superposition), may be carried out using the circuit in Figure 1. Note that a classical one-bit addition requires only two bits of memory, as two bits are sufficient to store both the input bits and the output, which may be a two-bit number. However, the resulting circuit is not reversible, as the input bits cannot be recovered from their sum. To implement the one-bit adder on a quantum computer, a reversible classical circuit is needed. The one-bit adder can be made reversible at the cost of requiring three bits or qubits. Implementing the circuit in Figure 1 with computational basis states as inputs would not be a good demonstration of quantum computing, as the state would not exhibit superposition or entanglement throughout the computation. In order to remedy this, we input the state \(\ket{+}\ket{+}\ket{+}\), thus preparing the equal superposition of all four two-bit sums, and obtaining the result of one of them at measurement time. We also replace the Toffoli gate with a ccz gate conjugated by Hadamards, which, after simplification, gives the circuit of Figure 2, which is the logical circuit we use in the remainder of this work. In the following section, we review the error-detecting code selected for this work (the \(\llbracket 8,\,3,\,2\rrbracket\) colour code). We express the logical circuit in terms of simple, low-overhead fault-tolerant operations in Section IV. 
We then implement the resulting circuit in an ion-trap quantum computer, comparing the arithmetic error rates achieved using fault-tolerant and non-fault-tolerant circuits, as well as simulations, in Section V. We discuss the resource overhead required for a variety of implementations of one-bit addition in Section VI, and conclude in Section VII. ## III \(\llbracket 8,\,3,\,2\rrbracket\) colour code To encode the three logical qubits necessary for one-bit addition and gain access to a transversal ccz, we select the \(\llbracket 8,\,3,\,2\rrbracket\) colour code [16]. To prepare a uniform superposition of valid one-bit sums, we first fault-tolerantly prepare the \(\left|\overline{\tau}\right\rangle^{\otimes 3}\) state. We then perform the transversal ccz, and a cnot between qubits \(1\) and \(2\), then measure qubit \(3\) in the \(\overline{X}\) basis, and finally measure qubits \(1\) and \(2\) in the \(\overline{Z}\) basis. While the transversal ccz of the \(\llbracket 8,\,3,\,2\rrbracket\) code is already well understood, the state preparation, Clifford gates and measurements used here have not been described in the prior literature to our knowledge. We explain their derivation below, in order of execution in the experiment. ## IV Construction of the circuit ### \(\left|\overline{\tau}\right\rangle^{\otimes 3}\) Preparation There is a well-known procedure for fault-tolerant preparation of \(\left|\overline{\tau}\right\rangle^{\otimes k}\) states in CSS codes of distance \(d\) involving measuring \(Z\) stabilisers with the \(\ket{+}^{\otimes n}\) state as input. The first round of measurement will result in random outcomes, so multiple rounds (two for codes with \(d=2\)) would be necessary to detect errors. For the \(\llbracket 8,\,3,\,2\rrbracket\) code, this requires \(\sim 32\) cnots and \(8\) measurements. To reduce the size of this circuit, we use the Goto circuit design technique [17], first writing out a non-fault-tolerant circuit with the minimum number of two-qubit gates, then measuring a limited set of stabilizers that detect high-weight errors resulting from error propagation through the initial circuit. We find the non-fault-tolerant stage of the circuit by inspection, beginning from the desired final state and using cnot gates to break the state's entanglement, until arriving at a state consisting of four Bell pairs, which can be prepared fault-tolerantly using cnots on bare qubits. We confirm that this circuit contains the minimum number of cnots using breadth-first search over an implicit graph whose vertices are canonical stabilizer states (see [18], [19]). Gottesman-Knill simulation reveals that all high-weight propagated errors can be detected by fault-tolerant measurement of two weight-four stabilisers, \(X_{1,3,4,6}\) and \(Z_{1,3,4,6}\). This can be accomplished using an appropriately interleaved circuit designed by Reichardt [20]. The final preparation circuit requires \(18\) cnots and two measurements, halving the number of relatively Figure 1: One-bit addition circuit. The result of \(a+b\) is stored in the two-bit number \(s=s_{0}s_{1}\). Figure 2: One-bit addition with the uniform superposition as input, preparing the ‘superposition of valid sums’ and measuring it destructively. error-prone gates with respect to the generic technique. ### Destructive \(\overline{X}\) Measurement Similarly to state preparation, there is also a generic protocol for measuring the logical observables of CSS codes. 
In this protocol, all data qubits are measured in the \(X\) or \(Z\) basis, and the eigenvalues of stabilisers and logical operators in that basis are then reconstructed by calculating classical parities, allowing a final round of classical error correction to be performed. In this way, any set composed only of tensor products of \(\overline{Z}\) or \(\overline{X}\) operators can be measured, but we cannot simultaneously measure \(\overline{X}_{j}\) and \(\overline{Z}_{k\neq j}\) using this protocol. Typically, whenever we wish to measure a subset of logical qubits in a different basis, we use an ancilla-based circuit to measure the relevant operators in a non-destructive, fault-tolerant way (similarly to stabiliser measurements) or synthesise the logical measurement by transferring the relevant subset of logical qubits to a new code block, which may subsequently be measured destructively. The \(\llbracket 8,3,2\rrbracket\) colour code allows an alternative to such a protocol; a logical \(X\) operator supported on a face may be measured destructively by measuring the four qubits on that face in the \(X\) basis. The remaining \(\overline{X}\) operators are reduced to weight two, and the weights of the \(\overline{Z}\) operators not supported on the measured face are preserved, leaving the remaining two logical qubits encoded in the \(\llbracket 4,2,2\rrbracket\) code, see Figure 4. This destructive measurement is not fault-tolerant because no stabiliser eigenvalue can be reconstructed from the measurement outputs. To reconstruct \(S_{X}=X^{\otimes 8}\), we use a flag-based circuit to non-destructively measure \(X^{\otimes 4}\) on the opposite face and take the parity of the five measurement outputs to reconstruct the stabiliser eigenvalue. ### Logical CNOT by Permutation With CSS codes that encode a single logical qubit, such as surface code patches, entangling operations are implemented across different code blocks, usually using transversal gates or lattice surgery [21]. For codes such as the \(\llbracket 8,3,2\rrbracket\) code, which possess \(k>1\) logical qubits, there is no general protocol for performing logical entangling operations within a single code block (though architectures which use codes with \(k>1\) have been explored [22]). However, the \(\llbracket 8,3,2\rrbracket\) colour code possesses a logical cnot gate which can be implemented by permuting or relabelling the physical qubits. This is due to a spatial symmetry of the stabiliser group which the logical operators do not obey. The QCCD architecture [23, 2] allows us to transport physical qubits from one area of a device to another if necessary. As a result, the cnot gate given in Figure 2 can be implemented with near-unit fidelity using only transport operations, see Figure 5. Figure 3: Stabiliser generators and logical operators for the \(\llbracket 8,3,2\rrbracket\) colour code. ## V Experimental results ### Experimental details The circuit given in Figure 6, which is a fault-tolerant implementation of the one-bit adder circuit, was submitted to both the Quantinuum H1-1 quantum computer and the Quantinuum H1-1E emulator. Upon submission, a compiler transforms the circuit into one corresponding to the native gate set of the quantum computer. For comparison, a non-fault-tolerant circuit acting on three physical qubits, given in Figure 7, was also submitted to the device. The resulting circuits were executed 10,000 times on the quantum computer and 100,000 times on the emulator, and the results are presented in Table 1.
The implementations of logical operations discussed in the previous section allow us to write a fault-tolerant implementation of one-bit addition in the \(\llbracket 8,3,2\rrbracket\) colour code using 24 cnots and 12 measurements [24], see Figure 6. For comparison, decomposing the one-bit adder into cnots and single-qubit gates on bare qubits results in five cnots and three measurements, see Figure 7. To completely characterise the logical errors which occur in a fault-tolerant one-bit addition, performing three-qubit logical process tomography would be necessary. However, process tomography results in prohibitively high sampling overhead and introduces the challenge of distinguishing state preparation and measurement (SPAM) errors from those occurring in the unitary of interest. For these reasons, we instead execute a complete protocol consisting of state preparation, unitary operations and measurements, and calculate an operationally defined error rate which is affected by all steps of the process. At the end of each summation, we measure the register in the computational basis, at which point we learn classical values for the one-bit number \(a\), and the two-bit number \(s\). If \[(s=3)\vee((s=2)\wedge(a=0))\vee((s=0)\wedge(a=1)), \tag{1}\] we can infer that a logical error has occurred. We call these events _arithmetic errors_ and compare the rates at which they occur in Table 1. In order to evaluate the effectiveness of the arithmetic error rate as a measure of the true error rate, we attempt to estimate the true error rate. This cannot be done directly from the data presented in Table 1. Therefore, we carry out a density-matrix-based simulation of the circuit, using a noise model motivated by that of the Quantinuum H1-1 emulator, and attempt to quantify the effect of noise on the state obtained at the end of the circuit. We obtain an arithmetic error rate of 0.39%, which is of comparable magnitude to that observed in Table 1. The fidelity of the simulated state with the ideal error-free state is 99.7%, which implies that the use of the arithmetic error rate does not conceal significant logical errors in other bases. Figure 4: Action of measuring every qubit of the top face of the cube in the \(X\) basis. Figure 5: Action of reflecting the top face of the cubic layout for the \(\llbracket 8,3,2\rrbracket\) code. The operators \(\overline{X}_{2}\) and \(\overline{Z}_{3}\) are mapped to \(\overline{X}_{2}\overline{X}_{3}\) and \(\overline{Z}_{2}\overline{Z}_{3}\), respectively, leaving other logical operators unaffected, effectively performing a logical cnot gate. ## VI Comparison with planar architectures To understand the overhead reduction achieved in this implementation, we investigate a surface-code-based implementation of one-bit addition using surface code patches with distance \(d=2\). Similar to [25], logical qubits are realised in independent code blocks, logical Clifford gates are implemented using lattice surgery, and the logical Hadamard is propagated forward, resulting in an \(\overline{X}\) measurement on the third qubit at the end of the circuit. We can lower the overhead of this implementation by generating the state \(\texttt{ccz}|\overline{+++}\rangle\) using a magic state factory [26], and computing with it directly, rather than using it for gate teleportation. At distance \(d=2\), the factory can detect any single-qubit error during the production of the \(|\texttt{ccz}\rangle\) state. The overhead of this implementation is dominated by the ccz magic state factory. It requires 18 surface code
patches to implement, which at distance \(d=2\) implies \(\sim 18\cdot 2d^{2}=144\) physical qubits. This figure is over an order of magnitude higher than that of the \(\llbracket 8,3,2\rrbracket\) code implementation discussed above and is too large to be executed on a Quantinuum H-series device at the time of writing. The relative logical error rates resulting from the use of such different state factories can be estimated by counting pairs of faults that result in logical errors (larger sets of independent faults being much less likely). We carry this estimation out using QuantumClifford.jl [19], resulting in 84,873 fault pairs for the \(d=2\) surface code, and 1,116 for the \(\llbracket 8,3,2\rrbracket\) code. While a more detailed comparison would require full noise models for devices of either architecture, the number of malicious fault pairs for the surface-code factory is far greater than for the \(\llbracket 8,3,2\rrbracket\)-code-based implementation, which suggests that a higher code distance (and greater overhead) would be necessary to match the logical error rate obtained in this work. By contrast, implementing one-bit addition using the \(\llbracket 8,3,2\rrbracket\) colour code on a device with square-lattice connectivity can be accomplished with moderate overhead using the qubit layouts in Figure 9. The initial layout is used for non-fault-tolerant state preparation, and four cnots are then required to change the layout so that the remainder of the experiment can be carried out. After non-fault-tolerant state preparation, the stabiliser measurement requires an additional six cnots, since the swap gates must be decomposed into cnots rather than replaced with transport operations. While these additional cnots represent a significant increase in the overall size of the circuit without increasing its ability to tolerate errors, the induced overhead is not as significant as implementing the computation with surface-code-based logical qubits. \begin{table} \begin{tabular}{|c||c|c|c|c|} \hline \multirow{2}{*}{Shot frequencies} & \multicolumn{2}{c|}{Non-fault-tolerant} & \multicolumn{2}{c|}{Fault-tolerant} \\ \cline{2-5} & Device & Emulator & Device & Emulator \\ \hline 000 & 25.18\% & 24.64\% & 24.49\% & 24.98\% \\ 001 & 0.14\% & 0.30\% & 0.03\% & 0.01\% \\ 010 & 25.23\% & 24.86\% & 25.52\% & 25.15\% \\ 011 & 0.07\% & 0.19\% & 0.02\% & 0.008\% \\ 100 & 0.41\% & 0.46\% & 0.02\% & 0.012\% \\ 101 & 24.52\% & 24.58\% & 24.75\% & 24.87\% \\ 110 & 24.14\% & 24.70\% & 25.15\% & 24.95\% \\ 111 & 0.31\% & 0.27\% & 0.01\% & 0.01\% \\ \hline Shot total & 10000 & 100000 & 8998 & 88537 \\ \hline Arithmetic error rate & 0.95\(\pm\)0.19\% & 1.22\(\pm\)0.07\% & 0.11\(\pm\)0.07\% & 0.04\(\pm\)0.014\% \\ \hline \end{tabular} \end{table} Table 1: Results of evaluating fault-tolerant and non-fault-tolerant circuits on both the Quantinuum H1-1 quantum computer and emulator. Only those shots that pass post-selection are used to estimate the arithmetic error rate of the fault-tolerant circuit. Rows in red correspond to arithmetic errors, while the rows in green correspond to valid one-bit sums. Figure 6: Fault-tolerant implementation of one-bit addition given in Figure 2 as submitted to the compiler. Dashed regions, from left to right: Non-fault-tolerant \(|\overline{+++}\rangle\) preparation, flag fault-tolerant measurement of \(\overline{X}_{1}\overline{X}_{3}\) and overlapping \(S_{Z}\), transversal ccz, destructive measurement of \(\overline{X}_{2}\), destructive measurement of \(\overline{Z}_{1}\) and \(\overline{Z}_{3}\). Figure 7: Non-fault-tolerant circuit as submitted to the compiler.
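To make the arithmetic-error criterion of Eq. (1) concrete, the short Python sketch below classifies three-bit shots and estimates the error rate. The bit ordering (each shot string read as \(a\,s_{0}\,s_{1}\), with \(s_{0}\) the least-significant bit of \(s\)) is our assumption, inferred from which outcomes appear as valid sums in Table 1, and the shot counts used here are illustrative rather than measured.

```python
from math import sqrt

def is_arithmetic_error(shot: str) -> bool:
    """Apply Eq. (1): flag (s = 3), (s = 2 and a = 0) and (s = 0 and a = 1)."""
    a, s0, s1 = (int(b) for b in shot)  # assumed ordering: a, s0, s1
    s = s0 + 2 * s1                     # the two-bit sum s
    return s == 3 or (s == 2 and a == 0) or (s == 0 and a == 1)

def arithmetic_error_rate(counts):
    """Return (rate, binomial standard error) over a dictionary of shot counts."""
    total = sum(counts.values())
    errors = sum(n for shot, n in counts.items() if is_arithmetic_error(shot))
    p = errors / total
    return p, sqrt(p * (1 - p) / total)

# Illustrative (not measured) post-selected shot counts:
counts = {"000": 2203, "010": 2296, "101": 2227, "110": 2263,
          "001": 3, "011": 2, "100": 2, "111": 1}
rate, err = arithmetic_error_rate(counts)
print(f"arithmetic error rate: {rate:.3%} +/- {err:.3%}")
```

Under this ordering, the shots flagged as errors are exactly the red rows of Table 1 (001, 011, 100 and 111), while 000, 010, 101 and 110 are the valid one-bit sums.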
## VII Discussion & Conclusion The fault-tolerant implementation of one-bit addition demonstrates the combined effect of transversal non-Clifford gates, logical Cliffords by permutation, post-selected state preparation, and omission of error-correction gadgets in a fault-tolerant computation. While we cannot expect each of these techniques to result in the same logical error rate reduction in all fault-tolerant computations, we believe that each of them will contribute to lower logical error rates and overheads in some future fault-tolerant computations. This result also highlights the peculiar 'inversion of difficulty' in fault-tolerant quantum computing. That is, the operations that induce the most error at the physical level (cnot and ccz) can be carried out fault-tolerantly using high-fidelity transport and transversal single-qubit gates. In contrast, state preparation (which is comparatively simple and reliable at the physical level) comprises most of the fault-tolerant circuit, causing a large fraction of logical errors. Figure 8: Schedule for simulating a ccz state factory [26] using lattice surgery with distance-2 surface codes. Top: multi-qubit logical \(\overline{X}\) measurements performed in series. Bottom: destructive distance-1 measurement in the \(\overline{SX}\) basis, applied to the eight surface codes used as logical ancillas. Note that, in a genuine magic state factory, \(T\) and \(T^{\dagger}\) gates would take the place of the \(S\) gates, which we use here for ease of simulation. Not shown: Pauli corrections applied to the three output logical qubits depending on the results of destructive measurement (see [26]). Figure 9: Small planar qubit layouts that facilitate fault-tolerant one-bit addition. Left: layout for non-fault-tolerant state preparation. Right: layout for subsequent flag-based measurement of \(X_{1,2,5,6}/Z_{1,2,5,6}\) and \(X_{1,3,5,7}\). Figure 10: Modified non-fault-tolerant \(|\overline{+++}\rangle\) preparation circuit, using the minimal number of cnots and qubits connected in a square lattice. This demonstration also highlights several open problems to address in future work. For instance, this experiment uses post-selection rather than correction and achieves a low conditional probability of error with moderate (\(\sim 10\%\)) post-selection overhead. Full post-selection (i.e. accepting the output only if no syndrome is observed) on every fault-tolerant gadget would result in an exponentially decaying acceptance probability. Still, the effect of partial post-selection (accepting syndromes that indicate an error of weight \(w\ll d/2\) on selected gadgets within a larger algorithm) has yet to be explored. In addition, the placement of QEC gadgets between every adjacent pair of fault-tolerant operations is sufficient to prove the existence of thresholds in the ExRec formalism [6]. However, it is not necessary to do this to make a circuit fault-tolerant. Omitting QEC gadgets (or using partial QEC gadgets, as in Appendix A) between consecutive fault-tolerant operations can reduce logical error rates and overheads in a wide variety of protocols. Finally, we note that the fault-tolerant protocol we have developed leverages all-to-all connectivity of the physical qubits, and exhibits relatively high pseudothresholds for certain quantum computing tasks, making it particularly suitable for other platforms with high qubit connectivity, such as neutral atoms [27, 28] and NV-based networks [29, 30].
A similar idea has recently been proposed to implement quantum low-density parity-check (qLDPC) codes using reconfigurable atom arrays [27]. By using the product structure inherent in many qLDPC codes to implement non-local syndrome extraction circuits via atom rearrangement, effectively constant overhead can be achieved in practically relevant regimes [27]. ###### Acknowledgements. The authors thank Andrew Landahl for inspiring this project, as well as Ross Duncan, Chris Self, Ciaran Ryan-Anderson, and David Hayes for helpful comments and insightful discussions during the preparation of this work, and Karl Mayer for illuminating discussions regarding the noise model used on the Quantinuum H1-1 emulator. This publication is part of the QuTech NWO funding 2020-2024 - Part I "Fundamental Research" with project number 601.QT.001-1, which is financed by the Dutch Research Council (NWO). Y. Wang would like to acknowledge the funding support from BMBF (Grant No. 16KIS1590K). _Note added_: We would like to bring the reader's attention to a related work by Menendez et al., "Implementing fault-tolerant non-Clifford gates using the [[8, 3, 2]] color code", which appears in the same arXiv posting.
2309.13613
Eigenmodes of fractal drums: A numerical student experiment
``Can one hear the shape of a drum?'' was a question posed (and made famous) by mathematician Mark Kac in the mid-1960s. It addresses whether a deeper connection exists between the resonance modes (eigenmodes) of a drum and its shape. Here we propose a numerical experiment, suitable for advanced undergraduate physics students, on the calculation of the eigenmodes of a square Koch fractal drum, for which experimental results do exist. This exercise is designed to develop the students' understanding of the vibrations of fractal drums, their eigenmodes, and potentially their integrated density of states. The students calculate the lowest order eigenmodes of the fractal drum, visualize these modes, and study their symmetry properties. As an extension, the students may investigate the integrated density of states of the fractal drum and compare their findings to the Weyl-Berry conjecture.
Veronica P. Simonsen, Nathan Hale, Ingve Simonsen
2023-09-24T11:41:28Z
http://arxiv.org/abs/2309.13613v1
# Eigenmodes of fractal drums: A numerical student experiment ###### Abstract "Can one hear the shape of a drum?" was a question posed (and made famous) by mathematician Mark Kac in the mid-1960s. It addresses whether a deeper connection exists between the resonance modes (eigenmodes) of a drum and its shape. Here we propose a numerical experiment, suitable for advanced undergraduate physics students, on the calculation of the eigenmodes of a square Koch fractal drum, for which experimental results do exist. This exercise is designed to develop the students' understanding of the vibrations of fractal drums, their eigenmodes, and potentially their integrated density of states. The students calculate the lowest order eigenmodes of the fractal drum, visualize these modes, and study their symmetry properties. As an extension, the students may investigate the integrated density of states of the fractal drum and compare their findings to the Weyl-Berry conjecture. ## I Introduction It is well known that a large drum has a lower fundamental resonance frequency than a smaller drum. Hence, from the tone that a drum makes, you can potentially say something about its size (the area of the membrane). What if the area of the drum is kept the same but its shape is changed? Will this change of shape modify the tones of the drum? In 1966, the Polish mathematician Mark Kac published a seminal and influential paper related to this question under the title "Can one hear the shape of a drum?"[1]. Shortly after Kac published his famous paper, fractals started to become a topic of interest[2]. If the boundary of the drum is _fractal_, and therefore not smooth, what will then happen? In the early 1990s, Sapoval and coworkers conducted a series of elegant experiments to study the modes of fractal drums[3]. They observed modes _localized_ to bounded regions of the drum, labeled \(A\), \(B\), \(C\), and \(D\) in Fig. 1(a). In fact, Sapoval _et al._ were able to excite each mode separately. Classic (or non-fractal) drums do not behave this way, as striking any part makes the whole membrane vibrate. Why is the fractal drum so different? Sapoval _et al._ showed that the equation governing wave motion has solutions with very large amplitudes at the inward-facing corners of the drum [Fig. 1]. These large-amplitude regions generate a cascade of large-amplitude vibrations that interfere with one another. This gives rise to dissipation on many scales, so drums with fractal boundaries, hereafter called _fractal drums_, exhibit very strong damping. How does this explain the local vibrations of the fractal drum? The narrow throat connecting region \(A\) to the rest of the drum slows a wave traveling from \(A\) to \(B\) [Fig. 1(a)], and the strong damping absorbs the wave before it can spread. An experimental result for one of these local modes is shown in Fig. 1(b). Any such local modes can be considered a linear combination of the eigenmodes of the system, and the numerical calculation of the possible eigenmodes of the fractal drum is one of the main purposes of the numerical study that we propose here. In this paper, we introduce a numerical experiment allowing students to study the vibrations of fractal drums, their eigenstates, and potentially their density of states in connection with the Weyl-Berry conjecture. These problems have significant physical applications to the study of porous media, diffusion, wave propagation in fractal media or wave scattering from fractal surfaces.
The tasks are devoted to the numerical calculation of the eigenfrequencies and related eigenmodes of fractal drums. As the perimeter of the fractal drum, we have chosen the so-called square Koch curve, the same structure used in the experiments by Sapoval _et al._[3]. The purpose of the numerical experiment that we propose is to enhance students' learning by offering them a means of experimenting with concepts that they may find troublesome in class. Moreover, the experiment is suitable for introductory courses and as a modeling exercise in upper-level physics courses. The experiment can bring enthusiasm to a physics classroom. The remaining part of this work is organized as follows: In Sec. II we present the numerical experiment, including its background and the relevant theoretical framework for the fractal drum problem. Then, we provide some implementation details on how to solve the problem and comment on challenges that the students may face in doing so [Sec. III]. In Sec. IV, we present and discuss the results that were obtained. Finally, Sec. V presents the conclusions we draw from this work and gives some final remarks. Figure 1: (a) The boundary of the (square Koch) fractal drum (\(\ell=3\)) that is investigated. The limiting curve has fractal (box-counting) dimension \(\ln(8)/\ln(4)=3/2\). (b) Experimental result of Sapoval _et al._[3] showing localized vibrations (reprinted with permission of APS). ## II Numerical experiment ### Fractal drums The problems presented in this work were part of the course _Computational Physics_ taught at the Norwegian University of Science and Technology (NTNU). The aim of the fractal drum problem is to numerically calculate the vibrational resonance frequencies of the square Koch drum and obtain the corresponding eigenmodes. This is the same problem that Sapoval and co-workers [3] studied experimentally in the early 1990s. These authors presented some numerical results for a few eigenmodes of the drum, and their results were obtained by a relaxation method (see Ref. [3] for details). Here, a different numerical approach is used that allows one to obtain all the lower eigenmodes. The physics used in the fractal drum problem, although not explored in this work, extends to applications in the study of porous media, diffusion, wave propagation in fractal media and wave scattering from fractal surfaces. To state the problem, let \(D\) denote the region inside the square Koch drum. The oscillation of the membrane (in \(D\)) is determined by the wave equation \(\nabla^{2}u=(1/c^{2})\partial^{2}u/\partial t^{2}\) (where \(c\) is the wave speed) subject to the (Dirichlet) boundary condition \(u=0\) for all times on the boundary \(\partial D\). Here \(u(\mathbf{r},t)\) represents the vertical displacement of the membrane at position \(\mathbf{r}\) in the plane at time \(t\). Performing the Fourier transform of the wave equation with respect to time leads to the Helmholtz equation [4; 5] \[-\nabla^{2}U(\mathbf{r},\omega) =\frac{\omega^{2}}{c^{2}}U(\mathbf{r},\omega), \text{in }D \tag{1a}\] \[U(\mathbf{r},\omega) =0 \text{on }\partial D, \tag{1b}\] where \(\omega\) denotes the angular frequency. Equation (1a) states that \(\omega^{2}/c^{2}\) is an eigenvalue of the negative Laplacian operator \([-\nabla^{2}]\), with \(\omega\) the corresponding eigenfrequency, and the function \(U(\mathbf{r},\omega)\) is the eigenmode corresponding to the eigenfrequency \(\omega\). A classic approach to solving Eq.
(1) inside \(D\) is to use a finite difference approximation to the unknown function \(U(\mathbf{r},\omega)\) in this domain. This is achieved by defining a rectangular grid of lattice constant \(h\) in the domain of interest. If \(\mathbf{r}_{mn}=(x_{m},y_{n})\) represents an arbitrary lattice point in a region of the plane containing the square Koch drum, we let \(U(\mathbf{r}_{mn})=U_{mn}\) denote the vertical displacement of the membrane at this point. When the standard five-point stencil [6; 7], defined by the point itself and its four nearest neighbors, is applied to the Laplacian operator that appears on the left-hand side of Eq. (1a), we are led to \[-\frac{1}{h^{2}}\left[U_{m+1,n}+U_{m-1,n}+U_{m,n+1}+U_{m,n-1}-4U_{mn}\right]\\ =\frac{\omega^{2}}{c^{2}}U_{mn}. \tag{2}\] The vertical displacement vanishes [\(U_{mn}=0\)] for lattice points that are outside, or on the boundary of, the square Koch drum. Hence, it is only the set of displacements \(\{U_{mn}\}\) that correspond to lattice points inside the square Koch drum that we need to determine. We call these points _internal_ lattice points. When Eq. (2) is applied to all internal lattice points, a set of linear eigenequations is obtained, which determines the eigenfrequencies and the corresponding eigenmodes of the square Koch drum. ## III Implementation details In this section, we will outline some of the implementation details required to numerically calculate the eigenfrequencies and eigenmodes of the fractal square Koch drum using the finite difference approximation. ### Constructing the fractal drum The fractal that we will be concerned with is constructed on the basis of the _generator_ presented in Fig. 2(b). This generator is constructed from an initial (\(\ell=0\)) line segment of length \(L\) [Fig 2(a)] by (\(i\)) dividing it into four equal segments of length \(L/4\); (\(ii\)) raising the 2nd element (from the left) a distance \(L/4\) from the base; and (\(iii\)) lowering the 3rd element a distance \(L/4\), while the elements connected to the end points are not moved. It is customary to treat the central vertical part of the generator as two separate line segments instead of one, in order to make each of the 8 line segments the same length. The structure in Fig. 2(b) is the _generator_ of the fractal and is represented by generation level \(\ell=1\). To obtain the structure at level \(\ell=2\), this generator is applied subsequently to each of the line segments of length \(L/4\) from the previous generation level. In this way, the \(\ell=2\) structure presented in Fig. 2(c) is obtained. The structures corresponding to higher levels are generated recursively in the same fashion by applying the generator from Fig 2(b) to the smaller-and-smaller line segments from the previous level. In the limit \(\ell\rightarrow\infty\), the true fractal structure is obtained; when \(\ell\) has a finite value, the structure is said to be a pre-fractal. Therefore, the drums used in this experiment are technically not fractals but pre-fractals. The square Koch fractal (of type 2) is generated by starting from a square of sides \(L\) [Fig. 3(a)] (level 0), and recursively applying the fractal generator from Fig. 2(b) to each of its sides. In this way, the fractal structures at levels \(\ell=1\) and \(\ell=2\) are obtained; the resulting structures are presented in Figs. 3(b) and 3(c), respectively. In Fig. 3, the points that are added at each level are presented in different colors.
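As a concrete illustration of this recursive construction, the following Python sketch generates the corner points of the closed square Koch curve. It is one possible implementation (the function names are ours, and we orient the generator's bump toward the left normal of each segment); it works in integer units of the smallest segment \(\delta_{\ell}\), so that all corners land exactly on the lattice introduced below.

```python
import numpy as np

def koch_segment(p, q, level):
    """Corner points from p towards q (q itself excluded) at the given level."""
    if level == 0:
        return [p]
    d = ((q[0] - p[0]) // 4, (q[1] - p[1]) // 4)   # quarter-length step along the segment
    n = (-d[1], d[0])                              # left normal of the same length
    # the generator of Fig. 2(b): out, up, out, down, down, out, up, out
    steps = [d, n, d, (-n[0], -n[1]), (-n[0], -n[1]), d, n, d]
    points, cur = [], p
    for s in steps:
        nxt = (cur[0] + s[0], cur[1] + s[1])
        points += koch_segment(cur, nxt, level - 1)
        cur = nxt
    return points

def koch_drum(level):
    """Corners of the closed square Koch curve, in integer units of delta_l."""
    L = 4 ** level
    square = [(0, 0), (L, 0), (L, L), (0, L)]
    boundary = []
    for a, b in zip(square, square[1:] + square[:1]):
        boundary += koch_segment(a, b, level)
    return np.array(boundary)

boundary = koch_drum(3)   # the l = 3 curve of Fig. 1(a); 4 * 8**3 = 2048 corners
```

Since each segment is replaced by eight segments of a quarter of its length, the closed curve at level \(\ell\) has \(4\cdot 8^{\ell}\) corners, consistent with the box-counting dimension \(\ln(8)/\ln(4)=3/2\) quoted in Fig. 1.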
It should be noticed from the way that the structure is generated that the total area inside the structure is \(L^{2}\) and independent of the generation level. Furthermore, the smallest line segment of the structure at level \(\ell\) is \[\delta_{\ell}=\frac{L}{4^{\ell}}. \tag{3}\] ### Discretization The next step is to introduce a _square_ lattice to which all the corners of the square Koch fractal at level \(\ell\) belong. For this to be the case, the discretization interval \(\delta_{\ell}\) cannot be chosen independently of the initial width \(L\) of the square from which one started the generation (level \(\ell=0\)). From the structures depicted in Fig. 3, it should be apparent that the widths of the structures grow with generation level \(\ell\). From the way the square Koch fractal is generated, one finds that its size (width and height) at level \(\ell\) is given as \(L_{\ell}=L+2\sum_{n=1}^{\ell}\delta_{n}\) or as \[\frac{L_{\ell}}{L}=1+2\sum_{n=1}^{\ell}4^{-n}. \tag{4}\] By discretizing a square region of sides \(L_{\ell}\) and using a discretization interval \(\delta_{\ell}=L/4^{\ell}\), all corners of the square Koch curve at level \(\ell\) are guaranteed to fall onto the lattice. If we assume that lattice points coincide with the end points of this square region, a general lattice point is given as \[\mathbf{r}_{mn}=x_{m}\mathbf{\hat{x}}+y_{n}\mathbf{\hat{y}},\] (5a) where the coordinate system used is indicated in Fig. 3(a) and a caret over a vector indicates that it is a unit vector. In writing Eq. (5a) we have defined \[x_{m} =-\frac{L_{\ell}}{2}+(m-1)\delta_{\ell} \tag{5b}\] \[y_{n} =-\frac{L_{\ell}}{2}+(n-1)\delta_{\ell}, \tag{5c}\] with \(m=1,2,\ldots,N_{\ell}+1\) and \(n=1,2,\ldots,N_{\ell}+1\). Here the integer \[N_{\ell}=\left\lfloor\frac{L_{\ell}}{\delta_{\ell}}\right\rceil=\left\lfloor 4^{\ell}\left(1+2\sum_{n=1}^{\ell}4^{-n}\right)\right\rceil, \tag{6}\] denotes the number of line segments (of size \(\delta_{\ell}\)) needed to cover the width (or height) of the square region \(L_{\ell}\times L_{\ell}\) that fully contains the square Koch curve (the symbol \(\lfloor\cdot\rceil\) means the nearest integer). The total number of points in the lattice is \((N_{\ell}+1)^{2}\), and the fraction of lattice points that are inside the square Koch curve (internal lattice points) can be approximated by the area ratio \((L/L_{\ell})^{2}\) [cf. Eq. (4)]. Figure 2: (Color online) The process of constructing the fractal. (a) The initial line segment of length \(L\) (level \(\ell=0\)); (b) the “_generator_” of the fractal (level \(\ell=1\)); (c) the structure at level \(\ell=2\) of the construction. The different nodal colors are used to represent the nodes added at each level of the generation process. Figure 3: (Color online) The process of constructing the square Koch fractal. (a) The initial square region (level \(\ell=0\)) from which the square Koch fractal is generated; (b) level \(\ell=1\) of the fractal construction (added points shown in blue); (c) level \(\ell=2\) of the construction (added points shown in red). The vertical dashed lines represent the initial width (and height) of the square region from which the square Koch curve is constructed. Notice that the width and height of the structure increase with the generation level \(\ell\), but the area inside the curve remains the same at _all_ generation levels. The coordinate system that we use is indicated in Panel (a) and its origin is located at the center of the square.
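A direct translation of Eqs. (3)-(6) into Python might look as follows (a minimal sketch in units of \(L\); the function name is ours):

```python
import numpy as np

def lattice(level):
    """Lattice of Eqs. (3)-(6), in units of L."""
    delta = 4.0 ** (-level)                                            # Eq. (3)
    L_out = 1.0 + 2.0 * sum(4.0 ** (-n) for n in range(1, level + 1))  # Eq. (4)
    N = round(L_out / delta)                                           # Eq. (6)
    coords = -L_out / 2.0 + delta * np.arange(N + 1)                   # Eqs. (5b)-(5c)
    return delta, L_out, N, coords

delta, L_out, N, coords = lattice(4)   # N + 1 = 427 lattice points per side for l = 4
```

For \(\ell=4\) this gives \(N_{\ell}+1=427\) lattice points per side, in agreement with the numbers quoted in Sec. IV below.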
### Classification of lattice points To facilitate the implementation of the finite difference expression in Eq. (2), it will be beneficial to know which lattice points are internal, external, and boundary points for the square Koch curve. To keep track of the classification of the lattice points, we define a square matrix of integers whose dimensions are identical to those of the lattice and whose values determine if the lattice point is inside (positive value), outside (negative value), or on the boundary (zero value) of the square Koch drum. In the following, we will refer to this matrix as the classification array (or matrix) and it will later be used as a look-up table. Since the corners of the square Koch curve at level \(\ell>0\) coincide with some of the lattice points if a lattice constant \(h=\delta_{\ell}\) is used, one can readily identify the boundary points and set the value of the classification array to zero for such points. It still remains to be determined if the remaining points are inside or outside of the (closed) square Koch curve. The way to do this was _not_ specified in the description of the problem that was handed out. Instead, the students were asked to identify and implement at least one method of doing so, and several methods were proposed, implemented, and tested by the students. Here we briefly describe a few such methods. The (closed) square Koch curve can be seen as a simple polygon since it is defined by its corners. Therefore, our point classification problem is equivalent to the well-known point-in-polygon problem from computer graphics [8; 9; 10]. This is an old problem, and numerous algorithms exist to solve it. Here we briefly mention a few that were suggested by students. The _ray casting algorithm_ [11] counts the intersections of the polygon with a ray passing from a starting point known to be outside (in the exterior of) the polygon to the point under investigation; if the number of such intersections is odd, the investigated point is located inside the polygon; if it is even, the point is outside the polygon. In the _winding number algorithm_, the investigated point's winding number with respect to the polygon is calculated [9]. This number, which is an integer, is zero if the point is outside the polygon, and non-zero if it is inside. The more mathematically inclined students may appreciate that the point-in-polygon problem can be addressed by Cauchy's residue theorem from complex analysis. By defining \(z=x+iy\) and letting \(z_{0}\) denote the point of interest, the complex integral \((2\pi i)^{-1}\oint_{\gamma}dz/(z-z_{0})\), where \(\gamma=\partial D\) is the square Koch curve, will vanish if \(z_{0}\) is outside \(\partial D\) and should equal \(1\) (the residue of the integrand at \(z_{0}\)) if it is inside. By numerically calculating the contour integral, it can be determined if a point is inside or outside the square Koch curve. It should be remarked that Cauchy's residue theorem can be used to define the winding number algorithm, since the winding number is just an alternate form of the Cauchy integral given above [12]. To fill the whole classification array, we start from the upper left corner of the lattice, a point that corresponds to lattice point \(\mathbf{r}_{11}\), and traverse the lattice column-by-column [13]. For each lattice point, one of the methods outlined above (or others) is used to determine if the lattice point is inside or outside of the square Koch curve.
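As one concrete example, the even-odd ray-casting test can be written in a few lines of Python. This is a minimal sketch: for a simple, non-self-intersecting polygon such as the square Koch curve it agrees with the winding-number test, and it assumes the queried point is not itself a boundary point (those have already been classified via the curve's corners).

```python
def is_inside(point, polygon):
    """Even-odd ray casting: cast a horizontal ray from `point` to the right
    and count how many polygon edges it crosses."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                      # edge straddles the ray's line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:                           # crossing lies to the right
                inside = not inside
    return inside
```

The half-open comparison `(y1 > y) != (y2 > y)` ensures that a vertex shared by two edges is counted exactly once, and horizontal edges (for which the two comparisons are equal) are skipped automatically.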
For the calculations that we present in this paper, we used the winding number algorithm. If the lattice point is outside the square Koch curve, we set the value to \(-1\) (or any other negative value). On the other hand, for lattice points that are classified as being inside, the classification array is given a strictly positive integer value. The classification value of the first internal point that we encounter is set to \(1\), the second one to \(2\), and so on. This way of labeling the internal lattice points will be convenient when we later set up the eigensystem (see the next subsection). When the lattice is traversed column-by-column starting from the upper left corner, as we have assumed here, the classification of the first internal lattice points is detailed in Fig. 4(b). ### Constructing the eigensystem Equation (2) is the starting point for setting up the eigensystem that determines the eigenmodes and corresponding eigenfrequencies of the drum. However, we want the eigenfrequencies that we calculate to be independent of the width and height, \(L\), of the square from which the square Koch drum was generated. Therefore, we multiply both sides of Eq. (2) by \(L^{2}\) and define the _dimensionless eigenfrequency_ \[\Omega=\frac{\omega}{c}L, \tag{7}\] of the square Koch drum. From the equation that is obtained in this way we construct the eigensystem \(A\mathbf{v}=\lambda\mathbf{v}\). Here \(A\) is the coefficient matrix representing the finite difference approximation to the negative of the Laplacian (times \(L^{2}\)), \(\mathbf{v}\) is the eigenvector, and \(\lambda=\Omega^{2}\) is the corresponding eigenvalue. First, one needs to adopt a storage convention that maps onto a vector the set of the matrix elements \(U_{mn}\) that correspond to internal lattice points. We adopt the convention \[\mathbf{v}=\left(U_{m_{1}n_{1}},\,U_{m_{2}n_{2}},\,U_{m_{3}n_{3}},\,\cdots \right)^{T}, \tag{8}\] where the index pair \(m_{p}n_{p}\) that appears as subscripts is defined from the lattice point classification matrix \(C\) by \(C(m_{p},n_{p})=p\) with \(p\) a positive integer [\(p\in\mathbb{N}^{+}\)]. In other words, the \(p\)'th element of the eigenvector \(\mathbf{v}\) corresponds to the lattice point located at position \((m_{p},n_{p})\). With this convention, and the use of the classification matrix \(C\), the coefficient matrix \(A\) can be constructed in the following way. First, all elements of the matrix \(A\) are initialized to zero [\(A=0\)]. Then one loops over all lattice points (here in a column-by-column manner), \(m=1,2,\ldots,N_{\ell}+1\) and \(n=1,2,\ldots,N_{\ell}+1\). If a lattice point is outside or on the boundary of the square Koch curve, do nothing, and go on to the next lattice point. On the other hand, if the point of lattice indices \((m,n)\) is an internal point \(i=C(m,n)>0\), the diagonal element of the coefficient matrix is set to \(A_{ii}=4L^{2}/\delta_{\ell}^{2}=4^{2\ell+1}\) [see Eq. (2)\(\times L^{2}\)], where we have used \(h=\delta_{\ell}\) for the square Koch curve at generation level \(\ell\). This value of \(A_{ii}\) is indicated by the blue color in Fig. 4(c). Next, the potential coupling to its four nearest-neighboring lattice points is taken into account.
This is done by subsequently considering the points that are located to the right and the left of the lattice point \((m,n)\), that is, points labeled \(j=C(m+1,n)\) and \(j=C(m-1,n)\), and the lattice points just above and below \((m,n)\) that are labeled \(j=C(m,n+1)\) and \(j=C(m,n-1)\). For each of the points that are nearest-neighbors to lattice point \((m,n)\) and also are internal lattice points so that \(j>0\), one sets \(A_{ij}=-L^{2}/\delta_{\ell}^{2}=-4^{2\ell}\) [see Eq. (2)\(\times L^{2}\)]. Such elements are indicated by the green color in Fig. 4(c). In the same figure, the white color indicates vanishing (zero value) matrix elements. After completing the loop over the whole lattice, the coefficient matrix \(A\) is filled and the eigenmodes and eigenvalues can be computed. One should note that the coefficient matrix \(A\) is symmetric and positive definite. Hence, the eigenvalues are real and the eigenvectors can be chosen to be real; this is required for the physical quantities frequency and displacement. In passing, it should be noted that the matrix \(A\) has dimension \(M_{\ell}\times M_{\ell}\) where a good approximation for \(M_{\ell}\) is \(\left\lfloor(N_{\ell}+1)^{2}L^{2}/L_{\ell}^{2}\right\rfloor\). Furthermore, the majority of the elements of this matrix are zero, so it is a _sparse matrix_. Taking advantage of the sparsity of the coefficient matrix \(A\) is particularly important (to reduce memory requirements) if one wants to handle higher generation levels \(\ell\). Since each row of the matrix \(A\) can have at most 5 non-zero elements, a lower bound on its sparsity [14] is \(1-5/M_{\ell}\). Figure 4: The classification of the lattice points that are internal to the square Koch curve. (a) For the square Koch curve (solid black line) the blue solid dots represent internal lattice points. Points that are on the boundary or outside the fractal are not shown. The green box indicates the region that is detailed in panel (b) of this figure; (b) Assuming that the lattice is traversed column-by-column from the upper left corner, the values of the classification array corresponding to internal points, using the convention detailed in the main text, are presented; (c) The structure of the coefficient matrix represented by the left-hand side of Eq. (2), that is, the finite difference approximation to the negative Laplacian \(-\nabla^{2}\). Here blue squares represent the value \(4/h^{2}\), the green squares represent the value \(-1/h^{2}\), while the white squares represent the zero elements. ### Solving the eigensystem If the matrix \(A\) is stored as a dense matrix [15], the eigensystem is best solved by the routines ssyev/dsyev from the high-performance LAPACK library [16]. If instead the popular programming languages Python or C++ are used, the Python modules NumPy/SciPy [17; 18; 19; 20] or the library Armadillo [20] will provide the same capabilities, while Matlab has an eigensolver directly built into the language. Internally, all these approaches use the LAPACK library. On the other hand, if you should opt for storing the coefficient matrix \(A\) as a sparse matrix, ARPACK [21] is the workhorse eigensolver library and both SciPy and Armadillo have wrappers to this library. Furthermore, Matlab handles sparse matrices as part of the language. It should be mentioned that ARPACK also has the option of calculating a given number of the lowest eigenvalues and corresponding eigenvectors.
This option can be significantly faster than calculating the full set of eigenvalues and eigenvectors. Independently of how the eigensystem is solved, the result is a set of eigenvalues \(\{\lambda_{\nu}\}\) and the corresponding set of eigenvectors \(\{\mathbf{v}_{\nu}\}\) (with \(\nu=1,2,\ldots\)). Typically, the eigenvectors \(\mathbf{v}_{\nu}\) are returned with a given normalization; for instance, if LAPACK is used for the calculation, the eigenvectors are normalized to have unit \(L_{2}\)-norms. The calculated eigenvectors \(\mathbf{v}_{\nu}\) cannot be visualized directly. Instead, they have to be mapped back onto the lattice that was initially defined and assumed in setting up the eigensystem (a mapping from a vector to a portion of a matrix). To this end, an eigenmode matrix \(E_{\nu}\) is allocated to have the same dimensions as the lattice and the classification matrix \(C\). By performing a (column-by-column) double loop over the elements \(C(m,n)\) of the classification matrix [22], such a vector-to-matrix mapping can be achieved by exploiting how the classification matrix was defined [see Sec. III.3]. For points of the lattice \((m,n)\) that are not internal to the square Koch drum, indicated by \(C(m,n)\leq 0\), we put \(E_{\nu}(m,n)=0\), _i.e._ vanishing vertical displacement. However, for points of the lattice for which \(C(m,n)>0\), we set \(E_{\nu}(m,n)=v_{\nu}(i)\) where \(i=C(m,n)\) is a positive integer [see Sec. III.3 for details]. When the double loop over \(m\) and \(n\) finishes, the vector-to-matrix mapping is completed and the eigenmode can now be visualized by generating a contour plot of the eigenmode matrix \(E_{\nu}\) and superposing on it the boundary of the square Koch curve assumed in calculating the eigenmodes. In this way, we obtained the eigenmodes that will be presented below (Figs. 5 and 6). ## IV Results and Discussion The previous section detailed how to set up and solve the eigensystem \(A\mathbf{v}=\lambda\mathbf{v}\) that determines the eigenmodes and eigenfrequencies of the square Koch drum. Here we will present and discuss the results that can be obtained by doing so. It will be assumed that the boundary of the square Koch drum is generated at level \(\ell=4\) [23]. This value of \(\ell\) is high enough that the square Koch curve displays sufficient details without the resulting eigensystem taking too long to solve or requiring more memory than can be stored on a typical student laptop. For level \(\ell=4\) the discretization interval is \(\delta_{4}=L/4^{4}=L/256\) [Eq. (3)], and the width of the square Koch drum is \(L_{4}\approx 1.664L\) [Eq. (4)]. Furthermore, with these values, or from Eq. (6), it follows that the linear size of the square lattice is \(N_{4}+1=427\). Out of the \((N_{4}+1)^{2}=182\,329\) points, \(16\,384\) lattice points are boundary points, while there are \(M_{4}=57\,345\) internal lattice points for the square Koch drum (\(\ell=4\)). Therefore, the size of the eigensystem is \(M_{4}\times M_{4}\). Using single-precision floating-point numbers, dense storage of the coefficient matrix of the eigensystem will require about \(12.25\,\mathrm{GB}\) of memory. Since the sparsity of the matrix is over \(99.9\,\%\), only a fraction of this storage is required if sparse matrix storage is used.
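For readers who want a starting point, the sketch below assembles the sparse coefficient matrix from a classification array \(C\) (stored as a NumPy integer array, with \(C[m,n]=p>0\) for the \(p\)'th internal point and non-positive values otherwise) and computes the lowest 21 modes with ARPACK through SciPy. The helper name and implementation details are ours, not prescribed by the text.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def build_system(C, level):
    """Assemble the M x M coefficient matrix of Eq. (2) times L**2 from the
    classification array C. Neighbor look-ups never leave the array because
    the outermost ring of lattice points lies on or outside the curve."""
    M = int(C.max())                    # number of internal lattice points
    scale = 4.0 ** (2 * level)          # (L / delta_l)**2
    A = sp.lil_matrix((M, M))
    rows, cols = C.shape
    for m in range(rows):
        for n in range(cols):
            i = C[m, n]
            if i <= 0:                  # external or boundary point: skip
                continue
            A[i - 1, i - 1] = 4.0 * scale              # diagonal, 4**(2l + 1)
            for dm, dn in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                j = C[m + dm, n + dn]
                if j > 0:               # internal nearest neighbor
                    A[i - 1, j - 1] = -scale           # off-diagonal, -4**(2l)
    return A.tocsr()

A = build_system(C, level=4)
# 21 eigenvalues closest to zero, via ARPACK in shift-invert mode
vals, vecs = eigsh(A, k=21, sigma=0, which="LM")
order = np.argsort(vals)
Omega = np.sqrt(vals[order])            # dimensionless eigenfrequencies
modes = vecs[:, order]                  # columns are the eigenvectors v_nu
```

Since \(A\) is symmetric and positive definite, the shift-invert call with \(\sigma=0\) returns the smallest eigenvalues \(\lambda_{\nu}=\Omega_{\nu}^{2}\) without computing the full spectrum.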
It should be mentioned that the students do not typically have sufficient memory on their laptops for dense matrix storage when \(\ell\geq 4\); however, if they are using sparse storage, they are not expected to face this problem until \(\ell\geq 6\). For \(\ell=4\) the eigensystem was constructed using sparse matrix storage and solved as outlined in Sec. III. The calculation of the first 21 eigenmodes of the square Koch drum took only a few minutes on a typical desktop computer; the most time-consuming steps of the calculation were (_i_) to obtain the classification of the lattice points, needed for the system setup, and (_ii_) to solve the eigensystem. In this way we obtained the eigenmodes presented in Figs. 5 and 6. Here the calculated eigenvectors were mapped back onto the eigenmode matrix \(E_{\nu}\) and contour plots of these modes, with the boundary of the square Koch drum superimposed, were produced to visualize the calculated modes [see Sec. III.5 for details]. Figure 5(a) presents the fundamental eigenmode of the square Koch drum (at level \(\ell=4\)). It is found that the vertical displacement of this mode is concentrated around the center of the square Koch drum and the displacement values all have the same sign; therefore, no nodal lines exist for the fundamental mode, as expected from the Courant nodal domain theorem [24]. This feature is similar to the fundamental mode of the non-fractal square drum [Fig. 3(a)] [4; 25]. The corresponding dimensionless eigenfrequency is \(\Omega_{0}=9.4299\), a value that should be compared to the fundamental frequency of the square drum which is \(\widehat{\Omega}_{0}=\sqrt{2}\pi=4.4429\) [3; 4; 25]. Therefore, the ratio of these two fundamental frequencies is \(\Omega_{0}/\widehat{\Omega}_{0}=2.1225\), a ratio that Sapoval _et al._ reported to be \(2.100\) [3]. Reducing the generation level to \(\ell=3\), as assumed in the experiments by Sapoval _et al._, resulted in a reduced ratio \(\Omega_{0}/\widehat{\Omega}_{0}\) that still remained slightly higher than the experimental value. However, visually comparing the fundamental eigenmode in Fig. 5(a) to the fundamental mode depicted in Fig. 4(a) of Ref. [3] shows good agreement. With regard to the eigenmodes \(E_{1}\) and \(E_{2}\), seen in Figs. 5(b) and 5(c), we numerically find that \(\Omega_{1}\) equals \(\Omega_{2}\) to 10 decimal places [Table 1], which we interpret as a sign of _degeneracy_. The number of different eigenmodes corresponding to a particular eigenfrequency is known as the _degree of degeneracy_. It should be recalled that the first excited states of a square drum are also degenerate with a degree of degeneracy of two [4]. The following two eigenmodes, \(E_{3}\) and \(E_{4}\), are non-degenerate and their structures are presented in Figs. 5(d) and 5(e). For both these modes, the displacement is mainly in the four "wings" of the square Koch drum, while, for each mode, the displacement at the center of the drum is significantly lower. Hence, one observes four well-defined regions for which the displacement is significant. For mode \(E_{4}\), the displacement in these regions has the same sign, while for mode \(E_{3}\), two diagonally placed regions have positive displacement while the other two have negative displacement. The reason the \(E_{3}\) eigenmode does not have a rotated, degenerate eigenmode is discussed later in this section and can be explained on the basis of group theory. If we compare the eigenmodes \(E_{1}\)-\(E_{4}\) from Figs.
5(b)-(e) (and their eigenfrequencies) to the corresponding modes shown in Fig. 5 of Sapoval _et al._ [3], good qualitative agreement is found. It is remarked that the experimental displacement pattern presented in Fig. 1(b) can be obtained by a linear combination of the modes \(E_{1}\)-\(E_{4}\), as was explained in Ref. [3]. Figures 5(f)-(u) present the structure of the modes \(E_{\nu}\) for \(\nu=5\)-\(20\), and their corresponding eigenfrequencies are given in Table 1. Several of these modes are degenerate, like the modes that correspond to mode indices \(\nu=5,6\); \(\nu=9,10\); \(\nu=14,15\) and \(\nu=18,19\) [see Table 1]. Moreover, and as expected, one finds that the spatial complexity of the modes increases with the mode index. It is hard not to appreciate the aesthetic beauty of some of these higher-order modes depicted in Fig. 5. Table 1: The eigenfrequencies associated with the eigenmodes of the square Koch drum (\(\ell=4\)) depicted in Fig. 5. The columns of the table present the mode index \(\nu\), the dimensionless eigenfrequency \(\Omega_{\nu}\) and the degree of degeneracy \(g_{\nu}\), both for the square Koch drum, and finally the ratio \(\Omega_{\nu}/\widehat{\Omega}_{0}\) where \(\widehat{\Omega}_{0}=\sqrt{2}\pi\) is the dimensionless fundamental eigenfrequency of the corresponding classic square drum. Figure 5: (Color online) The lowest eigenmodes \(E_{0}\)–\(E_{20}\) of the square Koch drum at fractal generation level \(\ell=4\) that correspond to the lowest eigenfrequencies, which are listed in Table 1. These modes were obtained by solving the eigensystem as explained in Sec. III assuming the discretization interval \(\delta_{4}=L/4^{4}=L/256\). The blue and red colors represent negative and positive values for the vertical displacement, respectively. Many students found motivation in producing, on their own account, such appealing results. One may also wonder what some of the much higher-order modes of the square Koch drum look like. To this end, Fig. 6 presents the modes \(E_{1113}\)-\(E_{1115}\). The associated eigenfrequencies are given in the figure caption. The mode structure is rather complex, as expected, and \(E_{1114}\) and \(E_{1115}\) are, in fact, degenerate modes. We now turn to the symmetry properties of the eigenmodes presented in Figs. 5 and 6. These properties are determined by the symmetries of the eigenproblem (1). The square Koch curve [Fig. 3(c)] is invariant with respect to in-plane rotations of \(90^{\circ}\) about the center of the drum (for any value of \(\ell\)). Since the Helmholtz equation (1a) is rotationally invariant, the full solution to (1) displays in-plane \(90^{\circ}\)-rotational symmetry. The consequence of this symmetry for the eigenmodes is typically studied using group theory [26; 27]. The useful result to note from this theory is that when a symmetry operation of the problem is applied to one of its eigenmodes, the result will be a linear combination of the eigenmodes corresponding to the same eigenvalue [27]. This has the consequence that non-degenerate eigenmodes of the square Koch drum should, up to a constant, be \(90^{\circ}\)-rotationally symmetric about their center point.
For a \(g_{\nu}=2\) degenerate eigenmode, the prediction is that its in-plane rotation by \(90^{\circ}\) about its center should, due to the orthogonality of the eigenmodes, result in a constant times the other eigenmode that corresponds to the same eigenvalue. Close inspection of the modes in Figs. 5 and 6 reveals that the expected symmetry properties are indeed present in the calculated eigenmodes. In total, 11 of the 21 eigenmodes of the square Koch drum presented in Fig. 5 are non-degenerate [Table 1]. The dimensionless eigenfrequencies of the (non-fractal) square drum are \(\sqrt{m^{2}+n^{2}}\pi\) with \(m,n=1,2,\ldots\) [28; 4]. Among the first 21 eigenmodes of the square drum, only 4 modes are non-degenerate. The lower number of degenerate eigenmodes found for the square Koch drum as compared to the corresponding non-fractal square drum is due to the latter drum having a higher degree of symmetry. The classic square drum is also symmetric with respect to reflections about the first (horizontal) and second (vertical) axis [Fig. 2(a)] and with respect to the \(\pm 45^{\circ}\) diagonals. These symmetries are _not_ present for the square Koch drum. For this reason, some of the degeneracy that is present in the classic square drum is lifted for the corresponding square Koch drum. Additional symmetry in the shape of the drum increases the fraction of eigenmodes that are degenerate; at least, this is the case for the drums that we considered. ## V Conclusions The numerical experiment described in this paper provided students with a better understanding of the vibrational properties of fractal or extremely irregular structures. Important topics include the vibrations of fractal drums, their eigenfrequencies and corresponding eigenstates. Optionally, one could extend the study to include the density of states in order to examine the Weyl-Berry conjecture. The numerical experiment allows students to construct a fractal drum, calculate its eigenmodes, and visualize the vibrational modes. The students can change boundary conditions, vary certain dimensions, and observe the results. The assignment may be integrated into a computational physics class. Understanding students' concerns when solving a numerical problem allows the teacher to be more effective and help all their students take full advantage of the educational resources at their disposal. The ideal group size for conducting the proposed activities is two students, to allow for discussions between them. Furthermore, this problem will expose students to eigenvalue problems that are probably larger than any they have faced during their studies. In order to solve it, they have to generate the fractal structure, must master how to map an unorganized portion of a matrix of unknowns into a vector (required by the eigensolver), and must define the coefficient matrix that is associated with it. Since this matrix is quite sparse, the use of eigensolvers for sparse matrices will typically become a topic of interest. Last but not least, our experience in presenting and supervising this computational student project several times is that the students tend to enjoy it. Students typically find the project challenging but are still motivated to solve the problem; they are fascinated by the beauty of some of the eigenmodes of the square Koch drum. The hope is that others can benefit from our experience with this numerical student experiment. Many of the tasks in this numerical experiment presented students with novel challenges.
For example, students working on the classification of whether lattice points are inside or outside the fractal boundary struggled to find an efficient solution. Since some of the tasks in this work involve very large arrays, such as the coefficient matrix, every portion of the code must be optimized to yield a solution within a realistic time span. Students reported that while constructing and solving the eigensystem was relatively simple, optimizing this process was more challenging. Furthermore, they also reported that the scope and difficulty of the tasks of this numerical experiment improved their confidence in their own coding abilities, both for scientific numerical modeling and for software engineering. To assist instructors considering adopting the "fractal drum" project discussed in this paper, the formulation of the project as we used it in our course, including the step-by-step instructions for the students, is available as supplementary material in Ref. [29]. ###### Acknowledgements. V.P.S. acknowledges the Research Council of Norway through its Center of Excellence Funding Scheme, Project No. 262644 PoreLab, for allowing her the use of PoreLab's facilities, and I.S. thanks Dr. J.O. Fjaerestad for fruitful discussions on group theory. The authors gratefully acknowledge the anonymous referees and the editor whose constructive comments improved this paper.
2309.12519
Multimessenger astronomy driven by high-energy neutrinos
The possible connection between high energy neutrinos in the energy region above 100 TeV and ultrahigh energy cosmic rays (UHECRs) at energies above $10^{19}$ eV motivates multi-messenger observation approaches involving neutrinos and the multi-wavelength electro-magnetic (EMG) signals. We have constructed a generic unification scheme to model the neutrino and UHECR common sources. Finding the allowed space of the parameters on the source characteristics allows a case study to evaluate the likelihood of each of the known source classes being such unified sources. The likely source candidates are transient or flaring objects mainly in optical and X-ray bands. We propose the two feasible strategies to identify these sources. One is to introduce a sub-threshold triggering in a wide field of view X-ray observatory for following up neutrino detections, and the other is to search for EMG counterparts associated with detections of multiple neutrino events coming from the same direction within a time scale of $\lesssim 30$ days. Sources with a total neutrino emission energy greater than $\sim 10^{51}$ erg are accessible with the present or near-future high energy neutrino observation facilities collaborating with X-rays and optical telescopes currently in operation. The neutrino-driven multi-messenger observations provide a smoking gun to probe the hadronic emission sources we would not be able to find otherwise.
Shigeru Yoshida
2023-09-21T22:45:25Z
http://arxiv.org/abs/2309.12519v1
# Multimessenger astronomy driven by high-energy neutrinos

###### Abstract:

The possible connection between high-energy neutrinos in the energy region above 100 TeV and ultrahigh-energy cosmic rays (UHECRs) at energies above \(10^{19}\) eV motivates multi-messenger observation approaches involving neutrinos and multi-wavelength electromagnetic (EMG) signals. We have constructed a generic unification scheme to model the common sources of neutrinos and UHECRs. Finding the allowed parameter space of source characteristics enables a case study evaluating the likelihood of each known source class being such a unified source. The likely source candidates are transient or flaring objects, mainly in the optical and X-ray bands. We propose two feasible strategies to identify these sources. One is to introduce sub-threshold triggering in a wide-field-of-view X-ray observatory for following up neutrino detections, and the other is to search for EMG counterparts associated with detections of multiple neutrino events coming from the same direction within a time scale of \(\lesssim 30\) days. Sources with a total neutrino emission energy greater than \(\sim 10^{51}\) erg are accessible with present or near-future high-energy neutrino observation facilities collaborating with X-ray and optical telescopes currently in operation. The neutrino-driven multi-messenger observations provide a smoking gun to probe hadronic emission sources we would not be able to find otherwise.

## 1 Introduction

The cosmic background radiation in the ultra-high-energy (UHE) sky at \(\gg\) TeV is formed by cosmic rays and neutrinos. The precise, high-statistics measurements of ultrahigh-energy cosmic rays (UHECRs) by the Pierre Auger Observatory (PAO) have now revealed the detailed structure of their energy spectrum [1]. The IceCube Neutrino Observatory has discovered [2, 3] and measured high-energy neutrino radiation in the UHE sky [4, 5], opening an observation window with penetrating messengers to study UHE emissions. As shown in Fig. 1, we have found that the observed energy flux of high-energy neutrinos above \(\sim 100\) TeV is comparable to that of UHECRs at \(\gtrsim 10^{19}\) eV. This suggests that the neutrino radiation may originate in the same astronomical objects that radiate the UHECRs we have been detecting. It is plausible that the UHE cosmic radiation can be understood within a common unified scheme. We have built a generic unification model to account for the observed neutrinos at energies greater than \(\sim 100\) TeV and the UHECRs, based on the photo-hadronic interaction framework [8]. This modeling enables us to evaluate whether a given class of astronomical objects qualifies as a possible common origin of UHECRs and neutrinos. It is a viable tool for probing the UHECR origin with multi-messenger observations. In this article, we discuss the possibilities of UHECR-neutrino common sources for a broad spectrum of astronomical object classes with the generic unification model. We then discuss how we can identify these sources. As the viable source candidates are transients, we argue that neutrino follow-up observations in the optical and X-ray bands are feasible methods to find the sources of hadronic emissions. We propose practical strategies to pin down neutrino sources that could not be identified by neutrino observations alone.

Figure 1: The energy fluxes of the UHE cosmic background radiation. The small data points represent the UHECR fluxes measured by PAO [1].
The rest of the data shows the neutrino energy fluxes and the upper limits measured with IceCube. The thick black data points were obtained by the neutrino-induced cascade measurements [4]. The shaded region indicates the flux space consistent with the \(\nu_{\mu}\)-induced track measurements [5]. The blue data point shows the flux of the 6 PeV-energy \(\bar{\nu}_{e}\) event estimated by the dedicated search channel (PEPE) [6]. The thick line with arrows indicates the differential upper limit obtained by the Extremely-high Energy (EHE) neutrino search [7]. The neutrino fluxes are the all-flavor-sum fluxes \(E_{\nu}^{2}\Phi_{\nu_{e}+\nu_{\mu}+\nu_{\tau}}\).

Finally, we conclude that multi-messenger observations with neutrinos and with optical and X-ray photons can pioneer the field of high-energy astrophysics with presently and soon-to-be available facilities.

## 2 A generic unification model

We constructed a generic framework to describe a unified origin of UHECRs and high-energy neutrinos, with a parameterization that depends only weakly on the details of the source environment and on model-dependent micro-physics processes [8]. Our goal here is to derive generic and model-independent constraints on the characteristics of possible UHECR-neutrino common emitters. The resultant constraints are conservative and weaker than those obtained by model-dependent arguments applied to each specific class of astronomical objects, but the results based on our generic framework are universal and robust as long as the UHE particle emission mechanism is mostly determined in a simple setup, with physics processes well approximated by the leading-order effect. We invoke Occam's razor here to judge whether a given astronomical object class can be the unified origin. In this sense, our arguments provide guidance for astronomers conducting multi-messenger observations. We made the following assumptions to build our generic unification framework.

**One zone**: Cosmic-ray acceleration and the interactions producing secondary neutrinos occur in the same place.

**Escape from sources**: The energy spectrum of accelerated UHECRs running away from the acceleration zone is not drastically distorted in the escape process. This assumption may affect the UHECR energetics condition discussed below. Considering the uncertainties of the UHECR escape process, we use the measured UHECR intensity as an upper limit to constrain the source luminosity rather than fitting it with the spectrum calculated by the generic model.

**Optically thin environment**: The sources are optically thin for UHE protons interacting with photons, and their emission is directly observed without absorption. This assumption is valid for photo-hadronic scenarios with one-zone modelling.

**Photon spectrum**: The spectrum of photons interacting with UHECRs to produce neutrinos is described by a power law. We note, however, that a thermal photon yield can also be reasonably approximated by a power-law form, to within a factor of two, in the energy range relevant to the high-energy neutrino emission.

**Cosmological evolution**: The UHECR-neutrino common sources follow a cosmological evolution tracing the star formation rate (SFR) or a similar history. We nevertheless parameterize the evolution so that the conditions for different evolution cases can be estimated when necessary.
### The source modelling

The power of the unified sources is gauged by the bolometric photon luminosity \[L^{\prime}_{\gamma}=4\pi R^{2}c\int\limits_{\varepsilon_{\gamma}^{\rm min}}^{ \varepsilon_{\gamma}^{\rm max}}\frac{dn_{\gamma}}{d\varepsilon^{\prime}_{ \gamma}}\varepsilon^{\prime}_{\gamma}d\varepsilon^{\prime}_{\gamma}, \tag{1}\] where \(R\) is the distance of the UHECR acceleration and emission site from the source center. Primed (') characters represent quantities measured in the rest frame of the plasma moving with bulk Lorentz factor \(\Gamma\). The photon density spectrum follows a power-law form, \[\frac{dn_{\gamma}}{d\varepsilon^{\prime}_{\gamma}}=\frac{K^{\prime}_{\gamma}}{ \varepsilon^{\prime}_{\gamma 0}}\left(\frac{\varepsilon^{\prime}_{\gamma}}{ \varepsilon^{\prime}_{\gamma 0}}\right)^{-\alpha_{\gamma}}, \tag{2}\] where \(\varepsilon^{\prime}_{\gamma 0}\) is the reference energy in the engine frame, associated with the representative energy of UHECRs \(\varepsilon^{\Delta}_{p0}\) by \[\varepsilon_{\gamma 0}=\frac{(s_{R}-m_{p}^{2})}{4}\frac{\Gamma^{2}}{\varepsilon ^{\Delta}_{p0}}, \tag{3}\] where \(s_{R}\approx 1.47\) GeV\({}^{2}\) is the Mandelstam variable at the \(\Delta\) resonance in photopion production. The representative CR energy \(\varepsilon^{\Delta}_{p0}\) is set to 10 PeV in the present formulation, as this energy range of cosmic-ray protons should produce the PeV-energy neutrinos IceCube has detected. The spectrum of UHECRs emitted from the sources is assumed to follow a power law with index \(\alpha_{\rm CR}\). The bolometric luminosity of UHECRs at energies above \(\varepsilon^{\Delta}_{p0}=10\) PeV is connected to \(L_{\gamma}\) via the CR loading factor \(\xi_{\rm CR}\) as \(L_{\rm CR}\approx\xi_{\rm CR}L_{\gamma}=\xi_{\rm CR}L^{\prime}_{\gamma}\Gamma^{2}\). The neutrino luminosity for a given \(L_{\rm CR}\) is determined by the \(p\gamma\) interaction optical depth, the average number of interactions before cosmic-ray protons escape from the interaction site. It is approximately given by [8] \[\tau_{p\gamma}(\varepsilon_{i}) \approx \tau_{p\gamma 0}\Bigg{(}\frac{\varepsilon_{i}}{\tilde{\varepsilon}^{ \Delta}_{p0}}\Bigg{)}^{\alpha_{\gamma}-1} \tag{4}\] \[\approx \left[\frac{2}{1+\alpha_{\gamma}}\frac{K^{\prime}_{\gamma}R}{ \Gamma}\int ds\frac{\sigma_{p\gamma}(s)}{s-m_{p}^{2}}\right]\Bigg{(}\frac{ \varepsilon_{i}}{\tilde{\varepsilon}^{\Delta}_{p0}}\Bigg{)}^{\alpha_{\gamma }-1}\] \[\approx \frac{B^{\prime}}{\Gamma^{2}}\sqrt{\frac{L^{\prime}_{\gamma}}{ \xi_{B}}}C(\alpha_{\gamma},\tilde{\varepsilon}^{\Delta}_{p0})\Bigg{(}\frac{ \varepsilon_{i}}{\tilde{\varepsilon}^{\Delta}_{p0}}\Bigg{)}^{\alpha_{\gamma} -1}. \tag{5}\] In going from Eq. (4) to Eq. (5), the explicit dependence on \(R\) is eliminated by considering the energy density balance between the photon radiation \(L^{\prime}_{\gamma}\) and the magnetic energy with B-field strength \(B^{\prime}\), via the equipartition parameter \(\xi_{B}\).
The constant \(C\) depends only on the photon spectrum power-law index \(\alpha_{\gamma}\) and the representative CR energy \(\varepsilon_{p0}^{\Delta}\), and is approximately given by \[C(\alpha_{\gamma},\tilde{\varepsilon}_{p0}^{\Delta}) \sim 2.4\times 10^{-24}\ {\rm erg^{-1}\ cm^{3/2}\ s^{1/2}} \tag{6}\] \[\times \left(\frac{2}{1+\alpha_{\gamma}}\right)\left(\frac{\tilde{ \varepsilon}_{p0}^{\Delta}}{10\ {\rm PeV}}\right).\]

### The required source conditions

A UHECR source must meet the following necessary conditions: the acceleration, escape, and survival requirements. In order to accelerate cosmic rays to the UHE range, the acceleration time must be shorter than the dynamical time scale. This sets the lower bound for the magnetic field, \[B^{\prime}\gtrsim\frac{\varepsilon_{i}^{\rm max}}{eZ}\frac{\eta}{R}\approx 1.1 \times 10^{5}\eta\left(\frac{R}{3\times 10^{12}\ {\rm cm}}\right)^{-1}\left(\frac{ \varepsilon_{i}^{\rm max}}{Z10^{11}\ {\rm GeV}}\right)\ {\rm G}, \tag{7}\] where \(\eta\gtrsim 1\) is the particle acceleration efficiency term. This condition, also known as the Hillas condition when \(\eta\rightarrow\beta^{-2}\), can be transformed into a constraint on the target photon luminosity \(L^{\prime}_{\gamma}\), the gauge of the source engine power in the present generic modelling scheme, \[L^{\prime}_{\gamma} \geq \frac{1}{2}\xi_{B}^{-1}c\eta^{2}\beta^{2}\left(\frac{\varepsilon_ {i}^{\rm max}}{Ze}\right)^{2} \tag{8}\] \[\simeq 1.7\times 10^{45}\ \xi_{B}^{-1}\eta^{2}\beta^{2}\left(\frac{ \varepsilon_{i}^{\rm max}}{Z10^{11}\ {\rm GeV}}\right)^{2}\ \ \ {\rm erg/s}. \tag{9}\] In addition, to ensure that UHECRs can leave the sources before losing their energies, the escape time scale must be shorter than the cosmic-ray energy loss time scale. The energy-loss processes consist of \(p\gamma\) photo-meson production, Bethe-Heitler (BH) interactions, and synchrotron cooling. The photo-meson production time scale is essentially set by the \(p\gamma\) optical depth \(\tau_{p\gamma}\) in the present scheme, and for any source with \(\tau_{p\gamma}(\varepsilon_{i}^{\rm max})\lesssim 1\) the energy loss by \(p\gamma\) photo-meson production is not a deciding factor limiting the UHECR acceleration and escape processes. We examine this \(\tau_{p\gamma}\) condition by estimating \(\tau_{p\gamma}\) using Eq. (5) in Section 2.4. As the BH process is in general important only for soft photon spectra (\(\alpha_{\gamma}\gtrsim 2\)), the UHECR escape condition is formulated as a necessary condition by requiring the dynamical time scale to be shorter than the synchrotron cooling time scale. This condition translates into an upper bound on the \(p\gamma\) optical depth at the cosmic-ray reference energy \(\varepsilon_{p0}^{\Delta}=10\) PeV, \(\tau_{p\gamma 0}\): \[\tau_{p\gamma 0} \lesssim 6\times 10^{-1}\frac{2}{1+\alpha_{\gamma}}\left(\frac{\xi_{B}}{0.1 }\right)^{-1}\left(\frac{A}{Z}\right)^{4}\left(\frac{\varepsilon_{i}^{\rm max }}{10^{11}\ {\rm GeV}}\right)^{-1}. \tag{10}\] If the measured bulk of UHECRs is dominated by heavier nuclei rather than nucleons, as strongly indicated by the PAO data, a further severe requirement must be satisfied - the nuclei survival condition [9]. That is, we require that nuclei with \(A>1\) and \(Z>1\) are accelerated and survive.
This is possible only if the time scale of photo-disintegration is slower than the dynamical time scale, which leads to the condition \(\tau_{A\gamma}\lesssim A\) on the photo-disintegration optical depth. We found that this condition sets the upper bound of the \(p\gamma\) optical depth as [8] \[\tau_{p\gamma 0} \lesssim A\frac{\int ds\frac{\sigma_{p\gamma}(s)}{s-m_{p}^{2}}}{\int ds \frac{\sigma_{A\gamma}(s)}{s-m_{A}^{2}}}\Bigg{[}\left(\frac{s_{\rm GDR}-m_{A}^ {2}}{s_{R}-m_{p}^{2}}\right)\left(\frac{\tilde{\varepsilon}_{p0}^{\Delta}}{ \varepsilon_{i}^{\rm max}}\right)\Bigg{]}^{\alpha_{\gamma}-1} \tag{11}\] \[\lesssim 0.4\,\left(\frac{A}{56}\right)^{0.79}=0.2\left(\frac{A}{24} \right)^{0.79}.\]

### The constraints due to the UHECR and neutrino fluxes

UHECR sources in the unification scenario must provide both a UHECR flux and a high-energy neutrino flux that are consistent with the measurements. The analytical formulations for calculating the spectrum of UHECRs at Earth and that of the secondary neutrinos were derived in Refs. [8, 10] to place constraints on the source parameters \(L^{\prime}_{\gamma}\), \(\tau_{p\gamma 0}\), and the boosted source number density \(N_{\Gamma}\equiv n_{0}\xi_{\rm CR}\Gamma^{2}\), where \(n_{0}\) is the comoving number density in the present epoch. Note that \(L^{\prime}_{\gamma}N_{\Gamma}=\xi_{\rm CR}L_{\gamma}n_{0}=L_{\rm CR}n_{0}\) is the bolometric luminosity density of UHECRs above \(\varepsilon_{p0}^{\Delta}=10\) PeV. Figure 2 shows the resultant constraints. The region enclosed by the solid lines is the parameter space allowed by the flux conditions. The conditions were set by criteria that ensure consistency with the IceCube neutrino measurements, including the flux upper limit at \(\varepsilon_{\nu}\gtrsim 100\) PeV, and that the UHECR flux at Earth does not exceed the integral flux above \(10^{19}\) eV measured by PAO.

Figure 2: The allowed region in the parameter space of luminosity per unit volume, \(L^{\prime}_{\gamma}{\cal N}_{\Gamma}\), and damping factor \(1-e^{-\tau_{p\gamma 0}}\) [8]. The region enclosed by the solid lines displays the space allowed by the UHECR and the neutrino flux requirements. The shaded region represents the parameter space also allowed by the UHECR proton escape condition or the nuclear survival condition. The left panel shows the proton model, while the right panel shows the case of primary silicon nuclei.

We can interpret the allowed parameter space shown in Fig. 2 in the context of the UHECR energetics and the analytical estimate of the fiducial neutrino flux. The UHECR differential luminosity density is estimated as [11] \[E_{\rm CR}\frac{dQ_{\rm CR}}{dE_{\rm CR}} \approx 6.3\times 10^{43}[{\rm erg~{}Mpc^{-3}yr^{-1}}]~{}\left(\frac{E_{ \rm CR}}{10^{19.5}~{}{\rm eV}}\right)^{2-\alpha_{\rm CR}} \tag{12}\] \[\approx \left\{\begin{array}{ll}1.8\times 10^{44}~{}[{\rm erg~{}Mpc^{-3} yr^{-1}}]&\alpha_{\rm CR}=2.3,E_{\rm CR}=10^{18}~{}{\rm eV}\\ 3.4\times 10^{44}~{}[{\rm erg~{}Mpc^{-3}yr^{-1}}]&\alpha_{\rm CR}=2.5,E_{\rm CR }=10^{18}~{}{\rm eV}\end{array}\right.\] As a representative case, we consider \(\alpha_{\rm CR}=2.3\) hereafter.
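As a quick numerical cross-check of Eq. (12), a minimal sketch in plain Python (our own illustration, not code from the paper):

```python
# Eq. (12): E_CR dQ/dE_CR ~ 6.3e43 erg Mpc^-3 yr^-1 * (E_CR / 10^19.5 eV)^(2 - alpha_CR)
def uhecr_luminosity_density(E_eV, alpha_cr):
    return 6.3e43 * (E_eV / 10**19.5) ** (2.0 - alpha_cr)

print(uhecr_luminosity_density(1e18, 2.3))  # ~1.8e44, as quoted
print(uhecr_luminosity_density(1e18, 2.5))  # ~3.5e44, quoted as 3.4e44
```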
From Eq. (12), the resultant bolometric UHECR luminosity density above the reference energy \(\varepsilon_{p0}^{\Delta}=10\) PeV is given by \[n_{0}\xi_{\rm CR}L_{\gamma} \approx 13E_{\rm CR}\frac{dQ_{\rm CR}}{dE_{\rm CR}}|_{E_{\rm CR}=10^{18}~ {}{\rm eV}} \tag{13}\] \[\approx 2.3\times 10^{45}~{}~{}{\rm erg~{}Mpc^{-3}yr^{-1}},\] which is consistent with the allowed region of the parameter space. This energetics condition effectively sets the requirement on the CR loading factor for a given \(L_{\gamma}\) and \(n_{0}\): \[\xi_{\rm CR}\approx 0.7\left(\frac{L_{\gamma}}{10^{46}{\rm erg/s}}\right)^{-1} \left(\frac{n_{0}}{10^{-8}{\rm Mpc^{-3}}}\right)^{-1}. \tag{14}\] The neutrino emissivity from a source is connected to the primary UHECR emissivity as [12] \[\varepsilon_{\nu}^{2}\frac{d\dot{N}_{\nu}}{d\varepsilon_{\nu}}\approx\xi_{\pi} <x><y_{\nu}>\tau_{p\gamma}\varepsilon_{\rm CR}^{2}\frac{d\dot{N}_{\rm CR}}{d \varepsilon_{\rm CR}}A^{2-\alpha_{\rm CR}} \tag{15}\] for a hadronically thin (i.e., \(\tau_{p\gamma}\lesssim 5\)) source. Here \(\xi_{\pi}\sim 3/2\) is the average multiplicity of neutrinos from a single pion produced by the photo-hadronic interaction, \(<y_{\nu}>\sim 1/4\) is the average fraction of energy channeled into a neutrino from the secondary pion, and \(<x>\sim 0.2\) is the average inelasticity of the \(p\gamma\) collision. Since the energy flux of the high-energy cosmic background neutrinos can be approximately written using the source emissivity \(\varepsilon_{\nu}^{2}d\dot{N}_{\nu}/d\varepsilon_{\nu}\), we can relate the \(p\gamma\) optical depth to the required bolometric UHECR luminosity density for a given neutrino energy flux via Eq. (15). We get \[\tau_{p\gamma 0}L_{\gamma}^{\prime}{\cal N}_{\Gamma}=\tau_{p \gamma 0}n_{0}\xi_{\rm CR}L_{\gamma} \approx 9.3\times 10^{43}\left(\frac{E_{\nu}^{2}\Phi_{\nu}}{2\times 10^{-8}~{}[{ \rm GeV~{}cm^{-2}~{}sec^{-1}sr^{-1}}]}\right) \tag{16}\] \[\times\left(\frac{\xi_{z}}{2.8}\right)^{-1}A^{0.3}~{}~{}~{}{\rm erg ~{}Mpc^{-3}yr^{-1}},\] for \(\alpha_{\rm CR}=2.3\). Here \(\xi_{z}\equiv(1/t_{\rm H})\int dt\Psi(z)/(1+z)\) is a dimensionless parameter that depends on the redshift evolution function \(\Psi(z)\) of the sources. This relation represents the allowed parameter space shown in Fig. 2 well. Combining the UHECR luminosity density given by Eq. (13) with this fiducial neutrino flux condition, Eq. (16), sets the lower bound of the \(p\gamma\) optical depth, \[\tau_{p\gamma 0} \gtrsim 0.04A^{0.3}\left(\frac{\xi_{z}}{2.8}\right)^{-1} \tag{17}\] \[\gtrsim 0.1\left(\frac{A}{28}\right)^{0.3}\left(\frac{\xi_{z}}{2.8} \right)^{-1}.\] As the UHECR escape condition sets the _upper_ bound of the optical depth (see Eq. (10)), this fiducial neutrino flux requirement leads to a necessary condition on the B-field equipartition parameter: \[\xi_{\rm B}\lesssim 1.5\left(\frac{A}{Z}\right)^{4}\left(\frac{\xi_{z}}{2.8} \right)\left(\frac{\varepsilon_{i}^{\rm max}}{10^{11}\ {\rm GeV}}\right)^{-1}\!A^{-0.3} \tag{18}\]

### The case study

Tables 1 and 2 list the various source classes together with their characteristic parameters and the constraints on \(\xi_{B}\) and \(\tau_{p\gamma 0}\) imposed by the conditions we discussed earlier. The AGN corona [13] is already disfavored as a common UHECR-neutrino source, regardless of model-dependent arguments. The conditions on \(\tau_{p\gamma}\) from the UHECR escape and the fiducial neutrino flux requirements cannot hold concurrently.
This is primarily because the corona is expected to be strongly magnetized (\(\xi_{\rm B}\gg 1\)). The approximate estimate using Eq. (5) gives \(\tau_{p\gamma}\approx 100\left(B^{\prime}/0.3{\rm kG}\right)\left(L_{\gamma}/ 10^{44}{\rm erg/s}\right)^{\frac{1}{2}}\left(\xi_{B}/13\right)^{-\frac{1}{2}} \left(\beta/0.02\right)^{-1}\left(\varepsilon_{i}/\tilde{\varepsilon}_{p0}^{ \Delta}\right)^{0.8}\), indicating that the AGN corona may be a TeV-PeV neutrino source candidate, but certainly not a UHECR source, since \(\tau_{p\gamma}\gg 1\) prevents hadrons from being accelerated to the UHE range. BL Lacs [14] also face the contradictory \(\tau_{p\gamma}\) conditions demanded by the UHECR escape and fiducial neutrino flux requirements, which makes them difficult to consider as common sources. The estimated optical depth is \(\tau_{p\gamma 0}\approx 4\times 10^{-6}\left(B^{\prime}/1{\rm G}\right)\left( \Gamma/10\right)^{-3}\left(L_{\gamma}/2\times 10^{44}{\rm erg/s}\right)^{\frac{1}{2}} \left(\xi_{B}/80\right)^{-\frac{1}{2}}\), which indeed suggests that BL Lacs may be UHECR sources but are too dark in neutrinos. UHE emission from FSRQs [15] is an interesting possibility. The UHECR escape and the neutrino flux conditions can be concurrent in principle, provided that \(\tau_{p\gamma 0}\) falls in the range \(\sim 0.1\)-\(1\).

Table 1: The parameters of the neutrino emission characteristics in the unification model, and the constraints on \(\xi_{B}\), the B-field equipartition parameter, and on \(\tau_{p\gamma 0}\), the photo-hadronic optical depth at the cosmic-ray reference energy \(\varepsilon_{p0}^{\Delta}=10\) PeV, imposed by the conditions for UHECR-neutrino common sources. The various sites/populations in the AGN family are listed.

| | AGN corona | BL Lac | FSRQ | Radio Gal. MAD |
| --- | --- | --- | --- | --- |
| \(\Gamma\) of the target photons | \(\approx 1\) | \(\approx 10\) | \(\approx 1\) | \(\approx 1\) |
| target photon energy: Eq. (3) | opt/UV (X-ray) | X-ray | UV/Opt | UV/Opt |
| \(L_{\gamma}\) [erg/s] | \(10^{44}\) | \(2\times 10^{44}\) | \(4\times 10^{46}\) | \(10^{41}\) |
| \(n_{0}\) [Mpc\({}^{-3}\)] | \(5\times 10^{-6}\) | \(3\times 10^{-7}\) | \(3\times 10^{-10}\) | \(2\times 10^{-6}\) |
| \(B^{\prime}\) [G] | 300 | \(\approx 1\) | 1 | 100 |
| \(\xi_{\rm B}\) | 13 | 80 | 0.1 | \(\gg 10^{2}\) |
| Acceleration: Eq. (9) | \(\xi_{\rm B}\gtrsim 0.007\) | \(\xi_{\rm B}\gtrsim 4.3\) | \(\xi_{\rm B}\gtrsim 0.004\) | \(\xi_{\rm B}\gtrsim 8.7\) |
| \(\tau_{p\gamma 0}\) by Escape: Eq. (10) | \(\lesssim 0.005\) | \(\lesssim 7.5\times 10^{-4}\) | \(\lesssim 0.6\) | \(\lesssim 8\times 10^{-8}\) |
| \(\tau_{p\gamma 0}\) by Nuclei survival: Eq. (11) | \(\lesssim 0.2\,(A/24)^{0.79}\) | \(\lesssim 0.2\,(A/24)^{0.79}\) | \(\lesssim 0.2\,(A/24)^{0.79}\) | \(\lesssim 0.2\,(A/24)^{0.79}\) |
| \(\tau_{p\gamma 0}\) by \(\nu\) Flux: Eq. (17) | \(\gtrsim 0.1\) | \(\gtrsim 0.1\) | \(\gtrsim 0.5\) | \(\gtrsim 0.1\) |
| \(\xi_{\rm CR}\): Eq. (14) | \(\approx 0.14\) | \(\approx 1.2\) | \(\approx 5.8\) | \(\approx 350\) |
Using Eq. (5), we indeed estimate \(\tau_{p\gamma}\approx 1.4\left(B^{\prime}/1\mathrm{G}\right)\left(L_{\gamma}/4 \times 10^{46}\mathrm{erg/s}\right)^{\frac{1}{2}}\left(\xi_{B}/0.1\right)^{- \frac{1}{2}}\left(\varepsilon_{i}/\tilde{\varepsilon}_{p0}^{\Delta}\right)^{0.5}\), which is in the plausible range for the FSRQ system. The high photon luminosity (\(L_{\gamma}\gtrsim 10^{46}\) erg/s) meets the acceleration condition given by Eq. (9) for protons (\(Z=1\)) even if \(\xi_{\rm B}\ll 1\), while the nuclei survival condition can be only barely satisfied. This implies that FSRQs may be the common origin of UHECR protons and neutrinos, though not of UHECR nuclei. However, this hypothesis is disfavored, since strongly evolved sources such as FSRQs are unlikely to be UHECR origins if the proton component is not negligible in the UHE range [16, 17]. This is because the GZK cosmogenic neutrinos would then overshoot the present upper limit on the neutrino flux in the EeV (\(10^{18}\) eV) range placed by IceCube and PAO. A scenario of hadronic emission from magnetically arrested disks (MAD) in a subclass of radio galaxies has been proposed [18], motivated by the GeV-TeV gamma-ray observations of nearby radio galaxies. Cosmic-ray acceleration is plausible within the framework of radiatively inefficient accretion flows (RIAFs). Because of the highly magnetized environment (\(\xi_{\rm B}\gg 100\)), MADs may produce UHECR nuclei, though UHECR protons cannot escape. The scenario certainly meets the acceleration condition given by Eq. (9) for silicon (\(Z=14\)). The energetics condition, Eq. (14), requires \(\xi_{\rm CR}\gg 10\), but this constraint can be relaxed if we extend the model to the entire Fanaroff-Riley I population by including high-excitation radio galaxies, bringing \(n_{0}\approx 10^{-4}\) Mpc\({}^{-3}\). However, MADs are unlikely to emit neutrinos with the intensity measured by IceCube. The approximate estimate using Eq. (5) gives \(\tau_{p\gamma}\approx 2.5\times 10^{-3}\left(B^{\prime}/100\mathrm{G}\right) \Gamma^{-3}\left(L_{\gamma}/10^{41}\mathrm{erg/s}\right)^{\frac{1}{2}}\left( \xi_{B}/3.7\times 10^{5}\right)^{-\frac{1}{2}}\left(\varepsilon_{i}/\tilde{ \varepsilon}_{p0}^{\Delta}\right)^{1.2}\), which is far below the fiducial neutrino flux condition, Eq. (17). This is primarily because the MAD is optically too thin, given the low luminosity of the target photons (optical/UV), \(L_{\gamma}\sim 10^{41}\) erg/s. Note that the high magnetic field strength would cause synchrotron cooling of the muons produced in the photo-hadronic collisions, which suppresses neutrinos with energies higher than \(\sim 10\) PeV.
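The case-by-case estimates above all follow from Eqs. (3), (5), and (6); the following minimal sketch (our own illustration, not code from the paper) evaluates the representative target photon energy and the fiducial optical depth for the BL Lac parameters of Table 1:

```python
import math

S_R = 1.47       # Mandelstam s at the Delta resonance [GeV^2]
M_P2 = 0.880     # proton mass squared [GeV^2]
EPS_P0 = 1.0e7   # reference CR energy, 10 PeV [GeV]

def target_photon_energy_GeV(Gamma):
    """Eq. (3): representative target photon energy for eps_p0 = 10 PeV."""
    return (S_R - M_P2) / 4.0 * Gamma**2 / EPS_P0

def tau_pgamma0(B_G, Gamma, L_gamma, xi_B, alpha_gamma=1.0):
    """Eqs. (5)-(6) with L'_gamma = L_gamma / Gamma^2, so that
    tau ~ C * B' * Gamma^-3 * sqrt(L_gamma / xi_B).  Non-relativistic
    flows pick up extra factors (e.g. 1/beta) that appear in the
    paper's case-by-case estimates but are omitted here."""
    C = 2.4e-24 * 2.0 / (1.0 + alpha_gamma)  # erg^-1 cm^(3/2) s^(1/2)
    return C * B_G / Gamma**3 * math.sqrt(L_gamma / xi_B)

# BL Lac entries of Table 1: X-ray target photons (~keV) and tau ~ 4e-6.
print(target_photon_energy_GeV(Gamma=10.0) * 1e6)  # in keV: ~1.5
print(tau_pgamma0(B_G=1.0, Gamma=10.0, L_gamma=2e44, xi_B=80.0))
```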
Powerful transient objects (Table 2) have also been discussed in the literature as candidate UHECR origins. Jetted TDEs cannot be UHECR proton sources because of the escape requirement, but they are potential UHE nuclei sources [19]. However, the tight margin between the nuclei survival condition and the fiducial neutrino flux requirement demands fine parameter tuning for this object class to qualify as a UHECR-neutrino common source. The approximate estimate indeed gives \(\tau_{p\gamma}\approx 0.34\left(B^{\prime}/500{\rm G}\right)\left(\Gamma/10\right)^ {-3}\left(L_{\gamma}/10^{47}{\rm erg/s}\right)^{\frac{1}{2}}\xi_{B}^{-\frac{1} {2}}\left(\varepsilon_{i}/\tilde{\varepsilon}_{p0}^{\Delta}\right)\), which could barely meet both conditions, considering the unavoidable uncertainties of the parameter values. The more serious issue is that it is even more difficult to meet the energetics condition. The resultant CR loading factor \(\xi_{\rm CR}\gg 10\) raises questions about the credibility of the TDE scenario. Non-jetted TDEs are more abundant objects and may alleviate the energetics issue.

Table 2: Same as Table 1, but for various transient objects.

| | jetted TDE | TDE corona | Low Luminosity GRB | Engine-driven SNe |
| --- | --- | --- | --- | --- |
| \(\Gamma\) of the target photons | \(\approx 10\) | \(\approx 1\) | \(\approx 2\)-\(10\) | \(\approx 1\) |
| target photon energy: Eq. (3) | X-ray | opt/UV (X-ray) | X-ray | UV/Opt |
| \(L_{\gamma}\) [erg/s] | \(10^{47}\) | \(3\times 10^{43}\) | \(10^{47}\) | \(3\times 10^{45}\) |
| \(n_{0}\) [Mpc\({}^{-3}\)] | \(3\times 10^{-12}\) | \(4\times 10^{-7}\) | \(3\times 10^{-11}\) | \(10^{-9}\) |
| \(B^{\prime}\) [G] | 500 | \(10^{3}\) | 80 | 1.6 |
| \(\xi_{\rm B}\) | 1 | 45 | 0.1 | 4 |
| Acceleration: Eq. (9) | \(\xi_{\rm B}\gtrsim 0.009\) | \(\xi_{\rm B}\gtrsim 0.01\) | \(\xi_{\rm B}\gtrsim 0.009\) | \(\xi_{\rm B}\gtrsim 0.3\) |
| \(\tau_{p\gamma 0}\) by Escape: Eq. (10) | \(\lesssim 0.06\) | \(\lesssim 1.3\times 10^{-3}\) | \(\lesssim 0.6\) | \(\lesssim 0.015\) |
| \(\tau_{p\gamma 0}\) by Nuclei survival: Eq. (11) | \(\lesssim 0.2\,(A/24)^{0.79}\) | \(\lesssim 0.2\,(A/24)^{0.79}\) | \(\lesssim 0.2\,(A/24)^{0.79}\) | \(\lesssim 0.2\,(A/24)^{0.79}\) |
| \(\tau_{p\gamma 0}\) by \(\nu\) Flux: Eq. (17) | \(\gtrsim 0.5\) | \(\gtrsim 0.3\) | \(\gtrsim 0.1\) | \(\gtrsim 0.1\) |
| \(\xi_{\rm CR}\): Eq. (14) | \(\approx 220\) | \(\approx 5.6\) | \(\approx 24\) | \(\approx 24\) |
The wind driven by the TDE or the possible corona may be a plausible site for cosmic-ray acceleration [20]. The drawback is their possibly optically thick environment. The TDE corona scenario expects \(\tau_{p\gamma}\approx 20\left(B^{\prime}/1{\rm kG}\right)\Gamma^{-3}\left(L_{ \gamma}/3\times 10^{43}{\rm erg/s}\right)^{\frac{1}{2}}\left(\xi_{B}/45 \right)^{-\frac{1}{2}}\left(\beta/0.1\right)^{-1}\left(\varepsilon_{i}/\tilde {\varepsilon}_{p0}^{\Delta}\right)^{0.8}\), and UHE nuclei cannot survive. The high magnetic field (\(\sim\) kG) naturally implies a photon environment dense enough to break down nuclei via photo-disintegration. UHECR protons cannot escape either. Low-luminosity GRBs [21] are among the most promising candidates for the unification scenario. The estimated optical depth, \(\tau_{p\gamma}\approx 0.1\left(B^{\prime}/80{\rm G}\right)\left(\Gamma/10 \right)^{-3}\left(L_{\gamma}/10^{47}{\rm erg/s}\right)^{\frac{1}{2}}\left(\xi_{B}/0.1\right)^{-\frac{1}{2}}\left(\varepsilon_{i}/\tilde{ \varepsilon}_{p0}^{\Delta}\right)^{1.2}\), meets all the conditions listed in Table 2. The CR loading factor required by the UHECR energetics may be high, but this can be relaxed, given that the rate density \(\rho_{0}\sim 3\times 10^{-7}~{}{\rm Mpc}^{-3}{\rm yr}^{-1}\) is still quite uncertain. External acceleration during the afterglow of engine-driven SNe has also been among the UHECR nuclei emission models [22]. As seen in Table 2, the UHECR escape condition prevents the afterglow from being a common source of UHECR protons and neutrinos, regardless of model-dependent arguments. Just as for the other transients, there is a (tight) margin to meet both the nuclei survival condition and the fiducial neutrino flux requirement. However, the estimate using Eq. (5) gives \(\tau_{p\gamma}\approx 1.1\times 10^{-4}\left(B^{\prime}/1.6{\rm G}\right) \left(\Gamma/10\right)^{-3}\left(L_{\gamma}/3\times 10^{45}{\rm erg/s}\right)^{\frac{1}{2}}\left(\xi_{B}/4 \right)^{-\frac{1}{2}}\left(\varepsilon_{i}/\tilde{\varepsilon}_{p0}^{\Delta }\right)^{0.7}\), which does not satisfy the neutrino flux condition. This source class is hadronically too thin to explain the 100 TeV-PeV neutrinos measured by IceCube.

## 3 Identification of UHECR sources with neutrino follow-up observations

Many of the potential high-energy neutrino/UHECR sources are transient emitters. Low-luminosity GRBs and jetted TDEs are representative examples. SNe strongly interacting with circumstellar material and other non-relativistic transients may contribute to the TeV neutrino sky [23, 24], although they are unlikely to be sources of UHECRs. Moreover, the neutrino emission from jets in AGNs (e.g., FSRQs) is expected to occur in flares rather than in a steady manner [25]. Searching for electromagnetic (EMG) counterparts with follow-up observations triggered by a neutrino detection is thus a powerful method to identify neutrino or UHECR sources. It is straightforward to find a neutrino-EMG association for a rare type of object. The GeV-energy gamma-ray blazars detected by Fermi-LAT belong to this category. However, more abundant classes of objects, such as SNe or low-luminosity GRBs, would yield more frequent chance coincidences between neutrino and EMG detections, which makes it challenging to claim robust associations. Moreover, the optical sky is filled with many SNe irrelevant to neutrino emission (e.g., Type Ia), and these cause significant contamination in optical follow-ups.
The longer duration of transient neutrino emission, reaching weeks for circumstellar SNe or TDEs, would yield even more severe contamination by unrelated SNe. The simple multi-messenger strategy faces difficulty here. There are two feasible solutions. The first approach is to conduct follow-up observations in the X-ray band. The X-ray sky is quieter, as SNe are not luminous X-ray transients. The drawback is that many of the neutrino source candidates we discussed above may not be bright enough for the existing X-ray telescopes with rapid follow-up capability. Low-luminosity GRBs are a good example. Their dim luminosity of \(L_{\rm X}\sim 10^{47}\) erg/s is below the regular detection threshold unless a progenitor happens to be located in the neighborhood of our Galaxy. It is, therefore, necessary to implement a sub-threshold detection trigger on an X-ray observatory with a wide field of view. The Monitor of All-sky X-ray Image (MAXI) telescope [26] can meet this demand, as it regularly monitors the whole sky at the single-photon counting level. A MAXI-IceCube coincidence search with sub-threshold detection algorithms is planned for the near future. The second approach is to search for neutrino multiplets: two (a doublet) or more neutrinos originating from the same direction within a certain time frame [27]. Since only nearby sources can yield detectable neutrino multiplets, we can rule out any distant EMG transient counterpart candidates found in a follow-up observation triggered by a multiplet detection. Therefore, searches for a neutrino-optical association can be performed in a less contaminated environment. It has been found that \(\sim 90\) % of the sources producing detectable neutrino multiplets in a 1 km\({}^{3}\) neutrino telescope are localized within \(z\lesssim 0.15\), while the distribution of sources yielding a singlet neutrino detection extends up to \(z\gtrsim 2\) [27]. Confining neutrino sources to the local universe enables the following strategy for optical follow-up observations to claim robust neutrino-optical associations. We anticipate \(\gtrsim 100\) SNe in an optical follow-up observation searching for an optical counterpart. As mentioned earlier, they are mostly not associated with the neutrino detection, though it is always possible that one of them is the real counterpart. Using the fact that sources producing neutrino multiplets are localized in the low-redshift universe, we can distinguish unassociated SNe from the source emitting the neutrino multiplet. Among the optical transients found in a follow-up observation triggered by a neutrino multiplet detection, the closest object is the most likely neutrino source. Because the expected redshift distribution of a source yielding a neutrino doublet is quite different from that of unassociated SNe, we can judge statistically whether the closest counterpart is indeed associated with the neutrino doublet detection. Figure 3 shows the probability distributions of the redshift of the closest object for the two possibilities. The pronounced difference between them allows the construction of a test statistic to examine which hypothesis is favored. For example, finding an SN-like transient at \(z=0.04\) (\(\approx 170\) Mpc) in an optical follow-up observation leads to \(\sim 2.7\sigma\) significance against the hypothesis of a chance-coincident SN detection.
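The localization of multiplet sources can be illustrated with simple Poisson statistics; the following minimal sketch (our own illustration with a hypothetical flux normalization, not the analysis of Ref. [27]) shows how the probability of detecting a doublet or higher falls steeply with source distance:

```python
import numpy as np

# A source with expected event count mu yields a detectable multiplet
# (>= 2 events) with Poisson probability 1 - (1 + mu) * exp(-mu);
# since mu falls off as 1/d^2 for a fixed emission energy, only nearby
# sources produce multiplets while singlets come from all distances.
def p_multiplet(mu):
    return 1.0 - (1.0 + mu) * np.exp(-mu)

# Hypothetical normalization: mu = 1 at 100 Mpc.
d = np.array([50.0, 100.0, 300.0, 1000.0])  # Mpc
mu = (100.0 / d) ** 2
for di, p in zip(d, p_multiplet(mu)):
    print(f"d = {di:6.0f} Mpc  P(>=2 events) = {p:.3g}")
```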
Identifying the closest object among the numerous transients found in an optical survey triggered by a neutrino detection requires extensive spectroscopic measurements, which may not always be feasible. One practical solution is to rely on the photometric redshifts of the host galaxies, which are available in the data taken by wide-field survey facilities such as the Vera C. Rubin Observatory. Another route to the required intensive spectroscopy will be provided by the Prime Focus Spectrograph on Subaru [28]. It has a remarkable capability for wide-field simultaneous spectroscopy with high multiplicity. It is of great importance for neutrino and optical astronomers to collaborate closely on discoveries of as-yet-unknown transient neutrino sources.

Figure 3: Probability distribution of the redshift of the closest counterpart, with bin size \(\Delta z=0.005\) [27]. The bin size is chosen for illustrative purposes. The blue curve represents the signal hypothesis, that the object is the neutrino source yielding the detected neutrino multiplet, and the red curve the coincident-background hypothesis, i.e., the chance detection of an unassociated SN. \({\cal E}_{\nu}^{\rm fl}=1\times 10^{49}\ {\rm erg}\) and \(R_{0}=3\times 10^{-6}\ {\rm Mpc^{-3}\ yr^{-1}}\) are assumed for the multiplet source.

Searches for neutrino multiplets are powerful not just for reliably identifying EMG counterparts, but also for revealing or constraining the source characteristics. Figure 4 shows the number of sources yielding detectable neutrino multiplets, \(N_{\Delta\Omega}^{\rm M}\), from a sky patch of \(\Delta\Omega=1\ {\rm deg}^{2}\) [27]. The search time scale is assumed to be \(\Delta T=30\) days, considering the typical time scale of transient neutrino emission from TDEs and core-collapse SNe, while remaining long enough to cover faster transients such as low-luminosity GRBs and GRB afterglows. Only the diagonal region is displayed, where it is consistent with the cosmic diffuse neutrino background flux, \(E_{\nu}^{2}\Phi_{\nu_{e}+\nu_{\mu}+\nu_{\tau}}\approx 10^{-8}\ {\rm GeV\ cm^{-2}\ sec^{-1}sr^{-1}}\). Because a neutrino telescope monitors \(\sim 2\pi\) of the sky, the parameter space with \(N_{\Delta\Omega}^{\rm M}\gtrsim 10^{-6}\) is accessible to a 1 km\({}^{3}\) neutrino telescope with five years of observation. Null detection of neutrino multiplets under a false-alarm-rate criterion of \({\rm FAR}\lesssim 0.25\ {\rm yr^{-1}}\) leads to the allowed parameter space \[{\cal E}_{\nu}^{\rm fl}\lesssim 5\times 10^{51}\ {\rm erg},\quad R_{0}\gtrsim 2 \times 10^{-8}\ {\rm Mpc^{-3}\ yr^{-1}}, \tag{19}\]
The five conditions have essentially constrained the allowed parameter space which is rather narrow - requirements for the UHECR acceleration, the UHECR escape, the UHECR nuclei survival, the UHECR energetics, and the fiducial neutrino flux. For an astronomical object with a given photon luminosity, a number density, magnetic field strength, and its equipartition parameter \(\xi_{\rm B}\), the basic requirements to qualify as the common sources are represented in the form of conditions of \(\xi_{\rm B}\), \(p\gamma\) optical depth at UHECR energy of 10 PeV, and the CR loading factor \(\xi_{\rm CR}\). Among the known astronomical object classes, we have found that the low luminosity GRBs are the most likely candidate, followed by the jetted TDEs and FSRQs with the extreme parameter tuning. We would like to emphasize, however, that our framework to provide the generic constraints is applicable to as-yet-unknown source populations which may be revealed in future neutrino-driven multi-messenger observations. Figure 4: Number of sources to yield neutrino multiplet, \(N_{\Delta\Omega}^{\rm M}\), during time \(\Delta T=30\) days in \(\Delta\Omega=1\) deg\({}^{2}\) of sky on the parameter space of \((\mathcal{E}_{\nu}^{\rm fl},R_{0})\), the output neutrino energy from a source and the burst density rate [27]. A criteria to suppress the annual false alarm rate below \(\sim 0.25\) for \(2\pi\) sky is applied. These common source candidates are transients in optical and X-ray bands. In addition, many other neutrino source candidates at TeV sky such as the circumstellar SNe are also transients. Thus it is a key to conduct the multi-messenger follow-up observations in oder to identify the neutrino sources. Overcoming the difficulty that the optical transient sky is filled with numerous SNe, the two approaches have been proposed with the presently operating facilities - introducing a sub-threshold detection channel to a wide field of view X-ray observatory responding to high energy neutrino detection, and the neutrino multiplet detection to trigger ToO observations with optical telescopes. We are living in a new era to utilize neutrino, optical, and X-ray messengers to reveal the origin of cosmic rays, and study the hadronic emissions. **Acknowledgments** The studies written in this article have benefited from the extensive discussions with Kohta Murase. The author is also deeply grateful to Masaomi Tanaka, Nobuhiro Shimizu, and Aya Ishihara for their valuable inputs. Special thanks go to IceCube Collaboration. This work is supported partly by JSPS KAKENHI grant No.18H05206 and 23H04892.
2309.15276
A Topological Machine Learning Pipeline for Classification
In this work, we develop a pipeline that associates Persistence Diagrams to digital data via the most appropriate filtration for the type of data considered. Using a grid search approach, this pipeline determines optimal representation methods and parameters. The development of such a topological pipeline for Machine Learning involves two crucial steps that strongly affect its performance: firstly, digital data must be represented as an algebraic object with a proper associated filtration in order to compute its topological summary, the Persistence Diagram. Secondly, the persistence diagram must be transformed with suitable representation methods in order to be introduced into a Machine Learning algorithm. We assess the performance of our pipeline and, in parallel, compare the different representation methods on popular benchmark datasets. This work is a first step toward an easy, ready-to-use pipeline for data classification using persistent homology and Machine Learning, and toward understanding the theoretical reasons why, given a dataset and a task to be performed, one pair (filtration, topological representation) is better than another.
Francesco Conti, Davide Moroni, Maria Antonietta Pascali
2023-09-26T21:29:07Z
http://arxiv.org/abs/2309.15276v1
# A topological machine learning pipeline for classification

###### Abstract

In this work we develop a pipeline that associates persistence diagrams to digital data via the most appropriate filtration for the type of data considered. Using a grid search approach, this pipeline determines optimal representation methods and parameters. The development of such a topological pipeline for machine learning involves two crucial steps that strongly affect its performance: firstly, digital data must be represented as an algebraic object with a proper associated filtration in order to compute its topological summary, the _persistence diagram_. Secondly, the persistence diagram must be transformed with suitable representation methods in order to be introduced into a machine learning algorithm. We assess the performance of our pipeline, and in parallel we compare the different representation methods on popular benchmark datasets. This work is a first step toward an easy, ready-to-use pipeline for data classification using persistent homology and machine learning, and toward understanding the theoretical reasons why, given a dataset and a task to be performed, one pair (filtration, topological representation) is better than another.

Keywords: topological machine learning, persistent homology, classification, vectorization

## 1 Introduction

In the last decade, scientific and industrial research has introduced the need to manage huge quantities of digital data. Numerous studies have therefore arisen to provide appropriate tools for such research. In mathematics, Topological Data Analysis (TDA) has started to play a major role, since it provides qualitative and quantitative features describing the data space from a geometric point of view. Topological data analysis has its roots in Persistent Homology (PH), which combines algebraic topology with discrete Morse theory. Informally, algebraic topology studies the global shape of a space by means of features that are invariant under continuous deformation. These features are essentially the numbers of \(k\)-dimensional holes in the space. Persistent homology, on the other hand, studies the evolution of the data space at different scales of resolution and tracks the topological invariants (\(k\)-dimensional holes) that form and vanish during this process. These topological invariants encode the global geometric properties of the data space. The collection of such information is called a Persistence Diagram (PD). The idea of injecting geometric information into Machine Learning (ML) and deep learning is currently a very active field in the scientific community [9, 34, 31, 5, 19]. Suffice it to say that convolutional neural networks are part of this research area. The results provided by TDA have proved very promising [11, 33, 5, 38, 35]. However, the synergy between TDA and ML is quite novel, since persistence diagrams, the main concept of TDA, cannot be fed directly into an ML algorithm. A wide range of transformations has been devised to exploit the capabilities of PDs in ML algorithms [15, 10, 42, 1, 16, 37, 20], each devised with specific requirements in mind. To the best of our knowledge, however, their relative effectiveness on different, heterogeneous types of datasets has not been the subject of extensive study. More importantly, the link between the type of data and the optimal representation has not yet been addressed.
In other words, even though the rich literature on applications of TDA and machine learning has shown that topological features are suitable for classifying several types of data, it is still not clear how best to exploit their descriptive power when approaching a new classification task. In addition, a key element of persistent homology is the choice of a filtration that associates a PD with the data. The goal of this work is to investigate the theoretical reasons behind the two main choices that characterize a topological pipeline in relation to a specific goal, namely the choice of filtration and the choice of representation method. Along the way, we develop a topological pipeline with different available filtrations and several representation methods with their associated parameters. We evaluate the results obtained on benchmark datasets and, in parallel, compare the accuracy of the different representation methods. In doing so, we lay the groundwork for correlating filtration, vectorization, and task, with particular focus on the data type. We emphasize that the purpose of this research is not the classification accuracy of these methods _per se_, but rather to provide a basis for understanding why certain representations are better suited to certain types of data.

## 2 Mathematical background

The core idea of our pipeline has its roots in Topological Data Analysis (TDA) and Machine Learning (ML). While ML is an already established and well-known branch of artificial intelligence with a wide variety of applications, TDA is a relatively new field of research which studies the geometrical and topological aspects of data. The aim of this work is to emphasize the value of a topological approach in a machine learning context of data analysis. A background in machine learning is therefore assumed and will not be recalled in this section, nor elsewhere in the article. Conversely, this section is mainly aimed at introducing the reader to topological data analysis, in particular to algebraic topology and persistent homology. Formalities aside, TDA aims to recognize and analyze patterns within data by means of topology. The main concept of TDA is persistent homology, where the patterns within data are captured across multiple scales. The persistence of a topological feature is the span of scales over which it is detectable, and is an indication of its importance. The collection of such persistences is called the persistence diagram.

### Algebraic topology

Algebraic topology is the mathematical field which aims to study topological spaces by means of algebraic features that are invariant under continuous deformation. Algebraic topology is a wide mathematical field and only part of it is relevant to TDA; therefore, this subsection is only a brief introduction to its main concepts. For a more complete guide to algebraic topology, we refer the reader to [27]. The main concept of algebraic topology that we are going to use is the _homology_ of a space, which associates to a topological space a sequence of algebraic objects, namely Abelian groups or modules. More formally, the \(k\)-th homology group of a topological space \(X\) over a field \(\mathbb{F}\) is a vector space over \(\mathbb{F}\), which we denote as \(H_{k}(X;\mathbb{F})\).
Beyond the technicalities of its definition, the rank of \(H_{k}(X;\mathbb{F})\) corresponds to the number of distinct \(k\)-dimensional holes in the topological space \(X\), with the exception of \(H_{0}(X;\mathbb{F})\). In that case, \(\operatorname{rank}H_{0}(X;\mathbb{F})\) corresponds to the number of \(0\)-dimensional holes plus one, that is, the number of connected components. Other types of homology can be defined, such as reduced homology, relative homology, and cohomology. Their definition and use are beyond the scope of this work, so they will not be addressed. In general, different choices of \(\mathbb{F}\) will result in different homology groups \(H_{k}(X;\mathbb{F})\). In this work, we restrict to the case \(\mathbb{F}=\mathbb{Z}_{2}\). As an example, Figure 1 shows that the sphere \(S^{2}\) bounds a \(2\)-dimensional void. \(S^{2}\) is connected and there is no \(1\)-dimensional hole. The homology groups of the sphere are: \[H_{0}(S^{2},\mathbb{Z}_{2})=\mathbb{Z}_{2}\Rightarrow\operatorname{ rank}H_{0}(S^{2},\mathbb{Z}_{2})=1,\] \[H_{1}(S^{2},\mathbb{Z}_{2})=\mathbf{0}\Rightarrow\operatorname{ rank}H_{1}(S^{2},\mathbb{Z}_{2})=0,\] \[H_{2}(S^{2},\mathbb{Z}_{2})=\mathbb{Z}_{2}\Rightarrow\operatorname{ rank}H_{2}(S^{2},\mathbb{Z}_{2})=1,\] \[H_{k}(S^{2},\mathbb{Z}_{2})=\mathbf{0}\Rightarrow\operatorname{ rank}H_{k}(S^{2},\mathbb{Z}_{2})=0\text{ for every }k\geq 3.\] As another example, Figure 1 shows that the torus \(T^{2}\) is connected, bounds two \(1\)-dimensional holes (the ones encircled by the blue and the red loops), and bounds one \(2\)-dimensional void. The homology groups of the torus are: \[H_{0}(T^{2},\mathbb{Z}_{2})=\mathbb{Z}_{2}\Rightarrow\operatorname{ rank}H_{0}(T^{2},\mathbb{Z}_{2})=1,\] \[H_{1}(T^{2},\mathbb{Z}_{2})=\mathbb{Z}_{2}^{2}\Rightarrow\operatorname {rank}H_{1}(T^{2},\mathbb{Z}_{2})=2,\] \[H_{2}(T^{2},\mathbb{Z}_{2})=\mathbb{Z}_{2}\Rightarrow\operatorname{ rank}H_{2}(T^{2},\mathbb{Z}_{2})=1,\] \[H_{k}(T^{2},\mathbb{Z}_{2})=\mathbf{0}\Rightarrow\operatorname{ rank}H_{k}(T^{2},\mathbb{Z}_{2})=0\text{ for every }k\geq 3.\] Algebraic topology is a powerful tool that classifies topological spaces with just a finite sequence of integers. The computation for generic topological spaces requires complicated homology and cohomology theories. Persistent homology is now introduced because it enhances the expressiveness of the topological features and allows easier computation of the homology ranks by means of _simplicial homology_.

### Persistent homology

Persistent homology studies the geometry of spaces by looking at the evolution of \(k\)-dimensional holes at different spatial resolutions. The key difference from algebraic topology is that the ranks of the homology groups are computed at various scales, and the features we extract are precisely the evolution of such ranks [43, 11]. This section is only a brief introduction to persistent homology, as many of its concepts will be covered and expanded upon later, when we make actual use of them in the pipeline. Before continuing, it is necessary to introduce the concept of a simplicial complex. Let \(V=\{v_{0},\ldots,v_{k}\}\) be \(k+1\) affinely independent points. A \(k\)_-simplex_ \(\sigma\) is the convex hull of \(V\), which is called the set of vertices of \(\sigma\). We call a face of \(\sigma\) the convex hull of any subset of points of \(V\), and we write \(\tau\subseteq\sigma\) when \(\tau\) is a face of \(\sigma\).
A _simplicial complex_ \(\mathcal{K}\) is a set of simplexes such that \(\emptyset\in\mathcal{K}\); for every \(\sigma\in\mathcal{K}\) and every \(\tau\subseteq\sigma\) it holds that \(\tau\in\mathcal{K}\); and the intersection of any two simplexes of \(\mathcal{K}\) is always a face of both. A _triangulation_ of a topological space \(X\) is a pair \((\mathcal{K},h)\), where \(\mathcal{K}\) is a simplicial complex and \(h\colon\mathcal{K}\to X\) is a homeomorphism. What makes simplicial complexes particularly suitable for our study is that the computation of their homology groups is very simple compared to general topological spaces. In addition, almost all topological spaces found in practice are triangulable. The ranks of the homology groups of simplicial complexes are more commonly known as Betti numbers. The computation of homology groups and their ranks specifically for simplicial complexes is known as _simplicial homology_. For more information on simplicial complexes, simplicial homology, and the triangulation of topological spaces, we refer the reader to [27].

Figure 1: The sphere \(S^{2}\) (a) bounds a 2-dimensional void. The torus \(T^{2}\) (b) bounds a 2-dimensional void and two 1-dimensional holes. Images: Geek3 and YassinMrabet via Wikipedia.

A _filtration_ of a simplicial complex \(\mathcal{K}\) is a finite sequence of subcomplexes such that \(\emptyset\subseteq\mathcal{K}_{0}\subseteq\cdots\subseteq\mathcal{K}_{n}=\mathcal{K}\). Persistent homology keeps track of changes in the Betti numbers associated to each homology group of \(\mathcal{K}_{i}\) for \(i=0,\ldots,n\). The persistence of a topological feature is thus its detectability at different spatial resolutions. In particular, features with high persistence are important for describing the shape of the data, while those with low persistence are assimilated to noise. The collection of such features is called a _persistence diagram_. To better understand the differences between persistent homology and algebraic topology, we provide the following example. Suppose we have a normal ball and a ball-shaped sponge, which is of course full of many small holes. The difference between these two objects will be detectable or not depending on the resolution at which we look at them. If we look at them from a distance, the two objects are indistinguishable. If we look at them close up, we see the differences. Moreover, the holes will have limited persistence. Therefore, thanks to persistent homology, we can conclude that the two objects have the same shape, but one is basically a noisy version of the other. Not only does PH provide easy tools to compute \(\operatorname{rank}H_{k}(\mathcal{K};\mathbb{F})\), but it also solves the two main problems associated with digital data, namely their discrete and noisy nature. With regard to the discrete nature, PH allows a simplicial complex structure to be associated with discrete data through the filtration. In this way, non-trivial homology groups can be found. As for the noisy nature, PH highlights the most persistent structures at different scales, limiting the incidence of noise.

## 3 Topological pipeline

The goal of this section is to define and describe the pipeline we use for the topological study of digital data in a machine learning context. As already mentioned, we rely heavily on the tools provided by persistent homology, which will be explored and discussed here.
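As a concrete preview of the first stages of such a pipeline, the following is a minimal sketch (assuming the GUDHI library as one possible backend; the implementation used in this work may differ) that computes a persistence diagram for a noisy circle and vectorizes it with a persistence landscape:

```python
import numpy as np
import gudhi
from gudhi.representations import Landscape

# Sample a noisy circle: its diagram should contain one long-lived
# 1-dimensional feature (the loop) plus short-lived noise.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 100)
pts = np.c_[np.cos(theta), np.sin(theta)] + rng.normal(0.0, 0.05, (100, 2))

# Vietoris-Rips filtration up to dimension 2, then persistent homology.
rips = gudhi.RipsComplex(points=pts, max_edge_length=2.0)
st = rips.create_simplex_tree(max_dimension=2)
st.persistence()
h1 = st.persistence_intervals_in_dimension(1)  # (birth, death) pairs

# Vectorize the dimension-1 diagram as a persistence landscape,
# yielding a fixed-size vector suitable for an ML classifier.
vec = Landscape(num_landscapes=3, resolution=50).fit_transform([h1])
print(vec.shape)  # (1, 150)
```

The later stages of the pipeline replace the landscape with any of the representation methods compared in this work and feed the resulting vectors to a standard classifier.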
Figure 2 shows the general scheme of our pipeline and its main elements. The digital data (Section 3.1) is filtered (Section 3.2) in order to generate a persistence diagram (Section 3.3). We vectorize the PD by means of a vectorization method (Section 3.4). Finally, the collection of all vectors is the input for a machine learning classifier (Section 3.5). Figure 3 shows an application of the pipeline to digital data.

### Data

In PH, data are often modelled as points in a metric space or as functions on some topological space. This is straightforward, as the most common digital data found in applications are either \(n\)-dimensional vectors or signals and images. Digital data are suitable for both kinds of modelling. For example, an \(n\times n\) greyscale image can be viewed either as a function whose domain is a grid \(\subseteq\mathbb{R}^{2}\) and whose values lie in \(\mathbb{R}\), or as a vector (a point) in the space \(\mathbb{R}^{n^{2}}\). We want to emphasize, however, that the way in which we model the data is of fundamental importance for the study that follows. This is mainly due to the type of metric that the data inherit from the two modellings. Continuing our example, the classical metric between two points of \(\mathbb{R}^{n^{2}}\) is the Euclidean one, while between two functions we have \(\|\cdot\|_{2}\), \(\|\cdot\|_{\infty}\) or others. Although the metrics can easily be changed according to the context, the choice of modelling has profound repercussions both mathematically and in terms of application. There is a very rich literature both on modelling data as points in a metric space [11, 24, 13] and on modelling them as functions [25, 6, 5]. Depending on the type of data, one modelling may be preferred to the other. Let us now describe the digital data types we study in Section 4 and the associated preferred modelling.

#### 3.1.1 Point cloud data

Point cloud data are finite metric spaces \((X,\delta)\). We recall that finite metric spaces are naturally equipped with the discrete topology, and that the topological dimension of any discrete space is 0. Since every finite metric space is discrete, point clouds do not inherit a (non-trivial) topology. As a consequence, the homology of a point cloud is always trivial, with the exception of \(H_{0}\). Being finite metric spaces, the natural modelling of this type of data is as a set of points embedded in a larger metric space \((Y,d)\). Typically \(Y=\mathbb{R}^{n}\) and \(d\) is the Euclidean metric, but this model is suitable for generalization. In Section 4 we will always have \(Y=\mathbb{R}^{n}\).

#### 3.1.2 Images

A digital image is an image composed of pixels. In standard 8-bit greyscale images, each pixel has a discrete value ranging from 0 to 255 representing its greyscale intensity. Although it can be considered as a vector of dimension \(n\cdot m\), where the greyscale image has size \(n\times m\), modelling an image as a function is preferable. This kind of modelling takes into account the fact that only nearby pixels should be connected, not distant ones. A digital image is therefore a function from a grid \(G\subseteq\mathbb{R}^{2}\) to a suitable finite interval of \(\mathbb{R}^{c}\), where \(c\) is the number of channels of the image. The most common numbers of channels are 1, for greyscale images, and 3, for color images, although other channel encodings exist.

Figure 2: The pipeline for a topological study of digital data in a machine learning context. A filtration associates a persistence diagram to the digital data. The persistence diagram is then vectorized by means of various vectorization methods. Finally, the vector is fed to a machine learning classifier.
#### 3.1.3 Graphs

Graphs are structures made up of a set of vertices connected by edges. There is a distinction between undirected graphs, where the edges link two vertices symmetrically, and directed graphs, where the link is not symmetrical. Another distinction is between weighted and unweighted graphs. More specifically, a graph is an ordered pair \(G=(V,E)\) where \(V\) is the set of vertices and \(E\subseteq\{(x,y)\,|\,x,y\in V\}\). Typically \((x,y)\in E\Rightarrow x\neq y\), but this is not always the case. A weighted graph is then seen as a function with domain \(E\) and codomain \(\mathbb{R}\).

Figure 3: Pipeline application to a point cloud (Figure 3a) and the associated persistence diagram (Figure 3b). In the second and third rows, four different vectorization methods for the same PD, namely Persistence Images (Figure 3c), Persistence Landscapes (Figure 3d), Persistence Silhouette (Figure 3e) and Betti Curves (Figure 3f).

### Filtrations

Persistent homology examines the shape of the data at different scales. As already mentioned in Section 2.2, this means that at each scale \(j\) the data is represented as a simplicial complex \(\mathcal{K}_{j}\). Formally, a _filtration_ of a simplicial complex \(\mathcal{K}\) is a finite sequence of nested subcomplexes

\[\emptyset\subseteq\mathcal{K}_{0}\subseteq\mathcal{K}_{1}\subseteq\cdots\subseteq\mathcal{K}_{n}=\mathcal{K}.\]

The inclusion \(\mathcal{K}_{i}\hookrightarrow\mathcal{K}_{j}\) induces a homomorphism \(f_{p}^{i,j}\colon H_{p}(\mathcal{K}_{i})\to H_{p}(\mathcal{K}_{j})\) of the simplicial homology groups for each dimension \(p\) and each \(0\leq i\leq j\leq n\). Usually these homomorphisms are actually isomorphisms. In these cases, no topological events have occurred between time \(i\) and time \(j\) and, more importantly, the Betti numbers do not change. However, there are cases in which these homomorphisms are not injective or surjective. In these cases, topological events occur, which is precisely what we are interested in. We say that a new topological feature is born at time \(j\) when the homomorphism \(f_{p}^{i,j}\) is not surjective. We say that a topological feature dies at time \(j\) when the homomorphism \(f_{p}^{i,j}\) is not injective. We keep track of the pairs (birth, death) of the topological features and collect them in the so-called Persistence Diagram (PD). We would like to stress the importance of filtrations in our study, as the choice of the filtration is the first fundamental step in a topological study of the data. There are several filtrations available, and they are mainly related to the choice of data modelling. Different filtrations yield different topological features, and thus possibly very different PDs. Moreover, some filtrations make it possible to emphasize points with greater or lesser persistence, or to prevent the creation of certain features where there should be none. Therefore, the choice of filtration should be made with caution.

#### 3.2.1 Filtration for point clouds

The alpha complex is a way of forming abstract simplicial complexes from a set of points. Hence, it represents an ideal filtration for point clouds. Let \((X,\delta)\) be a point cloud embedded in a larger metric space \((Y,d)\). The elements of \(X\) are the vertices of the alpha complex.
Fixing a real parameter \(\alpha>0\), we define

\[A_{x}^{\alpha}:=B(x,\alpha)\cap\{y\in Y:d(y,x)\leq d(y,\tilde{x})\text{ for every }x\neq\tilde{x}\in X\}\,.\]

That is, we grow balls of radius \(\alpha\) centered at each point of \(X\), intersected with the Voronoi cell of each point. When \(n\) sets \(A_{x_{i}}^{\alpha}\) intersect, an \((n-1)\)-simplex is added to the simplicial complex. Growing \(\alpha\) naturally induces a filtration. For more information on the alpha complex, see [2]. We point out that, by the nerve lemma, the alpha complex is homotopy equivalent to the union of the balls and also to the Čech complex. The advantage over the Čech complex is that it is significantly smaller, thus reducing the computational cost.

#### 3.2.2 Filtration for images

The alpha complex is still a viable filtration for images, since they can be interpreted as vectors. However, other approaches that make greater use of the fixed structure of the image are preferable. In particular, the cubical complex is the ideal filtration, as it exploits two key features of images. The first feature is that not all pixels should be connected with each other, but only with their neighbours. Figure 4 shows the possible connections between pixels in a cubical complex. The second feature is related to the modelling of images as functions. In contrast to the alpha complex, where all points are immediately inserted as vertices of the simplicial complex, a pixel becomes a vertex of the simplicial complex only once the threshold value \(t\) reaches its intensity. Similarly, a 1-simplex is added only if two adjacent pixels (in the sense of Figure 4) both have intensities not exceeding \(t\). The same applies to the 2-simplexes. As \(t\) increases, we obtain a filtration. More formally, an elementary cube is any translate of a unit cube \([0,1]^{n}\) embedded in Euclidean space \(\mathbb{R}^{m}\), for some \(n,m\in\mathbb{N}\) with \(n\leq m\). A set \(I\subset\mathbb{R}^{m}\) is a cubical complex if it can be written as a union of elementary cubes. For more information about cubical complexes, we refer the reader to [29].

#### 3.2.3 Filtration for graphs

Graphs are particular structures which require _ad hoc_ filtrations. In particular, graphs share many similarities with point clouds, but with a few key differences. The main difference is that vertices may not be connected to each other. Therefore, no matter how much the filtration value is increased, some 1-simplexes will never be created. This of course also impacts higher-dimensional simplexes. Another difference is that all vertices form 0-simplexes, but their filtration value may not be 0. For example, the number of edges incident to a vertex could be used as its filtration value.

Figure 4: Pixel connections in cubical complexes.

There are many possible filtrations associated with graphs, depending mainly on the type of graph considered. In any case, the difference lies essentially in the filtration value associated with each simplex, not in the creation of the simplexes themselves. For this reason, we now describe how the simplexes enter the filtration (see the sketch below), postponing the description of the chosen filtration values to Section 4.4. Every vertex forms a \(0\)-simplex. If two vertices are connected by an edge, a \(1\)-simplex is formed. Similarly, when three vertices are pairwise linked, a \(2\)-simplex is formed. In general, a clique is a subset of \(V\) whose vertices are all pairwise connected, and each clique of \(k\) vertices forms a \((k-1)\)-simplex.
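A minimal sketch of this clique construction (assuming the Gudhi library, whose `SimplexTree.expansion` builds exactly this clique complex from a weighted 1-skeleton; the toy graph below is hypothetical):

```python
import gudhi

# A small weighted, undirected graph: every vertex enters the filtration
# at value 0, every edge at its weight.
edges = {(0, 1): 0.3, (1, 2): 0.5, (0, 2): 0.8, (2, 3): 1.0}

st = gudhi.SimplexTree()
for v in range(4):
    st.insert([v], filtration=0.0)
for (u, v), w in edges.items():
    st.insert([u, v], filtration=w)

# expansion(2) adds a 2-simplex for every 3-clique of the graph; its
# filtration value is the maximum weight among its edges.
st.expansion(2)
diagram = st.persistence(homology_coeff_field=2)
print(diagram)  # list of (dimension, (birth, death)) pairs
```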
### Persistence diagrams

Persistence diagrams are the collection of the pairs (birth, death) of the topological features that emerge while filtering a simplicial complex. We refer to these pairs as \(q_{j}=(b_{j},d_{j})\), where \(b_{j}\) is the birth of the \(j\)th \(k\)-dimensional hole and \(d_{j}\) its death. Mathematically, this collection is a multiset, that is, a set in which the same element can appear multiple times. For further details on persistence diagrams, we refer the reader to [7], [12], and [23]. Each pair has a multiplicity \(\mu(q_{j})\) indicating how many holes share both the birth and the death time. The points \((t,t)\) of the diagonal are added to the persistence diagram with infinite multiplicity for technical reasons. Since the death of a topological feature occurs at a later time than its birth, PDs are multisets over the set \(\bar{\Delta^{*}}:=\left\{(x,y)\in\mathbb{R}^{2}:x\leq y\right\}\cup\left\{(x,\infty):x\in\mathbb{R}\right\}\). It holds that \(\mu(q)=0\) if and only if \(q\not\in D\), where \(q\in\bar{\Delta^{*}}\) and \(D\) is a persistence diagram. The equality \(\ell(q)=k\) means that the point \(q\in D\) corresponds to a feature in \(H_{k}\). We can equip the space of persistence diagrams with the _bottleneck distance_ (also called _matching distance_)

\[W_{\infty}(D,D^{\prime}):=\inf_{\varphi:\ D\to D^{\prime}}\sup_{q\in D}\|q-\varphi(q)\|_{\infty},\]

where \(D,D^{\prime}\) are persistence diagrams and \(\varphi\) is a bijection from \(D\) to \(D^{\prime}\). Another popular metric on the space of PDs is the \(p\)-Wasserstein distance, defined as

\[W_{p}(D,D^{\prime}):=\inf_{\varphi:\ D\to D^{\prime}}\left[\sum_{q\in D}\left(\|q-\varphi(q)\|_{\infty}\right)^{p}\right]^{\frac{1}{p}},\quad p\geq 1.\]

For more information on the bottleneck and Wasserstein distances, we refer the reader to [18] and [23]. We point out that, in the general case, a bijection \(\varphi\colon D\to D^{\prime}\) exists only because we have added the points of the diagonal with infinite multiplicity. The most important property of PDs is their stability: a small perturbation of the simplicial complex yields a small perturbation of the associated PD. This property is of fundamental importance in applications, as it guarantees robustness against noise and repeatability. Multisets, however, lack fundamental mathematical and statistical properties required in a machine learning context, and therefore they cannot be directly processed by an ML algorithm. To give an example, the mean of two multisets is not well defined. A suitable transformation of PDs into objects that enjoy excellent mathematical properties and can be used in ML is needed. These transformations are called _representation methods_ or _featurization methods_. We limit our study to _vectorization methods_, i.e. procedures for transforming a persistence diagram into a vector. Additional representation methods are available, e.g. kernel methods. Due to their high computational cost, these methods have been omitted from this work, but the interested reader can find more information in [16], [37] and [20].

### Vectorization methods

A vector representation of a PD consists of an embedding of the space of PDs in a vector space or, more generally, in a Hilbert space. The fundamental requirement of this embedding is stability, i.e., that small perturbations of the PD correspond to small perturbations of the associated vector.
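Stability is measured with respect to the diagram metrics defined above. As a concrete illustration (a minimal sketch, assuming Gudhi's `bottleneck_distance`; the diagrams are hypothetical), perturbing a diagram slightly changes its bottleneck distance from the original only slightly:

```python
import gudhi

# Two persistence diagrams, given as lists of (birth, death) points.
diag = [(0.0, 1.0), (0.2, 0.9), (0.5, 2.5)]
# A slightly perturbed copy of the first diagram.
diag_noisy = [(0.05, 1.02), (0.18, 0.88), (0.52, 2.47)]

# The bottleneck distance is the cost of the best matching between the
# two diagrams; points may also be matched to the diagonal.
print(gudhi.bottleneck_distance(diag, diag_noisy))  # small, about 0.05
```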
Various embeddings are obviously possible, and each of them defines a different vector representation of the PD. The vector representations presented in this work are all stable with respect to the bottleneck distance or the 1-Wasserstein distance, with the exception of Betti curves (see Section 3.4.4). Finally, we want to stress the fact that all these vectors live in a vector space and thus enjoy mathematical and statistical properties that were not available in the space of multisets. Hence, they can be directly fed into a machine learning method. Nonetheless, different representations of the same PD yield different vectors, with possibly very different results in an ML algorithm. We point out that this is precisely the main goal of this work: to find a correlation between task, filtration and representation. All the subsequent methods require a change of coordinates of the PD. Henceforth, unless otherwise specified, a point \(q\in D\) will have coordinates \(q=(\frac{b+d}{2},\frac{d-b}{2})\), where \((b,d)\) are the usual birth and death of a topological feature. We point out that in \(H_{0}\) there is always a connected component that never dies. Since these methods do not handle the infinite persistence of some points, we replace the infinite value with a very large one in relation to the other persistences obtained. Each of these methods is taken from the Gudhi library. For more information about Gudhi and the Python implementation of these methods, we refer the reader to [39].

#### 3.4.1 Persistence image

A Persistence Image (PI) is a finite-dimensional vector representation of persistence diagrams. For more information on PIs, we refer the reader to [1]. This method divides the PD domain into an \(n\times n\) grid and, for each point \(q\) of the PD, defines a Gaussian centered at \(q\) with variance \(\sigma\). It returns an \(n\times n\) image where the intensity of each pixel is given by the sum of the values of all Gaussians at that point of the grid, weighted by an appropriate function \(f\) that must be 0 on the diagonal, continuous and piecewise differentiable. Denoting by \(m\) the persistence value of the most persistent feature, the weight function is

\[f(t):=\begin{cases}0\text{ if }t\leq 0,\\ \frac{t}{m}\text{ if }0<t<m,\\ 1\text{ if }t\geq m.\end{cases}\]

The parameters of the method are \(n\) and \(\sigma\), and they are selected by grid search. Figure 5 shows a persistence diagram and three different PIs for various parameters. It is evident how different parameter values greatly influence the resulting image. In our pipeline we determine the optimal parameters with a grid search over the following setup: \(\sigma\in\left\{0.1,1,10\right\}\), \(n\in\left\{5,10,25\right\}\).

#### 3.4.2 Persistence landscape

The Persistence Landscape (PL) is another method of vector representation of PDs, introduced in [10], that enjoys excellent statistical properties. In particular, a PL is a function that lives in a vector space, a convenient mathematical environment for working with ML. More formally, PLs are piecewise linear functions \(\lambda\colon\mathbb{N}\times\mathbb{R}\to\overline{\mathbb{R}}\).
To define \(\lambda\), we tent each persistence point \(q=(\frac{b+d}{2},\frac{d-b}{2})\in D\), where \((b,d)\) are its birth and death, to the baseline \(x=0\) with the following function:

\[\Lambda_{q}(t):=\begin{cases}t-b\text{ if }t\in[b,\frac{b+d}{2}],\\ d-t\text{ if }t\in(\frac{b+d}{2},d],\\ 0\text{ otherwise.}\end{cases}\]

The persistence landscape of \(D\) is the collection of the functions

\[\lambda_{D}(k,t):=\operatorname{kmax}_{q\in D}\,\Lambda_{q}(t),\quad k\in\mathbb{N},t\in[0,T],\]

where kmax is the \(k\)-th largest value in the set and \(T\) is a real number such that \(d\leq T\) for any death time \(d\) of a topological feature. Since \(\lambda\) is piecewise linear, it can be discretized by looking only at the points at which its slope changes. This discretized function is the vector representation of the PD. A central limit theorem for PLs holds. The parameters for persistence landscapes are the number of landscapes considered, \(n\), and the discretization resolution \(r\). In our grid search approach, we consider the following setup: \(n=5\), \(r\in\{25,50,75,100\}\). Figure 6 shows a PD and three persistence landscapes associated with it.

Figure 5: Persistence diagram (Figure 5a) and three persistence images for \((\sigma,n)=(0.1,5),(0.1,10),(0.05,25)\), respectively, in Figures 5b, 5c, 5d.

#### 3.4.3 Persistence silhouette

The Persistence Silhouette (PS) is a method of vector representation of PDs with the same core idea as PLs. More specifically, it is the piecewise linear function defined as

\[\phi(t):=\frac{\sum_{j=1}^{m}w_{j}\Lambda_{j}(t)}{\sum_{j=1}^{m}w_{j}},\]

where \(m\) is the number of off-diagonal points, \(w_{j}\) is a weight and the \(\Lambda_{j}\) are the same tent functions as in Section 3.4.2. Again, the vector representation of the PD comes from the discretization of \(\phi\). In our pipeline, we use the constant weight \(w_{j}=1\) for every \(j=1,\ldots,m\). The only parameter of the method is the resolution \(r\), and the grid search takes values \(r\in\{25,50,75,100\}\). Figure 7 shows a PD and three persistence silhouettes; we point out the similarity of the vectors for different parameter values. For more information on the Persistence Silhouette, we refer the reader to [15].

Figure 6: Persistence diagram (Figure 6a) and three persistence landscapes for \((n,r)=(1,25),(3,50),(5,100)\), respectively, in Figures 6b, 6c, 6d.

Figure 7: Persistence diagram (Figure 7a) and three persistence silhouettes for \(r=25,50,100\), respectively, in Figures 7b, 7c, 7d.

#### 3.4.4 Betti curve

The Betti curve (BC) is yet another vectorization method for persistence diagrams, presented in [42]. Betti curves are a \(\mathbb{Z}\)-indexed family of functions \(\beta_{z}\colon\mathbb{R}\to\mathbb{R}\) defined as \(\beta_{z}(t):=\#\left\{q=(b,d)\in D:\ell(q)=z\text{ and }b\leq t\leq d\right\}\), where \(\ell(q)=z\) means that \(q\) is a topological event in the homology group \(H_{z}\). The function is then vectorized over a uniform grid of a closed interval with resolution \(r\). For those familiar with persistence barcodes, this method informally counts the number of bars present in the persistence barcode at any given time, after an appropriate persistence normalization. The resolution \(r\) in our pipeline takes values in the grid \(r\in\{25,50,75,100\}\). Figure 8 shows a PD and three BCs for different resolutions of the grid.

In a persistence diagram, points of all the different homology dimensions are present. However, some of these dimensions may be more or less suitable for the given study.
For this reason, our pipeline follows three approaches. The first approach is to consider each homology group individually. We highlight the fact that it may happen that the \(H_{i}\) of some data are empty, for \(i\neq 0\), while others are non-empty. In such cases, we replace the empty \(H_{i}\) with the single point \((0,0)\). In the following sections, this approach will be referred to as the \(H_{i}\) approach, for \(i\) varying over the homology groups available. The second approach is to forget about the homology dimension of the points, referred to as the 'fused' approach, whereas the third is to concatenate the vector representations of the PD for each homology dimension, referred to as the 'concat' approach. We want to emphasize here that there is no homogeneity in the number of parameters used in the grid search of the various vectorization methods. In particular, persistence images have a total of nine parameter combinations, while the others only have four. This is done, on the one hand, to maintain a comparison with other works and, on the other hand, because we are not interested in finding the best vector representation, but rather in finding links between data and vectorization methods. Moreover, some methods are more flexible than others, and therefore more parameter-dependent; hence, in these cases, it is more suitable to test a larger number of parameters. Finally, we stress that the purpose of our study is to verify the usefulness of a topological study of the data and to investigate the preferred vector representation in certain contexts. The stability of the representation is of fundamental importance, as it guarantees a stable synergy between TDA and ML.

Figure 8: Persistence diagram (Figure 8a) and three Betti curves for \(r=25,50,100\), respectively, in Figures 8b, 8c, 8d.

### Machine Learning classifiers

At this point, we are finally able to feed the vectorizations of the PDs into machine learning classifiers. We want to emphasize that the pipeline does not perform parameter tuning either for the representation methods or for the classifiers. We would also point out that the vectors generated in the previous section may have very different sizes, and necessarily different computational costs, but this aspect is not taken into account in our experiments. In the pipeline, a total of nine classifiers are trained for each representation. Such classifiers are: Support Vector Classifiers (SVC) with RBF kernel and \(C\in\{1,2,3,5,10,20\}\); a random forest classifier (#trees = 100); and Lasso (\(\alpha=1\)). These classifiers are very standard in the machine learning literature, and we have reported the parameters used for each of them. For more details, we refer the reader to [21, 8, 28], and [40]. Each of these classifiers is taken from the Scikit-learn library. For more information about Scikit-learn and the Python implementation of these methods, we refer the reader to [36]. We want to emphasize that such a large number of SVCs is used purely for comparison with other works. Each of the classifiers performs a 10-fold cross validation [3] on the data, and we report the best result obtained for each run, along with the representation method that achieved it. We stress the fact that for each run we report the best accuracy result achieved, regardless of the vectorization method or classifier that achieved it. For completeness, we also provide a table with the (mean) accuracy result of the best single method at the end of each experiment.
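To make the interaction between vectorization and classification concrete, the following condensed sketch (assuming Gudhi's `representations` module and scikit-learn; the diagrams and labels are hypothetical stand-ins for those produced by the filtrations of Section 3.2) vectorizes a list of PDs with a persistence landscape and evaluates one of the classifiers above by 10-fold cross validation:

```python
import numpy as np
from gudhi.representations import Landscape
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Hypothetical data: one persistence diagram (an (n, 2) array of
# birth/death pairs) per sample, plus a binary class label.
rng = np.random.default_rng(0)
# The matrix product maps each row (a, b) to (a, a + b), so death >= birth.
diagrams = [rng.random((20, 2)) @ np.array([[1.0, 1.0], [0.0, 1.0]]) for _ in range(40)]
labels = np.array([0] * 20 + [1] * 20)

# Vectorization: 5 landscapes sampled at resolution 100, as in our grid.
vectors = Landscape(num_landscapes=5, resolution=100).fit_transform(diagrams)

# One of the classifiers of the pipeline, with 10-fold cross validation.
scores = cross_val_score(SVC(kernel="rbf", C=10), vectors, labels, cv=10)
print(scores.mean())
```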
Finally, a statistical analysis of the results obtained during the experiments was carried out, and is presented in Section 5.

### Further improvements

The pipeline described up to this point is the basic version of our procedure. It is worth pointing out that this pipeline may have high requirements in terms of time and memory: for example, our pipeline needs a few seconds to classify a sample dynamic trajectory, while it could take about an hour to classify a complex sample of a collaboration network. This is a basic version because we are omitting some possible improvements in the two main components of the pipeline, TDA and ML. For example, multiple filtrations could be available for the same type of data, or more advanced ML methods such as _ensemble learning_ could be used. Since our goal is to take a first step in the study of the best PD representations for different types of digital data, these improvements are initially omitted from the pipeline. If, however, the results obtained are not satisfactory in terms of accuracy and/or stability, suitable improvements will be made. In particular, we stress the fact that the stability of the representation is our most important concern, and high accuracy results without stability are not satisfactory. Equally, however, poor but stable results will be unsatisfactory. These adjustments to the basic pipeline will be discussed in detail when introduced, as they are strongly linked to the type of dataset considered.

## 4 Results

In this section, we apply the pipeline presented in Section 3 to heterogeneous types of datasets. The goal of this section is to discuss the usefulness of a topological study of the data. This is accomplished by showcasing the excellent accuracy results achieved by the pipeline. All results reported here refer to test accuracy. For each dataset, we perform a 10-split cross-validation with 80% training data and 20% test data, and report the results of the pipeline over the course of the ten runs, along with the mean accuracy and standard deviation. Finally, since for never-seen data we would like to know how to perform classification, we also report the accuracy value of the single best method for each dataset (i.e. the best combination {vectorization, classifier}).

### Dynamic dataset

The first dataset we describe comes from data arising from a discrete dynamical system modeling fluid flow. The dynamical system presented here is a linked twisted map, that is, a Poincaré section of an eggbeater-type flow. A Poincaré section is the discretization of a continuous dynamical system obtained by following the path of a particle's location at discrete time intervals. The equations of the linked twisted map are:

\[\begin{cases}x_{n+1}=x_{n}+ry_{n}(1-y_{n})&\mod 1\\ y_{n+1}=y_{n}+rx_{n+1}(1-x_{n+1})&\mod 1\end{cases}\]

where \(r\) is a positive parameter. For different values of the parameter \(r\), different orbits \(\{(x_{n},y_{n}),n\in\mathbb{N}\}\) are generated. In some cases, such orbits are dense in the domain \([0,1]^{2}\); in other cases, voids occur. The task of this application is to classify the value of the parameter \(r\) based on the orbit. In Figure 9, five different orbits generated by the same starting point for five different values of the parameter \(r\) are shown. It is clear how the value of \(r\) strongly influences the orbit, and in particular the formation of voids.
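A minimal sketch of how such orbits can be generated from the recurrence above, and of how the alpha filtration of Section 3.2.1 then produces a persistence diagram (the parameter value and starting point are arbitrary illustrative choices; Gudhi is assumed):

```python
import numpy as np
import gudhi

def linked_twisted_map_orbit(x0, y0, r, n_points=1000):
    """Iterate the linked twisted map from (x0, y0) and return the orbit."""
    orbit = np.empty((n_points, 2))
    x, y = x0, y0
    for i in range(n_points):
        x = (x + r * y * (1 - y)) % 1.0  # x_{n+1}
        y = (y + r * x * (1 - x)) % 1.0  # y_{n+1} uses the updated x_{n+1}
        orbit[i] = (x, y)
    return orbit

# One orbit, e.g. for r = 4.3, from a random starting point.
orbit = linked_twisted_map_orbit(np.random.rand(), np.random.rand(), r=4.3)

# The orbit is a point cloud: the alpha filtration yields its PD.
st = gudhi.AlphaComplex(points=orbit).create_simplex_tree()
diagram = st.persistence()  # list of (dimension, (birth, death)) pairs
```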
In Figure 10, five orbits for the same parameter \(r\) with different starting points are shown, clarifying that the shape of the orbit depends only on the value of the parameter \(r\) and not on the starting point. This dataset is inspired by [1] and [17] and is composed of five different values of the parameter, \(r=[2,3.5,4,4.1,4.3]\), each with 50 orbits generated from different random starting points, for a total of 250 orbits. Each orbit consists of 1000 points generated by 1000 iterations of the linked twisted map. We split the dataset into an 80% train set and a 20% test set. The dataset is naturally a point cloud, therefore the alpha filtration is best suited to generate the PDs. We recall that the pipeline performs a grid search over the most classical representation techniques for persistence diagrams, such as persistence images or persistence landscapes, for different values of the parameters. Such methods are evaluated with a ten-split cross-validation and the best results are returned. We report in Table 1 the accuracy results obtained by the pipeline for each of the ten runs, alongside the abbreviation of the best representation method (PI for persistence image, and so on). It is worth noticing that the results show a good consistency of the best representation method. As already stated, consistency is of great importance to us, as it demonstrates the feasibility of a single method for classification. The last row reports the mean \(\pm\) standard deviation of the pipeline over the course of the ten runs. Finally, Table 2 reports the best mean accuracy of the best single method, that is, the best combination of vectorization method and classifier. Of course, the accuracy of a single method is lower than the mean in Table 1, since in the latter case the best accuracy could be achieved by different methods in the various runs. In conclusion, the results obtained by the pipeline on the dynamic dataset are very satisfactory, both from the point of view of accuracy and of consistency of the representations.

### MNIST

The MNIST dataset is a large dataset of handwritten digits. For more information about the MNIST dataset, we refer the reader to [22]; it basically consists of \(28\times 28\) pixel greyscale images representing digits, and the task is to classify each image to the corresponding digit.

Figure 9: Example of truncated orbits \(\{(x_{n},y_{n}),n=0,\ldots,1000\}\) for the first \(1000\) iterations of the linked twisted map for \(r=[2,3.5,4,4.1,4.3]\), respectively.

Figure 10: Example of truncated orbits \(\{(x_{n},y_{n}),n=0,\ldots,1000\}\) for the first \(1000\) iterations of the linked twisted map for \(r=4.3\) and five different starting points.

Figure 11 shows sample images from the MNIST dataset. To limit the computational cost of this application, only a subsample composed of 5000 training images and 1250 test images is used, following the same train-test ratio as in Section 4.1. In our first, somewhat naive, approach, we directly apply the pipeline to the PDs generated by a _greyscale filtration_ of a normalized and negated image. The negation of the image is needed in order to focus on the digit instead of the background. For example, in Figure 11d the digit "8" is topologically one connected component and two 1-cycles. In the negative image, this is exactly what is computed by the pipeline. In the raw image, instead, three connected components (the three yellow parts) and only one 1-cycle are detected.
This behaviour is due to the fact that the greyscale filtration is a _sublevel filtration_, so the interesting parts of the image should have low intensity. In Table 3 we report the results achieved, and it is clear that they are not at all satisfactory. This behaviour is not entirely unexpected, since the homology of handwritten digits is almost always trivial, with few exceptions. As discussed in Section 3.6, we now introduce some improvements to the basic pipeline in order to achieve better results. The greyscale filtration cannot necessarily capture the difference between various digits, as their homology is similar. A switch of filtration is therefore necessary. Following [26], we introduce the _height filtration_, the _radial filtration_ and the _density filtration_. All these filtrations capture topological features different from those of the greyscale filtration, which simply captures the global features of the image. We want to stress that the height, radial and density filtrations require a binarization of the image in order to be applied. The binarization of the image must be handled carefully, for mainly two reasons. The first reason is the inevitable loss of topological features during this process. The second reason is the (arbitrary) choice of the threshold, which may not be obvious a priori. Nonetheless, the MNIST dataset seems particularly suited for binarization, and a trivial threshold of 0.4 works just fine. We stress that, despite being referred to as filtrations, this is in fact an abuse of notation, as they are not actually filtrations in the sense of Section 3.2. The following filtrations are a mere manipulation of the binarized input image, and return an image of the same dimensions. There is no persistent homology component that associates a simplicial complex to the given image. In fact, the usual cubical persistence is applied to these filtered images as well. The key difference is that the filtered images have filtration values that emphasize certain topological components.

\begin{table}
\begin{tabular}{l c c c c}
\hline
Accuracy: & \(H_{0}\) & \(H_{1}\) & \(H_{0}+H_{1}\) (fused) & \(H_{0}+H_{1}\) (concat) \\ \hline
Run 1 & 0.493(PI) & 0.960(PL) & 0.507(PI) & 0.920(PL) \\
Run 2 & 0.480(PI) & 0.880(PL) & 0.600(PI) & 0.880(PL) \\
Run 3 & 0.507(PI) & 0.933(PL) & 0.667(PI) & 0.933(PL) \\
Run 4 & 0.480(PI) & 0.907(BC) & 0.533(PI) & 0.907(BC) \\
Run 5 & 0.453(PI) & 0.960(PL) & 0.573(PI) & 0.933(PL) \\
Run 6 & 0.533(PI) & 0.920(PL) & 0.560(PI) & 0.907(PL) \\
Run 7 & 0.547(PI) & 0.960(PL) & 0.560(PI) & 0.947(PL) \\
Run 8 & 0.520(PI) & 0.947(PL) & 0.613(PI) & 0.933(PL) \\
Run 9 & 0.560(PI) & 0.907(PL) & 0.533(PI) & 0.893(BC) \\
Run 10 & 0.520(PI) & 0.933(PL) & 0.507(PI) & 0.933(PL) \\ \hline
Mean: & \(0.509\pm 0.031\) & \(0.931\pm 0.026\) & \(0.565\pm 0.048\) & \(0.919\pm 0.020\) \\ \hline
\end{tabular}
\end{table} Table 1: Accuracy for the dynamical system dataset.

\begin{table}
\begin{tabular}{l c c c}
\hline
Homology & Accuracy & Vectorization & Classifier \\ \hline
\(H_{0}\) & 0.489 & Persistence Images & RandomForestClassifier \\
\(H_{1}\) & 0.921 & Persistence Landscapes & SVC(kernel='rbf', C=10) \\
\(H_{0}+H_{1}\) (fused) & 0.553 & Persistence Images & RandomForestClassifier \\
\(H_{0}+H_{1}\) (concat) & 0.905 & Persistence Landscapes & RandomForestClassifier \\ \hline
\end{tabular}
\end{table} Table 2: Best method for the dynamical system dataset.

Figure 11: Sample images from the MNIST dataset. It can be seen at a glance that the homology of different digits is almost always trivial or close to trivial.
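The cubical persistence just mentioned can be computed directly on an image grid. A minimal sketch (assuming Gudhi's `CubicalComplex`; the toy image below stands in for a normalized, negated MNIST digit):

```python
import numpy as np
import gudhi

# A toy 28x28 image: after normalization and negation, the "digit" has
# low intensity (0) and the background high intensity (1), so the
# sublevel (greyscale) filtration picks up the digit first.
img = np.ones((28, 28))
img[6:22, 6:22] = 0.0    # a square "digit"
img[11:17, 11:17] = 1.0  # a hole inside it, creating a 1-cycle

cc = gudhi.CubicalComplex(top_dimensional_cells=img)
diagram = cc.persistence()  # list of (dimension, (birth, death)) pairs
print(diagram)  # one essential connected component and one 1-cycle
```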
\begin{table}
\begin{tabular}{c c c c c}
\hline
Accuracy: & \(H_{0}\) & \(H_{1}\) & \(H_{0}+H_{1}\) (fused) & \(H_{0}+H_{1}\) (concat) \\ \hline
Run 1 & 0.200(PI) & 0.305(PL) & 0.355(PI) & 0.325(PL) \\
Run 2 & 0.177(PI) & 0.318(PL) & 0.346(PI) & 0.319(PL) \\
Run 3 & 0.185(PI) & 0.322(PL) & 0.349(PI) & 0.327(PS) \\
Run 4 & 0.174(PI) & 0.318(PL) & 0.354(PI) & 0.333(PL) \\
Run 5 & 0.190(PS) & 0.321(PL) & 0.353(PI) & 0.329(PL) \\
Run 6 & 0.182(PI) & 0.326(PL) & 0.356(PI) & 0.338(PL) \\
Run 7 & 0.186(PI) & 0.315(PL) & 0.342(PI) & 0.336(PL) \\
Run 8 & 0.196(PS) & 0.330(PL) & 0.364(PI) & 0.353(PL) \\
Run 9 & 0.181(PI) & 0.305(PL) & 0.355(PI) & 0.318(BC) \\
Run 10 & 0.182(PI) & 0.318(PL) & 0.355(PI) & 0.325(PL) \\ \hline
Mean: & \(0.185\pm 0.008\) & \(0.318\pm 0.008\) & \(0.353\pm 0.006\) & \(0.330\pm 0.010\) \\ \hline
\end{tabular}
\end{table} Table 3: Accuracy for MNIST dataset.

#### 4.2.1 Height filtration

The height filtration detects the emergence of topological features by looking at images only along certain directions. The birth and death values of the features are therefore linked to their position in the image, and not only to the intensity of the pixels. This is a great improvement in the case of digits: for example, along the direction \((0,1)\) the 1-cycle of the digit "6" will have a low birth value, while in the case of the digit "9" the 1-cycle will have a higher birth value, and thus a different PD. With the greyscale filtration, both digits would result in one connected component and one 1-cycle, and their persistence would only be determined by the thickness of the loop. For technical reasons, linked to how the height filtration algorithm handles the images, it is not necessary to use the negative of the images. We now describe the height filtration, presented in [41, 26], in more detail. Let \(\mathcal{B}\colon I\subseteq\mathbb{Z}^{2}\to\{0,1\}\) be a binary image and \(v\in\mathbb{R}^{2}\) with \(\|v\|_{2}=1\). We denote by \(\langle\cdot,\cdot\rangle\) the Euclidean inner product. The height filtration \(\mathcal{H}\) of \(\mathcal{B}\) in direction \(v\) is defined by

\[\mathcal{H}(p):=\begin{cases}\langle v,p\rangle&\text{if }\mathcal{B}(p)=1\\ H_{\infty}&\text{if }\mathcal{B}(p)=0\end{cases}\]

where \(p\in I\) and \(H_{\infty}\) is the filtration value of the pixel farthest from \(v\). We define eight different directions for the height filtration, namely the vectors \((0,1)\), \((0,-1)\), \((1,0)\), \((-1,0)\), \((1,1)\), \((1,-1)\), \((-1,1)\), \((-1,-1)\). In Figure 12 we show the different directions of filtration, alongside four filtered images generated by the height filtration for different direction vectors. We highlight the fact that the digit "8" in Figure 11d and the one in Figure 12a are one the negative of the other, for the reasons already mentioned.
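A sketch of the height filtration as a plain NumPy transform of a binary image (interpreting \(H_{\infty}\) as the largest attainable inner product is our reading of the definition above; the input image is hypothetical):

```python
import numpy as np

def height_filtration(binary_img, v):
    """Assign <v, p> to foreground pixels and H_inf to background pixels."""
    v = np.asarray(v, dtype=float)
    v /= np.linalg.norm(v)  # the direction must be a unit vector
    # Coordinates p = (row, col) of every pixel of the image.
    coords = np.indices(binary_img.shape).reshape(2, -1).T
    heights = (coords @ v).reshape(binary_img.shape)
    h_inf = heights.max()  # value of the pixel farthest along v
    return np.where(binary_img == 1, heights, h_inf)

# Filter a (hypothetical) binarized image along the direction (0, 1);
# the result can be fed to the cubical persistence of the previous sketch.
img = (np.random.rand(28, 28) > 0.6).astype(int)
filtered = height_filtration(img, (0, 1))
```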
#### 4.2.2 Radial filtration

Similarly to the height filtration, the radial filtration detects the emergence of topological features as we move away from a certain center of the image. This filtration is better suited to detecting heterogeneity within the image, but is much more context-dependent than the height filtration. In particular, the size of the image plays a crucial role in the choice of the centers. The radial filtration was introduced in [32, 26]. Given a binary image \(\mathcal{B}\colon I\subseteq\mathbb{Z}^{2}\to\{0,1\}\) and a center \(c\in I\), the radial filtration of \(\mathcal{B}\) is defined as

\[\mathcal{R}(p):=\begin{cases}\|c-p\|_{2}&\text{if }\mathcal{B}(p)=1\\ R_{\infty}&\text{if }\mathcal{B}(p)=0\end{cases}\]

where \(p\in I\) and \(R_{\infty}\) is the filtration value of the pixel farthest from \(c\). We define nine centers for the radial filtration, namely the points \((13,6)\), \((6,13)\), \((13,13)\), \((20,13)\), \((13,20)\), \((6,6)\), \((6,20)\), \((20,6)\), \((20,20)\). In Figure 13 we show the different centers of filtration, alongside four filtered images generated by the radial filtration for different centers.

Figure 12: The eight directions used for the height filtration (Figure 12a) and the resulting filtered images along four directions (Figures 12b, 12c, 12d, 12e).

#### 4.2.3 Density filtration

Finally, the density filtration [26] measures the number of lit pixels in the neighbourhood of a given pixel, and is better suited to detecting clusters of lit pixels. Given a binary image \(\mathcal{B}\colon I\subseteq\mathbb{Z}^{2}\to\{0,1\}\) and a radius \(r\in\mathbb{R}\), the density filtration is defined as

\[\mathcal{D}(p):=\#\left\{v\in I:\mathcal{B}(v)=1\text{ and }\|v-p\|\leq r\right\}\]

where \(p\in I\) and \(\|\cdot\|\) is any norm. In our pipeline, we chose \(r=6\) and the Euclidean norm. In Figure 14 we show an image from the MNIST dataset and the image filtered by the density filtration with \(r=6\) and the Euclidean norm.

#### 4.2.4 Improved pipeline

We have defined eight directions for the height filtration, nine centers for the radial filtration and one radius for the density filtration. The result is that from a single image we now obtain 18 different PDs for each homology dimension, each corresponding to a different filtration/parameter combination. Similarly to how we handled the different homology dimensions, we follow two approaches. The first approach is to simply collapse all the PDs into one and then apply the pipeline from the PD representation onwards. We refer to this approach as the 'collapse' approach. The second idea is to first represent each one of the PDs with a vectorization method, concatenate the 18 resulting vectors and use this 'multivector' to classify the images. We refer to this approach as the 'multivector' approach. We stress that the multivector approach and the collapse approach each result in a single vector, but the dimensions of these vectors are very different. More precisely, the vector of the multivector approach has 18 times the dimension of that of the collapse approach. There are of course major consequences for the computational cost of the two approaches, but as this is not the purpose of our study, we will overlook these aspects. In Table 4 we report the results of the collapse approach, and in Table 5 the results of the multivector approach. Both approaches are very satisfactory and represent a great improvement over the greyscale filtration. We can however notice that the representation methods are not as consistent as we would like. Table 6 reports the best single method for the MNIST dataset, and the accuracy results are very high.

Figure 13: The nine centers used for the radial filtration (Figure 13a) and the resulting filtered images with respect to four different centers (Figures 13b, 13c, 13d, 13e).
This means that the non-consistency of the representation is only due to the high accuracy of every representation method, and not to variability in the correct vectorization, thereby making the non-consistency less impactful.

\begin{table}
\begin{tabular}{c c c c c}
\hline
Accuracy: & \(H_{0}\) & \(H_{1}\) & \(H_{0}+H_{1}\) (fused) & \(H_{0}+H_{1}\) (concat) \\ \hline
Run 1 & 0.733(PI) & 0.629(PL) & 0.732(PI) & 0.792(PI) \\
Run 2 & 0.742(PI) & 0.632(PI) & 0.742(PI) & 0.796(PI) \\
Run 3 & 0.732(PI) & 0.612(PI) & 0.762(PI) & 0.787(PI) \\
Run 4 & 0.739(PI) & 0.639(PL) & 0.753(PI) & 0.806(PI) \\
Run 5 & 0.739(PI) & 0.622(PI) & 0.741(PI) & 0.806(PI) \\
Run 6 & 0.733(PL) & 0.620(PL) & 0.738(PI) & 0.796(PI) \\
Run 7 & 0.722(PI) & 0.649(PL) & 0.752(PI) & 0.801(PL) \\
Run 8 & 0.716(PI) & 0.635(PL) & 0.738(PI) & 0.779(PI) \\
Run 9 & 0.726(PL) & 0.634(PL) & 0.770(PI) & 0.801(PI) \\
Run 10 & 0.736(PI) & 0.626(PI) & 0.743(PI) & 0.794(PL) \\ \hline
Mean: & 0.732 \(\pm\) 0.008 & 0.630 \(\pm\) 0.010 & 0.747 \(\pm\) 0.011 & 0.796 \(\pm\) 0.008 \\ \hline
\end{tabular}
\end{table} Table 4: Accuracy for MNIST dataset of the collapse approach.

Figure 14: The original digit "8" (Figure 14a) and the resulting filtered image with respect to the density filtration with radius \(r=6\) (Figure 14b).

#### 4.2.5 Comparison with other TDA approaches

Finally, a comparison is made with two papers that also use TDA on the MNIST dataset. Our improved approach is partially inspired by the work of [26], in which the authors perform 26 filtrations of the same image. They compute 14 metrics for each of these filtrations, and the resulting vector of topological features for each image has dimension 728 (26 filtrations \(\times\) 14 metrics \(\times\) 2 homology dimensions). After a feature selection using the Pearson correlation index, only 84 not fully correlated features remain. The resulting vector is then passed to a random forest classifier. For the sake of comparison, since in our work we train only on a 5000-image subsample of the MNIST dataset with multiple classifiers, we repeat their experiment in our context. For this reason, the results reported in this section differ slightly from those obtained in the original work. In [4], the authors perform the height filtration along the horizontal and vertical axes, for each homology dimension, for a total of 8 PDs. They use some vectorization methods, as well as kernel methods for PDs and an adaptive template system, and then classify with the same classifiers as in our work (with the exclusion of lasso). Again, for the sake of comparison, we repeat the experiment in our context, only without the kernel method, for computational and comparison reasons. The results of both [26] and [4] are presented in Table 7. Regarding the results obtained by [4], with (TF) we indicate that the best method was tent functions.

\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
Homology & Accuracy & Vectorization & Classifier & Approach \\ \hline
\(H_{0}\) & 0.911 & PI & SVC(kernel='rbf', C=10) & Multivector \\
\(H_{1}\) & 0.624 & PL & SVC(kernel='rbf', C=20) & Collapse \\
\(H_{0}+H_{1}\) (fused) & 0.938 & PI & SVC(kernel='rbf', C=20) & Multivector \\
\(H_{0}+H_{1}\) (concat) & 0.936 & PI & SVC(kernel='rbf', C=20) & Multivector \\ \hline \hline
\end{tabular}
\end{table} Table 6: Best method for MNIST dataset.
\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
Accuracy: & \(H_{0}\) & \(H_{1}\) & \(H_{0}+H_{1}\) (fused) & \(H_{0}+H_{1}\) (concat) \\ \hline
Run 1 & 0.911(PL) & 0.614(PL) & 0.944(PI) & 0.937(PL) \\
Run 2 & 0.922(PL) & 0.620(PL) & 0.944(BC) & 0.949(PL) \\
Run 3 & 0.916(PL) & 0.610(PS) & 0.943(BC) & 0.945(PL) \\
Run 4 & 0.901(PL) & 0.619(PL) & 0.942(BC) & 0.929(PS) \\
Run 5 & 0.911(PL) & 0.601(PL) & 0.937(BC) & 0.942(PL) \\
Run 6 & 0.919(PL) & 0.616(PL) & 0.947(BC) & 0.943(PL) \\
Run 7 & 0.916(PL) & 0.630(PL) & 0.937(PI) & 0.939(PL) \\
Run 8 & 0.911(PL) & 0.615(PL) & 0.934(BC) & 0.935(PS) \\
Run 9 & 0.918(PL) & 0.617(PL) & 0.946(PL) & 0.944(PL) \\
Run 10 & 0.924(PL) & 0.625(PL) & 0.944(BC) & 0.934(PL) \\ \hline
Mean: & \(0.915\pm 0.006\) & \(0.617\pm 0.007\) & \(0.942\pm 0.004\) & \(0.940\pm 0.006\) \\ \hline \hline
\end{tabular}
\end{table} Table 5: Accuracy for MNIST dataset of the multivector approach.

The results obtained from our improved pipeline are in line with, if not slightly better than, those obtained in other works. It should be noted that the pipeline described in [26] does not use representation methods but only metrics on persistence diagrams.

### FMNIST

Fashion MNIST (FMNIST) is a dataset of Zalando's article images, consisting of \(28\times 28\) pixel greyscale images divided into ten classes, exactly like MNIST. FMNIST is intended to be a direct drop-in replacement for the original MNIST. This dataset is particularly appropriate for our study, since we know that the topology of handwritten digits is almost always trivial, while this is not the case for fashion objects. For more information on the FMNIST dataset, we refer the reader to [45]. Figure 15 shows sample images from the FMNIST dataset. Since the context is essentially identical to Section 4.2, we follow exactly the same approach. Table 8 gives the results of the pipeline on the FMNIST dataset. Again, the results are not at all adequate, although there is consistency in the vectorization method. Nevertheless, we can already observe a clear increase compared to the respective MNIST results (Table 3). This is in accordance with the fact that the greyscale filtration is more suitable when the homology of the image is non-trivial, as in the case of fashion images. Following the approach used in the previous section, we introduce the exact same improvements to the pipeline, as the context is precisely the same. In Figure 16 we show some filtered images from the FMNIST dataset. Table 9 and Table 10 report the results of the collapse approach and the multivector approach, respectively, for the FMNIST dataset. Both these approaches are a great improvement over the original pipeline, meaning that also for more complex images a diversification of the filtration may be well suited. The dataset is clearly more convoluted than the MNIST dataset, and this may explain the significant drop in results compared to the previous section. The consistency of the representation is however very remarkable.

\begin{table}
\begin{tabular}{c c c}
\hline
Accuracy: & [26] pipeline & [4] pipeline \\ \hline
Run 1 & 0.945 & 0.916(TF) \\
Run 2 & 0.929 & 0.924(TF) \\
Run 3 & 0.934 & 0.926(TF) \\
Run 4 & 0.946 & 0.923(PI) \\
Run 5 & 0.945 & 0.931(TF) \\
Run 6 & 0.934 & 0.925(TF) \\
Run 7 & 0.956 & 0.926(TF) \\
Run 8 & 0.943 & 0.926(PI) \\
Run 9 & 0.948 & 0.927(TF) \\
Run 10 & 0.933 & 0.926(TF) \\ \hline
Mean: & \(0.941\pm 0.008\) & \(0.925\pm 0.004\) \\ \hline
\end{tabular}
\end{table} Table 7: Accuracy for MNIST dataset of [26] pipeline and [4] pipeline.
Following Section 4.2, we compare our results with those obtained by the [26] and [4] pipelines. We highlight the fact that in both papers the FMNIST dataset was not treated, but given the similarity of the two datasets we simply reused their code for MNIST. The results of both pipelines are reported in Table 12. The accuracy results for this application are quite surprising. In particular, our pipeline does not perform very well, despite the improvement from the previous section. The pipeline from [26] achieves slightly better results, while the [4] pipeline performs best of all. Possible reasons may be a different choice of parameters, but also a simpler filtration which, in this particular setting, is better suited to the dataset.

\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
Accuracy: & \(H_{0}\) & \(H_{1}\) & \(H_{0}+H_{1}\) (fused) & \(H_{0}+H_{1}\) (concat) \\ \hline
Run 1 & 0.678(PL) & 0.431(PL) & 0.750(PL) & 0.717(PL) \\
Run 2 & 0.679(PL) & 0.420(PS) & 0.702(PL) & 0.682(PL) \\
Run 3 & 0.704(PL) & 0.448(PL) & 0.718(PL) & 0.715(PL) \\
Run 4 & 0.690(PL) & 0.408(PL) & 0.721(PL) & 0.706(PL) \\
Run 5 & 0.678(PL) & 0.418(PL) & 0.714(PL) & 0.707(PL) \\
Run 6 & 0.670(PL) & 0.397(PL) & 0.707(PL) & 0.678(PL) \\
Run 7 & 0.686(PL) & 0.412(PL) & 0.705(PL) & 0.688(PL) \\
Run 8 & 0.698(PL) & 0.425(PL) & 0.721(PL) & 0.712(PL) \\
Run 9 & 0.690(PL) & 0.438(PL) & 0.716(PL) & 0.707(PL) \\
Run 10 & 0.682(PL) & 0.414(PL) & 0.709(PL) & 0.696(PL) \\ \hline
Mean: & \(0.686\pm 0.010\) & \(0.421\pm 0.014\) & \(0.716\pm 0.013\) & \(0.701\pm 0.013\) \\ \hline \hline
\end{tabular}
\end{table} Table 10: Accuracy for FMNIST dataset of the multivector approach.

Figure 16: The eight directions used for the height filtration (Figure 16a) and the resulting filtered image (Figure 16b). The nine centers used for the radial filtration (Figure 16c) and the resulting filtered image (Figure 16d). Density-filtered image (Figure 16e).

\begin{table}
\begin{tabular}{c c c c c}
\hline
Accuracy: & \(H_{0}\) & \(H_{1}\) & \(H_{0}+H_{1}\) (fused) & \(H_{0}+H_{1}\) (concat) \\ \hline
Run 1 & 0.642(PL) & 0.430(PI) & 0.632(PL) & 0.662(PL) \\
Run 2 & 0.627(PL) & 0.410(PI) & 0.611(PL) & 0.640(PL) \\
Run 3 & 0.636(PL) & 0.442(PI) & 0.619(PL) & 0.672(PL) \\
Run 4 & 0.630(PL) & 0.391(PL) & 0.613(PL) & 0.654(PL) \\
Run 5 & 0.634(PL) & 0.418(PI) & 0.621(PL) & 0.661(PL) \\
Run 6 & 0.620(PL) & 0.410(PI) & 0.612(PL) & 0.642(PL) \\
Run 7 & 0.637(PL) & 0.421(PI) & 0.611(PL) & 0.662(PL) \\
Run 8 & 0.640(PL) & 0.434(PI) & 0.637(PL) & 0.662(PL) \\
Run 9 & 0.646(PL) & 0.426(PL) & 0.628(PI) & 0.656(PL) \\
Run 10 & 0.627(PL) & 0.413(PL) & 0.610(PL) & 0.646(PL) \\ \hline
Mean: & \(0.634\pm 0.008\) & \(0.419\pm 0.014\) & \(0.619\pm 0.009\) & \(0.656\pm 0.010\) \\ \hline
\end{tabular}
\end{table} Table 9: Accuracy for FMNIST dataset of the collapse approach.

### COLLAB

The COLLAB dataset is a dataset of scientific collaboration networks coming from [46]. It consists of 5000 graphs derived from three public collaboration datasets, which also serve as labels: _high energy physics_, _condensed matter physics_ and _astrophysics_. Each node of a graph is an author, and there is a link between two authors if they coauthor a scientific article. COLLAB is a dataset of weighted, undirected graphs. Every collaboration between \(n\) authors contributes a factor of \(1/(n-1)\) to the edge weight between those authors. The vertices are not weighted; this means that all vertices immediately enter the filtration as 0-simplexes.
The filtration value of the 1-simplexes is the weight of the edge connecting the two vertices, and for the 2-simplexes we chose as filtration value the maximum weight of the edges forming them. For computational reasons, the maximum homology dimension computed is \(H_{2}\). In Figure 17 we show a graph of COLLAB and the associated PD. For aesthetic reasons, we have included only a small portion of the 2-simplexes, and the edge weights are not displayed. The graphs of this dataset are extremely connected, and the computational cost of computing their PDs is very high. In Table 13 we report the results achieved by the pipeline on the COLLAB dataset, and in Table 14 the best combination of representation and classifier. These results are quite satisfactory and in line with other topology-based methods, such as PersLay [14], which achieves an accuracy of 76.4% (mean accuracy over ten runs of a 10-fold classification evaluation).

\begin{table}
\begin{tabular}{l c c c c}
\hline
Homology & Accuracy & Vectorization & Classifier & Approach \\ \hline
\(H_{0}\) & 0.681 & PL & RFC & Multivector \\
\(H_{1}\) & 0.417 & PI & RFC & Collapse \\
\(H_{0}+H_{1}\) (fused) & 0.716 & PL & RFC & Multivector \\
\(H_{0}+H_{1}\) (concat) & 0.701 & PL & RFC & Multivector \\ \hline
\end{tabular}
\end{table} Table 11: Best method for FMNIST dataset.

\begin{table}
\begin{tabular}{c c c}
\hline
Accuracy: & [26] pipeline & [4] pipeline \\ \hline
Run 1 & 0.753 & 0.810(PI) \\
Run 2 & 0.739 & 0.795(PI) \\
Run 3 & 0.750 & 0.818(TF) \\
Run 4 & 0.757 & 0.793(PI) \\
Run 5 & 0.769 & 0.795(PI) \\
Run 6 & 0.738 & 0.792(PI) \\
Run 7 & 0.750 & 0.802(PI) \\
Run 8 & 0.748 & 0.813(PI) \\
Run 9 & 0.752 & 0.803(PI) \\
Run 10 & 0.736 & 0.815(PI) \\ \hline
Mean: & \(0.749\pm 0.009\) & \(0.804\pm 0.009\) \\ \hline
\end{tabular}
\end{table} Table 12: Accuracy for FMNIST dataset of [26] pipeline and [4] pipeline.

## 5 Discussion

The proposed pipeline proved to be a valuable classification tool in various contexts. Moreover, some patterns emerged in the course of the experiments. In particular, for point clouds and graphs, it seems that the maximum homology dimension alone is sufficient to obtain very appreciable results. For images, on the other hand, only \(H_{0}\) achieves good results.

\begin{table}
\begin{tabular}{c c c c c c}
\hline
Accuracy: & \(H_{0}\) & \(H_{1}\) & \(H_{2}\) & \(H_{0}+H_{1}+H_{2}\) (fused) & \(H_{0}+H_{1}+H_{2}\) (concat) \\ \hline
Run 1 & 0.602(PI) & 0.549(PS) & 0.731(PI) & 0.730(PI) & 0.730(PI) \\
Run 2 & 0.613(PI) & 0.543(PS) & 0.760(PI) & 0.759(PI) & 0.747(PI) \\
Run 3 & 0.613(PI) & 0.549(PS) & 0.741(PI) & 0.747(PI) & 0.739(PI) \\
Run 4 & 0.621(BC) & 0.542(PL) & 0.736(PI) & 0.749(PI) & 0.737(PI) \\
Run 5 & 0.628(BC) & 0.551(PS) & 0.746(PI) & 0.758(PI) & 0.752(PI) \\
Run 6 & 0.621(PI) & 0.557(PS) & 0.759(PI) & 0.763(PI) & 0.753(PI) \\
Run 7 & 0.609(PI) & 0.550(PS) & 0.736(PI) & 0.750(PI) & 0.734(PI) \\
Run 8 & 0.626(BC) & 0.557(PS) & 0.750(PI) & 0.751(PI) & 0.725(PI) \\
Run 9 & 0.615(PS) & 0.559(PS) & 0.745(PI) & 0.749(PI) & 0.739(PI) \\
Run 10 & 0.607(PS) & 0.567(PS) & 0.753(PI) & 0.748(PI) & 0.739(PI) \\ \hline
Mean: & \(0.616\pm 0.008\) & \(0.552\pm 0.007\) & \(0.746\pm 0.009\) & \(0.750\pm 0.009\) & \(0.739\pm 0.009\) \\ \hline
\end{tabular}
\end{table} Table 13: Accuracy for COLLAB dataset.

Figure 17: A graph of COLLAB (Figure 17a) and the corresponding PD (Figure 17b). For aesthetic reasons, only a small sample of 2-simplexes (green points) is shown and the edge weights are not displayed.
The 'fused' and 'concat' approaches are consistently among the best performers, with the exception of the dynamic dataset, where 'fused' fails. This discrepancy does not seem to be explained by heterogeneity in the number of points in the homology dimensions, which are comparable in all datasets with the exception of COLLAB. For these reasons, it would seem that, regardless of the type of data under consideration and the number of points in the different homology dimensions, the 'concat' approach is the safest method. Alternatively, if the data are not images and one wishes to reduce the computational cost, using only the maximum homology dimension seems to be a viable option. No correlation emerged between data type and vectorization method. In general, it seems that persistence images and persistence landscapes are always the safest options. In order to support these statements, an analysis of statistical significance has been performed (using the paired t-test) on a subset of the results presented in Section 4. In the following, each table shows the p-values associated with the mean accuracy results of the different vectorization methods for each homology dimension. For the sake of brevity, we describe in detail only Table 15 and Table 19. Table 15 has been computed as follows. In Section 4 we computed the mean accuracy of each classifier and each vectorization method previously described over the course of the ten runs of the cross-validation. This yields a matrix of 9 rows (one for each classifier) and 21 columns (one for each vectorization method and parameter combination). We selected the best row (in terms of mean accuracy) for each vectorization method and tested each vectorization method against each other using the t-test function from scipy. For more information on the t-test and scipy, we refer the reader to [30, 44]. In Section 4 we assessed that PI is the best vectorization method for \(H_{0}\) and \(H_{0}+H_{1}\) 'fused', while PL is the best method for \(H_{1}\) and \(H_{0}+H_{1}\) 'concat'. Table 15 ensures that these statements have statistical validity, because the p-values associated with these methods and homology dimensions are sufficiently small. In Table 19 we followed the same procedure, with the difference that we compared each homological dimension against each other for every dataset. In particular, the p-value of the 'concat' approach is consistently small, confirming that its usefulness is supported by statistical results.

\begin{table}
\begin{tabular}{l c l l}
\hline
Homology & Accuracy & Vectorization & Classifier \\ \hline
\(H_{0}\) & 0.613 & Betti Curves & RandomForestClassifier \\
\(H_{1}\) & 0.550 & Persistence Silhouette & RandomForestClassifier \\
\(H_{2}\) & 0.746 & Persistence Images & RandomForestClassifier \\
\(H_{0}+H_{1}+H_{2}\) (fused) & 0.749 & Persistence Images & RandomForestClassifier \\
\(H_{0}+H_{1}+H_{2}\) (concat) & 0.736 & Persistence Images & RandomForestClassifier \\ \hline
\end{tabular}
\end{table} Table 14: Best method for COLLAB dataset.
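A minimal sketch of the paired t-test computation just described (the two accuracy arrays are the \(H_{0}\) and 'concat' columns of Table 1, used here purely for illustration):

```python
from scipy import stats

# Per-run accuracies of two methods over the ten runs of Table 1.
acc_h0 = [0.493, 0.480, 0.507, 0.480, 0.453, 0.533, 0.547, 0.520, 0.560, 0.520]
acc_concat = [0.920, 0.880, 0.933, 0.907, 0.933, 0.907, 0.947, 0.933, 0.893, 0.933]

# Paired t-test: the runs are paired since both methods are evaluated
# on the same cross-validation splits.
t_stat, p_value = stats.ttest_rel(acc_h0, acc_concat)
print(p_value)  # a small p-value indicates a statistically significant gap
```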
\begin{table} \begin{tabular}{c|c c c c} \hline **p-value** & \(H_{0}\) & \(H_{1}\) & \(H_{0}+H1\) fused & \(H_{0}+H1\) concat \\ \hline **PI vs PL** & \(4.55\cdot 10^{-9}\) & \(1.02\cdot 10^{-8}\) & \(1.82\cdot 10^{-2}\) & \(3.39\cdot 10^{-7}\) \\ **PI vs PS** & \(8.20\cdot 10^{-7}\) & \(6.32\cdot 10^{-7}\) & \(1.58\cdot 10^{-2}\) & \(3.98\cdot 10^{-6}\) \\ **PI vs BC** & \(1.21\cdot 10^{-3}\) & \(8.97\cdot 10^{-9}\) & \(2.73\cdot 10^{-1}\) & \(4.18\cdot 10^{-5}\) \\ **PL vs PS** & \(1.06\cdot 10^{-4}\) & \(2.27\cdot 10^{-2}\) & \(6.23\cdot 10^{-2}\) & \(9.62\cdot 10^{-5}\) \\ **PL vs BC** & \(1.14\cdot 10^{-6}\) & \(3.62\cdot 10^{-2}\) & \(4.27\cdot 10^{-2}\) & \(6.76\cdot 10^{-5}\) \\ **PS vs BC** & \(7.92\cdot 10^{-5}\) & \(4.70\cdot 10^{-3}\) & \(4.91\cdot 10^{-3}\) & \(1.46\cdot 10^{-3}\) \\ \hline \end{tabular} \end{table} Table 16: p-value statistic for the MNIST dataset. \begin{table} \begin{tabular}{c|c c c c} \hline **p-value** & \(H_{0}\) & \(H_{1}\) & \(H_{0}+H1\) fused & \(H_{0}+H1\) concat \\ \hline **PI vs PL** & \(2.02\cdot 10^{-2}\) & \(6.40\cdot 10^{-5}\) & \(2.77\cdot 10^{-1}\) & \(9.63\cdot 10^{-3}\) \\ **PI vs PS** & \(1.34\cdot 10^{-1}\) & \(2.54\cdot 10^{-5}\) & \(1.01\cdot 10^{-3}\) & \(6.45\cdot 10^{-3}\) \\ **PI vs BC** & \(3.96\cdot 10^{-1}\) & \(1.96\cdot 10^{-3}\) & \(2.68\cdot 10^{-2}\) & \(8.23\cdot 10^{-1}\) \\ **PL vs PS** & \(6.91\cdot 10^{-3}\) & \(7.16\cdot 10^{-1}\) & \(9.79\cdot 10^{-1}\) & \(2.67\cdot 10^{-1}\) \\ **PL vs BC** & \(1.53\cdot 10^{-2}\) & \(2.05\cdot 10^{-6}\) & \(3.97\cdot 10^{-3}\) & \(1.66\cdot 10^{-3}\) \\ **PS vs BC** & \(6.25\cdot 10^{-1}\) & \(1.04\cdot 10^{-6}\) & \(8.16\cdot 10^{-3}\) & \(3.46\cdot 10^{-4}\) \\ \hline \end{tabular} \end{table} Table 17: p-value statistic for the FMNIST dataset. \begin{table} \begin{tabular}{c|c c c c} \hline **p-value** & \(H_{0}\) & \(H_{1}\) & \(H_{0}+H1\) fused & \(H_{0}+H1\) concat \\ \hline **PI vs PL** & \(2.03\cdot 10^{-9}\) & \(9.33\cdot 10^{-3}\) & \(1.27\cdot 10^{-3}\) & \(5.06\cdot 10^{-2}\) \\ **PI vs PS** & \(2.03\cdot 10^{-9}\) & \(3.44\cdot 10^{-1}\) & \(5.38\cdot 10^{-1}\) & \(7.97\cdot 10^{-1}\) \\ **PI vs BC** & \(2.03\cdot 10^{-9}\) & \(4.60\cdot 10^{-3}\) & \(1.27\cdot 10^{-3}\) & \(3.84\cdot 10^{-3}\) \\ **PL vs PS** & Null & \(1.28\cdot 10^{-3}\) & \(2.26\cdot 10^{-4}\) & \(1.34\cdot 10^{-4}\) \\ **PL vs BC** & Null & \(4.61\cdot 10^{-2}\) & Null & \(2.55\cdot 10^{-3}\) \\ **PS vs BC** & Null & \(7.90\cdot 10^{-3}\) & \(2.26\cdot 10^{-4}\) & \(1.37\cdot 10^{-4}\) \\ \hline \end{tabular} \end{table} Table 15: p-value statistic for the dynamical system dataset. ## 6 Conclusion Results of classification discussed in the previous sections show that the proposed pipeline is a procedure able to maximize the capabilities of topological data analysis and machine learning. Such pipeline allows the analysis of digital data without restrictions such as data type or acquisition method. Moreover, the pipeline is not limited to the size of the dataset, which is often the case of the most recent and best-performing methods for classification based on deep learning architectures. In addition, interesting correlations arose between homology dimension and classification results. The concatenation of PDs in the different homology dimensions consistently seems to be the most suitable choice. In the very near future, we will further investigate the correlation between filtration, vectorization and data type in very challenging settings arising from real-world datasets; e.g. 
remote sensing (for climate prediction), and in Raman spectroscopy (for cancer staging). \begin{table} \begin{tabular}{c|c c c c c} \hline \hline **p-value** & \(H_{0}\) & \(H_{1}\) & \(H_{2}\) & \(H_{0}+H1\) & \(H_{0}+H1\) \\ & & & & fused & concat \\ \hline **PI vs PL** & \(3.73\cdot 10^{-2}\) & \(1.53\cdot 10^{-5}\) & \(7.87\cdot 10^{-4}\) & \(7.89\cdot 10^{-4}\) & \(1.13\cdot 10^{-3}\) \\ **PI vs PS** & \(1.52\cdot 10^{-1}\) & \(7.06\cdot 10^{-5}\) & \(5.17\cdot 10^{-3}\) & \(2.32\cdot 10^{-4}\) & \(7.29\cdot 10^{-4}\) \\ **PI vs BC** & \(7.80\cdot 10^{-1}\) & \(3.71\cdot 10^{-4}\) & \(1.13\cdot 10^{-3}\) & \(2.01\cdot 10^{-5}\) & \(8.58\cdot 10^{-5}\) \\ **PL vs PS** & \(3.14\cdot 10^{-4}\) & \(4.24\cdot 10^{-1}\) & \(3.36\cdot 10^{-5}\) & \(2.67\cdot 10^{-1}\) & \(2.56\cdot 10^{-1}\) \\ **PL vs BC** & \(4.45\cdot 10^{-5}\) & \(9.19\cdot 10^{-3}\) & \(6.17\cdot 10^{-2}\) & \(2.20\cdot 10^{-1}\) & \(3.46\cdot 10^{-1}\) \\ **PS vs BC** & \(7.63\cdot 10^{-4}\) & \(7.05\cdot 10^{-3}\) & \(8.99\cdot 10^{-1}\) & \(7.00\cdot 10^{-1}\) & \(3.31\cdot 10^{-2}\) \\ \hline \hline \end{tabular} \end{table} Table 18: p-value statistic for the COLLAB dataset. \begin{table} \begin{tabular}{c|c c c c} \hline \hline **p-value** & **Dynamical** & **MNIST** & **FMNIST** & **COLLAB** \\ \hline \(H_{0}\) vs \(H_{1}\) & \(5.71\cdot 10^{-5}\) & \(1.82\cdot 10^{-10}\) & \(1.78\cdot 10^{-8}\) & \(1.39\cdot 10^{-10}\) \\ \(H_{0}\) vs \(H_{2}\) & - & - & - & \(9.82\cdot 10^{-3}\) \\ \(H_{0}\) vs fused & \(1.35\cdot 10^{-1}\) & \(7.81\cdot 10^{-6}\) & \(5.26\cdot 10^{-4}\) & \(1.31\cdot 10^{-2}\) \\ \(H_{0}\) vs concat & \(2.18\cdot 10^{-6}\) & \(3.69\cdot 10^{-3}\) & \(3.58\cdot 10^{-2}\) & \(3.87\cdot 10^{-3}\) \\ \(H_{1}\) vs \(H_{2}\) & - & - & - & \(3.84\cdot 10^{-5}\) \\ \(H_{1}\) vs fused & \(3.56\cdot 10^{-4}\) & \(5.27\cdot 10^{-10}\) & \(1.01\cdot 10^{-8}\) & \(3.65\cdot 10^{-5}\) \\ \(H_{1}\) vs concat & \(2.82\cdot 10^{-2}\) & \(5.66\cdot 10^{-14}\) & \(3.82\cdot 10^{-8}\) & \(1.34\cdot 10^{-5}\) \\ \(H_{2}\) vs fused & - & - & - & \(5.73\cdot 10^{-1}\) \\ \(H_{2}\) vs concat & - & - & - & \(3.50\cdot 10^{-1}\) \\ fused vs concat & \(8.08\cdot 10^{-6}\) & \(1.31\cdot 10^{-1}\) & \(1.04\cdot 10^{-4}\) & \(1.15\cdot 10^{-2}\) \\ \hline \hline \end{tabular} \end{table} Table 19: p-value statistic for the homology dimensions.
2308.16456
Least Squares Maximum and Weighted Generalization-Memorization Machines
In this paper, we propose a new way of remembering by introducing a memory influence mechanism for the least squares support vector machine (LSSVM). Without changing the equality constraints of the original LSSVM, this mechanism allows an accurate partitioning of the training set without overfitting. The maximum impact memory model (MIMM) and the weighted impact memory model (WIMM) are then proposed. It is demonstrated that these models reduce to the LSSVM as special cases. Furthermore, we propose several different memory impact functions for the MIMM and WIMM. The experimental results show that our MIMM and WIMM have better generalization performance compared to the LSSVM and a significant advantage in time cost compared to other memory models.
Shuai Wang, Zhen Wang, Yuan-Hai Shao
2023-08-31T04:48:59Z
http://arxiv.org/abs/2308.16456v1
# Least Squares Maximum and Weighted Generalization-Memorization Machines ###### Abstract In this paper, we propose a new way of remembering by introducing a memory influence mechanism for the least squares support vector machine (LSSVM). Without changing the equality constraints of the original LSSVM, this mechanism allows an accurate partitioning of the training set without overfitting. The maximum impact memory model (MIMM) and the weighted impact memory model (WIMM) are then proposed. It is demonstrated that these models reduce to the LSSVM as special cases. Furthermore, we propose several different memory impact functions for the MIMM and WIMM. The experimental results show that our MIMM and WIMM have better generalization performance compared to the LSSVM and a significant advantage in time cost compared to other memory models. Generalization-memorization mechanism, Kernel, Support vector machine, Kernel function. ## I Introduction Zero empirical risk, also known as memorization of the training data, has been widely researched and discussed in machine learning [1, 2, 3]. Traditional learning machines are required to classify the training samples as correctly as possible, but they are prone to overfitting. Therefore, to avoid overfitting, regularization techniques are commonly used, at the price of reduced memorization ability, e.g., in support vector machines (SVMs) [4]. However, more powerful tools based on zero empirical risk have been proposed in machine learning. For instance, the deep neural network (DNN) [1, 5, 6] has a structure of multiple hidden layers. Each neuron receives inputs from the neurons in the previous layer and generates outputs that serve as inputs to the neurons in the next layer, and each hidden layer contains multiple neurons, so that almost zero empirical risk can be achieved. Almost zero empirical risk is also realized by the recurrent neural network (RNN) [7, 8, 9, 10], a neural network model commonly used in sequential data processing. Compared to traditional feed-forward neural networks, the RNN considers temporal dependencies when processing sequential data: information is allowed to be passed from the current time step to the next time step. This recurrent structure allows the RNN to process sequence inputs of arbitrary length and to capture temporal dependencies in the sequence. The long short-term memory (LSTM) network [11, 12, 13] is a particular RNN for solving the long-term dependency problem in RNNs. Unlike the traditional RNN, the LSTM model introduces three gates (input gate, forget gate and output gate) and a memory unit to effectively capture and remember critical information in long sequences. Unlike the LSTM view of memory, Arpit et al. [5] investigated the role of memorization in deep learning, linking it to capacity, generalization, and adversarial robustness, and showed that the training data itself plays an important role in determining the degree of memorization. Zhang et al. [14] explored a new mechanism to improve model generalization through explicit memory and proposed the residual memory (ResMem) algorithm, a new approach that augments an existing prediction model (e.g., a neural network) by fitting the model's residuals with a \(k\)-nearest-neighbor based regressor. Indeed, memory systems have been widely explored by researchers to enhance memorization capabilities in various domains.
For instance, in the field of machine learning and artificial intelligence, memory mechanisms have been proposed to assist learners in remembering and revising learning tasks [15, 16, 17]. Rafferty et al. [18] presented a partially observable Markov decision process (POMDP) planning formulation to address memory tasks [19, 20], while Settles and Meeder [21] developed a trainable memory retention model that optimizes revision schedules for effective memorization. In the realm of deep reinforcement learning, researchers have explored novel methods and optimal policies, improving the efficiency and engagement of learners [22, 23, 24]. In other works related to memory, researchers have focused on statistical characteristics of learners' memory behavior rather than just time-series features [25, 26]. This approach has been extended to consider forgetting mechanisms and spaced repetition to improve memory retention [27, 28]. By transforming the optimization problem into a stochastic shortest path problem, these methods aim to enhance the learning process through efficient memory utilization and forgetting strategies [29, 30, 31]. Recently, Vapnik and Izmailov [4, 32] studied the memory problem of SVMs and introduced two RBF kernels into the SVM to improve its memorization capability; the resulting model is called SVM\({}^{m}\). The two RBF kernels, one for generalization and one for memorization, are used to memorize the training samples: by properly tuning their parameters, zero empirical risk is achieved together with good generalization performance. Subsequently, the generalization-memorization machine (GMM) [33, 34] provided a more general model and explained the mechanism of SVM\({}^{m}\) more clearly. In this paper, another new memory mechanism is proposed. It contains two memory models in the least squares sense, i.e., the Maximum Impact Memory Model (MIMM) and the Weighted Impact Memory Model (WIMM). Their learning is much faster than that of the GMM and SVM\({}^{m}\), while zero empirical risk is still guaranteed. The main contributions of this paper are as follows: * For the memory problem, we propose the maximum impact memory model (MIMM), which uses only the nearest training points to judge a test point, and we give a sufficient condition for the empirical risk of the model to be zero. * For the MIMM model, we construct memory influence functions suited to the model that ensure its memorization capacity. * We provide a clearer interpretation of the memory kernel of the model and derive conditions under which the model degenerates to the LSSVM. * Compared with other memory models, the two memory models we propose, WIMM and MIMM, require less time to memorize the same learning task. The next section provides a brief overview of the development of support vector machines (SVM) and least squares support vector machines (LSSVM). It also reviews the GMM models. The third section introduces the new objective function and the novel memory mechanism. This includes discussing memory cost and impact functions and how they contribute to solving the MIMM and WIMM models. The last section presents the numerical experiments conducted to validate the proposed MIMM and WIMM models. Conclusions drawn from these experiments are also discussed in this section. ## II Review Consider a binary classification problem in an \(n\)-dimensional real space \(\mathbb{R}^{n}\).
The training set is given by \(T=\{(\mathbf{x}_{i},y_{i})\,|\,i=1,2,\ldots,m\}\), where \(\mathbf{x}_{i}\in\mathbb{R}^{n}\) is the \(i\)th sample, and \(y_{i}\in\{+1,-1\}\) is the corresponding label. The training samples and their labels are organized into the matrix \(\mathbf{X}\in\mathbb{R}^{n\times m}\) and the diagonal matrix \(\mathbf{Y}\) with diagonal elements \(\mathbf{Y}_{ii}=y_{i}\) (\(i=1,\ldots,m\)), respectively. The SVM [4, 35, 36] deals with this binary classification problem by finding a pair of parallel hyperplanes in the feature space, where the margin is maximized to separate the two classes as much as possible. Schölkopf et al. [37] proposed a new class of regression and classification models based on the SVM, in which a parameter \(\nu\) was introduced that not only effectively controls the number of support vectors but also adapts well to different data distributions. The twin support vector machine (TWSVM) was introduced by Jayadeva et al. [38]. The TWSVM approach aims to identify a pair of non-parallel hyperplanes that can effectively solve the classification problem, resulting in a reduced problem size compared to traditional SVMs. To further accelerate the learning speed of SVMs, the least squares support vector machine (LSSVM) [39, 40] was proposed by J.A.K. Suykens et al. Due to the equality constraints in the LSSVM formulation, it requires solving a system of linear equations rather than the quadratic programming problem of the SVM. However, zero empirical risk is guaranteed in none of these SVMs. Recently, Vapnik and Izmailov [4, 32, 41] proposed a new kernel function consisting of two Gaussian kernels, \(K(x,x^{\prime})=\tau\exp\{-\sigma^{2}(x-x^{\prime})^{2}\}+(1-\tau)\exp\{-\sigma_{*}^{2}(x-x^{\prime})^{2}\}\) (where \(0\leq\tau\leq 1\) and \(\sigma_{*}\gg\sigma\)). This kernel function can greatly improve the memorization ability of the SVM. To memorize all the training samples, Wang et al. [33] proposed a generalization-memorization machine (GMM) under the principle of large margins, and this mechanism can attain zero empirical risk easily. The hard generalization-memorization machine (HGMM) [33] constructs the classification decision \(f(\mathbf{x})=<\mathbf{w},\varphi(\mathbf{x})>+b+\sum\limits_{i=1}^{m}y_{i}c_{i}\delta(\mathbf{x}_{i},\mathbf{x})\), and obtains \(\mathbf{w}\in\mathbb{R}^{d}\) and \(b\in\mathbb{R}\) by solving \[\begin{split}\min_{\mathbf{w},b,c}&\frac{1}{2}\|\mathbf{w}\|^{2}+\frac{\lambda}{2}\|c\|^{2}\\ \mathrm{s.t.}& y_{i}(<\mathbf{w},\varphi(\mathbf{x}_{i})>+b+\sum\limits_{j=1}^{m}y_{j}c_{j}\delta(\mathbf{x}_{i},\mathbf{x}_{j}))\geq 1,\\ & i=1,\ldots,m,\end{split} \tag{1}\] where \(<\cdot,\cdot>\) denotes the inner product, \(\varphi(\cdot)\) is the mapping, \(\lambda\) is a positive parameter, \(c=(c_{1},\ldots,c_{m})^{\top}\) denotes the memory costs of the training samples, and \(\delta(\mathbf{x}_{i},\mathbf{x})\) is a memory impact function that we define in advance. For a new sample \(\mathbf{x}\), if \(f(\mathbf{x})\geq 0\), it is classified into the positive class with \(y=+1\); otherwise it is classified into the negative class with \(y=-1\).
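Returning to the two-Gaussian kernel of Vapnik and Izmailov recalled above, the following minimal sketch (Python with numpy; the function name gm_kernel is ours) illustrates it: the wide component provides generalization, while the narrow component (\(\sigma_{*}\gg\sigma\)) acts only in a small neighborhood of each training point and provides memorization.

```python
import numpy as np

def gm_kernel(x, x_prime, tau=0.5, sigma=1.0, sigma_star=50.0):
    """Two-Gaussian kernel tau*exp(-sigma^2 d^2) + (1-tau)*exp(-sigma_*^2 d^2),
    with 0 <= tau <= 1 and sigma_star >> sigma; parameter values are illustrative."""
    d2 = np.sum((np.asarray(x) - np.asarray(x_prime)) ** 2)
    return tau * np.exp(-sigma**2 * d2) + (1 - tau) * np.exp(-sigma_star**2 * d2)
```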
In general, one can solve the dual problem of (1), \[\begin{split}\min_{\alpha}&\ \frac{1}{2}\alpha^{\top}\mathbf{Y}\Big(K(\mathbf{X},\mathbf{X})+\frac{1}{\lambda}\triangle\triangle^{\top}\Big)\mathbf{Y}\alpha-\mathbf{1}^{\top}\alpha,\\ \mathrm{s.t.}&\ \mathbf{1}^{\top}\mathbf{Y}\alpha=0,\ \alpha\geq 0,\end{split} \tag{2}\] where \(\alpha\in\mathbb{R}^{m}\) is a Lagrangian multiplier vector, \(K(\cdot,\cdot)=<\varphi(\cdot),\varphi(\cdot)>\) is a kernel function, and \(\mathbf{1}\) is a vector of ones with the appropriate dimension. Specifically, a new sample \(\mathbf{x}\) will be classified as \(+1\) or \(-1\) depending on the decision \[f(\mathbf{x})=\sum\limits_{i=1}^{m}y_{i}\alpha_{i}K(\mathbf{x}_{i},\mathbf{x})+b+\sum\limits_{i=1}^{m}y_{i}c_{i}\delta(\mathbf{x}_{i},\mathbf{x}). \tag{3}\] Furthermore, by finding a non-zero component \(\alpha_{k}\) of the solution \(\alpha\) of problem (2), we obtain \(b=y_{k}-y_{k}\sum\limits_{i=1}^{m}y_{i}(\alpha_{i}K(\mathbf{x}_{i},\mathbf{x}_{k})+c_{i}\delta(\mathbf{x}_{i},\mathbf{x}_{k}))\). The above HGMM has good generalization ability for many problems, but it is time consuming for big data problems and cannot always classify all training samples quickly. For a memory problem, we not only need to be able to remember the training samples quickly, but also need to give labels quickly during testing. The optimization problem (1) with a memory cost function is a practical path to memorizing the training samples. We consider the case where the constraints of this optimization problem are equality constraints and propose a new construction of the optimization problem. From this perspective, our machine learning model can be trained by solving a system of linear equations. In other words, we have a faster memorization effect compared to the HGMM, regardless of the complexity of the corresponding learning task. Also, we consider a new type of memory, different from the weighted memory in the HGMM, and propose several constructions of new memory functions. ## III Memory Model ### _Weighted Impact Memory Model (WIMM)_ Our WIMM employs the decision function \[f(\mathbf{x})=<\mathbf{w},\varphi(\mathbf{x})>+b+\sum\limits_{i=1}^{m}y_{i}\xi_{i}\delta(\mathbf{x}_{i},\mathbf{x}), \tag{4}\] where \(\delta(\mathbf{x}_{i},\mathbf{x})\) is the memory influence function, and it can be a similarity function between \(\mathbf{x}_{i}\) and \(\mathbf{x}\), e.g., \[\delta(\mathbf{x}_{i},\mathbf{x}_{j})=\frac{1}{\sigma\sqrt{2\pi}}\exp{\Big(-\frac{\parallel\mathbf{x}_{i}-\mathbf{x}_{j}\parallel^{2}}{2\sigma^{2}}\Big)},\quad\sigma>0, \tag{5}\] \[\delta(\mathbf{x}_{i},\mathbf{x}_{j})=\max\{\rho-\parallel\mathbf{x}_{i}-\mathbf{x}_{j}\parallel,0\},\quad\rho>0, \tag{6}\] \[\delta(\mathbf{x}_{i},\mathbf{x}_{j})=\left\{\begin{matrix}\parallel\mathbf{x}_{i}-\mathbf{x}_{j}\parallel,&\text{if }\parallel\mathbf{x}_{i}-\mathbf{x}_{j}\parallel\leq\varepsilon,\;\varepsilon>0,\\ 0,&\mathrm{else},\end{matrix}\right. \tag{7}\] and \[\delta(\mathbf{x}_{i},\mathbf{x}_{j})=\left\{\begin{matrix}\frac{\beta}{\parallel\mathbf{x}_{i}-\mathbf{x}_{j}\parallel},&\text{if }\mathbf{x}_{i}\neq\mathbf{x}_{j},\;\beta>0,\\ 1,&\text{else}.\end{matrix}\right. \tag{8}\] The above functions measure the similarity between \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\). These influence functions are symmetric, and the memory of each training sample will have an effect on the prediction only if its memory cost is not zero. Then, when combined with the decision function (4), the effect of memory can be achieved.
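A minimal numpy sketch of the four memory influence functions (5)-(8) follows; the function names are ours, and the parameter of (8) is written beta here to avoid a clash with the bias \(b\).

```python
import numpy as np

def delta_gaussian(xi, xj, sigma):   # Eq. (5), sigma > 0
    d2 = np.sum((xi - xj) ** 2)
    return np.exp(-d2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

def delta_hinge(xi, xj, rho):        # Eq. (6), rho > 0
    return max(rho - np.linalg.norm(xi - xj), 0.0)

def delta_truncated(xi, xj, eps):    # Eq. (7), eps > 0
    d = np.linalg.norm(xi - xj)
    return d if d <= eps else 0.0

def delta_inverse(xi, xj, beta):     # Eq. (8), beta > 0
    d = np.linalg.norm(xi - xj)
    return beta / d if d > 0 else 1.0
```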
Therefore, our WIMM considers \[\begin{split}\min_{\mathbf{w},b,\xi}&\ \frac{1}{2}\|\mathbf{w}\|^{2}+\frac{\gamma}{2}\sum\limits_{i=1}^{m}\xi_{i}^{2}+\lambda\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{m}y_{i}y_{j}\xi_{j}\delta(\mathbf{x}_{i},\mathbf{x}_{j}),\\ \mathrm{s.t.}&\ y_{i}(<\mathbf{w},\varphi(\mathbf{x}_{i})>+b+\sum\limits_{j=1}^{m}y_{j}\xi_{j}\delta(\mathbf{x}_{i},\mathbf{x}_{j}))=1,\\ &\ i=1,\ldots,m,\end{split} \tag{9}\] where \(\lambda\) and \(\gamma\) are positive parameters, \(\xi=(\xi_{1},\ldots,\xi_{m})^{\top}\) denotes the memory costs of the training samples, and \(\delta(\mathbf{x}_{i},\mathbf{x}_{j})\) is the memory impact function. Obviously, we use the decision function (4), set the memory cost as a variable, and predefine the memory influence function in the decision. From the constraints of (9), it is necessary to remember all the training samples. The goal of problem (9) is to find the optimal strategy with the lowest possible memory cost as well as memory impact. To solve problem (9), we derive its Lagrangian function as \[L(\mathbf{w},b,\xi)=\frac{1}{2}\|\mathbf{w}\|^{2}+\frac{\gamma}{2}\sum\limits_{i=1}^{m}\xi_{i}^{2}+\lambda\sum\limits_{i=1}^{m}y_{i}\sum\limits_{j=1}^{m}y_{j}\xi_{j}\delta(\mathbf{x}_{i},\mathbf{x}_{j})+\sum\limits_{i=1}^{m}\alpha_{i}(1-y_{i}(<\mathbf{w},\varphi(\mathbf{x}_{i})>+b+\sum\limits_{j=1}^{m}y_{j}\xi_{j}\delta(\mathbf{x}_{i},\mathbf{x}_{j}))), \tag{10}\] where \(\alpha_{i}\in\mathbb{R}\) is the Lagrangian multiplier with \(i=1,\ldots,m\). Its partial derivatives w.r.t. \(\mathbf{w},b,\xi_{i}\) and \(\alpha_{i}\) are \[\begin{cases}\frac{\partial L}{\partial\mathbf{w}}=\mathbf{w}-\sum\limits_{i=1}^{m}\alpha_{i}y_{i}\varphi(\mathbf{x}_{i}),\\ \frac{\partial L}{\partial b}=\sum\limits_{i=1}^{m}\alpha_{i}y_{i},\\ \frac{\partial L}{\partial\xi_{i}}=\gamma\xi_{i}+\lambda y_{i}\sum\limits_{j=1}^{m}y_{j}\delta(\mathbf{x}_{i},\mathbf{x}_{j})-y_{i}\alpha_{i}\sum\limits_{j=1}^{m}y_{j}\delta(\mathbf{x}_{i},\mathbf{x}_{j}),\\ \frac{\partial L}{\partial\alpha_{i}}=1-y_{i}(<\mathbf{w},\varphi(\mathbf{x}_{i})>+b+\sum\limits_{j=1}^{m}y_{j}\xi_{j}\delta(\mathbf{x}_{i},\mathbf{x}_{j})).\end{cases} \tag{11}\] Setting the partial derivatives equal to zero gives \[\begin{cases}\mathbf{w}=\sum\limits_{i=1}^{m}\alpha_{i}y_{i}\varphi(\mathbf{x}_{i}),\\ \sum\limits_{i=1}^{m}\alpha_{i}y_{i}=0,\\ \xi_{i}=\frac{\alpha_{i}y_{i}\sum\limits_{j=1}^{m}y_{j}\delta(\mathbf{x}_{i},\mathbf{x}_{j})-\lambda y_{i}\sum\limits_{j=1}^{m}y_{j}\delta(\mathbf{x}_{i},\mathbf{x}_{j})}{\gamma},\quad i=1,\ldots,m,\\ y_{i}(<\mathbf{w},\varphi(\mathbf{x}_{i})>+b+\sum\limits_{j=1}^{m}y_{j}\xi_{j}\delta(\mathbf{x}_{i},\mathbf{x}_{j}))=1,\quad i=1,\ldots,m.\end{cases} \tag{12}\] To facilitate the solution, we reformulate (12) as \[\begin{pmatrix}\mathbf{Y}K(\mathbf{X},\mathbf{X})\mathbf{Y}+\mathbf{Y}\triangle\triangle^{\top}\mathbf{Y}&\mathbf{Y}\mathbf{1}\\ \mathbf{1}^{\top}\mathbf{Y}&0\end{pmatrix}\begin{pmatrix}\alpha\\ b\end{pmatrix}=\begin{pmatrix}\mathbf{1}+\frac{\lambda}{\gamma}\triangle\triangle^{\top}\mathbf{1}\\ 0\end{pmatrix}, \tag{13}\] where \(\triangle\in\mathbb{R}^{m\times m}\) is the matrix with elements \(\delta(\mathbf{x}_{i},\mathbf{x}_{j})\), \(i,j=1,\ldots,m\); \(K(\mathbf{X},\mathbf{X})\) is the kernel matrix; \(\alpha=(\alpha_{1},\ldots,\alpha_{m})^{\top}\); and \(\mathbf{1}=(1,\ldots,1)^{\top}\).
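A minimal numpy sketch (the function name is ours) of solving system (13) as printed above: given the kernel matrix K, the memory influence matrix Delta with entries \(\delta(\mathbf{x}_{i},\mathbf{x}_{j})\), the labels y, and the parameters \(\lambda\) and \(\gamma\), it recovers \(\alpha\) and \(b\).

```python
import numpy as np

def solve_wimm(K, Delta, y, lam, gamma):
    """Solve the (m+1)x(m+1) linear system (13) for (alpha, b)."""
    m = len(y)
    Y = np.diag(y.astype(float))
    DD = Delta @ Delta.T
    A = np.zeros((m + 1, m + 1))
    A[:m, :m] = Y @ (K + DD) @ Y          # Y K Y + Y (Delta Delta^T) Y
    A[:m, m] = y                          # Y 1
    A[m, :m] = y                          # 1^T Y
    rhs = np.concatenate([1.0 + (lam / gamma) * (DD @ np.ones(m)), [0.0]])
    sol = np.linalg.solve(A, rhs)
    return sol[:m], sol[m]                # alpha, b
```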
After solving the above system of equations, the final decision is \[f(\mathbf{x})=\sum\limits_{i=1}^{m}y_{i}\alpha_{i}K(\mathbf{x}_{i},\mathbf{x})+b+\sum\limits_{i=1}^{m}y_{i}\xi_{i}\delta(\mathbf{x}_{i},\mathbf{x}). \tag{14}\] Furthermore, substituting \(\alpha\) into the optimality conditions (12) yields \(\xi=(\xi_{1},\ldots,\xi_{m})^{\top}\). ### _Maximum Impact Memory Model (MIMM)_ Different from the WIMM, our MIMM selects the training sample closest to the unknown sample to affect it, with the decision function \[f(\mathbf{x})=<\mathbf{w},\varphi(\mathbf{x})>+b+y_{i}\xi_{i}\delta(\mathbf{x},\mathbf{x}_{k}), \tag{15}\] where \(\mathbf{x}_{k}\) denotes the centroid of the training points in the same class as \(\mathbf{x}_{i}\). For example, suppose \(\overline{x}_{+}\) and \(\overline{x}_{-}\) are the positive and negative class centroids, respectively. A straightforward choice is to use \(\overline{x}_{+}\) or \(\overline{x}_{-}\) as \(\mathbf{x}_{k}\) in the memory influence function \(\delta(\mathbf{x}_{k},\mathbf{x})\). Thus, our MIMM considers \[\begin{split}\min_{\mathbf{w},b,\xi}&\ \frac{1}{2}\|\mathbf{w}\|^{2}+\frac{\gamma}{2}\sum\limits_{i=1}^{m}\xi_{i}^{2}+\lambda\sum\limits_{i=1}^{m}\xi_{i}\delta_{i}\\ \mathrm{s.t.}&\ y_{i}(<\mathbf{w},\varphi(\mathbf{x}_{i})>+b)=1-\xi_{i}\delta_{i},\qquad i=1,\ldots,m,\end{split} \tag{16}\] where \(\delta_{i}=\delta(\mathbf{x}_{k},\mathbf{x}_{i})\) is the memory impact function we define. Instead of using all training samples in the decision, as the WIMM does, our MIMM memorizes the training samples through the training sample closest to the test point. Correspondingly, the Lagrangian function of (16) is \[\begin{split}& L(\mathbf{w},b,\xi)=\frac{1}{2}\|\mathbf{w}\|^{2}+\frac{\gamma}{2}\sum\limits_{i=1}^{m}\xi_{i}^{2}+\lambda\sum\limits_{i=1}^{m}\xi_{i}\delta_{i}\\ &+\sum\limits_{i=1}^{m}\alpha_{i}(1-\xi_{i}\delta_{i}-y_{i}(<\mathbf{w},\varphi(\mathbf{x}_{i})>+b)).\end{split} \tag{17}\] Taking the partial derivatives w.r.t. \(\mathbf{w},b,\xi_{i}\) and \(\alpha_{i}\), we have \[\begin{cases}\frac{\partial L}{\partial\mathbf{w}}=\mathbf{w}-\sum\limits_{i=1}^{m}\alpha_{i}y_{i}\varphi(\mathbf{x}_{i}),\\ \frac{\partial L}{\partial b}=\sum\limits_{i=1}^{m}\alpha_{i}y_{i},\\ \frac{\partial L}{\partial\xi_{i}}=\gamma\xi_{i}+\lambda\delta_{i}-\alpha_{i}\delta_{i},\\ \frac{\partial L}{\partial\alpha_{i}}=1-\xi_{i}\delta_{i}-y_{i}(<\mathbf{w},\varphi(\mathbf{x}_{i})>+b),\quad i=1,\ldots,m.\end{cases} \tag{18}\] Setting the partial derivatives equal to zero gives \[\begin{cases}\mathbf{w}=\sum\limits_{i=1}^{m}\alpha_{i}y_{i}\varphi(\mathbf{x}_{i}),\\ \sum\limits_{i=1}^{m}\alpha_{i}y_{i}=0,\\ \xi_{i}=\frac{\alpha_{i}\delta_{i}-\lambda\delta_{i}}{\gamma},\quad i=1,\ldots,m,\\ y_{i}(<\mathbf{w},\varphi(\mathbf{x}_{i})>+b)-1+\xi_{i}\delta_{i}=0,\quad i=1,\ldots,m.\end{cases} \tag{19}\] After simplifying the system of equations, we get \[\begin{pmatrix}\mathbf{Y}K(\mathbf{X},\mathbf{X})\mathbf{Y}+\mathbf{Y}\mathbf{D}\mathbf{D}^{\top}\mathbf{Y}&\mathbf{Y}\mathbf{1}\\ \mathbf{1}^{\top}\mathbf{Y}&0\end{pmatrix}\begin{pmatrix}\alpha\\ b\end{pmatrix}=\begin{pmatrix}\mathbf{1}+\frac{\lambda}{\gamma}\mathbf{D}\mathbf{D}^{\top}\mathbf{1}\\ 0\end{pmatrix}, \tag{20}\] where \(\mathbf{D}\in\mathbb{R}^{m\times m}\) is a diagonal matrix with \(\mathbf{D}_{ii}=\delta_{i}\) (\(i=1,\ldots,m\)). Thus, the MIMM model obtains \(b\) and \(\alpha\) by solving the system of linear equations (20), and then \(\xi\) from the optimality condition (19); the final decision is \[f(\mathbf{x})=\sum\limits_{i=1}^{m}y_{i}\alpha_{i}K(\mathbf{x}_{i},\mathbf{x})+b+y_{i}\xi_{i}\delta(\mathbf{x},\mathbf{x}_{k}). \tag{21}\]
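Putting the pieces together, the sketch below (numpy; the helper names are ours, and delta is one of the influence functions (5)-(8)) trains MIMM by building the diagonal memory matrix \(\mathbf{D}\) from the class centroids and solving system (20); the memory costs then follow from condition (19).

```python
import numpy as np

def train_mimm(X, y, K, delta, lam, gamma):
    """X: (m, n) samples; y: labels in {-1, +1}; K: (m, m) kernel matrix."""
    m = len(y)
    centroids = {c: X[y == c].mean(axis=0) for c in (-1, 1)}  # class centroids
    d = np.array([delta(centroids[yi], xi) for xi, yi in zip(X, y)])
    Y = np.diag(y.astype(float))
    A = np.zeros((m + 1, m + 1))
    A[:m, :m] = Y @ (K + np.diag(d**2)) @ Y   # D D^T = diag(d_i^2) for diagonal D
    A[:m, m] = y
    A[m, :m] = y
    rhs = np.concatenate([1.0 + (lam / gamma) * d**2, [0.0]])
    sol = np.linalg.solve(A, rhs)
    alpha, b = sol[:m], sol[m]
    xi = (alpha - lam) * d / gamma            # optimality condition (19)
    return alpha, b, xi
```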
Indeed, the advantage of memorization becomes evident when we combine the memorization function with the LSSVM method. This combination allows us to carefully observe and analyze the impact of each memory influence function on the overall performance of the combined model. Specifically, consider a learner that incorporates a generalization kernel \(K_{g}(x_{i},x_{j})=\exp{(-\frac{\|x_{i}-x_{j}\|^{2}}{\sigma^{2}})}\) and a memory kernel \(K_{m}\), where the memory influence function is chosen as one of equations (5), (6), (7), and (8). Figure 1 illustrates the generated memory influence. With this memory influence, we can intuitively observe the range and degree of influence of each different influence function. We utilize the memory influence function to establish a rule whereby classification is remembered only within a small region around the training data points. By adjusting the parameters of the influence function, we control the trade-off between generalization and memorization in the algorithm. ## IV Discussion **Proposition 1**: _The empirical risk of WIMM is zero if and only if problem (9) has at least one feasible solution. Similarly, the empirical risk of MIMM is zero if and only if problem (16) has at least one feasible solution._ The feasibility of problems (9) or (16) depends on the properties of the memory influence matrices \(\triangle\) or \(\mathbf{D}\). Generally, we have the following sufficient conditions for practical applications. **Proposition 2**: _The empirical risk of WIMM is zero if the matrix \(\triangle\) is nonsingular. Similarly, the empirical risk of MIMM is zero if the matrix \(\mathbf{D}\) is nonsingular. Proof. Consider the case where the matrix \(\triangle\) is nonsingular. It can be shown that \(\mathbf{Y}(K(\mathbf{X},\mathbf{X})+\triangle)\mathbf{Y}^{\top}\) is also nonsingular. Additionally, as \(r(\mathbf{1}^{\top}\mathbf{Y})=1\), problem (13) must have a unique solution. Similarly, it can be demonstrated that when the matrix \(\mathbf{D}\) is nonsingular, problem (20) must also have a unique solution. The conclusion then follows from Proposition 1. \(\square\)_ **Proposition 3**: _MIMM is equivalent to the LSSVM model if and only if \(\mathbf{D}\) is the identity matrix and \(\lambda=0\). Proof. When \(\mathbf{D}\) is the identity matrix and \(\lambda=0\), problem (20) clearly reduces to the system of linear equations solved by the LSSVM. This proves the proposition. \(\square\)_ **Proposition 4**: _WIMM is equivalent to the LSSVM model if and only if \(\triangle\) is the identity matrix and \(\lambda=0\). Proof. When \(\triangle\) is the identity matrix and \(\lambda=0\), problem (13) clearly reduces to the system of linear equations solved by the LSSVM. This proves the proposition. \(\square\)_ Fig. (2) illustrates the interrelationships between the three memory kernels by comparing the memory kernels of equations (20), (13) and SVM\({}^{m}\) [32]. Fig. 1: Different types of memory kernels. \({}^{1}\), \({}^{2}\), \({}^{3}\) and \({}^{4}\) correspond to the influence functions (5), (6), (7) and (8); red indicates the extent of memory influence and green indicates the extent of generalization influence. We can find that these three memory kernels are \(\mathbf{D}\), \(\triangle\) and \(k_{m}\); from the matrix structure, \(\mathbf{D}\) is a diagonal matrix, while \(\triangle\) and \(k_{m}\) are symmetric matrices.
By tuning the parameters, \(\mathbf{D}\), \(\triangle\) and \(k_{m}\) can all be reduced to the identity matrix. Thus there exists an intersection of these three memory kernels. Since \(\triangle\) is not restricted to a single type of Gaussian kernel, \(\triangle\) contains \(k_{m}\) as a special case. Since \(\triangle\) and \(\mathbf{D}\) have different influence functions, \(\triangle\) and \(\mathbf{D}\) only have an intersection but no containment relationship. ## V Experiments This section utilizes several benchmark datasets from UCI, for which Table (I) provides detailed information. We analyze the performance of our WIMM and MIMM models on various benchmark datasets, along with their execution times on large datasets. Additionally, we test the generalization performance of the two models and their ability to adapt to noise. The classical LSSVM utilizes linear kernels, while the SVM\({}^{m}\) and HGMM models employ linear generalization kernels and RBF memory kernels. In contrast, our WIMM and MIMM models both utilize linear kernels. All these models were implemented using MATLAB 2017a on a PC equipped with an Intel Core Duo processor (dual 4.2 GHz) and 32 GB of RAM. For the RBF kernel \(K(x_{i},x_{j})=\exp(-\sigma\parallel x_{i}-x_{j}\parallel^{2})\), we tested parameters \(\sigma\) from the set \(\{2^{i}\,|\,i=-6,-5,\ldots,5\}\), and for the other models, we tested weighting parameters from the same set. To begin the comparison, we evaluated the memorization performance of the linear kernel in the WIMM and MIMM models on some small datasets, with the linear kernel in the LSSVM used as the benchmark. To assess the memorization capacity of the WIMM model, Table (II) presents the highest training and testing accuracies achieved by the WIMM model. This table provides valuable insights into the model's ability to memorize and generalize effectively on the tested datasets. It can be observed from Table (II) that the WIMM model with the memory influence functions (5), (6) and (7) achieves a training accuracy of \(100\%\) on all datasets. The failure of the variant with influence function (8) to reach \(100\%\) training accuracy can be attributed to the non-invertibility of the resulting \(\triangle\) matrix and to the impact of the different influence functions on the data's memorization capacity. Among the various influence functions, the memory influence function (5) yields the highest test accuracy for most of the datasets. Consequently, for the remaining experiments, we utilize the memory influence function (5) as the basis for our WIMM model. Likewise, to evaluate the memorization capacity of the MIMM model, Table (III) displays the maximum training and testing accuracies achieved by the MIMM model. It is evident from Table (III) that the MIMM model attains a training accuracy of \(100\%\) when using the memory influence function (8). The reason the other influence functions do not achieve \(100\%\) training accuracy is the non-invertibility of \(\mathbf{D}\) under these functions. The choice of influence function thus impacts the data's memorization capacity. Among the various influence functions, the memory influence function (8) yields the highest test accuracy for the majority of the datasets. As a result, for the subsequent experiments, we adopt the memory influence function (8) as the basis for our MIMM model. Next, to compare the running times of the different memory models under optimal parameters, we recorded the execution times along with the corresponding accuracies on the larger datasets.
This evaluation allows us to further assess the trade-off between the time consumed for memorization and the performance achieved on the same task for the different memory models. For each dataset, approximately \(70\%\) of the total samples were randomly selected for training, ensuring that half of them belonged to the positive category and the other half to the negative category, while the remaining samples constituted the test set. This process was repeated five times, and the highest average training accuracy along with its standard deviation, the corresponding highest test accuracy, and the time taken to run the model once with optimal parameters were recorded for each dataset. The shortest time spent is indicated in bold in Table (IV). From Table (IV), it is evident that the test accuracies do not differ significantly. Notably, both WIMM and MIMM exhibit shorter execution times compared to HGMM and SVM\({}^{m}\). This efficiency can be attributed to the fact that the WIMM and MIMM models are solved as linear systems of equations, whereas HGMM and SVM\({}^{m}\) are solved as quadratic programming problems. In practical applications, many tasks involve learning with label noise. Therefore, to examine the ability of the WIMM and MIMM models to adapt to noise, we conducted experiments with datasets containing label noise. \begin{table} \begin{tabular}{c c c c} \hline \hline ID & Name & m & n \\ \hline (a) & Cleveland & 173 & 13 \\ (b) & Ionosphere & 351 & 34 \\ (c) & New-thyroid & 215 & 4 \\ (d) & Parkinsons & 195 & 22 \\ (e) & Sonar & 208 & 60 \\ (f) & TicTacToe & 958 & 27 \\ (g) & Vowel & 988 & 13 \\ (h) & Wisconsin & 683 & 9 \\ (i) & German & 1000 & 20 \\ (j) & Shuttle & 1829 & 9 \\ (k) & Segment & 2308 & 19 \\ (l) & Waveform & 5000 & 21 \\ (m) & TwoNorm & 7400 & 20 \\ (n) & IJCNN01 & 49990 & 22 \\ \hline \hline \end{tabular} \end{table} TABLE I: Details of benchmark datasets. Fig. 2: A memory relation diagram for MIMM, WIMM and SVM\({}^{m}\) [32], where yellow, pink and blue-gray denote the \(\mathbf{D}\), \(\triangle\) and \(k_{m}\) memory kernel matrices, respectively. For certain datasets from Table (I), we randomly select \(80\%\) of the samples to form the training set, while the remaining samples constitute the test set. We then introduce label noise to the training set, setting \(5\%\), \(10\%\), \(15\%\), and gradually up to \(50\%\) of the labels to the opposite class. This process is repeated five times, and we record the highest test accuracy along with the corresponding average training accuracy for comparison with LSSVM. From Figures (3) and (4), we can observe the following trends: i) Unlike our WIMM and MIMM models, the training accuracy of LSSVM is not consistently \(100\%\). ii) The test performance of LSSVM is consistently lower than that of our models and exhibits instability with increasing label noise. iii) The test performance of our models (WIMM and MIMM) shows a gradual, regular decline as the label noise increases. These observations suggest that our WIMM and MIMM models outperform LSSVM in handling label noise and offer more stable and robust performance under noisy conditions. Moreover, in many tasks, obtaining an adequate number of training samples can be particularly challenging. Hence, we further investigate the performance of LSSVM in comparison to our WIMM and MIMM models under conditions of limited training samples.
For selected datasets from Table (I), we randomly select \(80\%\) of the samples to form the training set, and the remaining samples constitute the test set. Subsequently, we vary the proportion of training samples used, ranging from \(10\%\) to \(100\%\), in incremental steps. The models are tested on the dataset, and this process is repeated five times. We record the highest test accuracy along with the corresponding average training accuracy for comparison with LSSVM. \begin{table} \begin{tabular}{l l l l l l l|l l l l} \hline \hline ID & LSSVM & WIMM\({}^{1}\) & WIMM\({}^{2}\) & WIMM\({}^{3}\) & WIMM\({}^{4}\) & LSSVM & WIMM\({}^{1}\) & WIMM\({}^{2}\) & WIMM\({}^{3}\) & WIMM\({}^{4}\) \\ & train(\(\%\)) & train(\(\%\)) & train(\(\%\)) & train(\(\%\)) & train(\(\%\)) & test(\(\%\)) & test(\(\%\)) & test(\(\%\)) & test(\(\%\)) & test(\(\%\)) \\ \hline (a) & \(96.39\pm 0.47\) & \(100\pm 0\) & \(100\pm 0\) & \(100\pm 0\) & \(90.77\pm 1.0\) & \(94.82\pm 4.18\) & \(95.44\pm 3.36\) & \(95.36\pm 3.29\) & \(\textbf{95.89}\pm\textbf{4.49}\) & \(92.85\pm 6.16\) \\ (b) & \(89.46\pm 0.47\) & \(100\pm 0\) & \(100\pm 0\) & \(100\pm 0\) & \(44.02\pm 1.6\) & \(88.3\pm 3.46\) & \(88.31\pm 3.13\) & \(88.36\pm 5.52\) & \(\textbf{89.77}\pm\textbf{3.86}\) & \(42.24\pm 9.23\) \\ (c) & \(94.08\pm 0.86\) & \(100\pm 0\) & \(100\pm 0\) & \(72.9\pm 2.08\) & \(93.66\pm 3.41\) & \(\textbf{97.79}\pm\textbf{2.15}\) & \(87.94\pm 4.84\) & \(89.11\pm 7.7\) & \(88.31\pm 4.52\) \\ (d) & \(91.42\pm 1.15\) & \(100\pm 0\) & \(100\pm 0\) & \(64.57\pm 5.01\) & \(88.47\pm 4.4\) & \(\textbf{94.73}\pm\textbf{4.95}\) & \(88.7\pm 5.84\) & \(91.8\pm 3.17\) & \(83.9\pm 4.3\) \\ (e) & \(87.93\pm 1.36\) & \(100\pm 0\) & \(100\pm 0\) & \(100\pm 0\) & \(83.65\pm 1.4\) & \(79.48\pm 2.85\) & \(\textbf{86.79}\pm\textbf{8.43}\) & \(78.89\pm 1.12\) & \(80.29\pm 5.84\) & \(79.03\pm 8.03\) \\ (f) & \(98.33\pm 0.25\) & \(100\pm 0\) & \(100\pm 0\) & \(100\pm 0\) & \(100\pm 0\) & \(\textbf{98.33}\pm\textbf{1.0}\) & \(\textbf{98.33}\pm\textbf{1.13}\) & \(\textbf{98.33}\pm\textbf{1.13}\) & \(68.35\pm 4.38\) \\ (g) & \(95.04\pm 0.34\) & \(100\pm 0\) & \(100\pm 0\) & \(75.05\pm 7.08\) & \(95.04\pm 2.08\) & \(\textbf{100}\pm\textbf{0}\) & \(\textbf{99.8}\pm 0.28\) & \(\textbf{99.8}\pm 0.45\) & \(94.81\pm 1.08\) \\ (h) & \(96.16\pm 0.61\) & \(100\pm 0\) & \(100\pm 0\) & \(100\pm 0\) & \(95.65\pm 0.71\) & \(96.18\pm 2.45\) & \(96.63\pm 1.52\) & \(96.65\pm 2.25\) & \(\textbf{96.78}\pm\textbf{1.84}\) & \(90.05\pm 3.13\) \\ \hline \hline \end{tabular} * \({}^{1}\), \({}^{2}\), \({}^{3}\) and \({}^{4}\) with the influence functions (5), (6), (7) and (8). \end{table} TABLE II: Testing and training accuracy of WIMM and LSSVM using memory effects. From Figures (5) and (6), the following observations are made: i) Unlike our WIMM and MIMM models, LSSVM does not consistently reach \(100\%\) training accuracy. ii) The test performance of LSSVM is consistently inferior to that of our models, and its performance improves as more training data is used.
iii) The test performance of our models (WIMM and MIMM) demonstrates a steady improvement as the number of training samples increases. These findings suggest that our WIMM and MIMM models outperform LSSVM, especially when training data is limited, and that they consistently achieve higher test accuracy as the number of training samples grows. \begin{table} \begin{tabular}{l l l l l l l l l} \hline \hline ID & \begin{tabular}{l} SVM\({}^{m}\) \\ train(\(\%\)) \\ \end{tabular} & \begin{tabular}{l} HGMM \\ train(\(\%\)) \\ \end{tabular} & \begin{tabular}{l} WIMM \\ train(\(\%\)) \\ \end{tabular} & \begin{tabular}{l} MIMM \\ train(\(\%\)) \\ \end{tabular} & \begin{tabular}{l} SVM\({}^{m}\) \\ test(\(\%\)) \\ \end{tabular} & \begin{tabular}{l} HGMM \\ test(\(\%\)) \\ \end{tabular} & \begin{tabular}{l} WIMM \\ test(\(\%\)) \\ \end{tabular} & \begin{tabular}{l} MIMM \\ test(\(\%\)) \\ \end{tabular} \\ \hline (i) & \(100\pm 0\) & \(100\pm 0\) & \(100\pm 0\) & \(100\pm 0\) & \(76.1\pm 1.7\) & \(78.33\pm 3.53\) & \(76.64\pm 1.43\) & \(72.06\pm 2.55\) \\ \hline \hline \end{tabular} \end{table} TABLE IV: Accuracy and time to train and test linear classifiers on benchmark datasets. \begin{table} \begin{tabular}{l l l l l l l l l l l} \hline \hline ID & \begin{tabular}{l} LSSVM \\ train(\(\%\)) \\ \end{tabular} & \begin{tabular}{l} MIMM\({}^{1}\) \\ train(\(\%\)) \\ \end{tabular} & \begin{tabular}{l} MIMM\({}^{2}\) \\ train(\(\%\)) \\ \end{tabular} & \begin{tabular}{l} MIMM\({}^{3}\) \\ train(\(\%\)) \\ \end{tabular} & \begin{tabular}{l} MIMM\({}^{4}\) \\ train(\(\%\)) \\ \end{tabular} & \begin{tabular}{l} LSSVM \\ test(\(\%\)) \\ \end{tabular} & \begin{tabular}{l} MIMM\({}^{1}\) \\ test(\(\%\)) \\ \end{tabular} & \begin{tabular}{l} MIMM\({}^{2}\) \\ test(\(\%\)) \\ \end{tabular} & \begin{tabular}{l} MIMM\({}^{3}\) \\ test(\(\%\)) \\ \end{tabular} & \begin{tabular}{l} MIMM\({}^{4}\) \\ test(\(\%\)) \\ \end{tabular} \\ \hline (a) & \(96.39\pm 0.47\) & \(96.4\pm 0.79\) & ... \\ \hline \end{tabular} * \({}^{1}\), \({}^{2}\), \({}^{3}\) and \({}^{4}\) with the influence functions (5), (6), (7) and (8). \end{table} TABLE III: Testing and training accuracy of MIMM and LSSVM using memory effects. Figure 3: Training (left)/Testing (right) accuracy at different noise points. Figure 4: Training (left)/Testing (right) accuracy at different noise points. To compare the impact of different memory influence functions on the WIMM and MIMM models, we analyze the effect of the memory kernel parameters on the models. Figures (7) and (8) illustrate this impact, with the LSSVM model used as a benchmark for comparison. In this experiment, we consider the models with parameters ranging over \(\{0.1,0.2,\ldots,2\}\). Specifically, we focus on the Segment and Sonar datasets from Table (I). For these datasets, \(80\%\) of the samples are randomly selected as the training set, and the remaining samples form the test set. The process is repeated five times, recording the highest test accuracy and its corresponding average training accuracy for comparison with LSSVM. As the WIMM model with the memory influence function (8) demonstrated poorer results in Table (II), we consider the influence of this case on the model.
From Figures (7) and (8), we can make the following observations: i) The WIMM model is more sensitive to the parameters, and its ability to memorize is contingent on selecting the appropriate parameters. ii) The MIMM model, particularly with the memory influence function (8), exhibits greater stability, consistently memorizing the training samples while ensuring that the test performance remains superior to that of the LSSVM model. iii) Overall, our models consistently outperform the LSSVM model, provided that the right parameters are selected. These findings emphasize the importance of parameter selection in the WIMM model, while the MIMM model offers a more robust performance with the chosen memory influence function. In general, our models demonstrate superior performance compared to the LSSVM model when the appropriate parameters are employed. ## VI Conclusion We have presented two novel innovations in the traditional LSSVM framework: (i) we have proposed a replacement for the objective function of the LSSVM, leading to improved performance; (ii) we have introduced a new memory-generalization kernel that effectively incorporates the complete memory of the training data, achieving zero training error. As a result of these innovations, the MIMM and WIMM models have demonstrated superior generalization accuracy while maintaining the same computational complexity. Specifically, they still involve solving a system of linear equations of the corresponding dimension, just like the current LSSVM implementation. Furthermore, our models have exhibited higher classification accuracy and enhanced noise tolerance on certain datasets. Additionally, they require less time and cost to memorize training samples compared to existing memory models. In future work, we plan to extend our memory enhancement mechanism to other models and explore its applicability to a variety of other problems. In addition, we intend to consider multiple memory patterns in our memory model and introduce forgetting mechanisms to enrich the memory capacity and effectively solve a wider range of tasks. Fig. 5: Training (left)/Test (right) accuracy for different sample sizes. Figure 6: Training (left)/Test (right) accuracy for different sample sizes. Figure 7: Training (left)/Testing (right) accuracy with different influence functions. ## Acknowledgement This work is supported in part by the National Natural Science Foundation of China (Nos. 12271131, 62106112, 61866010, 61966024 and 11871183), in part by the Natural Science Foundation of Hainan Province (No. 120RC449), and in part by the Key Laboratory of Engineering Modeling and Statistical Computation of Hainan Province.
2305.19941
Radio continuum tails in ram pressure-stripped spiral galaxies: experimenting with a semi-empirical model in Abell 2255
Wide-field radio continuum observations of galaxy clusters are revealing an increasing number of spiral galaxies hosting tens of kpc-long radio tails produced by the nonthermal interstellar medium being displaced by the ram pressure. We present a semi-empirical model for the multi-frequency radio continuum emission from ram pressure stripped tails based on the pure synchrotron cooling of a radio plasma moving along the stripping direction with a uniform velocity. We combine LOFAR and uGMRT observations at 144 and 400 MHz to study the flux density and spectral index profiles of the radio tails of 7 galaxies in Abell 2255, and use the model to reproduce the flux density and spectral index profiles, and infer the stripped radio plasma velocity. For 5 out of 7 galaxies we observe a monotonic decrease in both flux density and spectral index up to $\sim30$ kpc from their stellar disk. Our model reproduces the observed trends with a radio plasma bulk projected velocity between 160 and 430 km s$^{-1}$. This result represents the first indirect measure of the stripped, nonthermal interstellar medium velocity. The observed spectral index trends indicate that the synchrotron cooling is faster than the adiabatic expansion losses, thus suggesting that the stripped radio plasma can survive for a few tens of Myr outside of the stellar disk. This provides a lower limit for the lifetime of the stripped ISM outside of the disk. As a proof of concept, we use the best-fit velocities to constrain the galaxies' 3D velocity in the cluster to be in the 300-1300 km s$^{-1}$ range. We estimate the ram pressure affecting these galaxies to be between 0.1 and 2.9 $\times10^{-11}$ erg cm$^{-3}$, and measure the inclination between their stellar disk and the ram pressure wind.
A. Ignesti, B. Vulcani, A. Botteon, B. Poggianti, E. Giunchi, R. Smith, G. Brunetti, I. D. Roberts, R. J. van Weeren, K. Rajpurohit
2023-05-31T15:25:11Z
http://arxiv.org/abs/2305.19941v1
Radio continuum tails in ram pressure-stripped spiral galaxies: experimenting with a semi-empirical model in Abell 2255 ###### Abstract Context: Wide-field radio continuum observations of galaxy clusters are revealing an increasing number of spiral galaxies hosting tens of kpc-long radio tails produced by the nonthermal interstellar medium being displaced by the ram pressure. Aims: We present a semi-empirical model for the multi-frequency radio continuum emission from ram pressure stripped tails based on the pure synchrotron cooling of a radio plasma moving along the stripping direction with a uniform velocity. Methods: We combine LOFAR and uGMRT observations at 144 and 400 MHz to study the flux density and spectral index profiles of the radio tails of 7 galaxies in Abell 2255, and use the model to reproduce the flux density and spectral index profiles, and infer the stripped radio plasma velocity. Results: For 5 out of 7 galaxies we observe a monotonic decrease in both flux density and spectral index up to 30 kpc from their stellar disk. Our model reproduces the observed trends with a radio plasma bulk projected velocity between 160 and 430 km s\({}^{-1}\). This result represents the first indirect measure of the stripped, nonthermal interstellar medium velocity. The observed spectral index trends indicate that the synchrotron cooling is faster than the adiabatic expansion losses, thus suggesting that the stripped radio plasma can survive for a few tens of Myr outside of the stellar disk. This provides a lower limit for the lifetime of the stripped ISM outside of the disk. As a proof of concept, we use the best-fit velocities to constrain the galaxies' 3D velocity in the cluster to be in the 300-1300 km s\({}^{-1}\) range. We estimate the ram pressure affecting these galaxies to be between 0.1 and 2.9 \(\times 10^{-11}\) erg cm\({}^{-3}\), and measure the inclination between their stellar disk and the ram pressure wind. Conclusions: ## 1 Introduction Galaxies in clusters can evolve from star-forming into passive systems (e.g., Dressler 1980; Blanton et al. 2009; Fasano et al. 2000; Cortese et al. 2021). One of the main drivers of this 'environmental processing' is the interaction between the galaxy interstellar medium (ISM) and the intracluster medium (ICM) filling the cluster volume. This physical interaction manifests as an external pressure exerted by the ICM, the so-called 'ram pressure' \(P_{\rm Ram}\), that can overcome the stellar disk binding force and strip the ISM components from the galaxy (Gunn & Gott 1972). This ram pressure stripping (RPS) can be crucial for the galaxy, as the gas loss can effectively quench the star formation in the stellar disk (Boselli et al. 2022, for a review), thus making it an important quenching pathway for satellite galaxies (e.g., Vollmer et al. 2001; Tonnesen et al. 2007; Vulcani et al. 2020; Watts et al. 2023). RPS is a frequent phenomenon, as almost all galaxies in clusters undergo a RPS event visible at optical wavelengths during their life (Vulcani et al. 2022). The ram pressure action is not limited to displacing the ISM outside of the disk. The impact of ram pressure can result in many effects, including compression of gas along the leading edge of the disk (e.g., Rasmussen et al. 2006; Poggianti et al. 2019b; Roberts et al. 2022a), disturbed galaxy morphologies and trailing tails of stripped gas (e.g., Kenney et al. 2004; van Gorkom 2004; Fumagalli et al. 2014; Poggianti et al. 2017b), and condensation of star-forming knots in the tails (Kenney et al.
2014; Poggianti et al. 2019a). It can also temporarily enhance the global star formation (e.g., Poggianti et al. 2016; Vulcani et al. 2018; Roberts & Parker 2020) and trigger the activity of the central nuclei (e.g., Poggianti et al. 2017a; Peloso et al. 2022). Indeed it has been observed that it can affect the microphysics of the ISM on small scales, for example by stimulating the conversion from atomic to molecular hydrogen (Moretti et al. 2020), enhancing the cosmic rays' diffusivity (Farber et al. 2022; Ignesti et al. 2022b), or inducing the mixing between ISM and ICM (Sun et al. 2021; Franchetto et al. 2021). The most extreme examples of galaxies undergoing strong RPS are the so-called jellyfish galaxies (Fumagalli et al. 2014; Smith et al. 2010; Ebeling et al. 2014; Poggianti et al. 2017b). In the optical/UV band, these objects show extraplanar, unilateral debris extending beyond their stellar disks, and striking tails of ionised gas. Jellyfish galaxies mostly reside in galaxy clusters and are a transitional phase between infalling star-forming spirals and quenched cluster galaxies, hence they provide a unique opportunity to understand the impact of gas removal processes on galaxy evolution. A number of jellyfish galaxies have been observed showing tails of radio continuum emission extending for tens of kpc from their stellar disk (e.g., Gavazzi & Jaffe 1987; Murphy et al. 2009; Vollmer et al. 2013; Chen et al. 2020; Muller et al. 2021; Roberts et al. 2021, 2022b; Ignesti et al. 2022b). These radio tails develop, typically, steep-spectrum emission (\(\alpha<-0.9\) at GHz frequencies) within a few tens of kpc from the stellar disk (e.g., Vollmer et al. 2004; Chen et al. 2020; Ignesti et al. 2022c; Muller et al. 2021; Lal et al. 2022; Venturi et al. 2022). Therefore, they are best observed below GHz frequencies and, for this reason, they are being observed more and more frequently by the LOFAR Two-metre Sky Survey (LoTSS; Shimwell et al. 2017, 2019, 2022), which provides high resolution (6") and highly sensitive (\(\sim 100\)\(\mu\)Jy beam\({}^{-1}\)) images of the Northern sky at 120-168 MHz. Currently, about a hundred spiral galaxies with RPS radio tails have been reported and, thanks to this large statistical sample, it is now possible to conduct statistical studies of the development of RPS tails in clusters (e.g., Smith et al. 2022b, for a recent example). These 'radio tails' are produced by cosmic ray electrons (CRe), accelerated to energies of a few GeV by supernova explosions in the stellar disk (Condon 1992). The relativistic plasma, together with the ISM, is then stripped from the disk by the ram pressure. The CRe cool down by emitting radio waves via synchrotron radiation until the stripped clouds evaporate in the ICM. The stripped tail magnetic field, which is responsible for the CRe synchrotron losses, can be further amplified by the ICM magnetic draping (Dursi & Pfrommer 2008; Pfrommer & Dursi 2010; Ruszkowski et al. 2014; Muller et al. 2021). This qualitative scenario is supported by multi-frequency studies that observed a spectral index steepening with the radial distance along these tails (e.g., Vollmer et al. 2004; Chen et al. 2020; Muller et al. 2021; Ignesti et al. 2022c; Roberts et al. 2022b; Venturi et al. 2022). In this framework, the radio tail length would be mainly driven by two factors, namely the synchrotron cooling time (Pacholczyk 1970) and the radio plasma bulk velocity, \(V\), along the stripping direction.
An interesting implication of this scenario is that, for the radio plasma to cool down mainly via synchrotron emission, the stripped ISM clouds should be able to survive in the ICM for a timescale that is, at least, as long as the radiative time. Constraining the stripped ISM clouds' lifetime outside of the stellar disk is valuable for investigating the origin of the extraplanar star formation of jellyfish galaxies, or for investigating general astrophysical problems such as the evolution of cold gas clouds in a hot wind. In this work we develop a semi-empirical model to reproduce the radio tail of RPS galaxies and we test it on a sample of galaxies in the galaxy cluster Abell 2255 imaged with deep observations at 144 and 400 MHz. The manuscript is organized as follows. In Section 2 we present the sample and the data used for this analysis, and the model developed to reproduce them. The results are presented in Section 3 and discussed in Section 4, where we present also the caveats of our work. Throughout the paper, we adopt a \(\Lambda\)CDM cosmology with \(\Omega_{\rm DM}=0.7\), \(\Omega_{\rm matter}=0.3\), and \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), which yields \(1^{\prime\prime}=1.512\) kpc at the cluster redshift (\(z=0.08012\), Golovich et al. 2019). We describe the radio synchrotron spectrum as \(S\propto\nu^{\alpha}\), where \(S\) is the flux density, \(\nu\) the frequency, and \(\alpha\) is the spectral index. ## 2 Data analysis ### Cluster properties and data preparation We analyze the radio emission of a sample of galaxies in the galaxy cluster Abell 2255 (Abell 1958, hereafter A2255), a nearby system (\(z=0.08012\), Golovich et al. 2019) with a complex merger dynamics. Optical analysis suggested a merger along the line of sight, whereas the X-ray morphology, elongated along the E-W axis, points to a second merger in that direction (Yuan et al. 2003; Golovich et al. 2019). In the radio band, A2255 shows a diffuse radio halo at its center (reported for the first time by Jaffe & Rudnick 1979), and a large number of head-tail radio galaxies (e.g., Harris et al. 1980; Feretti et al. 1997; Miller & Owen 2003; Pizzo & de Bruyn 2009; Botteon et al. 2020a). Thanks to deep observations provided by the LOw Frequency ARray (LOFAR, van Haarlem et al. 2013), it has been revealed that A2255 hosts diffuse radio emission extended beyond \(R_{200}\) (Botteon et al. 2022). We summarize in Table 1 the properties of A2255. \begin{table} \begin{tabular}{c r} \hline \hline Property & Value \\ \hline RA [\({}^{\circ}\)] & 258.216 \\ DEC [\({}^{\circ}\)] & +64.063 \\ \(z\) & 0.08012\(\pm\)0.00024 \\ \(\sigma_{cl}\) [km s\({}^{-1}\)] & 1137\(\pm\)50 \\ \(M_{500}\) [\(\times 10^{14}\)\(M_{\odot}\)] & 5.38\(\pm\)0.06 \\ \(R_{500}\) [Mpc] & 1.196 \\ \(R_{200}\) [Mpc] & 2.033 \\ \hline \end{tabular} \end{table} Table 1: Summary of the properties of A2255. From top to bottom: equatorial coordinates (Eckert et al. 2017); redshift \(z\) and velocity dispersion \(\sigma_{cl}\) (Golovich et al. 2019); \(M_{500}\), \(R_{500}\) and \(R_{200}\) (Planck Collaboration et al. 2016). This cluster is the ideal candidate to test our model due to the availability of deep LOFAR and upgraded Giant Metrewave Radio Telescope (uGMRT) observations at, respectively, 120-168 MHz and 400 MHz. We make use of the 75 hrs LOFAR observation presented in Botteon et al. (2022, for a detailed description of the data calibration) to produce a new image at a central frequency of 144 MHz with a lower uvcut of 2000 \(\lambda\), ROBUST\(=\)0.5 (Briggs & Cornwell 1994), and a taper of 5 arcsec. The image is centered on the cluster and covers an area of 1 deg\({}^{2}\). The imaging was carried out with the software WSCLEAN (Offringa et al. 2014). The resulting image, shown in Figure 1, has an angular resolution of \(10.3^{\prime\prime}\times 6.6^{\prime\prime}\), and RMS\(=\)55 \(\mu\)Jy beam\({}^{-1}\). The lower uvcut permits us to remove the radio emission diffused on scales larger than \(\sim 100^{\prime\prime}\) (\(\sim 150\) kpc) and, thus, to reveal the smaller sources beneath, such as the radio tails. A2255 was observed with the uGMRT in band 3 (300-500 MHz) for 40 h (project code: 39_032, PI: A. Botteon), and these observations are presented here for the first time. The observations were divided into 4 runs of 10 h each, carried out on 2021 February 20, 26, March 13, and April 01, bookended by two 8 min scans on the flux density calibrators 3C286 and 3C48. The data were recorded in 2048 frequency channels with an integration time of 5.3 s using both the narrow-band (bandwidth of 33.3 MHz) and wide-band (bandwidth of 200 MHz) backends. The data were processed with the Source Peeling and Atmospheric Modeling (SPAM) package (Intema et al. 2009), which is a widely used software to reduce GMRT observations that performs calibration of the flux density scale, correction for the bandpass, data averaging and flagging, and direction-dependent calibration. In order to pro
A2255 was observed with the uGMRT in band 3 (300-500 MHz) for 40 h (project code: 39_032, PI: A. Botteon); the observations are presented here for the first time. They were divided into 4 runs of 10 h each, carried out on 2021 February 20, 26, March 13, and April 01, bookended by two 8 min scans on the flux density calibrators 3C286 and 3C48. The data were recorded in 2048 frequency channels with an integration time of 5.3 s using both the narrow-band (bandwidth of 33.3 MHz) and wide-band (bandwidth of 200 MHz) backends. The data were processed with the Source Peeling and Atmospheric Modeling (SPAM) package (Intema et al. 2009), a widely used software for reducing GMRT observations that performs the calibration of the flux density scale, the correction for the bandpass, data averaging and flagging, and direction-dependent calibration.

\begin{table}
\begin{tabular}{c r}
\hline \hline
Property & Value \\
\hline
RA [\({}^{\circ}\)] & 258.216 \\
DEC [\({}^{\circ}\)] & +64.063 \\
\(z\) & 0.08012\(\pm\)0.00024 \\
\(\sigma_{cl}\) [km s\({}^{-1}\)] & 1137\(\pm\)50 \\
\(M_{500}\) [\(\times 10^{14}\)\(M_{\odot}\)] & 5.38\(\pm\)0.06 \\
\(R_{500}\) [Mpc] & 1.196 \\
\(R_{200}\) [Mpc] & 2.033 \\
\hline
\end{tabular}
\end{table}
Table 1: Summary of the properties of A2255. From top to bottom: equatorial coordinates (Eckert et al. 2017); redshift \(z\) and velocity dispersion \(\sigma_{cl}\) (Golovich et al. 2019); \(M_{500}\), \(R_{500}\) and \(R_{200}\) (Planck Collaboration et al. 2016).

In order to produce deep and wide-band images of A2255 at the central frequency of 400 MHz, we proceed as follows. First, we processed the 4 narrow-band datasets independently to assess the quality of the 4 observing runs. Second, we split each wide-band dataset into 6 slices with an equal bandwidth of 33.3 MHz, from 300 to 500 MHz. This step is necessary for SPAM to handle the wide band of the new uGMRT backend, and it has already been adopted in previous studies (e.g., Botteon et al. 2020b; Di Gennaro et al. 2021; Schellenberger et al. 2022). Third, we obtained a global sky model from the best narrow-band image (run of 2021 April 01) to jointly calibrate and merge the slices centered at the same frequency across the 4 observing runs. Fourth, we jointly deconvolved the 6 resulting calibrated slices with WSClean, enabling the multiscale multifrequency deconvolution (Offringa & Smirnov 2017). The resulting image, which was corrected for the primary beam response, has a resolution of \(9.8^{\prime\prime}\times 9.0^{\prime\prime}\) and RMS=20 \(\mu\)Jy beam\({}^{-1}\). In this paper we focus on the jellyfish galaxies, while a forthcoming publication will be focused on the diffuse radio emission (Rajpurohit et al., in prep.). We also make use of images and spectroscopy from the 12\({}^{\rm th}\) release of the Sloan Digital Sky Survey (SDSS, York et al. 2000). We combine SDSS images1 in the g-, r-, and i-filters to produce color images of the cluster galaxies.

Figure 1: LOFAR image at 144 MHz of A2255 obtained with a lower uvcut of 2000 \(\lambda\) (RMS=55 \(\mu\)Jy beam\({}^{-1}\), resolution \(10.3^{\prime\prime}\times 6.6^{\prime\prime}\)). Inner and outer dashed circles point out \(R_{500}=1.196\) Mpc and \(R_{200}=2.033\) Mpc, respectively. The blue boxes point out the 7 galaxies analysed in this work, shown in the panels. The box size corresponds to the FOV of the corresponding panel.
Footnote 1: From [https://dr12.sdss.org/mosaics/](https://dr12.sdss.org/mosaics/)

### Galaxy sample selection

We select a sample of suitable galaxies for this analysis solely on the basis of their radio emission morphology. Specifically, we focus on those galaxies with an evident spiral/disk morphology in the SDSS image, and with radio tails that are detected above the 3\(\times\)RMS level and resolved by more than 3 resolution elements at 144 MHz. This latter condition is necessary to reliably sample their flux density profiles. We end up with a sample of 7 galaxies, which we show in Figure 1. Their properties are summarised in Table 2. According to the SDSS DR16 classification (Ahumada et al. 2020), these galaxies are star-forming spirals without evidence of AGN activity. Hence, their radio tails should be produced solely by the interaction between the ram pressure winds and the CRe accelerated by supernovae. All the galaxies have their radio tail pointing away from the cluster center and, with the sole exception of #6, they are remarkably aligned along the cluster NW-SE axis, which may suggest that they are collectively falling toward A2255 along a privileged direction. This piece of evidence might indicate that the NW-SE axis is critical for the ongoing merger, thus adding another piece to the puzzle of the complex dynamics of this cluster. Finally, we note that they are all blue-shifted with respect to the cluster (\(z<z_{cl}\)).

### Analysis of the radio tails

#### 2.3.1 Flux density and spectral index profiles

In order to evaluate the properties of the radio tails, we sample their radio emission to infer the flux density decline with distance. We define a grid composed of a set of aligned elliptical regions with a size of \(11\times 7\) arcsec\({}^{2}\), i.e., larger than the beams of the radio images, which starts from the galaxy center and extends to the end of the tail as observed at 144 MHz. The first bin, i.e., the one located over the stellar disk, serves solely to define the galaxy position in the cluster and to compute the projected length of the tail. For each radio map, we measure the flux density in each elliptical region beyond the first one that lies over the corresponding \(3\sigma\) contour, and we compute the corresponding uncertainty as \(\sigma=\sqrt{A/A_{\rm beam}}\times\)RMS (where \(A\) and \(A_{\rm beam}\) are, respectively, the areas of the sampling bin and of the beam). Then we measure the spectral index in each bin in which we observe emission at both 144 and 400 MHz as:

\[\alpha=\frac{\log\left(\frac{S_{144}}{S_{400}}\right)}{\log\left(\frac{144}{400}\right)}\pm\frac{1}{\log\left(\frac{144}{400}\right)}\sqrt{\left(\frac{\sigma_{144}}{S_{144}}\right)^{2}+\left(\frac{\sigma_{400}}{S_{400}}\right)^{2}}, \tag{1}\]

where \(\sigma_{144}\) and \(\sigma_{400}\) are the flux density uncertainties at 144 and 400 MHz. Finally, in order to define the distance of each bin from the stellar disk, we consider the projected physical distance between the center of each bin and that of the previous one. This approach permits us to compute the position of each bin with respect to the stellar disk as the sum of the distances of the previous bins.
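For concreteness, the following is a minimal sketch (with assumed function names, not the authors' code) of these per-bin measurements, i.e., the flux-density uncertainty and the two-frequency spectral index of Equation 1:

```python
# Per-bin measurements: flux-density error and Equation 1 spectral index.
import numpy as np

def flux_density_error(area, beam_area, rms):
    """Uncertainty sigma = sqrt(A / A_beam) * RMS for one sampling region."""
    return np.sqrt(area / beam_area) * rms

def spectral_index(s144, sig144, s400, sig400, nu1=144.0, nu2=400.0):
    """Spectral index and 1-sigma error between two frequencies (Equation 1).
    The logarithm base cancels in the ratio, so natural logs are fine."""
    log_ratio = np.log(nu1 / nu2)                    # negative for nu1 < nu2
    alpha = np.log(s144 / s400) / log_ratio
    alpha_err = np.sqrt((sig144 / s144) ** 2
                        + (sig400 / s400) ** 2) / abs(log_ratio)
    return alpha, alpha_err
```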
The projected lengths of the radio tails lie between 35 and 60 kpc, with the galaxies showing different declining slopes. For galaxy #2 we adopt special considerations, because its radio tail connects with another galaxy located toward the north-west (Figure 1). The current image resolution does not allow us to discriminate whether the connection is real or just a blend of multiple sources. Regardless, we decide to sample its tail up to \(\sim 40\) kpc from the disk.

#### 2.3.2 Modeling the multi-frequency radio tail

We model the expected flux density and spectral index profiles by assuming that they are produced by radio plasma clouds, accelerated in the stellar disk, that move along the stripping direction with a uniform bulk velocity, and that the CRe cooling is dominated by synchrotron and inverse Compton energy losses due to the cosmic microwave background (CMB) (e.g., Longair 2011). This assumption is motivated by the fact that the stripped tails are mostly composed of ionized gas (e.g., Boselli et al. 2022), thus the CRe perceive a local gas density that is lower than within the stellar disk (\(n_{\rm gas}\sim 10^{-2}-10^{-1}\) cm\({}^{-3}\)). Under such conditions, the timescales of the other energy loss mechanisms, such as ionization losses, bremsstrahlung radiation, or inverse Compton scattering off the galactic radiation field, are of the order of a few Gyr (e.g., Basu et al. 2015, Equations 5-8), and therefore they are negligible with respect to synchrotron losses (\(t\simeq 10^{7}-10^{8}\) yr). Our estimate is conservative because we assume a uniform magnetic field and we neglect adiabatic losses. The latter assumption is partially supported by the fact that multi-frequency studies of ram pressure stripped galaxies revealed a spectral index decline with distance (Vollmer et al. 2013; Muller et al. 2021; Roberts et al. 2022b; Ignesti et al. 2022a), thus suggesting that the synchrotron loss timescale is shorter than that of the adiabatic losses. Under these conditions, the synchrotron emissivity spectrum \(j(\nu/\nu_{\rm br})\) is defined as:

\[j\left(\frac{\nu}{\nu_{\rm br}}\right)=\sqrt{3}\frac{e^{3}}{m_{e}c^{2}}\int_{0}^{\pi/2}\sin^{2}\theta\,{\rm d}\theta\int_{0}^{+\infty}n(\gamma)F\left(\frac{\nu}{\nu_{\rm br}}\right){\rm d}\gamma\, \tag{2}\]

where \(\theta\) is the pitch angle, \(\nu_{\rm br}=\frac{3}{2}\frac{eB\sin\theta}{m_{e}c}\gamma^{2}\) is the break frequency, \(F(\nu/\nu_{\rm br})\) is the synchrotron kernel function (Rybicki & Lightman 1979), \(e\) and \(m_{e}\) are the electron charge and mass, respectively, and \(n(\gamma)\propto\gamma^{\delta}\) is the CRe energy distribution. We compute a sampled spectrum for \(j(\nu/\nu_{\rm br})\) by numerically solving Equation 2 under the assumptions of \(\delta=-2.2\), which entails an injection index \(\alpha=(\delta+1)/2=-0.6\), and of the favorable minimal energy loss magnetic field \(B=B_{\rm CMB}/\sqrt{3}\simeq 2.2\ \mu\)G, where \(B_{\rm CMB}=3.25(1+z)^{2}\simeq 3.8\ \mu\)G is the CMB equivalent magnetic field. The latter choice entails that we are assuming the maximum CRe radiative time. The implications of our assumptions are discussed in Section 4.3. In order to associate the emissivity spectrum with the observed flux density profiles, it is necessary to assume a 'bulk velocity' for the radio plasma along the stripping direction. This velocity defines the radio plasma 'dynamic age', i.e., the time elapsed since it left the stellar disk. For simplicity, we assume that the radio plasma moves with a uniform velocity \(V\) along the stripping direction, and that it leaves the stellar disk immediately after being injected into the ISM.
Consequently, the time elapsed since the CRe injection can be estimated as \(\tau=D/V\), where \(D\) is the (projected) distance from the stellar disk and \(V\) is the (projected) CRe bulk velocity with respect to the galaxy. We note that both \(D\) and \(V\) are projected quantities, but their ratio is equivalent to the ratio of the deprojected values. Therefore, the observed projected distances can be associated with a corresponding \(\tau\). Then, to derive the corresponding model emissivity, we make use of the radiative time definition (Miley 1980):

\[t_{\rm rad}\simeq 3.2\times 10^{10}\frac{B^{1/2}}{B^{2}+B_{\rm CMB}^{2}}\frac{1}{\sqrt{\nu_{\rm br}(1+z)}}\ {\rm yr}, \tag{3}\]

where the magnetic fields are expressed in \(\mu\)G and the break frequency \(\nu_{\rm br}\) is in units of MHz. Under the assumption that the radiative age of the plasma coincides with the time elapsed since the injection, i.e., \(t_{\rm rad}\simeq\tau=D/V\), for a given \(V\) Equation 3 associates to each spatial bin a corresponding \(\nu_{\rm br}(D)\) that, in turn, defines a value of \(\nu/\nu_{\rm br}(D)\) for a given \(\nu\). Finally, we compute the emissivity by interpolating the corresponding \(j(\nu/\nu_{\rm br}(D))\) from the sampled emissivity spectrum derived from Equation 2. We also introduce a normalization factor \(A\) that incorporates the conversion from emissivity to observed flux density. This procedure provides us with a model flux density profile as a function of distance, which we can use to fit the observed flux density profiles and thus constrain \(V\). By combining the interpolated emissivity values at two different frequencies for each distance, Equation 1 permits us to model also the expected spectral index profile \(\alpha(D)\), which we use to fit the observed spectral indices. In general, this model predicts that the flux density decreases monotonically with distance, with a consequent spectral index steepening. The trend is tuned by the velocity: the lower (higher) the value of \(V\), the steeper (flatter) the resulting profiles. As a consequence of our assumption \(\delta=-2.2\), the flattest spectral index value allowed by this model is \(\alpha=-0.6\). For reference, we show in Figure 2 how the resulting flux density and spectral index profiles change for different values of \(V\) for a fixed \(B=B_{\rm min}\). We use a least-squares fit to constrain the two free parameters, \(V\) and \(A\), which adapt our sampled model profiles to the observed ones: essentially, the first tunes the model steepening along the x-axis, whereas the second matches model and observations along the y-axis. For each galaxy, we fit the 144 MHz and 400 MHz flux density profiles and the spectral index profile independently, to see whether the model infers consistent velocities. We take into account the uncertainties on both the flux density and the spectral index to compute the error on the velocity and the \(1\sigma\) confidence interval for the best-fitting model.
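The following is a schematic, simplified sketch of this procedure (assumed function names, not the authors' code; for brevity the CRe distribution is truncated sharply at the break rather than being properly aged, and all physical constants are folded into the fitted normalization \(A\)):

```python
# Sketch of Sect. 2.3.2: tabulate j(x), x = nu/nu_br, map distance to break
# frequency via t_rad = D/V (Equation 3), and fit velocity V and norm A.
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import interp1d
from scipy.optimize import curve_fit
from scipy.special import kv

# Tabulate the synchrotron kernel F(x) = x * int_x^inf K_{5/3}(t) dt
# (K_{5/3} decays exponentially, so an upper limit of 100 is sufficient).
_xg = np.logspace(-4, 1.6, 200)
_Fg = np.array([x * quad(lambda t: kv(5.0 / 3.0, t), x, 100.0)[0] for x in _xg])
_F = interp1d(_xg, _Fg, bounds_error=False, fill_value=0.0)

def j_of_x(x, delta=-2.2):
    """Emissivity at x = nu/nu_br for n(u) ~ u^delta with u = gamma/gamma_br
    cut at u = 1, including the pitch-angle weighting of Equation 2."""
    inner = lambda th: np.sin(th) ** 2 * quad(
        lambda u: u ** delta * _F(x / (u ** 2 * np.sin(th))), 1e-2, 1.0)[0]
    return quad(inner, 0.05, np.pi / 2)[0]

_xs = np.logspace(-2, 1, 50)
_log_j = interp1d(np.log10(_xs), np.log10([j_of_x(x) for x in _xs]),
                  fill_value="extrapolate")

def nu_break_MHz(tau_yr, B=2.2, B_cmb=3.8, z=0.08012):
    """Invert Equation 3: break frequency (MHz) after a radiative age tau (yr),
    with B and B_cmb in microgauss."""
    return (3.2e10 * np.sqrt(B) / ((B ** 2 + B_cmb ** 2) * tau_yr)) ** 2 / (1 + z)

def flux_model(D_kpc, V_kms, A, nu_MHz=144.0):
    """Model flux density at projected distance D for velocity V and norm A."""
    tau_yr = D_kpc * 3.086e16 / V_kms / 3.156e7   # kpc / (km/s) -> yr
    x = nu_MHz / nu_break_MHz(tau_yr)
    return A * 10.0 ** _log_j(np.log10(x))

# e.g.: popt, pcov = curve_fit(flux_model, D_kpc, S_obs, sigma=S_err,
#                              p0=[300.0, S_obs[0]], absolute_sigma=True)
```

For a sense of scale, \(D=30\) kpc and \(V=300\) km s\({}^{-1}\) give \(\tau\approx 10^{8}\) yr, consistent with the synchrotron loss timescale quoted above.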
\begin{table}
\begin{tabular}{c c c c c}
\hline
ID & Name & RA, DEC & \(z\) & \(R_{CL}\) \\
 & & [\({}^{\circ}\), \({}^{\circ}\)] & & [kpc] \\
\hline
1 & LEDA 2667121 & 257.823, 64.342 & 0.0768 & 1781.66 \\
2 & LEDA 3138983 & 258.141, 64.098 & 0.0762 & 259.13 \\
3 & [PVK2003] 258.09561+64.14133 & 258.095, 64.141 & 0.0752 & 514.6 \\
4 & [YZJ2003] 2-63 & 258.334, 64.009 & 0.0799 & 406.32 \\
5 & LEDA 59848 & 258.213, 64.073 & 0.0738 & 55.93 \\
6 & [YZJ2003] 2-130 & 257.894, 63.894 & 0.0767 & 1198.62 \\
7 & LEDA 2665175 & 257.914, 64.169 & 0.0768 & 920.2 \\
\hline
\end{tabular}
\end{table}
Table 2: Sample of galaxies studied in this work. From left to right: identification number used in this paper; galaxy ID; coordinates of the first sampling bin; SDSS redshift; projected clustercentric distance.

\begin{table}
\begin{tabular}{c c c c}
\hline
ID & \(V_{144}\) & \(V_{400}\) & \(V_{\alpha}\) \\
\hline
1 & 667\(\pm\)316 & 429\(\pm\)83 & 206\(\pm\)33 \\
2 & 654\(\pm\)407 & 3519 & 295\(\pm\)93 \\
3 & 149\(\pm\)16 & 215\(\pm\)32 & 136\(\pm\)15 \\
4 & 307\(\pm\)65 & 325\(\pm\)38 & 718 \\
5 & 3087 & 174\(\pm\)28 & 718 \\
6 & 304\(\pm\)132 & 160\(\pm\)32 & 993 \\
7 & 402\(\pm\)104 & 251\(\pm\)44 & 172\(\pm\)26 \\
\hline
\end{tabular}
\end{table}
Table 3: Best-fitting velocities, in units of km s\({}^{-1}\), derived from the different profiles, with \(1\sigma\) uncertainties for the systems with sufficient statistics.

Figure 2: Flux density (left) and spectral index (right) model profiles for different values of the velocity \(V\) given a magnetic field \(B_{\rm min}\).

Figure 3: From top to bottom: velocity fit for galaxy #1, #2, and #3. Left: SDSS RGB image overlaid with the 3, 9, 15\(\times\)RMS surface brightness contours at 144 (blue) and 400 MHz (orange) and the sampling regions (green ellipses). The dashed region marks the reference center of the galaxy. Right: Flux density (top) and spectral index (bottom) trends with the distance from the stellar disk, and the corresponding best-fit profiles. The color-filled area indicates the \(1\sigma\) confidence region. The horizontal dashed line marks \(\alpha=-0.6\).

Figure 3: **(continued).** From top to bottom: velocity fit for galaxy #4, #5, and #6.

## 3 Results

We report in Figure 3, for each galaxy, the sampling grid superimposed on the SDSS image with the 144 MHz (blue) and 400 MHz (orange) contours, the corresponding flux density and spectral index profiles, and the best fit with the computed values of \(V\). The results are summarized in Table 3 and Figure 4. Due to the low number of bins of some galaxies, the best-fit uncertainties cannot be computed for every system. Concerning the observed profiles, we note that, in general, every galaxy shows a decreasing flux density profile at both frequencies. The 400 MHz profiles are systematically shorter than the 144 MHz ones, which is consistent with the fact that the CRe emitting at higher frequencies, in a uniform magnetic field, have shorter radiative times than those emitting at lower frequencies (Equation 3), and therefore they can travel for shorter distances. The spectral index profiles seem to steepen with distance in 4 out of 7 galaxies. Regarding the model fitting, we observe that, although the 144 MHz profiles have more bins than the 400 MHz ones and should thus yield more solid results, the 400 MHz fits tend to have smaller uncertainties.
We conclude that this is because the curvature of the 400 MHz profiles is more evident than that of the 144 MHz ones, thus easing the fit convergence. The outcomes of the model fitting are varied:

* For galaxies #1, #3, and #7 the velocities derived from the three tracers are consistent within 2\(\sigma\);
* For galaxies #4 and #6 the results are less clear. On the one hand, the flux density fit produces consistent results for the 144 and 400 MHz profiles. On the other hand, the spectral index fitting does not converge, because the values in the first bins are flatter than -0.6, which is the upper limit permitted by our model. Observing such a flat spectral index suggests that, inside those bins, the radio emission at 144 MHz either has an injection index flatter than -0.6, or it has been affected by ionization losses, which are not included in our simple model. Nevertheless, the spectral index profiles steepen with distance, which is in agreement with our predictions;
* For galaxies #2 and #5 the results are inconclusive because either the flux density or the spectral index profiles do not decrease monotonically. In both cases, we conclude that the putative radio tails are either not real or not induced by RPS. For galaxy #2, we suggest that the putative radio tail instead results from the blended emission of two galaxies, due to the insufficient resolution of our images. For galaxy #5, we conclude that the 'tail' is actually mostly composed of emission coming from the nearby, bright radio galaxy. Consequently, the fit could not converge and it returns nonphysical and inconsistent values of \(V\). Therefore, we exclude these two galaxies from the following Discussion.

## 4 Discussion

### Insights into the properties of the stripped tails

#### 4.1.1 The lifecycle of the radio plasma

The semi-empirical model introduced in Section 2.3.2 reproduces the profiles of 5 out of 7 radio tails in A2255. This result permits us to investigate the physical properties of the radio-emitting CRe and the stripped ISM. To begin with, the best-fit velocities are of the order of hundreds of km s\({}^{-1}\) (see Table 3). These values support the previous results that constrained the CRe bulk velocity to be of the order of that of the ram pressure winds (Ignesti et al., 2022). Our semi-empirical model provides an accurate measure of the CRe bulk velocity in these extraplanar regions that can be used to constrain numerical simulations of the nonthermal ISM components subjected to ram pressure (e.g., Tonnesen and Stone, 2014; Muller et al., 2021; Farber et al., 2022). Concerning the radio plasma properties, in general we observe that the flux density and spectral index profiles decline monotonically with distance, in agreement with the simple, pure cooling model. The emerging picture is that the radio plasma stripped from the stellar disk can cool down within the first tens of kpc via synchrotron emission before being dissipated by adiabatic expansion or mixing with the ICM. This confirms that the adiabatic loss timescale \(t_{\rm ad}\) is larger than the CRe radiative time \(t_{\rm rad}\) at 144 MHz, thus it tentatively constrains \(t_{\rm ad}>100\) Myr (Equation 3).

Figure 3: **(continued).** Velocity fit for galaxy #7.

The observed profiles can constrain the action of 're-energization' processes that would induce deviations from the monotonic decline.
These include star formation taking place outside of the stellar disk, and (re-)acceleration processes induced by shocks and turbulence within the tail. The first is expected to interfere with the cooling by injecting fresh electrons along the tail, thus forming a 'bump' in the flux density profile and a flattening in the spectral index one (e.g., the case of JO206, see Muller et al. 2021). In our sample, only galaxy #7 shows a potential signature of this process for \(D>30\) kpc, beyond which the observations slightly diverge from the model. This feature could be a signature either of the presence of 'fresh' CRe produced in situ by extraplanar star formation, or of CRe losses dominated by adiabatic losses instead of synchrotron/IC ones (e.g., due to increasingly longer radiative times as a consequence of a decrease of the magnetic field with distance). However, we note that not observing a clear bump for the other galaxies does not rule out the possibility of star formation in the stripped tail. It may be possible that the signal due to the supernovae exploding in the tail is simply overcome by that of the CRe coming from the disk (see Ignesti et al. 2022c,b). In order to rule out the presence of extraplanar star formation, future optical and UV studies of the stripped tails are required (e.g., Poggianti et al. 2019a; Giunchi et al. 2023; Waldron et al. 2023).

The CRe re-acceleration, instead, by extending the lifetime of the CRe beyond their supposed radiative time (Equation 3), should act on the flux density profile by extending it with an additional component with a characteristic uniform surface brightness. These re-accelerated tails have been observed more and more frequently in radio galaxies thanks to LOFAR, and this has been explained as the consequence of the CRe being 'gently re-accelerated' by the ICM turbulence for \(t>t_{\rm acc}\), where \(t_{\rm acc}>100\) Myr is the re-acceleration timescale (e.g., de Gasperin et al. 2017; Botteon et al. 2021; Ignesti et al. 2022a; Edler et al. 2022). Consequently, not observing low-brightness tails in these star-forming galaxies may imply that the re-acceleration is not efficient enough to compensate for the energy losses, either due to synchrotron losses at 144 MHz or to the adiabatic expansion of the relativistic plasma (i.e., \(t_{\rm rad}<t_{\rm ad}<t_{\rm acc}\)). These broad constraints might suggest that, for those CRe emitting at 144 MHz, \(t_{\rm rad}<100\) Myr and thus, according to Equation 3, that the magnetic field is at least \(\sim\)7 \(\mu\)G.

#### 4.1.2 Implications for the stripped ISM

Our results can provide constraints for the velocity of the material stripped by RPS and, thus, they can help in the study of the evolution of the stripped clouds outside of the stellar disks (e.g., Sparre et al. 2020; Tonnesen & Bryan 2021; Farber et al. 2022). Although the measured order of magnitude is consistent with previous numerical simulations, the latter predict that the clouds should decelerate, as a consequence of the mixing with the ICM, within \(\sim 100\) kpc from the disk (e.g., Tonnesen & Bryan 2021, and references therein). In this context, our model shows that the average velocity of the clouds within the first tens of kpc is relatively constant, thus the ICM mixing, and the consequent deceleration, have not yet significantly affected the clouds. Similarly, the observed spectral steepening indicates that, at least within the first tens of kpc from the stellar disk, the radio plasma can cool down undisturbed.
This implies that the radiative time is shorter than the timescales of those processes that would eventually lead to the destruction of the stripped clouds, such as adiabatic expansion or mixing. Therefore, this piece of evidence tentatively constrains the lifetime of the stripped ISM clouds outside of the disk to be at least of the order of tens of Myr.

### A potential constraint on the 3D galaxy motion

In the RPS framework, the radio plasma, together with the ISM, is being displaced by the ram pressure wind. Consequently, the stripped plasma bulk velocity, at least within the first tens of kpc from the disk, should be comparable with the galaxy velocity with respect to the ICM. The observed flux density profile of a radio tail should keep track of this information, and so we speculate that the velocity estimated by our model can be used to constrain the galaxy velocity in the cluster. Specifically, the modulus of the projected velocity \(V\) would represent a lower limit2 on the projected velocity \(V_{\rm sky}\) at which the galaxy moves with respect to the ICM along the plane of the sky. The velocity component along the line of sight can be derived from the galaxy redshift \(z\) by following the method described in Davis & Scrimgeour (2014) to compute the peculiar velocities:

Footnote 2: Due to our assumptions on the magnetic field, see Section 2.3.2.

\[V_{\rm los}=c\left(\frac{z-z_{cl}}{1+z_{cl}}\right). \tag{4}\]

Figure 4: Best-fitting \(V\) for each galaxy measured from the 144 (blue), 400 MHz (red) and spectral index (grey) profiles. The thick and thin error bars represent, respectively, the 1 and 2\(\sigma\) confidence intervals.

Therefore, the galaxy total 3D velocity would be \(V_{\rm tot}=\sqrt{V_{\rm sky}^{2}+V_{\rm los}^{2}}\). Following this approach, we estimate the galaxies' 3D velocity by adopting the velocity measured at 400 MHz as \(V_{\rm sky}\). The resulting total velocities, summarized in Table 4, span from \(\sim\)300 to \(\sim\)1300 km s\({}^{-1}\), i.e., from \(\sim\)0.3 to \(\sim\)1.2\(\sigma_{cl}\). The 3D velocities, with the sole exception of galaxy #4, seem to be dominated by the line-of-sight velocity. As a _proof-of-concept_, in the following we explore the potential of using the CRe bulk velocity as an indicator of the galaxy velocity along the plane of the sky.

#### 4.2.1 Comparing different estimators of \(V_{\rm sky}\)

First, we compare the velocity inferred from the radio tails' properties with those derived from other methods previously adopted in the literature. This comparison should be interpreted carefully, due to the small size of our sample and the fact that all the galaxies belong to the same cluster. Nevertheless, this exercise can provide some insights to better evaluate the limits of each method in future studies. The two main methods adopted in previous works are:

* The 45\({}^{\circ}\) _approximation_: observing RPS-induced tails, at any wavelength, is evidence that the galaxy has a significant component of its velocity directed along the plane of the sky. Thus, at the zeroth-order approximation, we can assume that, at least, \(V_{\rm sky}=V_{\rm los}\), hence \(V_{\rm tot}=V_{\rm los}\sqrt{2}\). This is equivalent to assuming that the galaxy motion is inclined by 45\({}^{\circ}\) with respect to the line-of-sight. This method has been adopted to constrain the order of magnitude of the ram pressure given the ICM density (e.g., Poggianti et al. 2019b; Campitiello et al. 2021; Bartolini et al. 2022);
* The _cooling length_: this method is based on the same physical assumption as our work, i.e.,
that the radio tail length depends solely on the CRe bulk velocity and the radiative time. However, in this case the hypothesis is that the CRe emit all of their energy within the observed radio tail. The average CRe velocity, \(V_{\rm avg}\), is directly computed as the ratio between the total radio tail length and the radiative time. This method has been applied when the radio data did not allow a detailed sampling of the flux density decline (Ignesti et al. 2022c; Muller et al. 2021). Its strongest caveat is that the resulting velocity mostly depends on the observed (projected) tail length, which ultimately depends on the image sensitivity.

We compute \(V_{\rm sky}\) for each galaxy by adopting the two methods described above. For the latter, we use the distance of the last bin of each galaxy as a measure of the radio tail length, and \(t_{\rm rad}\simeq 2\times 10^{8}\) yr derived from Equation 3 under the same assumptions as our fit (\(B=B_{\rm min}\), \(z=0.08012\), and \(\nu=144\) MHz). The results are shown in Figure 5, where, for each galaxy, we report the 3 different estimates of \(V_{\rm sky}\) (bottom row, diamonds) and the corresponding \(V_{\rm tot}\) (upper row, hexagons).

Figure 5: For each galaxy: comparison between the \(V_{\rm sky}\) derived with 3 different methods (lower row, diamonds) and the corresponding total velocities (upper row, hexagons), connected by the dashed lines. The 3 methods are: the best-fit velocity \(V\) derived from the 400 MHz profile (blue), \(V_{\rm sky}=V_{\rm los}\) (red), and \(V_{\rm sky}=V_{\rm avg}\) (green). The vertical dashed line points out the value of \(\sigma_{cl}\).

The cooling length method generally produces the lowest values of \(V_{\rm sky}\) compared with the two other methods, with the only exception being galaxy #3, for which \(V_{\rm avg}\simeq V\). This result suggests that assuming the CRe have exhausted their energy within the observed length may not be correct, and that not taking into account the intrinsic curvature of the radio emissivity with time (distance) leads to underestimating \(V_{\rm sky}\) by a factor \(\leq 3\), due to the nonlinearity of the \(t_{\rm rad}-\nu\), and hence \(t_{\rm rad}-D\), relation. Concerning the other method, the results are varied. For galaxy #4, the \(V_{\rm los}\) is extremely low (64 km s\({}^{-1}\)), thus suggesting that using the 45\({}^{\circ}\) approximation may have led to underestimating the velocity. In this case, the independent estimate provided by the radio emission permits us to constrain a more realistic value of the velocity. In the other galaxies the velocity estimates are more similar. As mentioned above, this could be due to a sample bias related to the physical properties of the cluster and of the galaxies within it. However, although the results are similar, computing the velocity on the basis of the radio emission decline permits us also to investigate the geometry of the galaxy-wind interaction (see Section 4.2.3), which is not possible otherwise. Thus this methodology provides an advantage with respect to the 45\({}^{\circ}\) approximation.

#### 4.2.2 Measuring the effective ram pressure

Given the galaxy velocity and position in the cluster, the corresponding ICM ram pressure can be computed as \(P_{\rm Ram}=\rho_{\rm ICM}V_{\rm tot}^{2}\), where \(\rho_{\rm ICM}=1.19\mu m_{p}n_{e}\) and \(\mu\), \(m_{p}\) and \(n_{e}\) are, respectively, the average molecular weight, the proton mass, and the electron density (e.g., Gitti et al. 2012). To compute the latter, we use the azimuthally-averaged electron density profile3 measured by the X-COP survey (Ghirardini et al. 2019) to evaluate the proper \(n_{e}\) at the projected clustercentric distance of each galaxy (Figure 6). The corresponding uncertainties are derived by propagating the error on the fit and the uncertainty on \(n_{e}\). We report the results in Table 4. The caveats, limitations, and assumptions of this method are discussed in Section 4.3.

Footnote 3: [https://dominiqueeckert.wixsite.com/xcop/a2255](https://dominiqueeckert.wixsite.com/xcop/a2255)

In general, the resulting values of \(P_{\rm Ram}\) lie in the \(10^{-12}-10^{-11}\) erg cm\({}^{-3}\) range, which is in line with previous predictions (Roediger & Hensler 2005; Bruggen & De Lucia 2008; Jaffe et al. 2018; Boselli et al. 2022).

Figure 6: Electron density profile reported in X-COP. The blue points mark the galaxy positions, and the vertical dashed lines indicate, respectively, \(R_{500}\) and \(R_{200}\).
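A minimal sketch of this estimate follows (assumed names; \(\mu=0.6\) is a typical ICM value adopted here for illustration, as the text does not quote one):

```python
# Ram-pressure estimate P_Ram = rho_ICM * V_tot^2, rho_ICM = 1.19 mu m_p n_e.
M_P = 1.6726e-24  # proton mass [g]
MU = 0.6          # mean molecular weight (assumption, typical ICM value)

def ram_pressure(n_e, v_tot_kms):
    """Ram pressure [erg cm^-3] from n_e [cm^-3] and 3D velocity [km/s]."""
    rho_icm = 1.19 * MU * M_P * n_e   # gas mass density [g cm^-3]
    v_cgs = v_tot_kms * 1.0e5         # km/s -> cm/s
    return rho_icm * v_cgs ** 2

# e.g. ram_pressure(1e-3, 1000.0) ~ 1.2e-11 erg cm^-3, at the upper end of
# the quoted 1e-12 - 1e-11 range.
```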
#### 4.2.3 Computing the disk-wind angle

Measuring the two components of the 3D velocity independently allows us to constrain the inclination of the galaxy disk with respect to the direction of its motion, which corresponds to the geometry of the ram pressure wind with respect to the stellar disk. Indeed, several studies have shown that the evolution of ram pressure-stripped galaxies can be affected by the inclination between the ram pressure wind, which is opposite to the galaxy motion, and the stellar disk (e.g., Roediger & Bruggen 2006; Jachym et al. 2009; Bekki 2014; Steinhauser et al. 2016; Farber et al. 2022; Akerman et al. 2023). The derivation of the disk-wind angle has been done previously for individual galaxies (e.g., Vollmer et al. 2012; Merluzzi et al. 2013). Here we derive the disk-wind angles for the 5 galaxies, to increase the number of systems with this crucial information, and to show the geometrical model that takes advantage of the results of our radio analysis.

Figure 7: Example of the 3D projections in the cartesian system for galaxy #1. The blue disk and segment represent the stellar disk and the observed radio tail. The red and black vectors are, respectively, the projections of \(\hat{v}\) and \(\hat{n}\). From top to bottom: projection along the LOS (which corresponds to the observed projection), along the E-W axis, and along the N-S axis. In the top panel we show the \(\theta_{V}\) (white, filled) and the \(\theta_{\rm tail}\) (white, dashed) angles.

We compute the disk-wind angle, \(\Theta\), starting from our estimates of \(V_{\rm sky}\) and \(V_{\rm los}\). We adopt a reference system in which the \(x\)- and \(y\)-axis are on the plane of the sky, respectively along the East-West and the North-South directions, and the \(z\)-axis coincides with the line-of-sight and points towards the observer. For each galaxy, we use the inclination of the stellar disk with respect to the _line-of-sight_, \(\phi_{\rm disk}\), and its position angle, \(PA\), reported in the HyperLeda4 database (Makarov et al., 2014). With them, we define the polar versor \(\hat{n}\) in a cartesian coordinate system centered on the galaxy with the 3 components aligned, respectively, along the directions north-south, east-west, and the _line-of-sight_ (Figure 7).
Similarly, with \(V_{\rm sky}\) and \(V_{\rm los}\), where \(V_{\rm los}>0\) indicates a galaxy moving away from the observer, and the direction of the motion, which we estimate as \(\theta_{V}=\theta_{\rm tail}-\pi\), where \(\theta_{\rm tail}\) is the North-to-East angle between the galaxy center and the direction of the tail in the sky, we define the velocity versor \(\hat{v}\). Therefore,

Footnote 4: [http://leda.univ-lyon1.fr/](http://leda.univ-lyon1.fr/)

\[\begin{split}\hat{n}=&[\cos{(PA)}\sin{(\phi_{\rm disk})},\\ &\sin{(PA)}\sin{(\phi_{\rm disk})},\\ &\cos{(\phi_{\rm disk})}]\\ \hat{v}=&[V_{\rm sky}\cos{(\theta_{V}+\pi/2)},\\ & V_{\rm sky}\sin{(\theta_{V}+\pi/2)},\\ &-V_{\rm los}]/V_{\rm tot}\end{split} \tag{5}\]

Then the disk-wind angle can be computed as the angle between \(\hat{v}\) and \(\hat{n}\), hence:

\[\Theta=\arccos{(\hat{v}\cdot\hat{n})}. \tag{6}\]

In this reference system, \(\Theta=90^{\circ}\) indicates a wind impacting the galaxy edge-on, whereas \(\Theta=0^{\circ}\) or \(\Theta=180^{\circ}\) indicates that the galaxy faces the wind face-on. We can also measure the inclination of the wind with respect to the _line-of-sight_, \(\phi_{\rm V}\), as:

\[\phi_{\rm V}=\arctan{\left(\frac{V_{\rm sky}}{V_{\rm los}}\right)}. \tag{7}\]

We report the results for each galaxy in Table 5.

\begin{table}
\begin{tabular}{c c c c c c}
\hline
ID & \(\theta_{\rm tail}\) & \(\phi_{\rm disk}\) & \(PA\) & \(\Theta\) & \(\phi_{\rm V}\) \\
\hline
1 & 328.38 & 64.3 & 134.5 & 76.85 & 35.5 \\
3 & 333.69 & 52.1 & 153.1 & 52.43 & 6.27 \\
4 & 131.22 & 41.5 & 58.6 & 39.45 & 78.2 \\
6 & 227.18 & 55.8 & 178.5 & 43.68 & 17.55 \\
7 & 351.0 & 90.0 & 154.2 & 96.6 & 23.43 \\
\hline
\end{tabular}
\end{table}
Table 5: From left to right: galaxy ID; North-to-East angle between the galaxy center and the tail direction in the sky, \(\theta_{\rm tail}\); inclination of the stellar disk with respect to the line of sight, \(\phi_{\rm disk}\); position angle, \(PA\); disk-wind angle, \(\Theta\); inclination of the velocity with respect to the line of sight, \(\phi_{\rm V}\). All angles are in degrees.

We compare the values of \(\phi_{\rm V}\) inferred by our analysis with those derived from an independent method, namely the phase-space analysis presented in Bellhouse et al. (2021, Section 3). The probable angle of a galaxy's velocity between _line-of-sight_ and plane of the sky is estimated by comparing the observed \(R_{CL}/R_{200}\) and \(V_{\rm los}/\sigma_{cl}\) with a phase-space diagram built by stacking the galaxies of 42 simulated galaxy clusters observed from random angles (e.g., Smith et al., 2022; Canducci et al., 2022; Smith et al., 2022; Awad et al., 2023, for different applications of the phase-space analysis). This method provides us with a distribution of \(\phi_{V}\) for each position in the phase-space diagram (i.e., for each pair of \(R_{CL}/R_{200}\) and \(V_{\rm los}/\sigma_{cl}\)). In Figure 8 we report the values of \(\phi_{\rm V}\) estimated by our analysis vs. the value range inferred from the phase-space analysis. We remind the reader that, due to our assumption on the magnetic field, we can derive only a lower limit for \(V\), and hence \(V_{\rm sky}\). Correspondingly, the values of \(\phi_{\rm V}\) have to be considered as lower limits on the real \(\phi_{\rm V}\). For three galaxies (#1, #6, and #7) the two predictions are in broad agreement, in the sense that they lie in the same quadrant (\(\phi_{\rm V}<45^{\circ}\)), and for galaxies #1 and #7 they are actually consistent. For galaxy #4 the phase-space prediction is consistent within the third quartile with \(\phi_{\rm V}>45^{\circ}\). Therefore, for 3(+1) out of 5 galaxies the two independent analyses are in agreement. However, we note that the phase-space analysis is a statistical tool intended for large numbers of galaxies, thus this current comparison, albeit instructive, is not conclusive.

Figure 8: A comparison between the values of \(\phi\) inferred in our work (x-axis) vs. the prediction based on the phase-space analysis (y-axis). The vertical errorbars indicate the first and third quartiles of the \(\phi\) distributions predicted by the phase-space analysis. The dashed lines separate the two regimes \(\phi_{\rm V}>45^{\circ}\) (\(V_{\rm sky}>V_{\rm los}\)) and \(\phi_{\rm V}<45^{\circ}\) (\(V_{\rm sky}<V_{\rm los}\)), and the continuous line indicates the 1:1 identity.
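A compact implementation sketch of Equations 4-7 follows (assumed function names, angles in degrees; \(\phi_{\rm V}\) is returned in the 0-90\({}^{\circ}\) range, as quoted in Table 5):

```python
# Equations 4-7: line-of-sight velocity, versors, Theta, and phi_V.
import numpy as np

C_KMS = 2.99792458e5  # speed of light [km/s]

def v_los(z_gal, z_cl=0.08012):
    """Peculiar line-of-sight velocity (Equation 4); > 0 means receding."""
    return C_KMS * (z_gal - z_cl) / (1.0 + z_cl)

def disk_wind_angles(v_sky, vlos, theta_tail, phi_disk, pa):
    """Disk-wind angle Theta (Eq. 6) and wind inclination phi_V (Eq. 7)."""
    th_v = np.deg2rad(theta_tail) - np.pi           # direction of motion
    phi_d, pa_r = np.deg2rad(phi_disk), np.deg2rad(pa)
    # Disk polar versor n_hat (Equation 5)
    n_hat = np.array([np.cos(pa_r) * np.sin(phi_d),
                      np.sin(pa_r) * np.sin(phi_d),
                      np.cos(phi_d)])
    # Velocity versor v_hat (Equation 5); the z-axis points at the observer
    v_tot = np.hypot(v_sky, vlos)
    v_hat = np.array([v_sky * np.cos(th_v + np.pi / 2),
                      v_sky * np.sin(th_v + np.pi / 2),
                      -vlos]) / v_tot
    theta = np.degrees(np.arccos(np.clip(np.dot(v_hat, n_hat), -1.0, 1.0)))
    phi_v = np.degrees(np.arctan(abs(v_sky / vlos)))
    return theta, phi_v

# e.g. galaxy #4: v_sky ~ 307 km/s, z = 0.0799 (vlos ~ -60 km/s),
# theta_tail = 131.22, phi_disk = 41.5, PA = 58.6 returns
# Theta ~ 40 deg and phi_V ~ 78 deg, close to the Table 5 values.
```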
### Caveats

We summarize here the caveats, limitations, and assumptions of our model.

1. In order to obtain a reliable fit of the flux density decline it would be best to sample the radio tails with at least 3 spatial bins, because the velocity fit aims to constrain the curvature of the profile. This is possible only with the correct combination of sensitivity and resolution of the radio images;
2. The best-fit \(V\) depends on the shape of the underlying emissivity spectrum, which ultimately depends on the assumptions on \(\delta\) and \(B\). Assuming a steeper CRe distribution than the one we adopt (\(\delta=-2.2\)) will produce an initial spectral index steeper than -0.6. In principle, the initial spectral index could be measured directly from the synchrotron spectrum within the stellar disk. However, in our case this measure may not be reliable, because a low-frequency spectral index, in the presence of high-density star-forming regions, can be flattened by ionization losses, and thus it does not reflect the real CRe energy distribution (e.g., Basu et al. 2015; Ignesti et al. 2022c). Concerning the magnetic field, since there are no methods to reliably measure its intensity in the tails, we assumed the minimal energy loss field \(B_{\rm min}\). Assuming higher values of \(B\) would also result in a steeper emissivity decline for \(\nu>\nu_{\rm br}\). The magnetic field assumption defines the conversion from radiative time to projected distance. Using \(B_{\rm min}\) entails that we are working under the favorable hypothesis of maximum CRe radiative time; hence, for a given velocity, we are maximizing the distance that the CRe can travel. Therefore, the \(V\), and hence \(V_{\rm sky}\), derived under this assumption is a lower limit on the real velocity. Using different values of \(B\) would entail shorter \(t_{\rm rad}\) (Equation 3), thus higher velocities would be required to reproduce the observed projected distances. We quantify this behavior in Figure 9, in which we show how the ratio of \(t_{\rm rad}\), and of \(V_{\rm sky}\propto 1/t_{\rm rad}\), changes with respect to the case \(B=B_{\rm min}\) for different values of \(B/B_{\rm min}\) (see also the numerical sketch after this list). We observe that the correction exceeds a factor of 2 for \(B\geq 3.5\times B_{\rm min}\simeq 7.7\)\(\mu\)G;
3. We assume that \(B\) and the CRe bulk velocity \(V\) are uniform along the tail. Different conditions would divert the observed flux density profiles from the prediction of pure synchrotron cooling. For instance, a magnetic field decreasing along the tail (e.g., as a consequence of adiabatic expansion) could entail not only a lower synchrotron emissivity, but also the fact that, at a given frequency, the emission would be provided by CRe with increasing energy, and hence lower radiative times.
Moreover, we assume that the CRe velocity along the tail is defined by the wind velocity, thus we neglect possible effects of the CRe streaming along the magnetic field lines (e.g., Armillotta et al. 2022);
4. By fitting the emissivity spectrum directly to the observed flux density we are also inherently assuming that, in each bin, the radio-emitting plasma has the same geometrical properties, such as the volume, the curvature along the line of sight, and the filling factor. A decreasing volume/filling factor would induce an additional decline in flux density not included in our model;
5. The exact ram pressure, \(P_{\rm Ram}\), is derived from the azimuthally-averaged \(n_{e}\) profile (Figure 6), which is computed under the assumption of spherical geometry of the ICM. To infer the appropriate \(n_{e}\) for each galaxy we use their projected clustercentric distance, which is a lower limit on their real distance. Moreover, the assumption of spherical geometry may not hold in a complex, merging cluster such as A2255. Therefore, the values of \(n_{e}\) reported in Table 4 should be considered upper limits on the real density, and so should \(P_{\rm Ram}\).

Figure 9: Expected ratio of \(t_{\rm rad}\) and \(V\) with respect to the case \(B=B_{\rm min}\) for different values of \(B/B_{\rm min}\).
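The scaling quantified in Figure 9 and in caveat 2 follows directly from Equation 3 at fixed break frequency; a short numerical sketch (assumed names):

```python
# t_rad(B)/t_rad(B_min) at fixed nu_br, from Equation 3; V scales as 1/t_rad.
import numpy as np

B_CMB = 3.25 * (1 + 0.08012) ** 2   # ~3.8 muG, CMB equivalent field
B_MIN = B_CMB / np.sqrt(3.0)        # ~2.2 muG, minimal energy loss field

def t_rad_ratio(B):
    f = lambda b: np.sqrt(b) / (b ** 2 + B_CMB ** 2)
    return f(B) / f(B_MIN)

for r in (1.0, 2.0, 3.5, 5.0):
    s = t_rad_ratio(r * B_MIN)
    print(f"B = {r:.1f} x B_min: t_rad x {s:.2f}, V x {1 / s:.2f}")
# B = 3.5 x B_min (~7.7 muG) gives a velocity about twice the B_min estimate.
```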
## 5 Conclusions

In this work we present a semi-empirical model to reproduce the multi-frequency radio emission of ram-pressure stripped tails, and its application. In order to test the model, we investigate the properties of the radio tails of 7 spiral galaxies in A2255. We combined LOFAR and uGMRT observations at 144 and 400 MHz to infer the radio properties within a few tens of kpc from the stellar disk. We observe a monotonic decrease in flux density associated with a spectral steepening along the stripping direction. We then modeled the observed profiles with a semi-empirical model where the radio plasma moves with a uniform velocity \(V\) along the stripping direction and cools down via synchrotron radiation. The model reproduces the observed profiles for 5 out of 7 galaxies, and constrains the projected radio plasma velocity along the tail to be of the order of 100-500 km s\({}^{-1}\). This result confirms the qualitative scenario built up over the years in the literature, and provides the first estimate of the radio plasma bulk velocity. Moreover, observing a monotonic spectral steepening entails that, at least within the first \(\sim 30\) (projected) kpc from the stellar disk, the radiative time, which is of the order of \(\sim 100\) Myr, is shorter than the adiabatic loss timescale.

The order of magnitude of the best-fit velocities supports the idea that the radio plasma clouds are transported by the ram pressure winds. Therefore, we speculate that this approach, in addition to measuring the CRe bulk velocity, can constrain the galaxy velocity along the plane of the sky and provide us with a first estimate of the 3D velocity of these galaxies. As a _proof-of-concept_, we estimate the total velocity of these galaxies with respect to the ICM to be between 300 and 1300 km s\({}^{-1}\). We also infer the corresponding ram pressure exerted by the ICM to be between 0.1 and 2.9 \(\times 10^{-11}\) erg cm\({}^{-3}\), and the angle between the stellar disk and the ram pressure wind. These results represent the first estimates of these quantities for cluster galaxies obtained with this method, thus they could be used to constrain future studies of these systems. The proposed model should now be tested and refined on a larger sample of RPS galaxies, and by using multi-frequency observations spanning a wider wavelength range. It would greatly benefit from independent estimates of the extraplanar magnetic field, potentially provided by polarimetry studies. Moreover, our results should be complemented by tailored numerical MHD simulations of RPS. On the upside, the method presented in this manuscript can expand the applications of radio observations of RPS galaxies, whose availability is destined to increase in the next years with the advance of the all-sky surveys. The combination of deep radio, X-ray, and optical data will permit us to quantitatively characterize the RPS affecting galaxies in dense environments.

## Acknowledgments

We thank the referee for the suggestions which improved the quality of the manuscript. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 833824). AI acknowledges the INAF funding program 'Ricerca Fondamentale 2022' (PI A. Ignesti). This work is the fruit of the collaboration between GASP and the LOFAR Survey Key Project team ("MoU: Exploring the low-frequency side of jellyfish galaxies with LOFAR", PI A. Ignesti). R.J.vW acknowledges support from the VIDI research programme with project number 639.042.729, which is financed by the Netherlands Organisation for Scientific Research (NWO). I.D.R. acknowledges support from the ERC Starting Grant Cluster Web 804208. KR acknowledges support from Chandra grant GOO-21112X. AI thanks the music of Vulfpeck for providing inspiration during the preparation of the draft. LOFAR (van Haarlem et al., 2013) is the Low Frequency Array designed and constructed by ASTRON. It has observing, data processing, and data storage facilities in several countries, which are owned by various parties (each with their own funding sources), and that are collectively operated by the ILT foundation under a joint scientific policy. The ILT resources have benefited from the following recent major funding sources: CNRS-INSU, Observatoire de Paris and Universite d'Orleans, France; BMBF, MIWF-NRW, MPG, Germany; Science Foundation Ireland (SFI), Department of Business, Enterprise and Innovation (DBEI), Ireland; NWO, The Netherlands; The Science and Technology Facilities Council, UK; Ministry of Science and Higher Education, Poland; The Istituto Nazionale di Astrofisica (INAF), Italy. This research made use of the Dutch national e-infrastructure with support of the SURF Cooperative (e-infra 180169) and the LOFAR e-infra group. The Julich LOFAR Long Term Archive and the German LOFAR network are both coordinated and operated by the Julich Supercomputing Centre (JSC), and computing resources on the supercomputer JUWELS at JSC were provided by the Gauss Centre for Supercomputing e.V. (grant CHTB00) through the John von Neumann Institute for Computing (NIC). This research made use of the University of Hertfordshire high-performance computing facility and the LOFAR-UK computing facility located at the University of Hertfordshire and supported by STFC [ST/P000096/1], and of the Italian LOFAR IT computing infrastructure supported and operated by INAF, and by the Physics Department of Turin university (under an agreement with Consorzio Interuniversitario per la Fisica Spaziale) at the C3S Supercomputing Centre, Italy. We thank the staff of the GMRT that made these observations possible.
GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research. This research made use of Astropy, a community-developed core Python package for Astronomy (Astropy Collaboration et al., 2013, 2018), and APLpy, an open-source plotting package for Python (Robitaille & Bressert, 2012). This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France (Wenger et al., 2000).
2309.08541
When do Generative Query and Document Expansions Fail? A Comprehensive Study Across Methods, Retrievers, and Datasets
Using large language models (LMs) for query or document expansion can improve generalization in information retrieval. However, it is unknown whether these techniques are universally beneficial or only effective in specific settings, such as for particular retrieval models, dataset domains, or query types. To answer this, we conduct the first comprehensive analysis of LM-based expansion. We find that there exists a strong negative correlation between retriever performance and gains from expansion: expansion improves scores for weaker models, but generally harms stronger models. We show this trend holds across a set of eleven expansion techniques, twelve datasets with diverse distribution shifts, and twenty-four retrieval models. Through qualitative error analysis, we hypothesize that although expansions provide extra information (potentially improving recall), they add additional noise that makes it difficult to discern between the top relevant documents (thus introducing false positives). Our results suggest the following recipe: use expansions for weaker models or when the target dataset significantly differs from training corpus in format; otherwise, avoid expansions to keep the relevance signal clear.
Orion Weller, Kyle Lo, David Wadden, Dawn Lawrie, Benjamin Van Durme, Arman Cohan, Luca Soldaini
2023-09-15T17:05:43Z
http://arxiv.org/abs/2309.08541v2
# When do Generative Query and Document Expansions Fail?

###### Abstract

Using large language models (LMs) for query or document expansion can improve generalization in information retrieval. However, it is unknown whether these techniques are universally beneficial or only effective in specific settings, such as for particular retrieval models, dataset domains, or query types. To answer this, we conduct the first comprehensive analysis of LM-based expansion. We find that there exists a strong negative correlation between retriever performance and gains from expansion: expansion improves scores for weaker models, but generally harms stronger models. We show this trend holds across a set of eleven expansion techniques, twelve datasets with diverse distribution shifts, and twenty-four retrieval models. Through qualitative error analysis, we hypothesize that although expansions provide extra information (potentially improving recall), they add additional noise that makes it difficult to discern between the top relevant documents (thus introducing false positives). Our results suggest the following recipe: use expansions for weaker models or when the target dataset significantly differs from training corpus in format; otherwise, avoid expansions to keep the relevance signal clear.1

Footnote 1: Code and data are available at [https://github.com/orions/LM-expansions](https://github.com/orions/LM-expansions)

\({}^{\,\ast}\) Work performed during internship at AI2.

## 1 Introduction

Neural information retrieval (IR) systems routinely achieve state-of-the-art performance on tasks where labeled data is abundant Karpukhin et al. (2020); Yates et al. (2021). When limited or no data is available, neural models fine-tuned on data-rich domains are used in a zero-shot manner Thakur et al. (2021); Rosa et al. (2022). However, shifts in the distribution of queries and documents can negatively impact their performance Lupart et al. (2023). To mitigate this effect, large-scale Language Models (LMs) can be used to _expand_ queries or documents from unseen domains Gao et al. (2022); Wang et al. (2023); Dai et al. (2022); Jeronymo et al. (2023); Jagerman et al. (2023). These methods generally work by providing either the original documents or queries to the LM, which then generates additional expanded information to facilitate relevance matching. For example, HyDE Gao et al. (2022) uses an LM to generate a fictitious relevant document for a user query; the document is then used alongside the user query to retrieve similar, and thus hopefully relevant, real documents. As another example, Doc2Query Nogueira et al. (2019) uses an LM to generate likely queries for documents in the collection; the queries are appended to the documents to increase their likelihood of matching real user queries. As the LMs doing the expansion are typically slower but more capable than ranking models, they can provide additional context and connections that the IR models could not (e.g., by providing specialized vocabulary). This property is particularly desirable when ranking models are used in unseen domains, as LMs can help close distribution shift gaps.

Figure 1: Methods like query expansion and document expansion typically improve performance when used with weaker models but not for stronger models; more accurate models generally lose relevance signal when expansions are provided. Best expansion and model results taken from those in Table 1.
Although many works have shown that LM-based expansions provide improvements, the proposed approaches are generally tested on only a small subset of retrieval techniques, such as small bi-encoder models or BM25 [14, 15, 16]. Further, as new models continue to be developed in IR and natural language processing (NLP), there is a pressing need to comprehensively analyze the relationship between expansion techniques, ranking models, and distribution shifts. We seek to fill this gap and aim to answer the following questions:

RQ1: How do different models impact query and document expansion (§3)? Across all types of IR models and architectures, performance is negatively correlated with gains from expansion: after a certain score threshold these expansions generally hurt performance (as they blur the relevance signal from the original documents).

RQ2: How do different distribution shifts impact these results (§4)? Our main results hold for all types of shift (better models are harmed by expansion) except for long-query shift, where expansions generally help most-to-all models.

RQ3: Why do expansions hurt stronger IR models (§5)? We find that query and document expansions change the keywords that the retrieval models focus on, obscuring the relevance signal of the original texts.

Overall, this work aims at answering the following question: **when should one use LM-based expansions?** Through our investigation, we provide evidence to help practitioners answer this question. Our results run counter to the common intuition that query and document expansion are helpful techniques in all cases; instead, they show that LM expansions generally **benefit weaker rankers**, but **hurt more accurate rankers**. Further, analysis over twelve datasets shows that whether a given model benefits from expansion varies dramatically depending on the task; datasets with significant distributional shifts (_e.g._, very long queries) are more likely to benefit from expansion.

## 2 Experimental Settings

In this section, we provide an overview of the document and query expansion methods used in the remainder of the manuscript, as well as key aspects of our experimental setup. We choose expansion techniques according to two criteria: (_i_) their overall performance, as claimed in the paper introducing them, and (_ii_) their applicability to a large set of retrieval models. We note that there exist more specific expansion techniques for particular architectures, such as ColBERT PRF [13, 14]. However, for generality we use text-based expansions from LMs only and avoid model-specific techniques. We generate expansions from gpt-3.5-turbo2 as it is inexpensive and shows strong performance in previous work [13, 15]. Since using LMs to generate expansions for large collections would be prohibitive, we restrict our expansions to the reranking setting, i.e., the top 100 documents per query retrieved with BM25, following [15].3

Footnote 2: We use version gpt-3.5-turbo-0613. To show that our results generalize beyond this specific language model, we include results using alternative LMs (such as gpt-4-0613) in Appendix A that show the same conclusion. Prompts and example input/output can be found in Appendices D and C. We also explore the placement of these augmentations (should we prepend/append/replace the original query?) in Appendix B and show that this also makes little difference.

Footnote 3: Using gpt-3.5-turbo for just Doc2Query on the MS-Marco collection would cost roughly $4,000 USD (8 million docs at 250 tokens each) as of September 2023. Thus we adopt the reranking setting (top 100 docs per query) in order to evaluate on many datasets.
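To make the reranking setting concrete, the following is an illustrative sketch (not the authors' code; the checkpoint name is one public MS MARCO cross-encoder, not necessarily one used in the paper) of rescoring a BM25 candidate list:

```python
# Rerank the BM25 top-100 candidates for a query with a cross-encoder.
from sentence_transformers import CrossEncoder

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

def rerank(query: str, candidates: list[str], k: int = 10) -> list[str]:
    """Return the k candidates the cross-encoder scores as most relevant."""
    scores = reranker.predict([(query, doc) for doc in candidates])
    order = sorted(range(len(candidates)), key=lambda i: -scores[i])
    return [candidates[i] for i in order[:k]]
```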
Thus we adopt the reranking setting (top 100 docs per query) in order to evaluate on many datasets.

### Query Expansion

We use three types of query expansion, selecting the best methods from previous work. We note that although there are countless strategies for prompting LMs to generate search terms, these three provide the strongest candidates from the literature.

**HyDE from Gao et al. (2022).** HyDE provides task-specific instructions for the LM to generate a document that would answer the question. We use the prompts from their work when available.4

Footnote 4: We use similarly styled prompts for datasets not evaluated in the original HyDE paper. We also append a phrase asking ChatGPT to be concise, to match the original HyDE method, which used the much more concise Davinci-003 model (see Appendix D for the full text of the prompts).

**Chain of Thought from Wang et al. (2023).** Chain of Thought (CoT) for query expansion was inspired by Wei et al. (2022) and asks the model to reason before giving the answer. As the reasoning includes information relevant to the query, this additional text is used as the query expansion. Similar techniques have been shown to be effective in multiple works [11, 12, 13].

**LM-based Pseudo Relevance Feedback (Q-LM PRF).** PRF is a classical technique that shows retrieved documents to the model doing the expansion. We provide the top 3 relevant documents found using a bi-encoder model (Contriever) to the LM. It produces a list of expansion terms and then updates the original question to include those terms in a new fluent question. LM-aided PRF has been shown to be broadly effective [13, 12, 13].

### Document Expansion

**Doc2Query.** There are fewer widespread LM document expansion techniques, with the main one being Doc2Query (Nogueira et al., 2019). Prior work has found that improving the question generation model results in higher scores, hence we use ChatGPT instead of T5 for our experiments. See Appendix A for results using alternative LMs for document expansion.

**LM-based Document PRF (D-LM PRF).** Similar to the Q-LM PRF technique above, we propose a document expansion that draws pseudo-relevance from _related queries_ instead of related documents. In this setting, where there exists a set of unjudged user queries, we show the LM the top 5 relevant queries and ask it to expand the original document to better answer them.
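Since both PRF variants above follow the same retrieve-then-prompt pattern, a minimal sketch may help. The helper names (`retrieve_top_k`, `call_lm`) and the prompt wording are illustrative assumptions, not the paper's exact implementation; D-LM PRF mirrors this loop with the top 5 related queries in place of documents.

```python
def call_lm(prompt: str) -> str:
    """Placeholder for an instruction-following LM call."""
    raise NotImplementedError

def retrieve_top_k(query: str, k: int) -> list[str]:
    """Placeholder for a bi-encoder retriever (e.g. Contriever)."""
    raise NotImplementedError

def q_lm_prf(query: str) -> str:
    # Q-LM PRF: show the LM the top-3 retrieved documents, ask it for
    # expansion terms, then have it rewrite the query as a fluent question.
    docs = retrieve_top_k(query, k=3)
    context = "\n".join(docs)
    terms = call_lm(f"Given these documents:\n{context}\n"
                    f"List useful expansion terms for: {query}")
    return call_lm(f"Rewrite '{query}' as one fluent question "
                   f"that incorporates these terms: {terms}")
```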
## 3 RQ1: How do different models impact query and document expansion?

**Experimental Setting.** To understand the effects of different models on the helpfulness of LM-based expansions, we employ a wide variety of models from all major IR architectures: DPR (Karpukhin et al., 2020), ColBERT v2 (Santhanam et al., 2022), SPLADE v2 (Formal et al., 2021), MonoBERT (Nogueira et al., 2019), the MonoT5 family of models (Nogueira et al., 2020), the E5 family of models (Wang et al., 2022), GTE (Li et al., 2023), several MiniLM models of varying sizes (Wang et al., 2020), all-mpnet-v2-base (Reimers and Gurevych, 2019), and Llama models (Touvron et al., 2023, 2023) that we fine-tune on MSMarco.5

Footnote 5: Model information and weights are available at https://github.com/orions/LLM-expansions/llama_for_ranking.md.

Due to the combinatorially large number of model and dataset pairs, we evaluate all models on three representative datasets in Table 1 (see §4 for details on datasets and types of generalization) and use five representative models (DPR, Contriever, ColBERTv2, MonoT5-small, and MonoT5-3B) on a larger suite of datasets (see Figure 2).

\begin{table}
\begin{tabular}{l l|c c c c|c c c c|c c c c}
\hline \hline
 & & \multicolumn{4}{c|}{**DL Track 2019**} & \multicolumn{4}{c|}{**FiQA**} & \multicolumn{4}{c}{**ArguAna**} \\
**Type** & **Model** & **Base** & **QE** & **DE** & **Both** & **Base** & **QE** & **DE** & **Both** & **Base** & **QE** & **DE** & **Both** \\
\hline
\multirow{8}{*}{Bi-encoders} & DPR & 38.4 & +8.2 & +3.1 & +10.8 & 14.4 & +4.7 & +1.7 & +5.7 & 34.9 & -7.1 & +1.6 & -4.4 \\
 & Contriever & 49.0 & +3.5 & +4.0 & +8.1 & 21.3 & +3.6 & +1.6 & +5.1 & 45.8 & -0.1 & +2.9 & -3.2 \\
 & Contriever FT & 62.3 & +1.6 & -0.2 & +0.6 & 29.6 & +3.2 & +0.6 & +3.8 & 48.8 & -3.6 & +2.0 & -2.5 \\
 & E5 Base v2 & 67.3 & -3.4 & -0.9 & -3.7 & 37.8 & -0.6 & -3.8 & -2.5 & 51.1 & -8.4 & +2.6 & -5.7 \\
 & MPNet Base v2 & 68.3 & -6.0 & -2.9 & -6.8 & 44.5 & -4.1 & -3.5 & -5.7 & 47.6 & -5.1 & +5.3 & -0.7 \\
 & E5 Small v2 & 69.1 & -4.8 & -1.9 & -6.8 & 36.4 & +0.4 & -2.9 & -0.6 & 46.1 & -8.7 & +2.7 & -9.8 \\
 & GTE Large & 70.0 & -4.5 & -1.3 & -4.5 & 41.2 & -2.0 & -4.1 & -3.2 & 56.8 & -8.8 & -0.9 & -9.0 \\
 & E5 Large v2 & 70.1 & -5.7 & -1.7 & -7.6 & 38.6 & -0.9 & -2.7 & -3.2 & 48.9 & -5.9 & +3.2 & -3.4 \\
\hline
\multirow{13}{*}{Rerankers} & MonoT5-Small & 66.6 & -2.0 & -2.8 & -2.8 & 34.3 & +0.1 & -0.6 & -0.3 & 21.1 & +22.7 & -3.0 & +22.2 \\
 & MiniLM-L2-v2 & 68.0 & -3.2 & -4.1 & -5.1 & 27.5 & -2.0 & +0.6 & **+15.8** & 15.2 & +11.4 & **+10.8** & +11.2 \\
 & SPLADEv2 & 70.1 & -4.3 & -3.7 & -5.6 & 33.4 & +1.3 & -0.2 & +11.2 & 45.0 & -4.5 & -1.3 & -4.0 \\
 & MonoBERT & 70.4 & -4.6 & -2.0 & -4.8 & 36.2 & +0.2 & -0.7 & +0.0 & 50.1 & -5.7 & +2.5 & -9.3 \\
 & MiniLM-L6-v2 & 70.6 & -3.0 & -2.5 & -4.9 & 33.8 & +1.5 & -0.3 & +1.2 & 43.4 & +0.4 & -1.0 & -0.8 \\
 & MonoT5-Base & 71.5 & -3.2 & -1.4 & -5.2 & 39.2 & -1.2 & -1.2 & -0.9 & 27.0 & +20.0 & +0.7 & +18.7 \\
 & MonoT5-3B & 71.7 & -2.8 & -2.0 & -5.0 & 45.9 & -3.8 & -3.2 & -5.6 & 42.4 & +6.8 & -1.9 & +5.2 \\
 & ColBERTv2 & 71.8 & -4.2 & -2.8 & -6.4 & 33.8 & -0.4 & -0.3 & -0.7 & 47.4 & -5.2 & -0.6 & -4.8 \\
 & MiniLM-L12-v2 & 72.0 & -4.3 & -4.5 & -5.6 & 35.5 & -0.4 & -0.5 & +0.0 & 33.2 & +12.0 & +1.1 & +9.8 \\
 & MonoT5-Large & 72.2 & -4.0 & -1.8 & -5.6 & 42.8 & -2.3 & -2.3 & -3.1 & 31.2 & +14.8 & -2.0 & +14.8 \\
 & Llama & 72.6 & -2.9 & -4.9 & -7.7 & 40.0 & -3.7 & +4.9 & -5.8 & 52.6 & -3.9 & -6.9 & -9.4 \\
 & Llama 2 & 72.8 & -4.2 & -4.9 & -9.3 & 41.1 & -3.6 & -7.4 & -7.9 & 52.3 & -1.5 & -8.2 & -7.0 \\
 & Llama 2 13B & 73.6 & -4.5 & -5.4 & -7.3 & 41.2 & -4.5 & -4.9 & -7.0 & 49.4 & -2.1 & -6.0 & -4.9 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Results for the best expansion strategies across different models. _QE_ stands for query expansion (Q-LM PRF), _DE_ for document expansion (Doc2Query), and _Both_ for the combination (Q-LM PRF + Doc2Query). Colors indicate a positive or negative delta from the non-augmented base score. Notice that models with higher base scores are generally harmed by expansions while weaker models benefit from them.

Figure 2: Effect of expansion over twelve datasets. For each dataset, markers show base performance for models, while the boxplot indicates the range of changes in scores for document and/or query expansion. Across all datasets and models, we note a consistent trend: models with **lower base performance benefit** from expansion; **higher performing rankers** generally **suffer** when expansion techniques are used.

We show results in comparison to the "base" version (shaded grey), i.e. the version without any expansion. Values above zero (i.e. greater than the no-expansion version) are shaded blue, while values below the base are shaded red. Colors are scaled linearly according to the difference between the base value and the min/max (_i.e._, the worst value in the column will be the max red, while the best value will be max blue; all others will be shaded in between).
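A minimal sketch of the linear color scaling just described (our reading of the scheme as stated, not the authors' plotting code):

```python
def shade(delta: float, col_min: float, col_max: float) -> float:
    """Map a score delta to a signed intensity in [-1, 1]:
    +1 = max blue (best value in the column), -1 = max red (worst value),
    0 = equal to the unexpanded base score."""
    if delta > 0 and col_max > 0:
        return delta / col_max        # fraction of the best gain in the column
    if delta < 0 and col_min < 0:
        return -(delta / col_min)     # fraction of the worst loss, negative
    return 0.0
```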
**Effect of Different Models.** Our results with all models (Figure 1) show a consistent pattern: as base performance on a task increases, the gains from expansion decrease. We also see this trend in Table 1 (note that ArguAna results are sorted by MSMarco performance; when sorted by ArguAna they appear as in Figure 1). Interestingly, these results do not depend on the model architecture: this is true for bi-encoders, late-interaction models, neural sparse models, and cross-encoders. However, do these results hold for other datasets? Figure 2 answers this and shows the distributions of score changes for models when using expansions over a wide range of datasets. We find the same pattern: models that perform better (such as MonoT5-3B) gain less from expansions.

## 4 RQ2: How do different distribution shifts impact these results?

**Experimental Setting.** We evaluate how query and document expansion are impacted by different distribution shifts: in-domain/no shift (MSMarco), domain shift (e.g. medical, code, legal), relevance shift (finding the opposite or a counterargument), and format shift (queries that are long documents, or documents that are short). The datasets we use and their descriptive statistics are in Table 2. We use three representative models for these experiments.

\begin{table}
\begin{tabular}{l|l|c c c|c c}
\hline \hline
**Axis** & **Dataset** & **\# Queries** & **\# Documents** & **Avg. D/Q** & **Q Len** & **D Len** \\
\hline
\multirow{2}{*}{In-Domain} & TREC DL Track 2019 (Craswell et al., 2020) & 43 & 8,841,823 & 212.5 & 5.4 & 56.6 \\
 & TREC DL Track 2020 (Craswell et al., 2021) & 54 & 8,841,823 & 207.9 & 6.0 & 56.6 \\
\hline
\multirow{3}{*}{Domain Shift} & FiQA-2018 (Maia et al., 2018) & 648 & 57,600 & 2.6 & 10.9 & 137.4 \\
 & GooAQ Technical (Khashabi et al., 2021) & 1,000 & 4,086 & 1.0 & 8.3 & 44.5 \\
 & NFCorpus (Boteva et al., 2016) & 323 & 3,633 & 38.2 & 3.3 & 233.5 \\
\hline
\multirow{2}{*}{Relevance Shift} & Touché-2020 (Bondarenko et al., 2020) & 49 & 382,545 & 19.0 & 6.6 & 293.7 \\
 & SciFact Refute (Wadden et al., 2020) & 64 & 5,183 & 1.2 & 12.1 & 214.8 \\
\hline
\multirow{3}{*}{Long Query Shift} & Tip of My Tongue (Lin et al., 2023) & 2,272 & 1,877 & 1.0 & 144.3 & 100.5 \\
 & TREC Clinical Trials ’21 (Roberts et al., 2021) & 75 & 375,580 & 348.8 & 133.3 & 919.5 \\
 & ArguAna (Wachsmuth et al., 2018) & 1,406 & 8,674 & 1.0 & 197.1 & 170.3 \\
\hline
\multirow{2}{*}{Short Doc Shift} & WikiQA (Yang et al., 2015) & 369 & 26,196 & 1.2 & 6.3 & 25.1 \\
 & Quora (Iyer et al., 2017) & 10,000 & 522,931 & 1.6 & 9.5 & 12.5 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Statistics of datasets used by type of generalization shift. Avg. D/Q indicates the number of relevant documents per query. Length is measured in words. The TREC DL Track uses MSMarco data (Nguyen et al., 2016).

**In-Domain.** We use two datasets that test performance on the MSMarco collection: the TREC Deep Learning Tracks 2019 and 2020 (Craswell et al., 2020, 2021).6 Nearly all retrieval models use MSMarco for training, hence these are _in-domain_.

Footnote 6: Despite the different names, TREC DL 2019 and 2020 use the same document collection as MSMarco, albeit with new queries and relevance judgements.

Table 3: How different expansions affect results on the in-domain **DL 2019 Track** and **DL 2020 Track**.

\begin{table}
\begin{tabular}{l l|c c c|c c c|c c c}
\hline \hline
 & & \multicolumn{3}{c|}{**FiQA-2018**} & \multicolumn{3}{c|}{**GooAQ Technical**} & \multicolumn{3}{c}{**NFCorpus**} \\
**Type** & **Model** & DPR & Contriever FT & MonoT5-3B & DPR & Contriever FT & MonoT5-3B & DPR & Contriever FT & MonoT5-3B \\
\hline
 & Base & 14.4 & 29.6 & 45.9 & 42.5 & 71.0 & 80.2 & 24.1 & 34.6 & 39.1 \\
\hline
\multirow{3}{*}{Query} & HyDE & +3.6 & -0.3 & -14.7 & +3.1 & +3.8 & -10.0 & +0.3 & +0.0 & -5.9 \\
 & CoT & +3.6 & +0.4 & -13.2 & +2.0 & +2.1 & -9.7 & -0.7 & -0.6 & -4.5 \\
 & Q-LM PRF & +4.7 & +3.2 & -3.8 & +6.4 & +1.9 & -3.4 & +0.2 & -0.4 & -2.7 \\
\hline
\multirow{2}{*}{Doc.} & D2Q & +1.7 & +0.6 & -3.2 & +6.4 & +3.0 & -1.1 & +1.3 & **+0.6** & **-0.5** \\
 & D-LM PRF & +3.3 & +1.6 & -12.5 & +3.8 & +0.6 & -11.4 & +0.3 & -0.3 & -0.7 \\
\hline
\multirow{6}{*}{Both} & HyDE + D2Q & +4.5 & +0.4 & -14.8 & +8.2 & **+5.2** & -7.4 & **+1.6** & +0.1 & -7.2 \\
 & CoT + D2Q & +4.4 & +0.2 & -13.4 & +7.2 & +3.8 & -6.9 & +0.8 & +0.0 & -5.6 \\
 & Q-LM PRF + D2Q & +5.7 & +3.8 & -5.6 & **+10.9** & +4.2 & -4.1 & +1.4 & -0.1 & -3.0 \\
 & HyDE + D-LM PRF & +5.8 & +1.2 & -14.8 & +5.3 & +2.7 & -14.2 & +0.8 & +0.1 & -6.3 \\
 & CoT + D-LM PRF & +6.2 & +1.7 & -14.9 & +3.6 & +1.9 & -13.6 & -0.1 & -0.2 & -4.2 \\
 & Q+D LM PRF & **+7.3** & **+4.6** & -8.4 & +7.9 & +3.5 & -6.4 & +0.2 & +0.0 & -2.8 \\
\hline \hline
\end{tabular}
\end{table}
Table 4: How different expansions affect results on datasets that measure **Domain Shift**. Colors indicate a positive or negative delta from the non-augmented base score. Notice that models with higher base scores are generally harmed by expansions while weaker models benefit from them.

\begin{table}
\begin{tabular}{l l|c c c|c c c}
\hline \hline
 & & \multicolumn{3}{c|}{**Touché-2020**} & \multicolumn{3}{c}{**SciFact Refute**} \\
**Type** & **Model** & DPR & Contriever FT & MonoT5-3B & DPR & Contriever FT & MonoT5-3B \\
\hline
 & Base & 23.0 & 24.8 & 32.6 & 33.9 & 76.4 & 82.1 \\
\hline
\multirow{3}{*}{Query} & HyDE & -0.3 & +4.8 & -5.9 & -9.1 & -0.9 & -12.3 \\
 & CoT & +0.3 & **+5.1** & -7.4 & -7.6 & +0.3 & -8.8 \\
 & Q-LM PRF & +0.6 & +3.9 & -1.3 & +6.5 & +11.1 & -1.7 \\
\hline
\multirow{2}{*}{Doc.} & D2Q & -0.2 & +0.0 & **-0.9** & +2.0 & -1.8 & **+0.9** \\
 & D-LM PRF & -0.2 & -1.2 & -8.3 & +2.5 & -4.6 & -16.5 \\
\hline
\multirow{6}{*}{Both} & HyDE + D2Q & -0.1 & +5.0 & -3.0 & -6.1 & -1.0 & -16.6 \\
 & CoT + D2Q & +0.3 & +2.6 & -5.4 & -6.5 & -1.1 & -16.9 \\
 & Q-LM PRF + D2Q & -0.1 & +1.0 & -2.0 & **+9.1** & **+13.3** & -1.1 \\
 & HyDE + D-LM PRF & +0.5 & +1.4 & -10.1 & -5.2 & -2.9 & -17.6 \\
 & CoT + D-LM PRF & -0.2 & +0.8 & -8.4 & -7.2 & -1.5 & -19.3 \\
 & Q+D LM PRF & +0.3 & +2.5 & -2.7 & **+7.6** & -2.5 & -4.0 \\
\hline \hline
\end{tabular}
\end{table}
Table 5: How different expansions affect results on datasets that measure **Relevance Shift**.
**Domain Shift.** In this setting models must generalize from their training on standard web documents (e.g. MSMarco) to new domains, such as legal or medical text. This type of shift is made difficult by the specialized vocabulary of these domains. We use NFCorpus (medical; Boteva et al., 2016), GooAQ Technical (code; Khashabi et al., 2021), and FiQA-2018 (finance; Maia et al., 2018).

**Relevance Shift.** This setting is characterized by a difference in the way _relevance_ is defined. Standard retrieval models have learned to define relevance in terms of casual web searches. However, there are other situations where this differs, such as queries that are looking for opposites, counterarguments, or neutral information. We use two datasets that search for refutations or counterarguments: Touché-2020 (Bondarenko et al., 2020) and a subset of SciFact (Wadden et al., 2020) whose gold documents refute the queries' claims.

**Format Shift.** Another type of shift concerns the length of inputs: generally, queries are short and documents are paragraph-sized. However, there are situations where queries can be document-sized or documents can be short. This shift tests whether models can generalize to new length formats. We consider two groups of datasets: for _shift to long queries_ we use Tip of My Tongue (Lin et al., 2023), TREC Clinical Trials Track 2021 (Roberts et al., 2021), and ArguAna (Wachsmuth et al., 2018). For _shift to short documents_, we use two datasets: Quora (Iyer et al., 2017) and WikiQA (Yang et al., 2015).7

Footnote 7: Due to the Twitter API restrictions, we could not use Signal from BEIR.

\begin{table}
\begin{tabular}{l l|c c c|c c c|c c c}
\hline \hline
 & & \multicolumn{3}{c|}{**Tip of My Tongue**} & \multicolumn{3}{c|}{**TREC CT 2021**} & \multicolumn{3}{c}{**ArguAna**} \\
**Type** & **Model** & DPR & Contriever FT & MonoT5-3B & DPR & Contriever FT & MonoT5-3B & DPR & Contriever FT & MonoT5-3B \\
\hline
 & Base & 13.4 & 38.3 & 39.5 & 16.4 & 26.7 & 25.8 & 34.9 & 48.8 & 40.6 \\
\hline
\multirow{3}{*}{Query} & HyDE & +3.0 & -9.4 & -26.8 & +0.3 & +2.1 & **+4.2** & -4.5 & -5.4 & **+15.8** \\
 & CoT & +2.1 & -9.5 & -23.3 & **+2.3** & **+3.0** & +3.0 & -5.8 & -5.3 & +11.3 \\
 & Q-LM PRF & -2.9 & -1.9 & **+6.4** & **+2.2** & +0.6 & -0.1 & -7.1 & -3.6 & +8.3 \\
\hline
\multirow{2}{*}{Doc.} & D2Q & +1.6 & -3.2 & -8.5 & +0.3 & -1.3 & -1.8 & +1.6 & +2.0 & -2.1 \\
 & D-LM PRF & **+5.5** & **+2.9** & +0.9 & -0.7 & -0.9 & +0.6 & **+2.3** & **+3.5** & -2.5 \\
\hline
\multirow{6}{*}{Both} & HyDE + D2Q & +3.6 & -10.7 & -29.7 & +0.4 & +2.1 & +2.7 & -2.8 & -2.5 & +12.9 \\
 & CoT + D2Q & +2.2 & -10.6 & -25.3 & **+2.3** & +1.5 & -0.1 & -4.3 & -3.0 & +10.6 \\
 & Q-LM PRF + D2Q & -1.8 & -4.7 & +2.1 & +0.7 & -0.9 & -0.2 & -4.4 & -2.5 & +6.9 \\
 & HyDE + D-LM PRF & **+5.0** & -7.2 & -32.6 & +0.0 & +1.0 & **+3.2** & -3.0 & +1.0 & +10.3 \\
 & CoT + D-LM PRF & **+5.3** & -7.4 & -25.8 & **+1.9** & **+2.7** & +1.0 & -4.0 & +0.9 & +8.8 \\
 & Q+D LM PRF & +0.7 & +1.6 & **+6.4** & +0.6 & -1.0 & +0.4 & -4.0 & -0.2 & +3.3 \\
\hline \hline
\end{tabular}
\end{table}
Table 6: How different expansions affect results on datasets that measure **Long Query Format Shift**. Colors indicate a positive or negative delta from the non-augmented base score. Unlike previous results, notice that all models benefit from some type of expansion on all three datasets.
\begin{table}
\begin{tabular}{l l|c c c|c c c}
\hline \hline
 & & \multicolumn{3}{c|}{**WikiQA**} & \multicolumn{3}{c}{**Quora**} \\
**Type** & **Model** & DPR & Contriever FT & MonoT5-3B & DPR & Contriever FT & MonoT5-3B \\
\hline
 & Base & 47.2 & 68.6 & 75.9 & 68.4 & 86.7 & 83.9 \\
\hline
\multirow{3}{*}{Query} & HyDE & +16.4 & **+3.6** & **-1.6** & -15.4 & -13.8 & -8.2 \\
 & CoT & +9.8 & -0.9 & -6.1 & -32.3 & -31.5 & -35.4 \\
 & Q-LM PRF & +11.9 & -2.2 & -4.2 & -13.8 & -11.4 & -7.0 \\
\hline
\multirow{2}{*}{Doc.} & D2Q & +5.4 & -1.8 & -1.7 & **-6.2** & **-3.7** & **+0.0** \\
 & D-LM PRF & -2.8 & -10.8 & -21.4 & -10.0 & -15.6 & -17.0 \\
\hline
\multirow{6}{*}{Both} & HyDE + D2Q & **+17.7** & +2.1 & -2.7 & -11.4 & -10.1 & -7.1 \\
 & CoT + D2Q & +11.3 & -1.5 & -6.9 & -25.7 & -26.3 & -32.5 \\
 & Q-LM PRF + D2Q & +13.0 & -1.1 & -6.2 & -9.4 & -8.7 & -6.9 \\
 & HyDE + D-LM PRF & +12.6 & -6.2 & -18.0 & -21.1 & -22.1 & -20.2 \\
 & CoT + D-LM PRF & +7.0 & -10.3 & -19.0 & -35.6 & -36.8 & -41.4 \\
 & Q+D LM PRF & +9.5 & -6.1 & -10.8 & -19.4 & -19.6 & -17.8 \\
\hline \hline
\end{tabular}
\end{table}
Table 7: How different expansions affect results on datasets that measure **Short Document Format Shift**. Colors indicate a positive or negative delta from the non-augmented base score. Notice that models with higher base scores are generally harmed by expansions while weaker models benefit from them.

### Results by Type of Shift

Table 3 shows results for in-domain data on the 2019 and 2020 Deep Learning TREC Tracks. We see that weaker models improve with different expansion types, with DPR improving for almost every expansion and the stronger Contriever showing minor improvements for some combinations. However, when we move to the stronger models (_e.g._, MonoT5-3B), we find that all of these gains disappear and expansions hurt the model.

We find that this trend holds in most other categories of shift: Table 4 for domain shift, Table 5 for relevance shift, and Table 7 for short document shift. Note that Figure 2 also shows this visually. The exceptions to this pattern occur only in format shift: for Quora (Table 7), where all models are harmed by expansion, and for long query shift (Table 6), where expansions generally help most models.

When we examine why expansions help for long query shift, we find that they transform the query to become more "standard" (_i.e._, short) for MSMarco-trained models (_e.g._, for ArguAna the query changes from a long argumentative document to one shorter question that summarizes it). As no model evaluated in this work is fine-tuned on long queries, it is an open question whether additional training would make this category of generalization easier for models and less reliant on expansions.

## 5 RQ3: Why do expansions hurt stronger IR models?

Sections 3 and 4 show that strong IR models do not benefit from expansions. But why is this true? One suggestion might be that larger models are better able to take advantage of the information in the original documents. We test this hypothesis and provide an error analysis to answer these questions.
Figure 4: Effect of scale on the impact of expansions (Table 1, MonoT5). Larger models gain less from expansions.

Figure 3: An example of expansions obscuring the relevance signal. The non-relevant document in red was ranked higher than the relevant blue document due to the phrase "Home Equity Line of Credit" being added to the query. The left side shows the original query and documents, while the right side shows the query and document expansions.

### Effect of Model Size

To test whether it is solely model size that impacts the gains from expansion, we use two different families of models: MonoT5 and E5. If model size were the cause, we would expect to see larger models gain less from expansions in both families. However, Figure 4 shows that model scale is inversely correlated with gains from expansion for the MonoT5 family, but not the E5 family. The crucial difference between them8 can be attributed to the E5 models having similar performance scores across sizes, whereas T5 has a much wider range: T5 differs by 21 nDCG@10 points on ArguAna from 3B to small, while E5 differs by only 3 points from large to small. Thus, we see that model size impacts gains from expansions only in tandem with the correlation between model size and performance.

Footnote 8: Another obvious difference is that E5 is a bi-encoder while MonoT5 is not. However, previous work (Muennighoff, 2022) has shown that bi-encoders also improve with scale.

### Error Analysis

If model size is not the reason for this phenomenon, what could be causing it? To gain an intuition on possible failures of LM-based expansion, we annotate 30 examples from three datasets where performance declines when expanding both queries and documents. We find that out of the 30 examples, two are false negatives, _i.e._, relevant documents that are unjudged and not labeled as relevant (both from FiQA). Of the remaining 28, all errors are due to the expanded version including keywords that hurt the ranking: deemphasizing pertinent keywords by shifting focus to less salient keywords that were already present or to new keywords added by the expansion. An example of this behavior is in Figure 3, where we can see how query expansion added the term _"Home Equity Line of Credit"_ and distracted from the main focus of the question (using bitcoins as collateral). On the other hand, when no irrelevant information is introduced by the LM, well-tuned ranker models can accurately estimate the relevance of subtly different documents.

## 6 Discussion

Our results indicate three phenomena regarding query and document expansion using LMs: (_i_) expansions generally benefit weaker models, such as DPR, while better performing rankers, such as T5, are penalized; (_ii_) exceptions are observed in cases of severe distribution shift, such as with very long queries; finally, (_iii_) when model performance is negatively impacted, the cause is generally the expansion weakening the original relevance signal. This implies that even though the LMs are orders of magnitude larger and more powerful than smaller rerankers, they should not be used to augment strongly performing IR models without careful testing. The strong performance of reranker models for generalization confirms previous work by Rosa et al. (2022). Further, Table 3 indicates that this characterization of LM expansion holds even when models are tested on in-domain collections (no distribution shift).
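These findings compress into the decision recipe stated in the abstract. A sketch of that recipe as code follows; it is our paraphrase of the paper's recommendation, not an algorithm the authors implement, and the boolean inputs are illustrative abstractions:

```python
def should_expand(ranker_is_strong: bool, long_query_shift: bool) -> bool:
    """Decision recipe distilled from the findings above: expansions help
    weaker rankers and severe format shift, and otherwise blur relevance."""
    if long_query_shift:
        return True  # the one shift where most models benefit (Table 6)
    return not ranker_is_strong  # weak models gain; strong models lose
```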
Interestingly, our experiments find that the only distribution shift that consistently needs expansion is long query format shift; we found no equivalent result for domain, document, or relevance shift. Future work may examine whether improved training techniques on longer queries can overcome this limitation, or whether longer queries are innately more difficult for ranking tasks.

## 7 Related Work

**Large Scale Analyses in Neural IR.** Comprehensive analyses in retrieval have provided great insight into practical uses of retrieval. These cover many aspects of information retrieval, including interpretability (MacAvaney et al., 2022), domain changes (Lupart et al., 2023), syntax phenomena (Chari et al., 2023; Weller et al., 2023), and the relationship between neural models and classical IR approaches (Formal et al., 2021; Chen et al., 2022).

**Generalization in Neural IR.** As retrieval models have become more effective, attention has turned to improving and evaluating the way that IR models generalize to out-of-distribution datasets (i.e. corpora unlike MSMarco). One prominent example of this is the BEIR dataset suite (Thakur et al., 2021), which is commonly used for retrieval evaluation. Much other work has proposed new datasets for types of shift (e.g. MTEB (Muennighoff et al., 2023) among others (Han et al., 2023; Ravfogel et al., 2023; Weller et al., 2023; Mayfield et al., 2023)), as well as many new modeling strategies for better zero-shot retrieval (Dai et al., 2022; Wang et al., 2022). We follow these works by showing different types of generalization and whether these types of shift change the results for LM-based expansion techniques.

**Effect of Scale on Neural IR Models.** As in natural language processing (NLP), IR models typically improve with scale (Nogueira et al., 2020) but are also more heavily constrained, due to the requirement of processing millions of documents in real time for live search. Thus, most first-stage IR models use a BERT backbone (Santhanam et al., 2022; Izacard et al., 2021), while reranker models have scaled to billions of parameters (Nogueira et al., 2020). Previous work on scaling bi-encoder architectures has also shown performance gains from scale (Muennighoff, 2022), but scaling up first-stage retrieval is less common than scaling cross-encoders. Due to the effectiveness of larger models, recent work has even shown that a better first-stage model does not lead to improvements over a BM25 + reranker pipeline (Rosa et al., 2022). Thus, for our experiments we use BM25 as first-stage retrieval and show results from reranking those candidates.

## 8 Conclusion

We conduct the first large-scale analysis of large language model (LM) based query and document expansion, studying how model performance, architecture, and size affect these results. We find that these expansions improve weaker IR models while generally harming performance for the strongest models (including large rerankers and heavily optimized first-stage models). We further show that this negative correlation between model performance and gains from expansion holds for a wide variety of out-of-distribution datasets, except for long query shift, where the correlation is weaker. Overall, our results indicate that LM expansion should not be used for stronger IR models and should instead be confined to weaker retrieval models.

## Limitations

* This work does not train rankers to deal with augmentations. Doing so might mitigate the negative effects of some expansions, although it requires access to supervised data, which might not be available for out-of-domain tasks.
* Deciding whether to use augmentation requires access to evaluation data for the target domain; in some cases, such data might not be available.
* In the current version of the manuscript, we tested our approach with commercial language models available via paid APIs. We feel this is justified since our contributions are independent of the specific model used, as long as it can follow the instructions given. Nevertheless, the use of commercial APIs limits reproducibility and presents a significant barrier to those who cannot get access to the models.
* Similarly, a replication of this work would require access to significant computational resources, including GPUs. A rough estimate shows that generating results for this paper required north of 10,000 A6000 GPU hours, with a further 5,000 hours required to develop a stable experimental platform.
* This work only studies datasets in English. While LM augmentations could play an important role in improving non-English, cross-lingual, and multilingual information retrieval, they require careful analysis.

## Ethical Considerations

* This work shows that LM augmentations make mistakes; while our system never returns the output of the LM directly, inaccuracies might result in non-relevant documents being presented to users.
2309.10468
K Isomers in Transuranium Nuclei
K isomers in transuranium nuclei have become a most interesting subject of nuclear structure investigation in many laboratories within the last twenty years. In this paper an overview of the present-day situation will be given. It will focus on the conditions for the occurrence of this kind of isomer, their decay properties, and systematics in their properties as far as experimental data are available.
Fritz Peter Heßberger
2023-09-19T09:37:45Z
http://arxiv.org/abs/2309.10468v2
# K Isomers in Transuranium Nuclei

###### Abstract

K isomers in transuranium nuclei have become a most interesting subject of nuclear structure investigation in many laboratories within the last twenty years. In this paper an overview of the present-day situation will be given. It will focus on the conditions for the occurrence of this kind of isomer, their decay properties, and systematics in their properties as far as experimental data are available.

## 1 Introduction

Usually isomers are defined as 'long-lived' excited levels of an atomic nucleus, where the term 'long-lived' is not specified initially. At the time of the discovery of isomeric states it was assumed that nuclear levels had lifetimes \(<\)1\(\times\)10\({}^{-13}\) s [1]. It thus seemed justified to denote nuclear levels with lifetimes \(\tau\)\(>\)10\({}^{-13}\) s as 'long-lived' or 'isomeric'. Recently G.D. Dracoulis et al. [1] have expressed it in the following way: 'on the technical side any state which has a directly measurable lifetime in the sense of an electronic measurement, even down to the subnanosecond region could be termed an isomer, or more properly a metastable state'. We will follow this wording in the present paper.

The condition for the emergence of an isomeric state is that decay into lower lying excited nuclear levels or the ground state does not occur promptly but is delayed or hindered. P.M. Walker and G.D. Dracoulis here distinguish three cases [2]:

a) Shape isomers: these occur if a second potential minimum exists at large elongation of the nucleus. Decay into levels with lower excitation energies or the ground state is thus connected with a large change of the nuclear shape. Typical examples are the 'fission isomers' in transuranium nuclei, e.g. \({}^{242m}\)Am. But internal transitions may also compete with spontaneous fission.

b) Spin isomers: these occur if the spin differences between the isomeric state and lower lying levels are large and transitions between them are only possible by emission of electromagnetic radiation of high multipolarity. Depending on the structure of the individual nuclei, in these cases \(\beta^{+}\)-, electron capture (EC)-, \(\beta^{-}\)-, \(\alpha\)-decay or even spontaneous fission may also compete with internal transitions (\(\gamma\) emission, internal conversion (IC)).

c) K isomers: these are specific cases of spin isomers for which not only the spin difference but also the orientation of the spin vector plays a role. These isomers occur in axially symmetric deformed nuclei. The quantum number \(K\) denotes the projection of the total angular momentum (total 'spin') onto the symmetry axis.

The situation is illustrated in fig. 1. Figs. 1a and 1b show the relations in a deformed nucleus with one unpaired nucleon. The total angular momentum (total spin) \(\vec{j}\) is given by the vector sum of the orbital angular momentum \(\vec{l}\) and the spin \(\vec{s}\) of the unpaired nucleon. Here two cases have to be distinguished: a) orbital angular momentum \(\vec{l}\) and spin \(\vec{s}\) vectors are 'parallel' (fig. 1a), and b) orbital angular momentum \(\vec{l}\) and spin \(\vec{s}\) vectors are 'anti-parallel' (fig. 1b). In case a) the projection \(\Lambda\) of the orbital angular momentum \(\vec{l}\) onto the symmetry axis is lower than \(\Omega\), the projection of the total angular momentum onto the symmetry axis, i.e.
\(\Omega>\Lambda\), while in case b) the projection \(\Lambda\) of the orbital angular momentum onto the symmetry axis is larger than \(\Omega\), i.e. \(\Omega<\Lambda\).

Using the usual notation for Nilsson levels, \(\Omega^{\pi}[N n_{z}\Lambda]\), where \(\pi\) denotes the parity of the level, \(N\) the total number of oscillator quanta and \(n_{z}\) the number of oscillator quanta along the symmetry axis, the case \(\Omega>\Lambda\) is denoted by an 'up' arrow, i.e. \(\Omega^{\pi}[N n_{z}\Lambda]\uparrow\) ('spin-up state'), while the case \(\Omega<\Lambda\) is denoted by a 'down' arrow, i.e. \(\Omega^{\pi}[N n_{z}\Lambda]\downarrow\) ('spin-down state'). For example, the proton level \(7/2^{+}[633]\uparrow\) has \(\Omega=7/2\), positive parity, \(N=6\), \(n_{z}=3\) and \(\Lambda=3\); since \(\Omega=\Lambda+1/2\) it is a 'spin-up' state.

Fig. 1c shows the situation in the case of two unpaired nucleons, where the total spins of the individual nucleons \(\vec{j}_{1}\) and \(\vec{j}_{2}\) add up to the total spin \(\vec{j}\), the projection of which onto the symmetry axis is denoted as \(K\). Comparing figs. 1a and 1b with fig. 1c, it is evident that formally, for a single unpaired nucleon, the quantum number \(\Omega\) is identical to \(K\) for the case of two unpaired nucleons with \(\Omega_{2}=0\), or formally \(\Omega=K\). Therefore it has meanwhile become common (see e.g. [3]) to also denote, in specific cases, single particle 'spin isomers' as 'K isomers'. This feature will be discussed in detail in section 5.

## 2 Angular Momentum Coupling

As the K isomers we are dealing with mostly result from 2-quasi-particle or 4-quasi-particle states, let us first discuss the conditions of their occurrence, which are strongly related to the coupling of the angular momenta of two or more unpaired nucleons. In the previous section schematic coupling of angular momenta for two unpaired nucleons was discussed and shown in fig. 1c. In that specific case orbital angular momentum and spin of the individual nucleons add up to the total angular momenta of each nucleon, which then add to the total angular momentum (or total 'spin') of the nucleus. However, this is only one 'extreme' case of angular momentum coupling, denoted as 'jj coupling'. The alternative case is coupling of the orbital angular momenta of both nucleons to a total orbital angular momentum of the nucleus \(\vec{L}\), and coupling of the spins of both nucleons to the total spin of the nucleus \(\vec{S}\), which then couple to the total angular momentum of the nucleus \(\vec{J}\). This kind of coupling is known as LS coupling or Russell-Saunders coupling. In the following both schemes will be briefly introduced.

### 2.1 LS coupling (Russell-Saunders coupling)

In this coupling scheme it is assumed that there is only a negligibly weak coupling between the orbital angular momentum \(\vec{l}\) and the spin \(\vec{s}\) of an individual nucleon, but that there is a strong coupling between the orbital angular momenta \(\vec{l}\) of the different nucleons and also a strong coupling between the spins \(\vec{s}\) of the different nucleons (see e.g. [4]). Consequently the individual orbital angular momenta will couple to a total orbital angular momentum \(\vec{L}\) of the nucleus, i.e. \(\vec{L}=\sum_{i=1}^{n}\vec{l}_{i}\), while the individual spins will couple to a total spin \(\vec{S}\) of the nucleus, i.e. \(\vec{S}=\sum_{i=1}^{n}\vec{s}_{i}\), where \(n\) denotes the number of involved nucleons. The total orbital angular momentum and the total spin then couple to the total angular momentum \(\vec{J}\) of the nucleus, \(\vec{J}=\vec{L}+\vec{S}\).
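As a concrete illustration of the LS scheme (a standard quantum-mechanical example, not taken from this paper): for two nucleons with \(l_{1}=l_{2}=1\) and \(s_{1}=s_{2}=1/2\),

\[
L \in \{0,1,2\},\qquad S \in \{0,1\},\qquad |L-S|\leq J\leq L+S,
\]

so the possible total angular momenta are \(J \in \{0,1,2,3\}\), with e.g. \(J=3\) only reachable from \(L=2\), \(S=1\).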
In this coupling scheme it is also presumed that states of different \(L\) have quite different energies, and that states of different \(S\) having the same \(L\) value correspond to quite different energy levels, so-called spin multiplets [4]. LS coupling conditions are quite satisfactorily fulfilled in light nuclei.

### 2.2 jj coupling

In the other extreme case of angular momentum coupling it is presumed that the interaction between the orbital angular momentum \(\vec{l}\) and the spin \(\vec{s}\) of the individual nucleons dominates, while the coupling between the angular momenta and, respectively, the spins of the individual nucleons is small (see e.g. [4]); as it is sometimes put, the interaction between orbital angular momentum and spin is larger than the 'residual interactions'. Consequently orbital angular momentum \(\vec{l}\) and spin \(\vec{s}\) of a nucleon couple to a total angular momentum \(\vec{j}=\vec{l}\pm\vec{s}\), and these then couple to the total angular momentum \(\vec{J}\) of the nucleus, i.e. \(\vec{J}=\sum_{i=1}^{n}\vec{j}_{i}\), where \(n\) denotes the number of involved nucleons. jj coupling conditions are quite satisfactorily fulfilled in heavy nuclei.

### 2.3 jj coupling in odd-odd nuclei - 'Nordheim rule'

The jj coupling discussed above states that angular momenta and spins of the individual particles first couple to the total angular momentum \(\vec{j}\) of the individual nucleon, and these then couple to a total spin of the nucleus \(\vec{J}\). According to the rules of quantum mechanics the total spin \(J\) can take the values \(J=j_{1}+j_{2},\ j_{1}+j_{2}-1,\ \ldots,\ |j_{1}-j_{2}|\). It thus is of high interest which of the possible values of the total spin refers to the lowest excitation energy.

Figure 1: Angular momentum coupling schemes; a) single unpaired nucleon with nucleon orbital angular momentum and spin vectors 'parallel'; b) single unpaired nucleon with nucleon orbital angular momentum and spin vectors 'anti-parallel'; c) coupling scheme for two unpaired nucleons (for nonrotating nuclei).

A first rule was suggested by L.W. Nordheim [5, 6] on the basis of the hypothesis that the individual configurations of neutrons and protons in odd-odd nuclei are the same as in odd-A nuclei with the same number of nucleons in the odd particle group [6]:

a) If the odd neutron and odd proton belong to different 'Schmidt groups', i.e. if \(j_{1}=l_{1}\pm1/2\) and \(j_{2}=l_{2}\mp1/2\), then the resulting spin (for the lowest excitation energy) is \(J=|j_{1}-j_{2}|\) ('strong rule').

b) If the odd neutron and odd proton belong to the same 'Schmidt group', i.e. if \(j_{1}=l_{1}\pm1/2\) and \(j_{2}=l_{2}\pm1/2\), then the resulting spin (for the lowest excitation energy) is \(J>|j_{1}-j_{2}|\), sometimes also written as \(|j_{1}-j_{2}|<J\leq j_{1}+j_{2}\) (see e.g. [7]) ('weak rule').

It is, however, remarked that there is a tendency to couple to \(J=j_{1}+j_{2}\), but this is not fulfilled generally; as already stated by L.W. Nordheim [6], these rules 'seem to hold for a great majority of cases, but not without exceptions'.

### 2.4 jj coupling in odd-odd nuclei - 'Brennan rule'

The 'Nordheim rules' were modified a couple of years later by M.H. Brennan and A.M. Bernstein [8].
Rule a) remained unchanged (in [8] denoted as 'R1'):

R1: \(J=|j_{1}-j_{2}|\) for \(j_{1}=l_{1}\pm1/2\) and \(j_{2}=l_{2}\mp1/2\)

For rule b) two cases were distinguished:

R2-b1) \(j_{1}=l_{1}+1/2\) (\(\uparrow\)) and \(j_{2}=l_{2}+1/2\) (\(\uparrow\)); in this case both nucleons couple their spins preferably to \(J=j_{1}+j_{2}\);

R2-b2) \(j_{1}=l_{1}-1/2\) (\(\downarrow\)) and \(j_{2}=l_{2}-1/2\) (\(\downarrow\)); in this case both nucleons couple their spins preferably to \(J>|j_{1}-j_{2}|\); for the specific case of \(j_{1}=j_{2}=1/2\), the resulting spin is determined as \(J=j_{1}+j_{2}\).

The third rule concerns the case when the configuration is a mixture of a particle and a hole state. In this case the resulting spin \(J\) is given as:

R3: \(J=j_{1}+j_{2}-1\)

It is, however, stated by the authors that this coupling rule is quite uncertain and can be regarded only as a tendency. M.H. Brennan and A.M. Bernstein investigated about 75 cases; applying R1 they obtained a correct assignment in all cases, and wrong assignments for R2 in 7.3% and for R3 in 54% of the cases. It should be noted that they did not apply the rules consistently. In some cases the rules R1 and R2 were also applied to particle - hole configurations. Also, for two - hole states R2 seems inverted, i.e. maximum spin for \(\downarrow\) cases and minimum spin for \(\uparrow\) cases.

### 2.5 Strong coupling in two-quasi-particle states in even-even nuclei - 'Gallagher rule'

Another set of coupling rules was suggested by C.J. Gallagher [9]; these were applied to two-particle states in deformed even-even nuclei. They are of relevance for the formation of 2-quasi-particle K isomers. Using the Nilsson notation as defined above, four cases can be distinguished:

a) \(\Omega_{1}[N_{1}n_{z,1}\Lambda_{1}]\uparrow\otimes\Omega_{2}[N_{2}n_{z,2}\Lambda_{2}]\uparrow\)

b) \(\Omega_{1}[N_{1}n_{z,1}\Lambda_{1}]\downarrow\otimes\Omega_{2}[N_{2}n_{z,2}\Lambda_{2}]\downarrow\)

c) \(\Omega_{1}[N_{1}n_{z,1}\Lambda_{1}]\uparrow\otimes\Omega_{2}[N_{2}n_{z,2}\Lambda_{2}]\downarrow\)

d) \(\Omega_{1}[N_{1}n_{z,1}\Lambda_{1}]\downarrow\otimes\Omega_{2}[N_{2}n_{z,2}\Lambda_{2}]\uparrow\)

It is pointed out by C.J. Gallagher that at 'strong coupling' the states with \(\Omega=|\Omega_{1}\pm\Omega_{2}|\) are degenerate. The degeneracy is lifted by the residual interaction between the neutrons or protons, respectively. Further it is argued that at 'strong coupling' the relation \(\Omega=\Lambda+\Sigma\) with \(\Lambda=|\Lambda_{1}\pm\Lambda_{2}|\) and \(\Sigma=|\Sigma_{1}\pm\Sigma_{2}|\) is fulfilled. On this basis it is postulated that for the lowest lying level \(\Sigma=0\) holds. Thus it follows:

a) \(\Omega(E_{min}^{*})=|\Omega_{1}-\Omega_{2}|\) if \(\Omega_{1}=\Lambda_{1}\pm1/2\) and \(\Omega_{2}=\Lambda_{2}\pm1/2\), i.e. for cases with parallel (\(\uparrow\uparrow\)) or (\(\downarrow\downarrow\)) spin projections;

b) \(\Omega(E_{min}^{*})=\Omega_{1}+\Omega_{2}\) if \(\Omega_{1}=\Lambda_{1}\pm1/2\) and \(\Omega_{2}=\Lambda_{2}\mp1/2\), i.e. for cases with antiparallel (\(\uparrow\downarrow\)) or (\(\downarrow\uparrow\)) spin projections.
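As a worked example of rule b), using a configuration that appears again in section 6: for the two-quasi-neutron configuration \(\nu7/2^{+}[624]\downarrow\otimes\nu9/2^{-}[734]\uparrow\) the spin projections are antiparallel, so the energetically favoured coupling is

\[
K=\Omega_{1}+\Omega_{2}=\tfrac{7}{2}+\tfrac{9}{2}=8,\qquad \pi=(+)\cdot(-)=-\quad\Rightarrow\quad K^{\pi}=8^{-}.
\]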
Fig. 2: Hindrance factors F\({}_{W}\) for the decay of K isomers in dependence of the K difference \(|\Delta K|\) of initial and final states for transitions of multipolarities E1, E2, E3, M1, M2, M3. Data are taken from [16].

## 3 K-hindrance of internal transitions

Excited nuclear states predominantly decay by internal transitions: \(\gamma\) emission and internal conversion. At a transition energy difference \(\Delta\)E \(>\) 1022 keV (double the electron rest mass energy) decay via electron-positron pair creation is also possible. The transition probability usually depends on the angular momentum difference between the initial and final states, which also defines the multipolarity of the transition. In general one can state: the higher the multipolarity of the transition, the lower the transition probability, or, vice versa, the longer the lifetime of the initial state. At long lifetimes other decay modes (\(\beta^{+}\)-, \(\beta^{-}\)-, EC-, \(\alpha\)-decay or spontaneous fission) may also compete with internal transitions.

In the case of an internal transition from a state \(i\) characterized by the quantum numbers (K\({}_{i}\), J\({}_{i}\), \(\pi_{i}\)) to a state \(f\) characterized by (K\({}_{f}\), J\({}_{f}\), \(\pi_{f}\)) some selection rules have to be fulfilled [10]:

a) \(|J_{i}-J_{f}|\leq\lambda\leq J_{i}+J_{f}\)

b) \(\pi=\pi_{i}\pi_{f}\)

c) \(|K_{i}-K_{f}|=\Delta K\leq\lambda\)

where \(\lambda\) denotes the multipolarity of the transition. Rules a) and b) are strict; rule c) has to be discussed. Since, generally speaking, the K quantum number is given by the intrinsic structure of a nucleus, changes of the K quantum number in internal nuclear transitions go along with changes in the intrinsic structure of the nucleus. It is thus understandable that changes of the K quantum number in internal transitions will have some effect on the transition probabilities. This was already demonstrated in early calculations by G. Alaga et al. [10] and C.J. Gallagher [11] and also discussed by A. Bohr and B.R. Mottelson [12]. They showed that the transition matrix element vanishes if \(\Delta K=|K_{i}-K_{f}|>\lambda\), with \(\lambda\) representing the multipolarity of the radiation of the transition between the initial and the final states. In other words, transitions with \(\Delta K>\lambda\) are forbidden. However, this transition rule is only valid for cases where the initial or final states (or both) are 'pure' states. Small perturbative admixtures in the wave function diminish the 'K forbiddenness'. In other words, 'K forbiddenness' is replaced by 'K hindrance' or 'K retardation'. Such admixtures may come from 'K mixing', which 'might arise from dynamical effects such as rotation or vibration, random interactions with one or more close lying states, or configuration mixing with the underlying nuclear potential' [16]. See also [16] for a more exhaustive discussion.

It should be clarified at this point that the K quantum number was defined as the projection of the total angular momentum \(\vec{I}\) onto the symmetry axis of the nucleus (see e.g. [13]). For non-rotating nuclei K = \(\Omega\). Under rotation the total angular momentum \(\vec{I}\) is the vector sum \(\vec{I}=\vec{J}+\vec{R}\), where \(\vec{J}\) is the contribution from the internal motion of the nucleons and \(\vec{R}\) is the contribution from the collective motion, thus K \(>\) \(\Omega\). Another definition is given by G. Alaga et al. [10].
Here it is argued that all the states of a rotational band are characterized by the same intrinsic wave function \(\varphi_{\tau K}\) and thus all have the same K quantum number, namely that of the bandhead. This is of relevance for the calculation of transition probabilities. Thus the K hindrance (selection rule c) above) is given by the K difference of the bandheads of the initial and final states. Evidently transitions violating rule c) are hindered. The degree of K-forbiddenness \(\nu\) can be expressed as \(\nu=\Delta K-\lambda\).

At first glance it may seem that rules a) and c) are quasi-identical. But this is definitely not the case for deformed nuclei. Here each single particle, 'multi' (\(\geq\)2) quasi-particle or vibrational state is the 'head' of a rotational band with angular momenta I = K, K+1, K+2, .... Thus for members of rotational bands cases of \(|I_{i}-I_{f}|\leq|K_{i}-K_{f}|\) are possible; in other words, transitions of multipolarities \(\lambda(\Delta I)<\lambda(\Delta K)\) are possible, but those transitions do not have the transition probabilities of single particle transitions: they are 'K-hindered'. This item will be discussed in more detail in sect. 5.

Quantitatively the degree of K hindrance of a transition can also be expressed by the 'delay', i.e. the increase of the lifetime relative to the lifetime expected for a single particle transition, i.e. by a hindrance factor \(F_{W}\) defined as [14]

\[F_{W}=\frac{T_{1/2}^{\gamma}(\mathrm{experiment})}{T_{1/2}^{\gamma}(\mathrm{Weisskopf\ estimate})}\]

K hindrance factors were compiled by K.E.G. Löbner [14], presented in dependence of the multipolarity of the transition and the \(|\Delta K|\) values, and compared with the 'empirical rule' of L.I. Rusinov [15], \(\lg F_{W}=2(|\Delta K|-\lambda)\). Evidently the experimental values showed a large straggling for each \(|\Delta K|\) value at each multipolarity. Agreement with the 'Rusinov empirical rule' is more or less satisfactory for E2, E3, M2 and M4 transitions, but disagrees significantly for E1, M1 and E4 transitions.

Recently an updated compilation of hindrance factors was presented by F.G. Kondev et al. [16]. The results are shown in fig. 2 in a style similar to Löbner's presentation. Evidently two items are visible, already known from the presentation of Löbner:

a) there is a general trend of increasing hindrance factors with increasing K difference;

b) in general, transitions of lower multipolarities exhibit higher hindrance factors.

To get a quantitative measure for the hindrance we extracted mean hindrance factors \(|\lg F_{W}|\) for each transition multipolarity at each K difference. The mean values were obtained by fitting a Gaussian to the data if a sufficient amount of data was available, or by taking the 'mean value' in case of low statistics. In the latter case, however, values deviating significantly from the 'bulk' were not respected. The results are presented in table 1 and, for E1, E2, (E3), M1 and M2 transitions, are also shown in fig. 3. The trends already indicated in fig. 2 are seen more clearly in fig. 3. But definitely there is no unique trend that would suggest a simple scaling of the mean hindrance factors \(|\lg F_{W}|\) as a function of \(|\Delta K|\) as originally suggested by L.I. Rusinov [15], who proposed the relation \(\lg F_{W}=2(|\Delta K|-\lambda)\). This item had already been discussed by K.E.G. Löbner [14].

Table 1: Mean hindrance factors \(|\lg F_{W}|\) for individual transitions (E1, E2, E3, M1, M2) within the decay of K isomers, as a function of \(|\Delta K|\). Values are obtained either from fitting Gaussians to the data or taking the arithmetic means; results from the latter procedure are given in _italic font_. Data are taken from [16].
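To put such numbers in perspective, a quick illustration of the Rusinov estimate quoted above (our arithmetic, using the formula exactly as given): for a transition of multipolarity \(\lambda=2\) with \(|\Delta K|=8\),

\[
\nu=|\Delta K|-\lambda=6,\qquad \lg F_{W}=2(|\Delta K|-\lambda)=12,
\]

i.e. the transition would be expected to be slower than the Weisskopf single-particle estimate by roughly twelve orders of magnitude.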
For the E2 and M2 transitions we observe a roughly linear increase of \(|\lg F_{W}|\) with increasing \(|\Delta K|\), but the slopes are different. For E1 and M1 transitions the increase seems to stop at \(|\Delta K|\approx6\), followed by a plateau or even a decrease up to \(|\Delta K|\approx10\), increasing again at \(|\Delta K|>10\). While for E1 transitions mean hindrance factors are higher than for E2 transitions in the whole range of \(|\Delta K|\) values, the steep increase of the mean hindrance factors for M2 transitions suggests higher values than for M1 transitions at \(|\Delta K|>6\). Thus in general one may state that hindrance factors certainly depend on details of the nuclear structure, on the structure of the K isomers and on the transition probabilities into lower lying levels; there is no simple relation between hindrance factors, K differences between initial and final states, and the multipolarities of the transitions.

Figure 3: Mean hindrance factors \(|\lg F_{W}|\) for the decay of K isomers in dependence of the K difference \(|\Delta K|\) of initial and final states for transitions of multipolarities E1, E2, M1, M2. Data are taken from [16].

## 4 Experimental aspects

To obtain detailed information on the decay path of K isomers, measurement of the \(\gamma\) rays emitted during the deexcitation process is indispensable. The present technique for the investigation of K isomers in the region Z \(\geq\) 100 is to produce the nuclei of interest by complete fusion reactions, separate the fusion products ('evaporation residues' (ER)) in-flight from the primary beam by magnetic (gas-filled separators) or electromagnetic field arrangements (e.g. velocity filters), and implant the ER into an arrangement of silicon detectors ('stop detector') where their \(\alpha\) decay or spontaneous fission (SF) - other radioactive decay modes do not play a role presently - is measured. Gamma rays are registered by an arrangement of Ge detectors placed close to or around the stop detector.

As production cross sections are low for these nuclei (\(\sigma\leq\) 2 \(\mu\)b), production rates are \(\leq\)10/s at beam currents of 1 p\(\mu\)A (6\(\times\)10\({}^{12}\) projectiles/s). On the other hand, depending on the set-up, one has a high background (often \(>>\)100/s) of \(\gamma\) radiation at the focal plane. Sources are e.g. \(\gamma\)s from the environment, \(\gamma\)s from nuclear reactions at the target position or the beam stop, and \(\gamma\)s from reactions with neutrons produced in nuclear reactions at the target position or the beam stop. This simply means that one has to discriminate \(\gamma\) events from the decay of the isomeric state against background \(\gamma\) events. A possibility to do so is given by the fact that the decay pattern of a K isomer is usually complicated, i.e. decay of the isomer into the ground state occurs via a couple of steps, part of which are (highly) converted. So, following a technique applied at SHIP already at the end of the 1970s [17], G.D. Jones suggested to use the conversion electrons (CE) registered in the 'stop detector' as a kind of 'calorimeter' [18].
Indeed, using this 'technique' one gets rid of the 'background' \(\gamma\) rays which are not in coincidence with CE. One is then only concerned with two kinds of background: a) accidental coincidences between CE (or 'CE-like' events) and \(\gamma\)s, and b) CE - \(\gamma\) coincidences from 'other' nuclei. These could be 'neighbour isotopes' also produced in the fusion-evaporation reactions (e.g. from the 2n deexcitation channel of the reaction \({}^{208}\)Pb(\({}^{48}\)Ca,2n)\({}^{254}\)No, if the product of the 1n deexcitation channel, \({}^{208}\)Pb(\({}^{48}\)Ca,1n)\({}^{255}\)No, is to be investigated), or isotopes produced in few-nucleon transfer reactions that also pass the separator. These can be discriminated by correlations with the \(\alpha\) decay or SF of the considered isotope, i.e. (CE,\(\gamma\)) - \(\alpha\)/SF. Data analysis searching for K isomers will thus be an investigation of correlations between the implanted nucleus (ER, implantation signal), the decay of the isomer ((CE,\(\gamma\)) coincidences) and the radioactive decay of the isotope by \(\alpha\) decay or SF, hence correlations ER - (CE,\(\gamma\)) - \(\alpha\)/SF. Even more complicated decay paths, e.g. decay of isomer '1' into isomer '2', can be analyzed by correlations ER - (CE1,\(\gamma\)1) - (CE2,\(\gamma\)2) - \(\alpha\)/SF. Clearly, proceeding to heavier nuclei, production rates become smaller and thus the observation of \(\gamma\) lines becomes less probable. Here it has been shown in recent years that the observation of solely CE (ER - CE - \(\alpha\)/SF) is sufficient for the identification of a K isomer and for measuring its half-life. But estimation of the excitation energy (here only limits can be given) and establishing the decay pattern are hardly possible.
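At its core, the ER - (CE,\(\gamma\)) - \(\alpha\)/SF correlation search described above is a time- and position-window filter over the recorded event stream. The sketch below gives a schematic of this logic; the event fields, window lengths and helper names are illustrative assumptions, not those of any particular separator's analysis code.

```python
from dataclasses import dataclass

@dataclass
class Event:
    t: float        # time stamp (s)
    pixel: int      # stop-detector position
    kind: str       # "ER", "CE", "gamma", "alpha", "SF"
    energy: float   # deposited energy (keV)

def find_isomer_candidates(events, isomer_window=1e-3, decay_window=100.0):
    """Return ER - (CE,gamma) - alpha/SF correlation chains: an implanted
    ER followed in the same detector pixel by conversion electrons (the
    signature of the isomeric decay) and later by alpha decay or SF."""
    chains = []
    for i, er in enumerate(events):
        if er.kind != "ER":
            continue
        ce = [e for e in events[i + 1:]
              if e.pixel == er.pixel and e.kind == "CE"
              and 0 < e.t - er.t < isomer_window]
        decay = [e for e in events[i + 1:]
                 if e.pixel == er.pixel and e.kind in ("alpha", "SF")
                 and 0 < e.t - er.t < decay_window]
        if ce and decay:
            chains.append((er, ce, decay[0]))
    return chains
```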
## 5 Single Particle K Isomers

As already mentioned above, for nuclei with one unpaired nucleon the relation \(\Omega\) = K is valid. So, formally, single particle isomers in well deformed nuclei may also be regarded as K isomers. However, such a classification seems only meaningful if the life-time or the decay are determined by the K hindrance. To understand this issue, one should recall the above mentioned selection rule \(\Delta K\) = \(|K_{i}-K_{f}|\)\(\leq\)\(\Delta L\)\(\leq\)\(\lambda\), where \(K_{i}\), \(K_{f}\) denote the K values of the initial and the final state, and \(\Delta L\) the angular momentum difference which finally defines the multipolarity of the transition. So in the case of single particle isomers one has to distinguish the two cases \(K_{i}<K_{f}\) and \(K_{i}>K_{f}\). As the Nilsson level \(K_{f}\) is the head of a rotational band with spins \(\Omega\)(=\(K_{f}\)), \(\Omega\)+1, \(\Omega\)+2..., in the case of \(K_{i}<K_{f}\) the angular momentum difference \(\Delta L\) = \(|K_{i}-K_{f}|\) is the lowest within the rotational band, i.e. the transition \(K_{i}\rightarrow K_{f}\) has the lowest multipolarity; so, according to the definitions given above, there is no 'K hindrance' and the transition \(K_{i}\rightarrow K_{f}\) can be regarded as a 'usual' single particle transition. Contrary to this situation, in the case of \(K_{i}>K_{f}\) the angular momentum differences for transitions into excited members of the band are \(\Delta L\) = \(K_{i}-K_{f}-n\) (with n = 1, 2...) and thus are lower than \(K_{i}-K_{f}\). This means that transitions of lower multipolarities are possible from the level \(K_{i}\) into members of the rotational band built up on \(K_{f}\) than for the transition into the bandhead \(K_{f}\). Consequently those transitions are 'K hindered' according to the definitions given above. As illustrative examples, (simplified) decay schemes for the isomers \({}^{253m1}\)No (5/2\({}^{+}\)[622]) [19] and \({}^{251m}\)Cf (11/2\({}^{-}\)[725]) [20] are shown in fig. 4. In the case of \({}^{253m1}\)No the decay proceeds into the 9/2\({}^{-}\)[734] ground-state via an M2 transition (with some E3 admixture). The half-life of T\({}_{1/2}\) = 24 \(\mu\)s is quite close to the value of 5.9 \(\mu\)s for a single particle transition obtained from a Weisskopf estimate [21]. In the case of \({}^{251m}\)Cf the decay of the isomeric state (T\({}_{1/2}\) = 1.3 \(\mu\)s) essentially populates the 9/2\({}^{+}\) and 11/2\({}^{+}\) members of the rotational band built up on the 7/2\({}^{+}\)[613] Nilsson level by strongly hindered E1 transitions, for which half-lives of \(<\)10\({}^{-7}\)\(\mu\)s are expected on the basis of the Weisskopf estimate. The not K-hindered M2 transition into the bandhead is only weak, with a relative intensity of 0.036.

Figure 4: Simplified decay schemes of \({}^{253m1}\)No [19] and \({}^{251m}\)Cf [20].

## 6 'Multi'-Quasiparticle K-Isomers

Although, as discussed in the previous section, formally the cases mentioned there have to be regarded as 'K isomers', usually high-spin multi-quasiparticle states are meant when speaking about K isomers. Thus in the following we will concentrate on such cases. The occurrence of those isomers is usually explained by breaking at least one nucleon pair and exciting at least one nucleon of each pair into a different level, while the different nucleons couple their spins to high values of K. The possibilities of 'nucleon coupling' are shown schematically in fig. 5. On the left side of the figure proton levels are occupied up to the first level (1/2\({}^{-}\)) above the expected shell gap at Z = 100. Due to the relatively small energy gaps, after breaking the pair one proton may be excited into the 7/2\({}^{-}\) or into the 9/2\({}^{+}\) state while the other one remains in the 1/2\({}^{-}\) state. But it is also conceivable that both protons are removed from the 1/2\({}^{-}\) state and one is excited into the 7/2\({}^{-}\) level, the other one into the 9/2\({}^{+}\) state, resulting in possible configurations K\({}^{\pi}\) = 4\({}^{+}\) (1/2\({}^{-}\)\(\otimes\) 7/2\({}^{-}\)), 5\({}^{-}\) (1/2\({}^{-}\)\(\otimes\) 9/2\({}^{+}\)), and 8\({}^{-}\) (7/2\({}^{-}\)\(\otimes\) 9/2\({}^{+}\)). The right side shows the situation for neutrons occupying levels up to the 7/2\({}^{+}\) one, still lying below the shell gap at N = 152. After breaking the neutron pair one of them can be excited into the 9/2\({}^{-}\) level, also lying below the N = 152 shell gap, resulting in the configuration K\({}^{\pi}\) = 8\({}^{-}\). Due to the large energy difference, excitation into a level above the N = 152 shell gap is less probable.

Figure 5: Coupling schemes for protons and neutrons close to Z = 100 and N = 152 (schematically).
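The K\({}^{\pi}\) values quoted above follow from simply adding the spin projections \(\Omega\) of the two excited nucleons and multiplying their parities; a small sketch of this bookkeeping, reproducing the proton couplings of fig. 5, is given below.

```python
from fractions import Fraction

def couple(level1, level2):
    """K^pi of a 2-quasiparticle state for the maximally aligned coupling:
    K = Omega1 + Omega2, parity = parity1 * parity2.
    Each level is given as (Omega, parity) with parity = +1 or -1."""
    (omega1, pi1), (omega2, pi2) = level1, level2
    k = omega1 + omega2
    return f"{k}{'+' if pi1 * pi2 > 0 else '-'}"

# proton levels around the Z = 100 gap (cf. fig. 5)
print(couple((Fraction(1, 2), -1), (Fraction(7, 2), -1)))  # 4+
print(couple((Fraction(1, 2), -1), (Fraction(9, 2), +1)))  # 5-
print(couple((Fraction(7, 2), -1), (Fraction(9, 2), +1)))  # 8-
```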
### 6.1 Early Investigations

First information on K isomers in the transfermium region came from A. Ghiorso et al. [22], who observed \(\alpha\) decay of \({}^{250}\)Fm and \({}^{254}\)No correlated to mother activities of T\({}_{1/2}\) = 1.8\(\pm\)0.1 s (\({}^{250}\)Fm) and T\({}_{1/2}\) = 0.28\(\pm\)0.04 s (\({}^{254}\)No). They interpreted the 'mother activities' as isomeric states in the corresponding nuclei, decaying into the ground-state by internal transitions, but no information on excitation energies and decay patterns could be given. In [22] also possible configurations of the isomeric states were discussed. Possible two-neutron configurations were \(\nu\)7/2\({}^{+}\)[624]\(\downarrow\)\(\otimes\)\(\nu\)9/2\({}^{-}\)[734]\(\uparrow\) or \(\nu\)9/2\({}^{-}\)[734]\(\uparrow\)\(\otimes\)\(\nu\)7/2\({}^{+}\)[613]\(\uparrow\), both leading to K\({}^{\pi}\) = 8\({}^{-}\) states. But also two-proton states were considered, \(\pi\)7/2\({}^{+}\)[633]\(\uparrow\)\(\otimes\)\(\pi\)7/2\({}^{-}\)[514]\(\downarrow\) leading to a K\({}^{\pi}\) = 7\({}^{-}\) state for \({}^{250}\)Fm and \(\pi\)7/2\({}^{-}\)[514]\(\downarrow\)\(\otimes\)\(\pi\)9/2\({}^{+}\)[624]\(\uparrow\) leading to a K\({}^{\pi}\) = 8\({}^{-}\) state for \({}^{254}\)No. Calculations performed almost twenty years later for \({}^{250}\)Fm and \({}^{254}\)No by V.G. Solov'ev et al. [23] resulted in three K\({}^{\pi}\) = 7\({}^{-}\) and three K\({}^{\pi}\) = 8\({}^{-}\) 2-quasi-particle states in \({}^{250}\)Fm, namely:

a) (K\({}^{\pi}\) = 7\({}^{-}\)) \(\nu\)5/2\({}^{+}\)[622]\(\uparrow\)\(\otimes\)\(\nu\)9/2\({}^{-}\)[734]\(\uparrow\) (100%) at E\({}^{*}\) = 1.5 MeV
b) (K\({}^{\pi}\) = 7\({}^{-}\)) \(\pi\)7/2\({}^{-}\)[514]\(\downarrow\)\(\otimes\)\(\pi\)7/2\({}^{+}\)[633]\(\uparrow\) (99%) at E\({}^{*}\) = 1.53 MeV
c) (K\({}^{\pi}\) = 7\({}^{-}\)) \(\nu\)7/2\({}^{+}\)[624]\(\uparrow\)\(\otimes\)\(\nu\)7/2\({}^{-}\)[743]\(\downarrow\) (100%) at E\({}^{*}\) = 1.9 MeV
d) (K\({}^{\pi}\) = 8\({}^{-}\)) \(\nu\)7/2\({}^{+}\)[624]\(\downarrow\)\(\otimes\)\(\nu\)9/2\({}^{-}\)[734]\(\uparrow\) (100%) at E\({}^{*}\) = 0.8 MeV
e) (K\({}^{\pi}\) = 8\({}^{-}\)) \(\nu\)7/2\({}^{+}\)[613]\(\uparrow\)\(\otimes\)\(\nu\)9/2\({}^{-}\)[734]\(\uparrow\) (100%) at E\({}^{*}\) = 1.6 MeV
f) (K\({}^{\pi}\) = 8\({}^{-}\)) \(\pi\)7/2\({}^{-}\)[514]\(\downarrow\)\(\otimes\)\(\pi\)9/2\({}^{+}\)[624]\(\uparrow\) (99%) at E\({}^{*}\) = 1.7 MeV

Calculations for \({}^{254}\)No resulted in three K\({}^{\pi}\) = 8\({}^{-}\) 2-quasi-particle states at E\({}^{*}\) = (1-2) MeV, namely the configurations:

a) \(\nu\)7/2\({}^{+}\)[613]\(\uparrow\)\(\otimes\)\(\nu\)9/2\({}^{-}\)[734]\(\uparrow\) (99%) with a 1% \(\pi\)7/2\({}^{-}\)[514]\(\downarrow\)\(\otimes\)\(\pi\)9/2\({}^{+}\)[624]\(\uparrow\) admixture at E\({}^{*}\) = 1.4 MeV
b) \(\pi\)7/2\({}^{-}\)[514]\(\downarrow\)\(\otimes\)\(\pi\)9/2\({}^{+}\)[624]\(\uparrow\) (96%) with a 3% \(\nu\)7/2\({}^{+}\)[624]\(\uparrow\)\(\otimes\)\(\nu\)9/2\({}^{-}\)[734]\(\uparrow\) admixture at E\({}^{*}\) = 1.44 MeV
c) \(\nu\)7/2\({}^{+}\)[624]\(\downarrow\)\(\otimes\)\(\nu\)9/2\({}^{-}\)[734]\(\uparrow\) (97%) with a 3% \(\pi\)7/2\({}^{-}\)[514]\(\downarrow\)\(\otimes\)\(\pi\)9/2\({}^{+}\)[624]\(\uparrow\) admixture at E\({}^{*}\) = 1.7 MeV.

Further studies on both K isomers, \({}^{250m}\)Fm and \({}^{254m}\)No, were performed by Yu.A. Lazarev et al. [24], who searched for fission branches, but they could only give upper limits of b\({}_{SF}\)\(\leq\) 8.2x10\({}^{-7}\) for \({}^{250m}\)Fm and b\({}_{SF}\)\(\leq\) 2.0x10\({}^{-3}\) for \({}^{254m}\)No. Another K isomer was discovered in the course of a \(\beta^{-}\) decay study of \({}^{256}\)Es by H.L. Hall et al. [25].
They measured a half-life of T\({}_{1/2}\) = 70\(\pm\)5 ns for the previously known level at E\({}^{*}\) = 1425 keV in \({}^{256}\)Fm, which they assigned to a K\({}^{\pi}\) = 7\({}^{-}\) state. They also observed two delayed fission events, which they attributed to the decay of the isomer. In late 2000 the first K isomer in the transfermium region decaying by \(\alpha\) emission, \({}^{270m}\)Ds [26], was observed at the velocity filter SHIP at GSI, Darmstadt (Germany). A breakthrough in K isomer spectroscopy came at the beginning of the 21st century, mainly thanks to the higher available beam intensities for medium-heavy projectiles like \({}^{48}\)Ca, the fast and efficient separation of the evaporation residues from the primary beam, and the availability of Ge-detector set-ups of high detection efficiency. Thus \(\gamma\)-spectroscopic investigation of \({}^{250m}\)Fm and \({}^{254m}\)No became possible. As an additional filter to discriminate between \(\gamma\) rays from the decay of the isomer and background radiation, prompt coincidences between conversion electrons and \(\gamma\) rays were searched for, as suggested by G.D. Jones et al. [18], following a technique that had already been applied at the end of the 1970s at SHIP [17] (see sect. 4). First decay schemes of \({}^{254m}\)No were presented by R.-D. Herzberg et al. [27] at the RITU separator at the University of Jyvaskyla (Finland) and by S.K. Tandel et al. [28] at the FMA separator at ANL, Argonne (USA). In both experiments another isomeric state (\({}^{254m2}\)No) with a half-life of \(\approx\)170 \(\mu\)s was observed. A short time later the first 'new' K isomer in the transfermium region, \({}^{252m}\)No [29], and the first even-Z, odd-A K isomer, \({}^{251m2}\)No [30], were discovered at SHIP by F.P. Hessberger et al., and a further K isomer, \({}^{253m2}\)No, was identified at SHIP [31] and by A. Lopez-Martens et al. at the VASSILISSA separator at FLNR JINR, Dubna (Russia) [32]. These positive results initiated an intense search for K isomers in the transfermium (or 'superheavy element') region at many facilities. An overview of the presently known 2-quasiparticle K isomers in even-even nuclei is shown in table 2, 4-quasiparticle isomers are presented in table 3, and an overview of odd-mass K isomers is given in table 4.

### 6.2 K isomers in \({}^{254}\)No

The decay of the K isomers in \({}^{254}\)No has been investigated at different facilities. After the pioneering experiments at RITU [27] and at the FMA [28], detailed studies have been performed at SHIP [33] and at the BGS at LBNL, Berkeley (USA) [36]. Still, no unambiguous configuration or decay scheme has been obtained so far. In [27] and [28] the long-lived isomer \({}^{254m1}\)No (275 ms [33]) was assigned as the 2-quasi-proton state of the configuration K\({}^{\pi}\) = 8\({}^{-}\) (\(\pi\)7/2\({}^{-}\)[514]\(\downarrow\)\(\otimes\)\(\pi\)9/2\({}^{+}\)[624]\(\uparrow\)), while the short-lived isomer \({}^{254m2}\)No (174 \(\mu\)s [33]), the decay of which was characterized by two intense \(\gamma\) lines of E = 134 keV and E = 605 keV, was attributed to a 4-quasi-particle state of spin and parity K\({}^{\pi}\) = 16\({}^{+}\) [27] or K\({}^{\pi}\) = 14\({}^{+}\) [28]. The decay of the K\({}^{\pi}\) = 8\({}^{-}\) isomer was assumed to occur via a rotational band built up on a K\({}^{\pi}\) = 3\({}^{+}\) 2-quasi-proton state of the configuration \(\pi\)1/2\({}^{-}\)[521]\(\downarrow\)\(\otimes\)\(\pi\)7/2\({}^{-}\)[514]\(\downarrow\) into the ground-state rotational band.
The decay of the K\({}^{\pi}\) = 16\({}^{+}\) (14\({}^{+}\)) isomer was found to populate the K\({}^{\pi}\) = 8\({}^{-}\) isomer. While in [27] the decay path was left open, in [28] it was assumed that the decay of \({}^{254m2}\)No would populate a rotational band built up on \({}^{254m1}\)No. The strong 605-keV line was not assigned to the decay of \({}^{254m2}\)No but to a level slightly below it.

**Tab. 2**: 'Safely' identified 2-quasiparticle K isomers in even-even nuclei

\begin{table}
\begin{tabular}{l l l l l l l}
\hline
Isotope & E\({}^{*}\)/MeV & half-life & decay mode & (assumed) spin/parity & (assumed) configuration & Reference \\
\hline
\({}^{270m}\)Ds & \(\approx\)1.13 & 6.0\({}^{+8.2}_{-2.2}\) ms & \(\alpha\) & \(\nu\)9\({}^{-}\) & \(\nu\)11/2\({}^{-}\)[725]\(\uparrow\otimes\nu\)7/2\({}^{+}\)[613]\(\uparrow\) & [26] \\
 & & & & \(\nu\)10\({}^{-}\) & \(\nu\)11/2\({}^{-}\)[725]\(\uparrow\otimes\nu\)9/2\({}^{+}\)[615]\(\downarrow\) & \\
\hline
\({}^{266m}\)Hs & & 74\({}^{+354}_{-34}\) ms & \(\alpha\) & & & [74] \\
\hline
\({}^{254m}\)Rf & & 4.7\(\pm\)1.1 \(\mu\)s & IT, SF? & \(\nu\)8\({}^{-}\) & \(\nu\)9/2\({}^{-}\)[734]\(\uparrow\otimes\nu\)7/2\({}^{+}\)[624]\(\downarrow\) & [54] \\
\hline
\({}^{256m}\)Rf & \(\approx\)1.120 & 25\(\pm\)2 \(\mu\)s & IT & \(\pi\)5\({}^{-}\) & \(\pi\)1/2\({}^{-}\)[521]\(\downarrow\otimes\pi\)9/2\({}^{+}\)[624]\(\uparrow\) & [65, 66, 67, 68, 69] \\
\hline
\({}^{256m2}\)Rf & \(\approx\)1.40 & 17\(\pm\)2 \(\mu\)s & IT & \(\pi\)8\({}^{-}\) & \(\pi\)7/2\({}^{-}\)[514]\(\downarrow\otimes\pi\)9/2\({}^{+}\)[624]\(\uparrow\) & [65, 66, 67, 68, 69] \\
\hline
\({}^{258m1}\)Rf & & 2.4\({}^{+2.4}_{-0.8}\) ms & IT & & & [70] \\
\hline
\({}^{258m2}\)Rf & & 15\(\pm\)10 \(\mu\)s & IT & & & [70] \\
\hline
\({}^{250m}\)No & & 43\({}^{+22}_{-15}\) \(\mu\)s & IT & \(\nu\)6\({}^{+}\) & \(\nu\)5/2\({}^{+}\)[622]\(\uparrow\otimes\nu\)7/2\({}^{+}\)[624]\(\downarrow\) & [59, 61] \\
\hline
\({}^{252m}\)No & 1.254 & 110\(\pm\)10 ms & IT & \(\nu\)8\({}^{-}\) & \(\nu\)9/2\({}^{-}\)[734]\(\uparrow\otimes\nu\)7/2\({}^{+}\)[624]\(\downarrow\) & [52] \\
\hline
\({}^{254m1}\)No & 1.295 & 275\(\pm\)7 ms & IT, SF ((2.0\(\pm\)1.2)\(\times\)10\({}^{-4}\)), \(\alpha\) (\(\leq\)1\(\times\)10\({}^{-4}\)) & \(\nu\)8\({}^{-}\) & \(\nu\)7/2\({}^{+}\)[624]\(\downarrow\otimes\nu\)9/2\({}^{-}\)[734]\(\uparrow\) & [33, 36] \\
\hline
\({}^{256m}\)No & & 7.8\({}^{+8.3}_{-2.6}\) \(\mu\)s & IT & \(\nu\)5\({}^{-}\) & \(\nu\)11/2\({}^{-}\)[725]\(\uparrow\otimes\nu\)1/2\({}^{+}\)[620]\(\uparrow\) & [64] \\
 & & 11.9\({}^{+21.7}_{-4.3}\) \(\mu\)s & IT & \(\nu\)7\({}^{-}\) & \(\nu\)11/2\({}^{-}\)[725]\(\uparrow\otimes\nu\)3/2\({}^{+}\)[622]\(\downarrow\) & \\
\hline
\({}^{248m}\)Fm & & 10.1\(\pm\)0.6 ms & IT & & & [51] \\
\hline
\({}^{250m}\)Fm & 1.199 & 1.92\(\pm\)0.05 s & IT & \(\nu\)8\({}^{-}\) & \(\nu\)9/2\({}^{-}\)[734]\(\uparrow\otimes\nu\)7/2\({}^{+}\)[624]\(\downarrow\) & [50] \\
\hline
\({}^{256m}\)Fm & 1.425 & 70\(\pm\)5 ns & IT, SF (\(\approx\)2x10\({}^{-5}\)) & \(\nu\)7\({}^{-}\) & & [25] \\
\hline
\({}^{246m}\)Cm & 1.179 & 1.12\(\pm\)0.24 s & IT & \(\nu\)8\({}^{-}\) & \(\nu\)9/2\({}^{-}\)[734]\(\uparrow\otimes\nu\)7/2\({}^{+}\)[624]\(\downarrow\) & [39, 41, 45, 46, 47] \\
\hline
\({}^{248m}\)Cm & 1.461 & 146\(\pm\)18 \(\mu\)s & IT & \(\nu\)8\({}^{-}\) & \(\nu\)7/2\({}^{+}\)[624]\(\downarrow\otimes\nu\)9/2\({}^{-}\)[734]\(\uparrow\) & [47] \\
 & & & & \(\nu\)8\({}^{-}\) & \(\nu\)7/2\({}^{+}\)[613]\(\downarrow\otimes\nu\)9/2\({}^{-}\)[734]\(\uparrow\) & \\
\hline
\({}^{244m}\)Pu & 1.216 & 1.75\(\pm\)0.12 s & IT & \(\nu\)8\({}^{-}\) & \(\nu\)9/2\({}^{-}\)[734]\(\uparrow\otimes\nu\)7/2\({}^{+}\)[624]\(\downarrow\) & [38] \\
\hline
\end{tabular}
\end{table}
* see table 4

**Tab. 3**: 'Safely' identified 4-quasiparticle K isomers in even-even nuclei

\begin{table}
\begin{tabular}{l l l l l l}
\hline
Isotope & E\({}^{*}\)/MeV & half-life & K\({}^{\pi}\) & (assumed) configuration & Reference \\
\hline
\({}^{250}\)Fm & \(\geq\)1.530 & 8\(\pm\)2 \(\mu\)s & - & - & [26] \\
\hline
\({}^{254}\)No & 2.914 & 108\(\pm\)13 \(\mu\)s & 16\({}^{+}\) & \(\pi\)7/2\({}^{-}\)[514]\(\downarrow\otimes\pi\)9/2\({}^{+}\)[624]\(\uparrow\otimes\nu\)9/2\({}^{-}\)[734]\(\uparrow\otimes\)... & [33, 111] \\
\hline
\({}^{254}\)Rf & - & 247\(\pm\)73 \(\mu\)s & 16\({}^{+}\) & \(\pi\)7/2\({}^{-}\)[514]\(\downarrow\otimes\pi\)9/2\({}^{+}\)[624]\(\uparrow\otimes\nu\)9/2\({}^{-}\)[734]\(\uparrow\otimes\)... & [54, 111] \\
\hline
\end{tabular}
\end{table}

The more detailed study at SHIP [33] confirmed the data of the previous studies. In addition a weak transition of \({}^{254m1}\)No into the ground-state rotational band (8\({}^{+}\) state) was observed. Also the assumption of populating a rotational band built up on \({}^{254m1}\)No by the decay of \({}^{254m2}\)No [28] was in line with the data measured in [33]. Again the 605-keV line was not assigned to the decay of \({}^{254m2}\)No, for which in [33] also spin and parity K\({}^{\pi}\) = 16\({}^{+}\) were considered, but to an intermediate state populated by the decay of the isomer. The reason for this interpretation was the observation of a 605-keV \(\gamma\) transition in in-beam investigations at the RITU separator at Jyvaskyla in prompt coincidence with \({}^{254}\)No evaporation residues [34]. Thus it should be emitted from a level with a life-time of \(<\)1 \(\mu\)s. The decay scheme suggested in [33] is shown in fig. 6 (including some modifications discussed in the following). In addition, decay of \({}^{254m1}\)No by \(\alpha\) emission and spontaneous fission was searched for in [33]. Small branches of b\({}_{\alpha}\)\(\leq\) 1x10\({}^{-4}\) and b\({}_{sf}\) = (2.0\(\pm\)1.2) x 10\({}^{-4}\) were identified. A closer inspection of the E = 605 keV line observed in the in-beam experiments, however, showed that it rather is a line doublet composed of the E = 604.21 keV and E = 608.35 keV lines from transitions between excited states of \({}^{74}\)Ge, populated by \({}^{74}\)Ge(n,n'\(\gamma\)) reactions with the detector material, and \(\gamma\) decays of the primarily populated states [35]. A follow-up study at the BGS at LBNL confirmed the data obtained in the previous studies, but came to another conclusion on the configuration of \({}^{254m1}\)No and the decay path of \({}^{254m2}\)No [36].
\({}^{254m1}\)No was still assigned as a K\({}^{\pi}\) = 8\({}^{-}\) state but interpreted as a 2-quasi-neutron state of the configuration \(\nu\)9/2\({}^{-}\)[734]\(\uparrow\)\(\otimes\)\(\nu\)7/2\({}^{+}\)[624]\(\downarrow\) or \(\nu\)9/2\({}^{-}\)[734]\(\uparrow\)\(\otimes\)\(\nu\)7/2\({}^{+}\)[613]\(\uparrow\). The decay of the K = 16\({}^{+}\) isomer \({}^{254m2}\)No was now interpreted not to populate directly the rotational band built up on \({}^{254m1}\)No, but to proceed via a rotational band built up on a K = 10\({}^{+}\) state with the configuration \(\nu\)9/2\({}^{-}\)[734]\(\uparrow\)\(\otimes\)\(\nu\)11/2\({}^{-}\)[725]\(\downarrow\). An experimental hint for this interpretation was the observation of \(\gamma\) - \(\gamma\) coincidences for events within the E = 133 keV line, which had not been observed in the previous studies [27, 28, 33] due to lower 'statistics'. This finding proved the existence of two \(\gamma\) transitions of E \(\approx\) 133 keV within the decay path of \({}^{254m2}\)No, which was not in line with the assumption of a 'direct' decay into the rotational band built up on \({}^{254m1}\)No. The prompt decays from the K\({}^{\pi}\) = 10\({}^{+}\) 2-neutron-quasiparticle state into members of the rotational band built up on the K\({}^{\pi}\) = 8\({}^{-}\) state suggested that the latter would also be a 2-neutron-quasiparticle state. The decay scheme proposed in [36] is shown in fig. 6b. However, measurements of E2 / M1 strengths by means of \(\gamma\)- and conversion-electron (CE) spectroscopy performed at the RITU separator in Jyvaskyla [35] indicated that the decay path of \({}^{254m2}\)No suggested in [36] is not compatible with the data, i.e. is unlikely, thus supporting the decay path suggested in [33]. On the other hand they showed that a 2-quasi-neutron configuration is more likely for the K\({}^{\pi}\) = 8\({}^{-}\) isomer [35]. A modified decay scheme from [33], giving credit to the results in [35], is shown in fig. 6a.

Figure 6: Left hand side: decay scheme of \({}^{254m1,m2}\)No as suggested by F.P. Hessberger et al. (modified version from [33], see text for details); right hand side: decay scheme of \({}^{254m1,m2}\)No as suggested by R.M. Clark et al. [36].
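Arguments based on E2/M1 strengths, as in [35], typically rest on measured conversion coefficients: for a mixed M1/E2 transition the total conversion coefficient fixes the mixing ratio \(\delta^{2}\). A minimal sketch of this standard relation is shown below; in practice the pure-multipolarity coefficients would be taken from tabulations (e.g. BrIcc), here they enter as plain input numbers.

```python
def delta_squared(alpha_exp, alpha_m1, alpha_e2):
    """E2/M1 mixing ratio delta^2 from a measured conversion coefficient,
    using alpha_exp = (alpha_M1 + delta^2 * alpha_E2) / (1 + delta^2).
    alpha_m1 and alpha_e2 are the theoretical coefficients of the pure
    multipolarities; the result is only meaningful if alpha_exp lies
    between them."""
    return (alpha_m1 - alpha_exp) / (alpha_exp - alpha_e2)
```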
**Tab. 4**: 'Safely' identified K isomers in odd-mass nuclei

\begin{table}
\begin{tabular}{l l l l l l}
\hline
Isotope & E\({}^{*}\)/MeV & half-life & (assumed) K\({}^{\pi}\) & (assumed) configuration & Reference \\
\hline
\({}^{249m}\)Md & \(\geq\) 0.910 & 2.4\(\pm\)0.3 ms & 19/2\({}^{-}\) & \(\pi\)7/2\({}^{-}\)[514]\(\downarrow\otimes\nu\)5/2\({}^{+}\)[622]\(\uparrow\otimes\nu\)7/2\({}^{+}\)[624]\(\downarrow\) & [79] \\
\hline
\({}^{251m}\)Md & \(\geq\) 0.844 & 1.37\(\pm\)0.6 s & 23/2\({}^{+}\) & \(\pi\)7/2\({}^{-}\)[514]\(\downarrow\otimes\nu\)7/2\({}^{+}\)[624]\(\downarrow\otimes\nu\)9/2\({}^{-}\)[734]\(\uparrow\) & [79] \\
\hline
\({}^{251m2}\)No & \(\geq\) 1.7 & \(\geq\)2 \(\mu\)s & & & [30] \\
\hline
\({}^{253m2}\)No & \(\geq\) 1.36 & 715\(\pm\)3 \(\mu\)s & 25/2\({}^{+}\) & \(\nu\)9/2\({}^{-}\)[734]\(\uparrow\otimes\pi\)7/2\({}^{-}\)[514]\(\downarrow\otimes\pi\)9/2\({}^{+}\)[624]\(\uparrow\) & [31, 32, 82, 83] \\
 & (\(\geq\) 1.24) & & 19/2\({}^{+}\) & \(\nu\)9/2\({}^{-}\)[734]\(\uparrow\otimes\pi\)1/2\({}^{-}\)[521]\(\downarrow\otimes\pi\)9/2\({}^{+}\)[624]\(\uparrow\) & [82] \\
\hline
\({}^{255m2}\)No & \(\approx\)1.3 & 2\(\pm\)1 \(\mu\)s (1.2\({}^{+0.6}_{-0.4}\) \(\mu\)s) & 21/2\({}^{+}\) & \(\pi\)9/2\({}^{+}\)[624]\(\uparrow\otimes\pi\)1/2\({}^{-}\)[521]\(\downarrow\otimes\nu\)11/2\({}^{-}\)[725]\(\uparrow\) & [33, 90, 93] \\
\hline
\({}^{255m3}\)No & \(\geq\)1.5 & 92\(\pm\)13 \(\mu\)s; 77\(\pm\)6 \(\mu\)s & 27/2\({}^{+}\) & \(\pi\)9/2\({}^{+}\)[624]\(\uparrow\otimes\pi\)7/2\({}^{-}\)[514]\(\downarrow\otimes\nu\)11/2\({}^{-}\)[725]\(\uparrow\) & [33, 90, 93] \\
\hline
\({}^{255m4}\)No & \(>\)2.5 & 5\(\pm\)1 \(\mu\)s & & & [93] \\
\hline
\({}^{255m2}\)Lr & \(>\) 1.6 (1.41) & 1.81\(\pm\)0.2 ms & 25/2\({}^{+}\) & \(\pi\)7/2\({}^{-}\)[514]\(\downarrow\otimes\nu\)11/2\({}^{-}\)[725]\(\uparrow\otimes\nu\)7/2\({}^{+}\)[624]\(\downarrow\) & [94, 95, 98] \\
\hline
\({}^{255m3}\)Lr & 0.74 & 10-100 ns & 15/2\({}^{+}\) & \(\pi\)1/2\({}^{-}\)[521]\(\downarrow\otimes\pi\)7/2\({}^{-}\)[514]\(\downarrow\otimes\pi\)9/2\({}^{+}\)[624]\(\uparrow\) & [98] \\
\hline
\({}^{255m2}\)Rf & 1.103 & 15\({}^{+6}_{-4}\) \(\mu\)s; 29\({}^{+7}_{-5}\) \(\mu\)s & 19/2\({}^{+}\) & \(\nu\)9/2\({}^{-}\)[734]\(\uparrow\otimes\pi\)1/2\({}^{-}\)[521]\(\downarrow\otimes\pi\)9/2\({}^{+}\)[624]\(\uparrow\) & [103, 104] \\
\hline
\({}^{255m3}\)Rf & 1.303 & 38\({}^{+12}_{-7}\) \(\mu\)s & 25/2\({}^{+}\) & ... & \\
\hline
\end{tabular}
\end{table}

Summarizing the discussion above, one has to state that there is presently agreement in the interpretation of the K\({}^{\pi}\) = 8\({}^{-}\) isomer \({}^{254m1}\)No as a 2-quasi-neutron configuration, and its decay path seems clear. The short-lived isomer \({}^{254m2}\)No obviously is a K\({}^{\pi}\) = 16\({}^{+}\) state, while its decay path is still under discussion. But certainly some more precise measurements collecting higher 'statistics', which is not so hard to achieve, will solve the problem. In a recent review article by A. Lopez-Martens et al. [37] \(\gamma\) spectra from the decay of both isomers are shown, but no further information or details on the structure or decay paths are given.

### 6.3 K isomers in the N = 150 isotones \({}^{244}\)Pu, \({}^{246}\)Cm, \({}^{248}\)Cf, \({}^{250}\)Fm, \({}^{252}\)No, and \({}^{254}\)Rf

**K isomer in \({}^{244}\)Pu**

The observation of a K isomer in \({}^{244}\)Pu was reported by S.S. Hota et al. [38]. It was settled at E\({}^{*}\) = 1216 keV.
It was interpreted to decay with comparable intensities (see fig. 1 in [38]; no values are given) either into the 8\({}^{+}\) state of the ground-state rotational band or into the 7\({}^{-}\) state of an octupole band. In analogy to the heavier N = 150 isotones (\({}^{246}\)Cm [39], \({}^{250}\)Fm [50], \({}^{252}\)No [52]) they assigned a K\({}^{\pi}\) = 8\({}^{-}\) state (configuration \(\nu\)7/2\({}^{+}\)[624]\(\downarrow\)\(\otimes\)\(\nu\)9/2\({}^{-}\)[734]\(\uparrow\)). A half-life of T\({}_{1/2}\) = 1.75\(\pm\)0.12 s was reported.

**K isomer in \({}^{246}\)Cm**

First identification of a K isomeric state in \({}^{246}\)Cm was reported by P.R. Fields et al. [39]; it was populated by \(\beta^{-}\) decay of \({}^{246}\)Am (T\({}_{1/2}\) = 39 min). Based on calculations of C.J. Gallagher and V.G. Soloviev [40] it was assigned as a 2-neutron-quasiparticle state with spin and parity K\({}^{\pi}\) = 8\({}^{-}\) and the configuration \(\nu\)7/2\({}^{+}\)[624]\(\downarrow\)\(\otimes\)\(\nu\)9/2\({}^{-}\)[734]\(\uparrow\). An excitation energy of E\({}^{*}\) = 1181 keV was given, but no half-life was reported. The decay scheme published in [39] is shown in fig. 7 (left side). Evidently the decay of the K isomer is connected with two high energy \(\gamma\) transitions; the stronger line of E = 680 keV (I\({}_{rel}\) = 1) was assigned to the decay of the K\({}^{\pi}\) = 8\({}^{-}\) isomer into the 8\({}^{+}\) state of the ground-state rotational band, while the weaker line E = 757 keV (I\({}_{rel}\) = 0.25\(\pm\)0.02) was attributed to the decay of a tentative 6\({}^{-}\) level, populated by an assumed 127.5 keV transition 8\({}^{-}\)\(\rightarrow\) 6\({}^{-}\), into the 6\({}^{+}\) level of the ground-state rotational band.

Figure 7: Left hand side: decay scheme of \({}^{246m}\)Cm as suggested by P.R. Fields et al. [39]; right hand side: decay scheme of \({}^{246m}\)Cm as suggested by L.G. Multhauf et al. [41].

A more detailed study on the decay of the K\({}^{\pi}\) = 8\({}^{-}\) state was performed shortly after by L.G. Multhauf et al. [41]. They identified the 6\({}^{-}\) state decaying by the 756 keV transition as a member of an octupole band built up on a K\({}^{\pi}\) = 2\({}^{-}\) state at E\({}^{*}\) = 841.70 keV, on the basis of theoretical predictions and earlier suggestions [42, 43, 44]. The decay scheme presented by L.G. Multhauf et al. is shown in fig. 7 (right hand side). This interpretation was later supported by studies of S.W. Yates et al. [45]. Further studies were performed by A.P. Robinson et al. [46] and U. Shirwadkar et al. [47] at the Argonne National Laboratory, Argonne (USA). Robinson et al. also populated the isomeric state by \(\beta^{-}\) decay of \({}^{246}\)Am (T\({}_{1/2}\) = 39 min), obtained a more detailed decay pattern and settled the isomer at an excitation energy of E\({}^{*}\) = 1179.7 keV, but also did not report a half-life. U. Shirwadkar et al. produced the isomer by a 2n-transfer reaction in bombardments of \({}^{248}\)Cm targets with a beam of \({}^{209}\)Bi. They confirmed the decay pattern of [46] and in addition measured a half-life of T\({}_{1/2}\) = 1.12 \(\pm\) 0.24 s.

**K isomer in \({}^{248}\)Cf**

The situation is less clear in \({}^{248}\)Cf.
Using the \({}^{249}\)Cf(d,t)\({}^{248}\)Cf reaction, K. Katori et al. [48] observed an excited level at E\({}^{*}\) = 1261 keV which they (tentatively) assigned as a K\({}^{\pi}\) = 8\({}^{-}\) state having a configuration \(\nu\)7/2\({}^{+}\)[624]\(\downarrow\)\(\otimes\)\(\nu\)9/2\({}^{-}\)[734]\(\uparrow\). However, no half-life was measured and also no decay scheme was presented, as their experiment was not laid out for \(\gamma\)-ray measurements. Recently the nuclear structure of \({}^{248}\)Cf was investigated via in-beam spectroscopy at the Tokai Tandem Accelerator Laboratory of the Japanese Atomic Energy Agency (JAEA) at Tokai, Japan by R. Orlandi et al. [49]. Besides measuring the life-time of the already known I\({}^{\pi}\) = 2\({}^{-}\) collective state at E\({}^{*}\) = 592 keV, two new isomeric states were identified. The higher-lying one, at E\({}^{*}\) = 950 \(\pm\) 300 keV, has a half-life of T\({}_{1/2}\) = 11.6 \(\pm\) 0.3 ns. It feeds an isomeric state at E\({}^{*}\) = 900 \(\pm\) keV with a half-life of T\({}_{1/2}\) > 140 ns by an E = 48 keV transition. As candidates for the latter, a 2-proton-quasiparticle configuration K\({}^{\pi}\) = 5\({}^{-}\) or a 2-neutron-quasiparticle configuration K\({}^{\pi}\) = 8\({}^{-}\), forming a K isomeric state in \({}^{246}\)Cm, \({}^{250}\)Fm, and \({}^{252}\)No, were considered. Since, due to the applied experimental technique (in-beam spectroscopy), neither the half-life nor the decay could be measured, no further conclusions could be drawn.

**K isomer in \({}^{250}\)Fm**

A K isomer in \({}^{250}\)Fm was first identified by A. Ghiorso et al. [22]. They reported a half-life of T\({}_{1/2}\) = 1.8\(\pm\)0.1 s, but due to the experimental technique used in [22] neither could the excitation energy be measured nor a decay scheme be established. A short time after the discovery of a K isomer in \({}^{252}\)No, results from a decay study of \({}^{250}\)Fm based on \(\gamma\)-ray spectroscopy were reported by P. Greenlees et al. [50]. The experiment was performed at the K130 cyclotron at the University of Jyvaskyla. The isotope was produced in the reaction \({}^{204}\)Hg(\({}^{48}\)Ca,2n)\({}^{250}\)Fm and separated from the primary beam by the gas-filled separator RITU. The decay was measured using the GREAT spectrometer, installed in the focal plane of RITU. They measured a half-life of T\({}_{1/2}\) = 1.92\(\pm\)0.05 s, confirming the results of A. Ghiorso et al. [22], and settled the excitation energy at E\({}^{*}\) = 1199.2 keV. The decay scheme showed strong similarities with those of \({}^{246m}\)Cm [41, 46, 47] and \({}^{252m}\)No [52]. It is shown in fig. 8; \(\gamma\) energies and assigned transitions are given in table 5. The isomer was interpreted, as in the cases of \({}^{246m}\)Cm and \({}^{252m}\)No, as a 2-neutron-quasiparticle configuration \(\nu\)7/2\({}^{+}\)[624]\(\downarrow\)\(\otimes\)\(\nu\)9/2\({}^{-}\)[734]\(\uparrow\) resulting in a K\({}^{\pi}\) = 8\({}^{-}\) state. This assignment was supported by a comparison of experimental and theoretical ratios of reduced transition probabilities B(M1)/B(E2) for transitions from states of initial spins and parities I\({}^{\pi}\) = 14\({}^{-}\), 15\({}^{-}\), 16\({}^{-}\) within the band built up on the isomer.
The experimental values (B(M1)/B(E2))\({}_{exp}\) = 0.2 - 0.3 rather fit those expected for a 2-neutron-quasiparticle configuration, (B(M1)/B(E2))\({}_{theo}\) = 0.32 - 0.38, than for a possible 2-proton-quasiparticle configuration \(\pi\)9/2\({}^{+}\)[624]\(\uparrow\)\(\otimes\)\(\pi\)7/2\({}^{-}\)[514]\(\downarrow\), (B(M1)/B(E2))\({}_{theo}\) = 0.67 - 0.77 [50]. Within the study of \({}^{250m}\)Fm a second isomeric state with a half-life of T\({}_{1/2}\) = 8 \(\pm\) 2 \(\mu\)s (\({}^{250m2}\)Fm) was observed via ER - CE1 - CE2 - \(\alpha\) correlations [51]. Thus it can be assumed that the decay of the short-lived isomer (at least partly) populates the long-lived (1.92 s) isomer. A population probability of 6\(\pm\)3 % was estimated. From the energy distribution of the coincident CE its excitation energy can be assumed to be at least 350 keV above that of the 1.92 s isomer, i.e. E\({}^{*}\)\(\geq\) 1530 keV. No information about the decay path or the configuration is given in [51].

**K isomer in \({}^{252}\)No**

A K isomeric state in \({}^{252}\)No was first reported in [29]. Due to contaminations from the decay of nuclei produced in a preceding irradiation, the \(\gamma\) lines at E \(<\) 150 keV could not be unambiguously identified. So a follow-up experiment was performed at SHIP to obtain more precise data [52]. The excitation energy was estimated as E\({}^{*}\) = 1254 keV. Based on the interpretation of the decay of \({}^{246m}\)Cm by L.G. Multhauf et al. [41], the decay of the isomer was suggested to occur essentially via a band built up on a K\({}^{\pi}\) = 2\({}^{-}\) state at E\({}^{*}\) = 929 keV, while the ground-state rotational band is only weakly populated by an 8\({}^{-}\)\(\rightarrow\) 8\({}^{+}\) transition of E = 710.4 keV, whereas the corresponding \(\gamma\) transitions in \({}^{246}\)Cm (679.2 keV) [41] and \({}^{250}\)Fm (682.3 keV) [50] appear much stronger (please note: only in [41] a relative intensity is given). The decay scheme presented by B. Sulignano et al. [52] is shown in fig. 8. The results of [52] were confirmed a short time later by A.P. Robinson et al. [46]. Further information on the structure of the isomeric state was obtained in an in-beam study performed at the RITU separator at the University of Jyvaskyla, Finland, where the rotational band built up on the K\({}^{\pi}\) = 8\({}^{-}\) state was observed up to I\({}^{\pi}\) = 22\({}^{-}\) [53]. Based on the relationship between the gyromagnetic factor g\({}_{K}\) and the M1 / E2 intensity ratio I(M1)/I(E2), it was shown that the latter (precisely: the range of the experimental values) was rather compatible with the value g\({}_{K}\) = 0.01 expected for the already previously assumed 2-neutron-quasiparticle configuration \(\nu\)7/2\({}^{+}\)[624]\(\downarrow\)\(\otimes\)\(\nu\)9/2\({}^{-}\)[734]\(\uparrow\) than with the value g\({}_{K}\) = 1.01 expected for the 2-proton-quasiparticle configuration \(\pi\)9/2\({}^{+}\)[624]\(\uparrow\)\(\otimes\)\(\pi\)7/2\({}^{-}\)[514]\(\downarrow\), also leading to a K\({}^{\pi}\) = 8\({}^{-}\) state [53].
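The connection between g\({}_{K}\) and the in-band intensity ratio used in [50] and [53] comes from the rotational-model expressions for the transition strengths. A sketch is given below, assuming illustrative values for g\({}_{R}\) and Q\({}_{0}\) (typical-order numbers for this mass region, not the values used in the cited works).

```python
from math import pi
from sympy.physics.quantum.cg import CG

def bm1_be2_ratio(I, K, g_K, g_R, Q0_barn):
    """Rotational-model ratio B(M1; I->I-1) / B(E2; I->I-2) in mu_N^2 / (e b)^2:
    B(M1) = 3/(4 pi) (g_K - g_R)^2 K^2 <I K 1 0|I-1 K>^2
    B(E2) = 5/(16 pi) Q0^2 <I K 2 0|I-2 K>^2      (Bohr and Mottelson)"""
    cg_m1 = float(CG(I, K, 1, 0, I - 1, K).doit())
    cg_e2 = float(CG(I, K, 2, 0, I - 2, K).doit())
    b_m1 = 3 / (4 * pi) * (g_K - g_R) ** 2 * K ** 2 * cg_m1 ** 2
    b_e2 = 5 / (16 * pi) * Q0_barn ** 2 * cg_e2 ** 2
    return b_m1 / b_e2

# I = 14 member of the K = 8 band; g_R ~ 0.4 and Q0 ~ 13 b are assumptions
print(bm1_be2_ratio(14, 8, g_K=0.01, g_R=0.4, Q0_barn=13.0))
print(bm1_be2_ratio(14, 8, g_K=1.01, g_R=0.4, Q0_barn=13.0))
```

The roughly factor-of-two difference between the two g\({}_{K}\) choices is what makes the measured ratios discriminate between the 2-quasi-neutron and 2-quasi-proton assignments.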
**K isomer in \({}^{254}\)Rf**

First indication of a K isomer in the next heavier N = 150 isotone was found in an experiment performed at the FMA at ANL, Argonne, USA, where in four of 28 cases low energy signals were registered in between the implantation of the \({}^{254}\)Rf residue, produced in the reaction \({}^{206}\)Pb(\({}^{50}\)Ti,2n)\({}^{254}\)Rf, into a double-sided silicon strip detector and the spontaneous fission of the ground-state [54] (T\({}_{1/2}\) = 23\(\pm\)3 \(\mu\)s [55]). These signals were assigned to conversion electrons emitted during the decay of an isomeric state. A more detailed study was performed in a follow-up experiment at the BGS at LBNL, Berkeley, USA. Isomeric states were searched for by analyzing ER - CE - SF correlations. Two activities of T\({}_{1/2}\) = 4.7\(\pm\)1.1 \(\mu\)s and T\({}_{1/2}\) = 247\(\pm\)73 \(\mu\)s were observed. In seven cases correlations ER - CE1 - CE2 - SF were registered and interpreted as events from the feeding of the short-lived isomer by the decay of the long-lived one [54]. Also a couple of \(\gamma\) events were observed in prompt coincidence with CE assigned to the decay of the short-lived isomer: E\({}_{\gamma}\) = 893 keV (5 events), E\({}_{\gamma}\) = 853 keV (3 events), and E\({}_{\gamma}\) = 829 keV (5 events). As E\({}_{\gamma}\) = 893 keV is similar to the energy of the 7\({}^{-}\)\(\rightarrow\) 6\({}^{+}\) transition in the lighter N = 150 isotones \({}^{250}\)Fm (871 keV) and \({}^{252}\)No (909 keV), one may speculate that this line represents the 7\({}^{-}\)\(\rightarrow\) 6\({}^{+}\) transition in \({}^{254}\)Rf as well. But certainly the data presented in [54] are too scarce to construct a decay scheme. Nevertheless the authors present possible configurations on the basis of calculations. The short-lived isomer is attributed to the 2-neutron-quasiparticle configuration \(\nu\)7/2\({}^{+}\)[624]\(\downarrow\)\(\otimes\)\(\nu\)9/2\({}^{-}\)[734]\(\uparrow\) leading to a K\({}^{\pi}\) = 8\({}^{-}\) state, as in the lighter isotones; the long-lived isomer is attributed to a 4-quasiparticle configuration \(\nu\)(7/2\({}^{+}\)[624]\(\otimes\)9/2\({}^{-}\)[734]) \(\otimes\) \(\pi\)(7/2\({}^{-}\)[514]\(\otimes\)9/2\({}^{+}\)[624]) leading to a K\({}^{\pi}\) = 16\({}^{+}\) state [54]. The most striking feature, however, was the steep decrease of the half-lives. While those of \({}^{246}\)Cm (1.12 s) and \({}^{250}\)Fm (1.92 s) are comparable, a decrease by a factor of \(\approx\)17 is observed from \({}^{250}\)Fm to \({}^{252}\)No (111 ms) and even by a factor of \(\approx\)24000 from \({}^{252}\)No to \({}^{254}\)Rf (4.7 \(\mu\)s). As possible reasons for this behavior the authors give a decrease of the hindrance of the M1 decay branch from the K\({}^{\pi}\) = 8\({}^{-}\) isomer into the I\({}^{\pi}\) = 7\({}^{-}\) state of the octupole band and/or that the K\({}^{\pi}\) = 8\({}^{-}\) isomer and the I\({}^{\pi}\) = 8\({}^{-}\) state of the octupole band might be very close in energy, leading to an accidental configuration mixing resulting in a shorter half-life.

### 6.4 K isomers in even-even nuclei with Z\(\geq\)98

Besides the cases discussed above, K isomers were searched for in most of the experimentally accessible nuclei in the region Z\(\geq\)100. In all cases the isomers were essentially identified via measuring the conversion electrons; in general only small numbers of \(\gamma\) rays were observed.
The collected data were in most cases not sufficient for establishing well based configurations and decay schemes; however, in some cases (very speculative) partial decay schemes and configurations were presented.

**K isomer in \({}^{248}\)Cm**

The observation of a K isomer in \({}^{248}\)Cm was reported by U. Shirwadkar et al. [47]. They gave a half-life of T\({}_{1/2}\) = 146\(\pm\)18 \(\mu\)s; the excitation energy was settled at E\({}^{*}\) = 1461 keV. The isomer was assigned as a K\({}^{\pi}\) = 8\({}^{-}\) state; as possible configurations 2-quasi-neutron states (\(\nu\)7/2\({}^{+}\)[613]\(\uparrow\)\(\otimes\)\(\nu\)9/2\({}^{-}\)[734]\(\uparrow\) or \(\nu\)7/2\({}^{+}\)[624]\(\downarrow\)\(\otimes\)\(\nu\)9/2\({}^{-}\)[734]\(\uparrow\)) were considered. The decay occurred with similar intensities either directly into the 8\({}^{+}\) state of the ground-state rotational band (E\({}_{\gamma}\) = 954 keV) or by an unobserved 7 keV transition into the 8\({}^{+}\) state of a \(\gamma\) vibrational band (bandhead 2\({}^{+}\) at E\({}^{*}\) = 1048 keV). The further decay occurred from the 8\({}^{+}\) and 6\({}^{+}\) states of the \(\gamma\) vibrational band into the corresponding levels of the ground-state rotational band, i.e. 8\({}^{+}\)\(\rightarrow\) 8\({}^{+}\) (E\({}_{\gamma}\) = 947 keV) or 6\({}^{+}\)\(\rightarrow\) 6\({}^{+}\) (E\({}_{\gamma}\) = 985 keV).

**Search for a K isomer in \({}^{246}\)Fm**

A K isomer was searched for in \({}^{246}\)Fm at SHIP, GSI, using the production reaction \({}^{208}\)Pb(\({}^{40}\)Ar,2n)\({}^{246}\)Fm [56]. Although a quite high number of \({}^{246}\)Fm nuclei was produced - about 31000 \(\alpha\) decays were observed - no signature of a K isomer in the half-life range between some microseconds and about 100 milliseconds was found applying the method of ER - (CE,\(\gamma\)) - \(\alpha\)(\({}^{246}\)Fm) correlations.

**K isomer in \({}^{248}\)Fm**

An isomeric state in \({}^{248}\)Fm was observed at the RITU separator, University of Jyvaskyla, using the production reaction \({}^{202}\)Hg(\({}^{48}\)Ca,2n)\({}^{248}\)Fm. It was identified via ER - CE - \(\alpha\)(\({}^{248}\)Fm) correlations [51]. A half-life of T\({}_{1/2}\) = 10.1\(\pm\)0.6 ms was measured. Also a few \(\gamma\) events, forming \(\gamma\) lines of E = 808 keV and E = 904 keV, were registered. These data, however, were not sufficient to establish a decay pattern.

**K isomer in \({}^{250}\)No**

A spontaneous fission activity of T\({}_{1/2}\) = 36\({}^{+11}_{-6}\)\(\mu\)s was observed by Yu.Ts. Oganessian et al. [57] in bombardments of \({}^{204,206}\)Pb with \({}^{48}\)Ca and attributed to the decay of \({}^{250}\)No. Further investigations of A.V. Belozerov et al. [58] resulted in a splitting of the SF activity into two components with half-lives of T\({}_{1/2}\) = 5.6\({}^{+0.9}_{-0.7}\)\(\mu\)s and T\({}_{1/2}\) = 54\({}^{+14}_{-9}\)\(\mu\)s. Tentatively the shorter-lived activity was assigned to \({}^{250}\)No, the longer-lived one to \({}^{249}\)No. However, it was not excluded that both components could also be assigned to the same isotope, i.e. to the decay of the ground-state and of an isomeric state. The latter assumption was proven in an experiment performed at the FMA at ANL, Argonne by D. Peterson et al. [59]. The authors could unambiguously assign both activities to the same mass number A = 250.
They (tentatively) assigned the shorter-lived component (T\({}_{1/2}\) = 3.7\({}^{+1.1}_{-0.8}\)\(\mu\)s) to the ground-state and the longer-lived one (T\({}_{1/2}\) = 43\({}^{+12}_{-12}\)\(\mu\)s) to the isomer, for which they assumed a 2-neutron-quasiparticle configuration \(\nu\)5/2\({}^{+}\)[622]\(\uparrow\)\(\otimes\)\(\nu\)7/2\({}^{+}\)[624]\(\downarrow\) leading to a K\({}^{\pi}\) = 6\({}^{+}\) state. It was, however, not possible to decide whether the isomeric state undergoes SF or decays by internal transitions into the ground state, the measured half-life of the SF activity then being that of the ground-state delayed by the decay of the isomer. To clarify the assignment and to search for a fission branch, two more experiments were performed, one at the RITU separator at the University of Jyvaskyla [60], and one at the gas-filled separator TASCA at GSI, Darmstadt [61]. In both experiments digital electronics was used for identifying CE, as a signature of internal transitions, in between the implantation of the ER and the fission event. In both experiments it was clearly shown that the longer-lived isomeric state decays into the shorter-lived one, as already assumed in [59]. Another feature of both experiments was the search for direct fission of the isomeric state. In [60] a population probability of 0.41\(\pm\)0.13, a relatively large upper limit of \(\approx\)0.5 for the fission branch, and an upper limit of 0.044 (including the data of [58], 0.029) for the \(\alpha\) branch were obtained. In [61] also no unambiguous indication of SF of \({}^{250m}\)No was observed and a considerably lower upper limit for SF of b\({}_{sf}\)\(\leq\) 0.035 was reported. Also a lower population probability of 0.17\(\pm\)0.03 was obtained; in addition, in that study an indication of a so far unknown higher-lying isomeric state (\({}^{250m2}\)No) with a half-life of T\({}_{1/2}\) = 0.7\({}^{+1.4}_{-0.3}\)\(\mu\)s was obtained on the basis of two registered ER - CE(1) - CE(2) - SF correlations; it decays via the known isomeric state (now denoted as \({}^{250m1}\)No) into the ground-state [61]. In a recent study at the SHELS separator at FLNR JINR, Dubna (Russia) a more than an order of magnitude higher number of SF events of \({}^{250}\)No was registered [62]. Also four \(\gamma\) lines of E = 115, 176, 914, 1090 keV were observed. Due to energy values similar to the 4\({}^{+}\)\(\rightarrow\) 2\({}^{+}\) and the 6\({}^{+}\)\(\rightarrow\) 4\({}^{+}\) transition energies within the ground-state rotational bands of \({}^{252}\)No (108, 167 keV) and \({}^{254}\)No (101, 159 keV) [63], the low energy \(\gamma\) lines were (tentatively) attributed to transitions within the ground-state rotational band of \({}^{250}\)No. No further discussion on the decay path, nor on SF of \({}^{250m1}\)No, is presented in [62]. It was, however, pointed out that the energy difference \(\Delta\)E = (1090-914) keV = 176 keV [62] is the same as for the candidate for the 6\({}^{+}\)\(\rightarrow\) 4\({}^{+}\) transition. So it seems possible that the high energy \(\gamma\) lines stem from the decay of the same level, populating the 6\({}^{+}\) and 4\({}^{+}\) states of the ground-state rotational band. Possible candidates for such an emitting level could be an I\({}^{\pi}\) = 5\({}^{+}\) (M1 transitions 5\({}^{+}\)\(\rightarrow\) 6\({}^{+}\), 5\({}^{+}\)\(\rightarrow\) 4\({}^{+}\)) or an I\({}^{\pi}\) = 5\({}^{-}\) (E1 transitions 5\({}^{-}\)\(\rightarrow\) 6\({}^{+}\), 5\({}^{-}\)\(\rightarrow\) 4\({}^{+}\)) state.
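Upper limits of the kind quoted above follow from Poisson statistics when no candidate event is observed; a minimal sketch is given below (the event number in the example is hypothetical).

```python
import math

def branching_upper_limit(n_decays, cl=0.95, efficiency=1.0):
    """Upper limit on an unobserved decay branch: for zero observed events
    the Poisson mean is bounded by mu < -ln(1 - CL) (about 3.0 at 95% CL);
    dividing by the number of observed decays, corrected for the detection
    efficiency of the searched-for mode, gives the branching-ratio limit."""
    return -math.log(1.0 - cl) / (n_decays * efficiency)

# hypothetical: no SF candidate among ~86 observed isomer decays -> b < 0.035
print(branching_upper_limit(86))
```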
**K isomer in \({}^{256}\)No**

An isomeric state in \({}^{256}\)No was identified in a study performed at the SHELS separator at the FLNR JINR, Dubna, Russia [64]. The isotope was produced in the reaction \({}^{238}\)U(\({}^{22}\)Ne,4n)\({}^{256}\)No. Fifteen correlations of the type ER - CE - \(\alpha\)(\({}^{256}\)No) were observed and assigned to the decay of an isomeric state. Half-lives of T\({}_{1/2}\) = 7.8\({}^{+8.3}_{-2.6}\)\(\mu\)s or T\({}_{1/2}\) = 11.9\({}^{+21.7}_{-4.3}\)\(\mu\)s, depending on the data selected for the half-life evaluation, were obtained. Thirteen photon events were observed in prompt coincidence with CE, five of them in the range of the K or L X-rays of nobelium. On the basis of the energy sums of CE and photons a lower limit of 1089 keV for the excitation energy of the isomer was given. About the structure (spin, parity, configuration) of the isomeric state one could only speculate; just a probable role of the 2-quasi-neutron states I\({}^{\pi}\) = 5\({}^{-}\) (configuration \(\nu\)11/2\({}^{-}\)[725]\(\uparrow\)\(\otimes\)\(\nu\)1/2\({}^{+}\)[620]\(\uparrow\)) and I\({}^{\pi}\) = 7\({}^{-}\) (configuration \(\nu\)11/2\({}^{-}\)[725]\(\uparrow\)\(\otimes\)\(\nu\)3/2\({}^{+}\)[622]\(\downarrow\)) was emphasized.
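With only fifteen correlated events, half-lives such as these are usually obtained by maximum likelihood from the individual decay times; a minimal sketch following the standard error treatment for poor statistics is given below.

```python
import math

def half_life_mle(decay_times_s):
    """Maximum-likelihood half-life from individual decay times, valid for a
    fully observed exponential decay (no time cuts): T1/2 = ln2 * <t>.
    For n events the ~1 sigma uncertainty is roughly a factor exp(1/sqrt(n))
    (cf. K.-H. Schmidt et al., Z. Phys. A 316 (1984) 19)."""
    n = len(decay_times_s)
    t_half = math.log(2) * sum(decay_times_s) / n
    error_factor = math.exp(1.0 / math.sqrt(n))
    return t_half, t_half * error_factor, t_half / error_factor
```

The dependence of the result on which correlations are accepted as genuine is exactly why two alternative half-life values are quoted in [64].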
**K isomers in \({}^{256}\)Rf**

Several experiments to search for K isomeric states in \({}^{256}\)Rf have been performed at different laboratories so far. The first report on the observation of K isomeric states came from an experiment performed at the BGS separator at LBNL Berkeley (USA) by H.B. Jeppesen et al. [65]. The isotope was produced in the reaction \({}^{208}\)Pb(\({}^{50}\)Ti,2n)\({}^{256}\)Rf. On the basis of observed ER - CE1 - CE2 - CE3 - SF(\({}^{256}\)Rf) correlations the existence of three isomeric states was concluded. Also a \(\gamma\) line of E = 900\(\pm\)1 keV was registered in correlations ER - (CE,\(\gamma\)) - SF(\({}^{256}\)Rf). It was attributed to the decay of the head (K\({}^{\pi}\) = 2\({}^{-}\)) of an octupole vibrational band into the I\({}^{\pi}\) = 2\({}^{+}\) level of the ground-state rotational band [65]. Estimated excitation energies, half-lives and relative population intensities are given in table 6. The data were principally confirmed as by-products of studies primarily devoted to the investigation of \({}^{257}\)Rf by J.S. Berryman et al. [66] and J. Rissanen et al. [88], also performed at LBNL Berkeley. The results of [65] were not reproduced in an experiment at the FMA separator at ANL, Argonne (USA), where only one isomeric state with a half-life of T\({}_{1/2}\) = 17\(\pm\)5 \(\mu\)s was registered [67], which tentatively was equated with ISO1 of [65] (see table 6). The Berkeley data could be confirmed in an experiment at the RITU separator at the University of Jyvaskyla (Finland) by J. Rubert [68]. As the 900-keV \(\gamma\) line was not observed in the corresponding in-beam measurement, it was excluded that it may stem from the decay of a low-spin state, contrary to the tentative assignment of [65]. Recently results from two more studies were reported, an investigation at the gas-filled separator TASCA at GSI, Darmstadt (Germany) by J. Khuyagbaatar et al. [69] and one at the SHELS separator at FLNR JINR, Dubna (Russia) using the GABRIELA detector system [37]. J. Khuyagbaatar et al. observed two isomeric states, in line with the data for ISO1 and ISO2 (see table 6). The third isomeric state, however, was not observed, probably due to its low population intensity and the relatively low number of produced \({}^{256}\)Rf nuclei. A considerably higher number of \({}^{256}\)Rf decays than in the previous experiments was registered in the experiment at SHELS [37]. The 900-keV \(\gamma\) line is clearly observed, but no further details on the obtained results are reported. Nevertheless one may expect enhanced information on the structure and decay properties of the K isomers in \({}^{256}\)Rf when the data from this study are fully analyzed. The structure of the isomers is thus not unambiguously established. Comparisons with predicted 2-quasiparticle states in \({}^{256}\)Rf suggest that the 25 \(\mu\)s isomer might be a K\({}^{\pi}\) = 5\({}^{-}\) state with the configuration \(\pi\)1/2\({}^{-}\)[521]\(\downarrow\)\(\otimes\)\(\pi\)9/2\({}^{+}\)[624]\(\uparrow\) and the 17 \(\mu\)s isomer a 2-quasiparticle state with a configuration \(\pi\)7/2\({}^{-}\)[514]\(\downarrow\)\(\otimes\)\(\pi\)9/2\({}^{+}\)[624]\(\uparrow\) [67, 68]. We will make use of these tentative assignments in the further discussion.

**K isomer in \({}^{258}\)Rf**

Two isomeric states in \({}^{258}\)Rf were identified in the course of an electron capture (EC) decay study of \({}^{258}\)Db [70], where ER - (CE,photon) - SF(\({}^{258}\)Rf) and ER - (CE) - SF(\({}^{258}\)Rf) correlations were investigated to measure the K X-ray emission during the EC process, to directly prove EC decay of \({}^{258}\)Db and to identify EC decay from both long-lived states in \({}^{258}\)Db that had been identified by \(\alpha\) decay studies [70, 71, 72]. In addition, a couple of ER - CE - CE - SF and ER - CE - photon - SF correlations were registered which could be assigned to the decay of two isomeric states in \({}^{258}\)Rf with half-lives of T\({}_{1/2}\) = 15\(\pm\)10 \(\mu\)s and T\({}_{1/2}\) = 2.4\({}^{+2.4}_{-0.8}\) ms. The data further indicated that the short-lived isomer, at least to a notable fraction, decays into the long-lived one. The excitation energies of both states remained uncertain. The observation of isomeric states in \({}^{258}\)Rf is not unexpected. Calculations of F.R. Xu et al. [73] predict in \({}^{258}\)Rf two 2-quasi-neutron states of I\({}^{\pi}\) = 10\({}^{-}\) (configuration \(\nu\)11/2\({}^{-}\)[725]\(\uparrow\)\(\otimes\)\(\nu\)9/2\({}^{+}\)[615]\(\downarrow\)) and I\({}^{\pi}\) = 9\({}^{-}\) (configuration \(\nu\)11/2\({}^{-}\)[725]\(\uparrow\)\(\otimes\)\(\nu\)7/2\({}^{+}\)[613]\(\uparrow\)) at E\({}^{*}\)\(\approx\) 1.2 MeV and E\({}^{*}\)\(\approx\) 1.1 MeV, respectively, as well as a 2-quasi-proton state of I\({}^{\pi}\) = 7\({}^{-}\) (configuration \(\pi\)9/2\({}^{+}\)[624]\(\uparrow\)\(\otimes\)\(\pi\)5/2\({}^{-}\)[512]\(\uparrow\)) at E\({}^{*}\)\(\approx\) 1.4 MeV. As, tentatively, spin and parity of the long-lived state in \({}^{258}\)Db (T\({}_{1/2}\) = 4.41 s) are assigned as I\({}^{\pi}\) = 5\({}^{+}\) or I\({}^{\pi}\) = 10\({}^{-}\) [72], population of high-spin states in \({}^{258}\)Rf by EC decay can be expected.

**K isomer in \({}^{266}\)Hs**

The isotope \({}^{266}\)Hs has been produced as the \(\alpha\) decay daughter of \({}^{270}\)Ds in two experiments at SHIP performed in autumn 2000 [26] and autumn 2010 [74, 75]. The isotope decays by \(\alpha\) emission with an energy E\({}_{\alpha}\) = 10.20\(\pm\)0.03 MeV and a half-life of T\({}_{1/2}\) = 2.6\({}^{+0.7}_{-0.5}\) ms ([78], evaluation of the combined data from [26, 74]). It also has a significant SF branch of \(\approx\)25 % [74].
In the 2010 experiment a single \(\alpha\) decay of E\({}_{\alpha}\) = 10.440 MeV with a correlation time of \(\Delta\)t(\({}^{266}\)Hs - \({}^{270}\)Ds) = 104.66 ms was registered [74]. As these data are inconsistent with the decay data of the ground-state of \({}^{266}\)Hs, it was assigned to the decay of an isomeric state in \({}^{266}\)Hs [74, 75].

**K isomer in \({}^{270}\)Ds**

The isotope \({}^{270}\)Ds was observed for the first time in an irradiation of \({}^{207}\)Pb with \({}^{64}\)Ni at SHIP. The energy and time distributions of the \(\alpha\) decays were found to be quite broad and did not suggest assigning them to a single activity [26]: three decay events had energies of E\({}_{\alpha}\) = 10.987 MeV, 11.075 MeV, 1.925 MeV\({}^{\dagger}\) and a life-time of \(\tau\) = 0.15 ms, and three events had energies E\({}_{\alpha}\) = 12.147 MeV, 11.151 MeV, 10.954 MeV and a life-time \(\tau\) = 8.6 ms. The theoretical \(\alpha\) decay half-life [76, 77], estimated as T\({}_{1/2}\) = 0.62 ms for the short-lived activity (for the mean energy 11.03 MeV from the two events registered with full energy), resulted in a hindrance factor HF = T\({}_{\alpha}\)(exp)/T\({}_{\alpha}\)(theo) = 1.6, i.e. it represents an unhindered transition [78], while for the long-lived isomer values of HF = 14824 (12.147 MeV), HF = 120 (11.151 MeV), and HF = 43 (10.954 MeV) were obtained [78]. This finding suggested assigning the short-lived activity to the ground-state decay, the long-lived one to the decay of an isomeric state. On the basis of the highest observed \(\alpha\)-decay energy the excitation energy was settled at E\({}^{*}\)\(\approx\) 1.25 MeV. Self-consistent Hartree-Fock-Bogoliubov calculations using the Skyrme-SLy4 interaction resulted in 2-quasi-neutron states K\({}^{\pi}\) = 9\({}^{-}\) (configuration \(\nu\)7/2\({}^{+}\)[613]\(\uparrow\)\(\otimes\)\(\nu\)11/2\({}^{-}\)[725]\(\uparrow\)) at E\({}^{*}\) = 1.31 MeV and K\({}^{\pi}\) = 10\({}^{-}\) (configuration \(\nu\)9/2\({}^{+}\)[615]\(\downarrow\)\(\otimes\)\(\nu\)11/2\({}^{-}\)[725]\(\uparrow\)) at E\({}^{*}\) = 1.34 MeV. Assignment to the K\({}^{\pi}\) = 9\({}^{-}\) state was preferred using the argument of a lower hindrance for the 9\({}^{-}\)\(\rightarrow\) 0\({}^{+}\) (ground state) transition than for the 10\({}^{-}\)\(\rightarrow\) 0\({}^{+}\) (ground state) transition. A second experiment on the investigation of the K isomer \({}^{270m}\)Ds was performed at SHIP in autumn 2010 [74, 75]. Although a factor of \(\approx\)3 more decays were registered, no new or enhanced information on the configuration, decay and decay path of the isomer has been reported so far. Improved half-lives using the data of both SHIP experiments, regarding \(\alpha\) decays of E\({}_{\alpha}\)\(\leq\) 11.0 MeV and \(\Delta\)t(\(\alpha\)-ER) \(\leq\) 1 ms as decays of \({}^{270g}\)Ds and \(\alpha\) decays of E\({}_{\alpha}\)\(\geq\) 10.9 MeV and \(\Delta\)t(\(\alpha\)-ER) \(>\) 1 ms as decays of \({}^{270m}\)Ds, result in T\({}_{1/2}\) = 0.17\({}^{+0.09}_{-0.04}\) ms for \({}^{270g}\)Ds and T\({}_{1/2}\) = 3.81\({}^{+1.39}_{-0.80}\) ms for \({}^{270m}\)Ds [78].

Footnote \({\dagger}\): In this case the \(\alpha\) particle 'left' the detector depositing only part of its energy in it, i.e. only an energy loss signal was registered.

### 6.5 K isomers in odd-mass nuclei with Z\(\geq\)100

**K isomers in \({}^{249,251}\)Md**

K isomers in \({}^{249}\)Md and \({}^{251}\)Md were searched for in experiments performed at the University of Jyvaskyla, Finland, using the RITU separator and the GREAT detector system [79].
The isotopes were produced in the reactions \({}^{203}\)Tl(\({}^{48}\)Ca,2n)\({}^{249}\)Md and \({}^{205}\)Tl(\({}^{48}\)Ca,2n)\({}^{251}\)Md. The existence of K isomers was concluded from observed correlations ER - CE - \(\alpha\)(\({}^{249,251}\)Md. For \({}^{249m}\)Md a half-life T\({}_{1/2}\) = 2.4 \(\pm\) 0.3 ms was obtained, for \({}^{251m}\)Md a value of T\({}_{1/2}\) = 1.37 \(\pm\) 0.6 s. For both isomers \(\gamma\) decays were observed in coin Figure 10: \(\gamma\) spectrum \({}^{253m2}\)No measured at SHIP; left side: full range from 50-900 keV), right side: expanded, range 650-900 keV. Figure 9: Partial decay pattern of \({}^{251m2}\)No [30]. cidence with CE. In the case of \({}^{249m}\)Md two \(\gamma\) - lines of E = 175 \(\pm\)1 keV and E = 521.7\(\pm\)1.0 keV were indicated. Some more information was obtained for \({}^{251m}\)Md. Here three \(\gamma\) transitions of E = 216\(\pm\)1, 265\(\pm\)1, and 290\(\pm\)1 keV were observed, whose energies are similar to transitions within the ground-state rotational band E = 214.8\(\pm\)0.5 keV (17/2\({}^{-}\)\(\rightarrow\) 13/2\({}^{-}\)), E = 263.8\(\pm\)0.3 keV (21/2\({}^{-}\)\(\rightarrow\) 17/2\({}^{-}\)), and E = 289\(\pm\)1 keV (23/2\({}^{-}\)\(\rightarrow\) 19/2\({}^{-}\)), as obtained from an in-beam study [80]. Yet, it was stated, that also other transitions within the ground-state rotational band should have been observed, which was not the case, while other observed \(\gamma\) lines could not be placed in a level scheme (see [79] for more details). Nevertheless the authors suggest, as possibly the 23/2\({}^{-}\)\(\rightarrow\) 19/2\({}^{-}\) transition was observed, that decay of the isomeric state populates the 23/2\({}^{-}\) state of the ground-state rotational band. A relatively strong \(\gamma\) line of E = 389 keV was interpreted as the same as observed in the in-beam experiment and thus attributed to a 3/2\({}^{-}\)\(\rightarrow\) 7/2\({}^{+}\) transition (see [80] for further details). Nevertheless, the quality of the data was not sufficient to draw conclusions on the decay path. On the basis of the energy sum of CE and \(\gamma\) rays lower limits of E\({}^{*}\)\(\geq\) 910 keV (\({}^{249m}\)Md) and E\({}^{*}\)\(\geq\) 844 keV (\({}^{251m}\)Md) were given for the excitation energy of the isomers. Based on (assumed) configurations of K isomers in neighbouring even - even nuclei, i.e. \({}^{248}\)Fm and \({}^{250}\)No in the case of \({}^{249}\)Md and \({}^{250}\)Fm and \({}^{252}\)No in the case of \({}^{251}\)Md and the ground state Nilsson levels of both isotopes configurations \(\pi\)7/2\({}^{-}\)[514] \(\leq\)\(\psi\)\(\psi\)5/2\({}^{+}\)[622]\({}^{+}\)\(\otimes\)\(\nu\)7/2\({}^{+}\)[624]\(\downarrow\) resulting in T\({}^{\pi}\) = 19/2\({}^{-}\) for \({}^{249m}\)Md and \(\pi\)7/2\({}^{-}\)[514]\(\downarrow\)\(\otimes\)\(\nu\)7/2\({}^{+}\)[624]\(\downarrow\)\(\otimes\)\(\nu\)9/2\({}^{-}\)[734]\(\uparrow\) resulting in T\({}^{\pi}\) = 23/2\({}^{\pi}\) for \({}^{251m}\)Md were suggested. **K isomer in \({}^{251}\)No** In the course of a decay study of \({}^{251}\)No a couple of \(\gamma\) events with energies of E\({}_{\gamma}\) = 142.4, 203.1, 713.6 and 782.5 keV were observed in delayed coincidence (\(\Delta\)t(\(\gamma\)-ER) = (1-4)\(\mu\)s) with ER correlated to \(\alpha\) decays of \({}^{251}\)No [30]. The two low energy lines were known to stem from the decay of the 9/2\({}^{-}\)[734] level in \({}^{251}\)No, which is also populated by the \(\alpha\) decay of \({}^{255}\)Rf. 
That was seen as an indication that this level was populated, via the 'high' energy \(\gamma\) transitions, by the decay of an isomeric state with a half-life of T\({}_{1/2}\)\(\geq\) 2 \(\mu\)s. As also \(\gamma\) events above 700 keV were observed, it was concluded that the lower lying 1/2\({}^{+}\)[631] isomeric state (E\({}^{*}\)\(\approx\) 105 keV) and the ground state (7/2\({}^{+}\)[624]) were not notably populated. From intensity considerations it was concluded that the high energy \(\gamma\)s do not stem from the decay of the same nuclear level, but are emitted in series via an intermediate level located at E\({}^{*}\) = 917 keV or E\({}^{*}\) = 985 keV. As a possible candidate the 7/2\({}^{+}\)[613] Nilsson level was suspected. In the neighbouring N = 149 isotone \({}^{249}\)Fm it is predicted at E\({}^{*}\)\(\approx\) 900 keV [81]. No prediction is given for \({}^{251}\)No in [81]. But as the predicted energy of this level increases from plutonium (Z=94) to fermium (Z=100), it may be located in \({}^{251}\)No above 1 MeV, i.e. outside the range of the predictions in [81]. It should be noted that in such a scenario the E1 transition 7/2\({}^{+}\)[613] \(\rightarrow\) 9/2\({}^{-}\)[734] is expected to be faster than the M1 transition 7/2\({}^{+}\)[613] \(\rightarrow\) 7/2\({}^{+}\)[624], which explains the decay into the 9/2\({}^{-}\)[734] excited level and not into the 7/2\({}^{+}\)[624] ground state. It should, however, be mentioned that this interpretation is still somewhat speculative. On the basis of the measured \(\gamma\) energies a lower limit of the excitation energy of 1.7 MeV was given. No speculation on spin, parity or configuration was presented in [30].

Figure 11: Decay scheme of \({}^{253m2}\)No based on the one proposed in [82] and on the SHIP data [83].

**K isomers in \({}^{253}\)No**

Identification of a K isomer in \({}^{253}\)No was first reported by F.P. Hessberger [31] and A. Lopez-Martens et al. [32]. While A. Lopez-Martens et al. could only measure CE and gave a rough half-life T\({}_{1/2}\) = 700 \(\pm\) 200 \(\mu\)s, F.P. Hessberger identified the isomer by measuring CE - \(\gamma\) coincidences, suggested a decay scheme, and presented more precise half-life values of T\({}_{1/2}\) = 715 \(\pm\) 30 \(\mu\)s from the \(\gamma\) lines and T\({}_{1/2}\) = 590 \(\pm\) 40 \(\mu\)s from the K x-rays. Further, more detailed studies were performed both at VASSILISSA, JINR, Dubna [82] and at SHIP, GSI, Darmstadt [83]. The \(\gamma\) spectra measured at SHIP are shown in fig. 10. In [82] a partial decay scheme was presented. It is shown in fig. 11, supplemented by the results from the SHIP experiment [83]. The strong \(\gamma\) lines of E = 802 keV and E = 715 keV were interpreted as transitions from a 15/2\({}^{-}\) state into the 13/2\({}^{-}\) and 15/2\({}^{-}\) states of the ground-state rotational band of \({}^{253}\)No, which is known up to I\({}^{\pi}\) = 49/2\({}^{-}\) [86]. On the basis of a comparison of calculated and experimental gyromagnetic factors (g\({}_{K}\) - g\({}_{R}\)) performed in [83], an I = 9/2 state was assumed as the possible bandhead. Another candidate for the bandhead was the Nilsson level 11/2\({}^{-}\)[725], which was not known at the time of publication of [82, 83], but was recently identified at E\({}^{*}\) = 750 keV [3]. On the basis of the decay scheme presented in fig. 11, the 15/2\({}^{-}\) level is located at E\({}^{*}\) = 942 keV, which would be \(\approx\) 192 keV above the bandhead.
This is quite similar to the very tentative difference between the 11/2\({}^{-}\) and 15/2\({}^{-}\) states in \({}^{251}\)Cf with \(\Delta\)E = 199 keV [21]. On the basis of the decay scheme suggested by Lopez-Martens [82], the results from the SHIP experiments [83], recent nuclear structure investigations of \({}^{253}\)No [91, 3] and the predicted Nilsson levels at E\({}^{*}\) \(<\) 1 MeV (see [84]), we here suggest the decay scheme presented in fig. 11. It will be discussed in detail in the following. We shall just note here that besides the strong transitions of E\({}_{\gamma}\) = 713 and 802 keV, interpreted to populate the 15/2\({}^{-}\) and 13/2\({}^{-}\) members of the ground-state rotational band [82], two more \(\gamma\) lines clearly observed in the SHIP experiments were assigned, on the basis of energy balance, to be emitted from the level at E\({}^{*}\) = 941 keV, namely E\({}_{\gamma}\) = 614 keV populating the 17/2\({}^{-}\) state and E\({}_{\gamma}\) = 877 keV populating the 11/2\({}^{-}\) state. As possible bandheads of the level at E\({}^{*}\) = 941 keV we will consider the Nilsson levels 11/2\({}^{-}\)[725] and 7/2\({}^{-}\)[743]. A bandhead with spin/parity 9/2\({}^{-}\) will not be considered, as such a state (besides the ground-state of \({}^{253}\)No) is not predicted at an excitation energy below 1 MeV. Let us first consider the 11/2\({}^{-}\)[725] state. The bandhead was located at E\({}^{*}\) = 750 keV as mentioned above. The energy difference between the 11/2\({}^{-}\)[725] level and the one at E\({}^{*}\) = 941 keV is \(\Delta\)E = 191 keV. Keeping I\({}^{\pi}\) = 15/2\({}^{-}\) as assigned by A. Lopez-Martens [82], the decay into the bandhead could occur by two M1 transitions 15/2\({}^{-}\)\(\to\) 13/2\({}^{-}\)\(\to\) 11/2\({}^{-}\). Since \(\Delta\)E/2 = 95.5 keV (we do not assume that both transitions have the same energy), at least one of the transitions must have an energy E \(>\) 95.5 keV, i.e. higher than the E = 88 keV transition feeding the E\({}^{*}\) = 941 keV level, which is in contradiction to the energy systematics in rotational bands. Here we have to remark that the feeding of the E\({}^{*}\) = 941 keV level is established by \(\gamma\) - \(\gamma\) coincidences. Therefore an 11/2\({}^{-}\)[725] assignment for the bandhead is in contradiction to the experimental data and can be excluded. The 7/2\({}^{-}\)[743] Nilsson level was identified at E\({}^{*}\) = 724 keV [91], so the energy difference to the E\({}^{*}\) = 941 keV level is \(\Delta\)E = 217 keV. Analyzing known cases (see [21]) one finds typical energy differences \(\Delta\)E(15/2 - 7/2) \(>\) 230 keV for the Nilsson levels 7/2\({}^{-}\)[743], 7/2\({}^{+}\)[624], and 7/2\({}^{+}\)[613]. For \(\Delta\)E(13/2 - 7/2) values between 175 keV and 219 (236) keV are obtained, with some tendency to increase with increasing atomic number. Specifically for \({}^{245}\)Cm (the closest case of a transition within the band built up on the 7/2\({}^{-}\)[743] level) one obtains \(\Delta\)E(13/2\({}^{-}\) - 7/2\({}^{-}\)) = 209 keV [85]. On this basis we prefer to assign I\({}^{\pi}\) = 13/2\({}^{-}\) to the level at E\({}^{*}\) = 941 keV. In the SHIP experiments also a weak line at E\({}_{\gamma}\) = 750 keV was observed that agrees with the excitation energy of the 11/2\({}^{-}\)[725] Nilsson level. It can thus be assigned to the decay of that level into the ground-state, which in turn requires that the 11/2\({}^{-}\)[725] level is populated during the decay of the isomeric state.
In the SHIP experiment two further significant lines of E\({}_{\gamma}\) = 701 keV (a line doublet, consisting of E\({}_{\gamma}\) = 701 keV and E\({}_{\gamma}\) = 705 keV) and E\({}_{\gamma}\) = 775 keV were observed. Their energy difference \(\Delta\)E = 74 keV is in line with the energy difference \(\Delta\)E(13/2\({}^{-}\)-11/2\({}^{-}\)) = 76 keV between the corresponding members of the ground-state rotational band. The energy sums E = 140 keV + 701 keV = 841 keV and E = 64 keV + 775 keV = 839 keV do not support an assignment of the emitting state to a member of the band built up on the 7/2\({}^{-}\)[743] state, specifically the 11/2\({}^{-}\) one, as this state would lie \(\approx\)100 keV below the (assumed) 13/2\({}^{-}\) state, in contradiction to the energy systematics in rotational bands as discussed above. So we tentatively assign it to the 13/2\({}^{-}\) state of the rotational band built up on the 11/2\({}^{-}\)[725] Nilsson level. Thus the energy difference is \(\Delta\)E(13/2\({}^{-}\)-11/2\({}^{-}\)) = 89 (91) keV, which is in line with the energy difference \(\Delta\)E(13/2\({}^{-}\)-11/2\({}^{-}\)) = 88 keV for these members of the rotational band built up on the 11/2\({}^{-}\)[725] Nilsson level in \({}^{257}\)Rf [88]. Both Nilsson levels are assumed to be connected by an unobserved M1 transition of E = 102 keV (13/2\({}^{-}\)\(\to\) 13/2\({}^{-}\)). The energy of the line E\({}_{\gamma}\) = 705 keV fits quite well to the energy difference between the 15/2\({}^{-}\) member of the band built up on the 7/2\({}^{-}\)[743] Nilsson level and the 17/2\({}^{-}\) member of the ground-state rotational band (702 keV) and is thus attributed to that transition. The time distributions of the significant lines at E = 828 keV and E = 845 keV do not fit the half-life of \({}^{253m2}\)No, and these lines are thus not attributed to the decay of the isomer. Still some problems concerning the decay scheme should be mentioned: a) strong K\({}_{\alpha_{1},\alpha_{2},\beta_{1}}\) No X-ray lines are observed. As the energies of the assumed and tentatively assigned low energy M1 transitions (see fig. 11) are below the K binding energy of nobelium (E = 149.2 keV [21]) and as the high energy transitions (E \(>\) 600 keV) are only weakly converted (\(\epsilon_{K}\)\(<\) 0.28 for E \(>\) 600 keV [87]), it seems quite unlikely that the observed K X-rays stem from the decay of the K isomer discussed above. This assumption is supported by the fact that prompt coincidences between the K X-rays were observed, but no coincidences between K X-rays and \(\gamma\) events. Also, the K X-rays show a half-life of T\({}_{1/2}\) = (552\(\pm\)15) \(\mu\)s [89], which is somewhat lower than the value T\({}_{1/2}\) = (627\(\pm\)5) \(\mu\)s obtained from the \(\gamma\) transitions [83]. This could be a hint for the existence of a further isomeric state. However, due to the lack of delayed coincidences between K X-rays and gammas in the SHIP data, we do not have an indication that such a second K isomer feeds the 'known' one [89]. Also, from the lack of prompt coincidences between K X-rays and gammas we have no indication for a decay path bypassing the 'known' isomer. So the origin of the relatively strong K X-ray lines remains uncertain and the assumption of the existence of a second isomer remains speculative. The situation is similar for the line at E = 255 keV. For this transition we obtain a half-life of T\({}_{1/2}\) = (540\(\pm\)30) \(\mu\)s [89], in line with that for the K X-rays, and it is thus likely related to the same source.
b) no transition that can be assigned to the 'direct' decay of the isomer was observed. More detailed measurements are necessary to clarify the situation. c) the lines at E = 157 keV and E = 603 keV cannot be placed in the decay scheme. Possible configurations of the isomeric state were discussed in [82]. For the isomeric state a 3-quasiparticle state (1n \(\otimes\) 2p configuration) \(\nu\)9/2\({}^{-}\)[734]\(\uparrow\)\(\otimes\)\(\pi\)7/2\({}^{-}\)[514]\(\downarrow\)\(\otimes\)\(\pi\)9/2\({}^{+}\)[624]\(\uparrow\), leading to a K\({}^{\pi}\) = 25/2\({}^{+}\) state, or \(\nu\)9/2\({}^{-}\)[734]\(\uparrow\)\(\otimes\)\(\pi\)1/2\({}^{-}\)[521]\(\downarrow\)\(\otimes\)\(\pi\)9/2\({}^{+}\)[624]\(\uparrow\), leading to a K\({}^{\pi}\) = 19/2\({}^{+}\) state, were considered; both were predicted to lie below an excitation energy of 1.5 MeV by calculations using a universal Woods-Saxon potential [82]. More detailed measurements are necessary to clarify the situation.

**K isomer in \({}^{255}\)No**

A first indication for the existence of a K isomer in \({}^{255}\)No was obtained from the observation of \(\gamma\) transitions of E\({}_{\gamma}\)(1) = 742 keV, T\({}_{1/2}\)(1) = 105\(\pm\)25 \(\mu\)s and E\({}_{\gamma}\)(2) = 839 keV, T\({}_{1/2}\)(2) = 130\(\pm\)25 \(\mu\)s in a study at the velocity filter SHIP at GSI, Darmstadt (Germany) devoted to the investigation of \({}^{254m1,254m2}\)No produced in the reaction \({}^{208}\)Pb(\({}^{48}\)Ca,2n)\({}^{254}\)No [33]. The transitions could not be attributed to the K isomers in \({}^{254}\)No as a) the half-lives were not in line with those of the K isomers in \({}^{254}\)No (T\({}_{1/2}\)(\({}^{254m1}\)No) = 275\(\pm\)7 ms and T\({}_{1/2}\)(\({}^{254m2}\)No) = 198\(\pm\)13 \(\mu\)s [33]), and b) the intensity of both lines increased relative to those of the \(\gamma\) lines attributed to the decay of \({}^{254m1,254m2}\)No with decreasing excitation energy of the compound nuclei. So it seemed straightforward to assign them to an isomeric state in \({}^{255}\)No, produced in the 1n deexcitation channel of the reaction. A careful follow-up analysis of the data resulted in the identification of three K isomers in \({}^{255}\)No [90] (see table 4). The lowest lying isomer \({}^{255m1}\)No was attributed to the Nilsson level 11/2\({}^{-}\)[725]. It represents the so far 'missing' link in the systematics of that state, the energies of which decrease with increasing proton number, forming short-lived isomers (T\({}_{1/2}\)\(\leq\) 100 \(\mu\)s) up to Z = 102 [83, 90], a low-lying isomer at E\({}^{*}\) = (70-74) keV [55, 91, 3] in \({}^{257}\)Rf, and finally becoming the ground-state in \({}^{259}\)Sg [92]. It belongs to the type of single particle K isomers (see sect. 5). Another study of isomeric states in \({}^{255}\)No was performed by K. Kessaci and coworkers [93] at the separator SHELS at the FLNR, JINR, Dubna (Russia). A significantly higher number of decays, including also a couple of prominent \(\gamma\) lines, was registered, allowing a somewhat more detailed decay scheme to be established. Kessaci identified four isomeric states; the measured properties are summarized and compared with the data from Brons [90] in table 7. At first glance the data may seem not in line with each other, but they have to be compared with respect to their limited quality. Under this aspect one can state that the half-lives agree sufficiently, indicating that the same isomers were observed. The same holds for the excitation energies.
Spin assignments in [90] are quite uncertain for \({}^{255m2,255m3}\)No, and also no parities are given. Also \({}^{255m4}\)No was not reported in [90]. In [93] also configurations for \({}^{255m2,255m3}\)No were suggested, namely \(\pi\)9/2\({}^{+}\)[624]\(\uparrow\)\(\otimes\)\(\pi\)1/2\({}^{-}\)[521]\(\downarrow\)\(\otimes\)\(\nu\)11/2\({}^{-}\)[725]\(\uparrow\), resulting in I\({}^{\pi}\) = 21/2\({}^{+}\), for \({}^{255m2}\)No and \(\pi\)9/2\({}^{+}\)[624]\(\uparrow\)\(\otimes\)\(\pi\)7/2\({}^{-}\)[514]\(\downarrow\)\(\otimes\)\(\nu\)11/2\({}^{-}\)[725]\(\uparrow\), resulting in I\({}^{\pi}\) = 27/2\({}^{+}\), for \({}^{255m3}\)No.

**K isomer in \({}^{255}\)Lr**

The first report on the discovery of a multiparticle K isomer in \({}^{255}\)Lr was presented by K. Hauschild et al. [94] from an experiment performed at the VASSILISSA separator at JINR - FLNR, Dubna. Although the data were of limited quality, a couple of \(\gamma\) transitions from the decay of the K isomer were reported. A half-life T\({}_{1/2}\) = 1.4\(\pm\)0.1 ms was measured; the excitation energy was estimated as E\({}^{*}\)\(>\) 720 keV. Data of much higher quality were obtained in an experiment performed at SHIP at GSI, Darmstadt [95]. The spectrum of \(\gamma\) lines from the SHIP experiment is shown in fig. 12. Also \(\gamma\) - \(\gamma\) coincidences could be established, specifically (\(\gamma_{1}\)(587.5 keV) - \(\gamma_{2}\)(109.1, 243.1, 300.0 keV)) and (\(\gamma_{1}\)(493.1 keV) - \(\gamma_{2}\)(243.1, 300.0 keV)) coincidences. A half-life of T\({}_{1/2}\) = 1.81\(\pm\)0.02 ms was measured; the excitation energy was estimated as E\({}^{*}\)\(>\) 1.6 MeV, the spin as I \(>\) 21/2. The decay of the isomeric state was predominantly followed by \(\alpha\) decays of E\({}_{\alpha}\) = 8467 keV, attributed to the 7/2\({}^{-}\)[514]\(\downarrow\) isomeric state in \({}^{255}\)Lr with a half-life of T\({}_{1/2}\) = 2.5 s [96], while in the direct production \({}^{209}\)Bi(\({}^{48}\)Ca,2n)\({}^{255}\)Lr the decay of the ground-state (1/2\({}^{-}\)[521]\(\downarrow\), E\({}_{\alpha}\) = 8373 keV, T\({}_{1/2}\) = 31 s [96, 95]) dominates. As it can be expected that the decay of a high-spin K isomer preferably populates a state with a high spin, this finding was seen as confirmation of the assignment of the T\({}_{1/2}\) = 2.5 s activity to the high-spin Nilsson level (7/2\({}^{-}\)[514]). The \(\alpha\) decays of E\({}_{\alpha}\) = 8373 keV were interpreted as decays of the ground-state populated by the decay of the isomer via internal transitions, thus corroborating the assignment of the low-spin Nilsson level (1/2\({}^{-}\)[521]) to the ground-state and of the high-spin Nilsson state (7/2\({}^{-}\)[514]) to the isomer, as done in [96]. The excitation energy of the 2.5 s state is given in [96] as E\({}^{*}\) = 37 keV, which was confirmed by the direct measurement at SHIPTRAP, which resulted in E\({}^{*}\) = 32.2\(\pm\)2.5 keV [97]. Energy differences between the 1/2\({}^{-}\)[521] bandhead and the 3/2\({}^{-}\) member of its rotational band are \(\Delta\)E = 19.9 keV in \({}^{251}\)Bk and \(\Delta\)E = 18.45 keV in \({}^{249}\)Bk [21]. The transition of lowest multipolarity between the isomer and the ground-state rotational band is thus the E2 transition 7/2\({}^{-}\)\(\to\) 3/2\({}^{-}\), for which even for an energy difference of \(\Delta\)E = (10-15) keV a half-life \(<<\) 1 s can be expected [21].
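As a rough plausibility check of this '\(<<\) 1 s' estimate, the following minimal sketch evaluates the standard Weisskopf single-particle rate for an E2 transition; this textbook formula is our assumed stand-in and not necessarily the procedure behind [21], and internal conversion, which dominates at such low energies, is neglected (it would only shorten the half-life further):

```python
import math

def weisskopf_e2_halflife(E_MeV, A):
    """Weisskopf single-particle half-life estimate (s) for an E2 gamma transition.

    lambda(E2) ~ 7.28e7 * A^(4/3) * E^5 per second (E in MeV); internal
    conversion is neglected here, so the true total half-life is shorter.
    """
    lam = 7.28e7 * A ** (4.0 / 3.0) * E_MeV ** 5
    return math.log(2) / lam

for E_keV in (10, 15):
    print(E_keV, "keV ->", weisskopf_e2_halflife(E_keV / 1000.0, 255), "s")
# ~6e-2 s at 10 keV and ~8e-3 s at 15 keV: indeed << 1 s even before conversion
```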
Thus the E2 transition is strongly K hindered (\(\Delta\)K = 3), and the 7/2\({}^{-}\) isomer thus represents the type of single particle K isomer discussed in sect. 5; we here observe the case of the decay of a multi-particle K isomer into a single particle K isomer.

Figure 12: Gamma spectrum from the decay of the K isomeric state \({}^{255m2}\)Lr at SHIP [78, 95]. Insert: time distribution \(\Delta\)t((CE,\(\gamma\)),ER) of events attributed to the decay of \({}^{255m2}\)Lr.

A further study of the K isomer in \({}^{255}\)Lr was performed by H.B. Jeppesen et al. [98], who obtained data of similar quality to those in [95]. They obtained a half-life of T\({}_{1/2}\) = 1.70\(\pm\)0.02 ms. Also possible configurations were discussed and a decay scheme was proposed. It is shown in fig. 13. The 1.7 ms K isomer is attributed to an I\({}^{\pi}\) = 25/2\({}^{+}\) state with a possible configuration \(\pi\)7/2\({}^{-}\)[514]\(\downarrow\)\(\otimes\)\(\nu\)11/2\({}^{-}\)[725]\(\uparrow\)\(\otimes\)\(\nu\)7/2\({}^{+}\)[624]\(\downarrow\). It decays via two paths, one feeding, by the 244-keV \(\gamma\) transition, an I\({}^{\pi}\) = 23/2\({}^{-}\) level of a band with an unassigned bandhead, the other one feeding, by the 301-keV transition, the I\({}^{\pi}\) = 21/2\({}^{+}\) level of a band with the bandhead I\({}^{\pi}\) = 15/2\({}^{+}\) (possible configuration \(\pi\)1/2\({}^{-}\)[521]\(\downarrow\)\(\otimes\)\(\pi\)7/2\({}^{-}\)[514]\(\downarrow\)\(\otimes\)\(\pi\)9/2\({}^{+}\)[624]\(\uparrow\)). This state was assumed to be isomeric with a half-life of (10-100) ns. Both bands are connected by an (unobserved) 28-keV transition 19/2\({}^{-}\)\(\to\) 17/2\({}^{+}\). The decay of the assumed I\({}^{\pi}\) = 15/2\({}^{+}\) isomeric state is interpreted to populate a band of positive parity, built up on the 9/2\({}^{+}\)[624]\(\uparrow\) Nilsson level. As shown in [95], the decay of the 1.8 ms K isomer populates the 7/2\({}^{-}\)[514] isomeric state. Thus the decay scheme suggested in [98] requires an E1 transition 9/2\({}^{+}\)[624] \(\to\) 7/2\({}^{-}\)[514]. As E1 transitions are only weakly converted, such a line must be quite strong compared to the M1 transitions between the members of the rotational band built up on the 9/2\({}^{+}\)[624] level. Such a line, however, is not observed at E \(>\) 30 keV (footnote 5). Alternatively one could assume the decay into the rotational band built up on the 7/2\({}^{-}\)[514] Nilsson level (blue numbers in fig. 13). Leaving the spin assignment unchanged, in this scenario the 9/2\({}^{-}\)\(\to\) 7/2\({}^{-}\) transition, the energy of which can be assumed to be around 50 keV, would be unobserved. This, however, would not be surprising, as already the 11/2\({}^{-}\)\(\to\) 9/2\({}^{-}\) transition is quite weak. Since the conversion coefficients for M1 transitions rise from \(\epsilon\) = 41 (at 70 keV) to \(\epsilon\) = 110 (at 50 keV) [87], in this scenario the 9/2\({}^{-}\)\(\to\) 7/2\({}^{-}\) transition would be simply too weak to be observed.

Footnote 5: at E \(<\) 30 keV it might be hidden by the lines attributed to L X-rays of lawrencium.

**K isomer in \({}^{253}\)Rf**

The first claim for the discovery of \({}^{253}\)Rf (at that time denoted as \({}^{253}\)Ku) came from G.N. Flerov [99], who assigned a 1.8 s SF activity observed in irradiations of \({}^{206}\)Pb with \({}^{50}\)Ti to the 3n deexcitation channel of that reaction. More thorough investigations by F.P. Hessberger et al.
[55] at the velocity filter SHIP at GSI, Darmstadt, using a more suited experimental set-up, disproved the results of G.N. Flerov and identified a T\({}_{1/2}\) = 48\({}^{+17}_{-10}\)\(\mu\)s SF activity as \({}^{253}\)Rf (footnote 4). In this experiment also a second SF activity of T\({}_{1/2}\) = 11\({}^{+6}_{-3}\) ms was observed. As only a small number of decays was observed and this half-life was quite similar to that of \({}^{256}\)Rf (T\({}_{1/2}\) = 6.2\(\pm\)0.2 ms [55]), it was not ruled out that it could represent decays of \({}^{256}\)Rf, produced in reactions with \({}^{207}\)Pb impurities in the target material. So it was not assigned to \({}^{253}\)Rf.

Footnote 4: The 1.8 s SF activity erroneously attributed to \({}^{253}\)Rf by Flerov probably was SF of \({}^{255}\)Rf (T\({}_{1/2}\) = 1.68 \(\pm\) 0.09 s [30]).

Figure 13: Decay scheme of \({}^{255m2}\)Lr as suggested by Jeppesen et al. [98], and alternative level assignments (blue numbers).

The data of [55] were recently confirmed by J. Khuyagbaatar et al. [100] and by A. Lopez-Martens et al. [101]. Based on a higher number of observed decays, the longer-lived SF activity was now definitely assigned to an isomeric state in \({}^{253}\)Rf. The results are compared in table 8. In addition, A. Lopez-Martens et al. observed one more isomeric state with a half-life of T\({}_{1/2}\) = 0.66\({}^{+0.40}_{-0.18}\) ms, decaying into the short-lived state in \({}^{253}\)Rf. As the energy of the CE attributed to the decay of that state reached up to 1.02 MeV, that energy can be regarded as the lower limit of the excitation energy of the high lying isomer, very likely a K isomer. The latter observation may have an interesting consequence. As low lying states, on the basis of systematics in the N = 149 isotones and theoretical predictions [81], the Nilsson levels 1/2\({}^{+}\)[631]\(\downarrow\) and 7/2\({}^{+}\)[624]\(\downarrow\) are expected. As the decay of the K isomer, probably having a high spin, populates the state of 52.8 \(\mu\)s, it was concluded that this is the high-spin (7/2\({}^{+}\)) state, while that of 9.9 ms half-life is the low spin state (1/2\({}^{+}\)), which means that, contrary to the cases known so far (see [102]), the high-spin state has a lower fission hindrance than the low-spin state. Tentatively the 1/2\({}^{+}\)[631]\(\downarrow\) Nilsson level was assigned to the ground-state of \({}^{253}\)Rf, but it was not excluded that also the 7/2\({}^{+}\)[624] level could be the ground state. If the first scenario could be verified, we here observe the same interesting feature as in \({}^{255}\)Lr (see above), namely the decay of a 2-quasiparticle K isomer into a single particle K isomer.

**K isomer in \({}^{255}\)Rf**

A K isomeric state was searched for by analyzing ER - (CE,CE-\(\gamma\)) - \(\alpha\),SF (\({}^{255}\)Rf) correlations. The isotope was produced in an experiment performed at the velocity filter SHIP at GSI, Darmstadt using the reaction \({}^{207}\)Pb(\({}^{50}\)Ti,2n)\({}^{255}\)Rf. A low lying isomeric state with a half-life of T\({}_{1/2}\) = (50\(\pm\)17) \(\mu\)s had already been known in that isotope [92]. It was attributed to the 5/2\({}^{+}\)[622]\(\uparrow\) Nilsson level. Such isomers are common in N = 151 isotones (see e.g. [92]). They are known to decay via strongly K-converted M2 transitions (with some E3 contribution) into the 9/2\({}^{-}\)[734]\(\uparrow\) ground state.
As no K X-rays were observed, it was concluded that its excitation energy is below the K binding energy of rutherfordium (E\({}_{B}\) = 156.288 keV [21]), and on the basis of the measured CE energies it was settled at E\({}^{*}\) \(\approx\) 135 keV [92]. Therefore possible contributions of this isomer had to be taken into account in the analysis of the ER - (CE,CE-\(\gamma\)) - \(\alpha\),SF correlations. The analysis was complicated by the fact that with analogue electronics, as used in this case, the signal of the CE sits on the 'tail' of the ER pulse, distorting the measured energy of the CE (footnote 1). The analysis resulted in the identification of two activities [103]: one with a half-life of T\({}_{1/2}\) = 15\({}^{+6}_{-4}\)\(\mu\)s for CE with E \(>\) 350 keV and one of T\({}_{1/2}\) = (35\(\pm\)5) \(\mu\)s for CE of E \(<\) 350 keV. As discussed above, the latter activity might contain contributions of the 5/2\({}^{+}\) isomer. But in addition 18 events were found in coincidence with photons, however without a clear signature of a distinct \(\gamma\) line. As no \(\gamma\) events in coincidence with CE had been observed for the decay of the 5/2\({}^{+}\) isomer, these events were attributed to the decay of a different isomer. A half-life of T\({}_{1/2}\) = 38\({}^{+12}_{-7}\)\(\mu\)s was obtained [103]. On the basis of the measured CE and \(\gamma\) energies, the T\({}_{1/2}\) = 15\({}^{+6}_{-4}\)\(\mu\)s isomer was settled as the lower lying one at E\({}^{*}\) = (0.9-1.2) MeV with K \(>\) 17/2; the T\({}_{1/2}\) = 38\({}^{+12}_{-7}\)\(\mu\)s one was settled at E\({}^{*}\) = (1.15-1.45) MeV. Although no decay scheme could be established, a possible configuration of the lower lying isomer was discussed: a K\({}^{\pi}\) = 19/2\({}^{+}\) state with the configuration \(\pi\)1/2\({}^{-}\)[521]\(\downarrow\)\(\otimes\)\(\pi\)9/2\({}^{+}\)[624]\(\uparrow\)\(\otimes\)\(\nu\)9/2\({}^{-}\)[734]\(\uparrow\) was considered. The partial level scheme of \({}^{255}\)Rf obtained on the basis of the study of P. Mosat et al. [103] and the previous decay study of \({}^{259}\)Sg by S. Antalic et al. [92] is shown in fig. 14a. The data were confirmed in an experiment performed at the FLNR - JINR, Dubna by R. Chakma et al. [104], who, owing to a significantly higher number of registered decays, also clearly observed several \(\gamma\) lines, which allowed the level and decay scheme shown in fig. 14b to be established.

Footnote 1: The 5/2\({}^{+}\)[622] isomer was identified from \(\alpha\)(\({}^{259}\)Sg) - CE - \(\alpha\),SF(\({}^{255}\)Rf) correlations. Due to the much lower \(\alpha\) particle pulses this problem was not evident in that case.

Figure 14: Partial level scheme of \({}^{255}\)Rf and decay schemes of \({}^{255m1,255m2,255m3}\)Rf as suggested by S. Antalic et al. [92], P. Mosat et al. [103] (a) and R. Chakma et al. [104] (simplified).

Figure 15: Partial level scheme of \({}^{257}\)Rf and decay scheme of \({}^{257m2}\)Rf based on the results of [88, 3, 19]. Only clearly observed \(\gamma\) transitions are given.

For the lower lying isomer \({}^{255m2}\)Rf a half-life of T\({}_{1/2}\) = 29\(\pm\)7 \(\mu\)s was given and the excitation energy was settled at E\({}^{*}\) = 1103 keV. Spin and configuration, K\({}^{\pi}\) = 19/2\({}^{+}\) and \(\pi\)1/2\({}^{-}\)[521]\(\downarrow\)\(\otimes\)\(\pi\)9/2\({}^{+}\)[624]\(\uparrow\)\(\otimes\)\(\nu 9/2^{-}\)[734]\(\uparrow\), were the same as in [103].
For the higher lying isomer a half-life of T\({}_{1/2}\) = 49\({}^{+13}_{-10}\)\(\mu\)s and an excitation energy of E\({}^{*}\) = 1238 keV were given. Spin and configuration of I\({}^{\pi}\) = 25/2\({}^{+}\) and \(\nu 9/2^{-}\)[734]\(\uparrow\)\(\otimes\)\(\pi 7/2^{-}\)[514]\(\downarrow\)\(\otimes\)\(\pi 9/2^{+}\)[624]\(\uparrow\) were suggested. To place the observed \(\gamma\) transitions into the level scheme of \({}^{255}\)Rf, calculations of the bands built up on the 9/2\({}^{-}\)[734] ground-state and on the 11/2\({}^{-}\)[725] Nilsson level, placed at E\({}^{*}\) = 632 keV, were performed (see [104] for details). The bands shown in fig. 14b represent the results of these calculations.

**K isomer in \({}^{257}\)Rf**

The existence of a K isomer in \({}^{257}\)Rf with a half-life of T\({}_{1/2}\) = 109\(\pm\)13 \(\mu\)s was first mentioned by H.B. Jeppesen et al. [65] in a study, performed at the BGS separator at LBNL, Berkeley, mainly devoted to the search for a K isomer in \({}^{256}\)Rf (see sect. 6.4). But no further details were given. A short time later, results from an experiment performed at ANL, Argonne were reported by J. Qian et al. [105]. They observed in bombardments of \({}^{208}\)Pb with \({}^{50}\)Ti 39 events of the type ER - CE - \(\alpha(^{257}\)Rf) and measured a half-life of T\({}_{1/2}\) = 160\({}^{+42}_{-31}\)\(\mu\)s. Based on calculations, possible spins and configurations were considered. They favoured a 3-quasiparticle state with K\({}^{\pi}\) = 27/2\({}^{+}\) (\(\pi 9/2^{+}\)[624]\(\uparrow\)\(\otimes\)\(\pi 7/2^{-}\)[514]\(\downarrow\)\(\otimes\)\(\nu 11/2^{-}\)[725]\(\downarrow\)) but did not exclude a 3-quasiparticle state with K\({}^{\pi}\) = 21/2\({}^{+}\) (\(\pi 9/2^{+}\)[624]\(\uparrow\)\(\otimes\)\(\pi 1/2^{-}\)[521]\(\downarrow\)\(\otimes\)\(\nu 11/2^{-}\)[725]\(\downarrow\)). A further study, including also \(\gamma\) ray measurements, was performed by J.G. Berryman et al. [66] at the BGS at LBNL, Berkeley, who also observed some \(\gamma\) rays, specifically two 'high' energy ones of E = 446 and 585 keV. They measured a half-life of T\({}_{1/2}\) = 134.9\(\pm\)7.7 \(\mu\)s. In [66] it was shown that the decay of the isomeric state populates the rotational band built up on the 11/2\({}^{-}\)[725] Nilsson level in \({}^{257}\)Rf, as already supposed in [105]. Based on the Löbner systematics [14], possible K differences of \(\Delta\)K = 4, 5, 6 were estimated. With this finding, K\({}^{\pi}\) = 21/2\({}^{+}\) (\(\pi 9/2^{+}\)[624]\(\uparrow\)\(\otimes\)\(\pi 1/2^{-}\)[521]\(\downarrow\)\(\otimes\)\(\nu 11/2^{-}\)[725]\(\uparrow\)) or K\({}^{\pi}\) = 23/2\({}^{-}\) (\(\pi 7/2^{-}\)[514]\(\downarrow\)\(\otimes\)\(\pi 5/2^{-}\)[512]\(\downarrow\)\(\otimes\)\(\nu 11/2^{-}\)[725]\(\uparrow\)) were considered as possible isomeric configurations\({}^{**}\).

Footnote \({}^{**}\): no parity assignment was given in [66].

The \(\gamma\) lines of E = 446 keV and E = 586 keV were assigned as decays from the isomeric state into the I\({}^{\pi}\) = 23/2\({}^{-}\) and I\({}^{\pi}\) = 21/2\({}^{-}\) states of the rotational band built up on the 11/2\({}^{-}\)[725] Nilsson level. Data of higher quality were obtained in a follow-up experiment at the BGS at LBNL, Berkeley by J. Rissanen et al. [88]. The results of [66] were confirmed, but due to better statistics an enhanced decay scheme could be established. It is shown in fig. 15; the energies are, however, modified with respect to the recently more precisely measured excitation energy of the 11/2\({}^{-}\)[725] level in \({}^{257}\)Rf (E\({}^{*}\) = 74 keV) [3].
Spin and parity of the isomeric level were established as K\({}^{\pi}\) = 21/2\({}^{+}\), and a half-life of T\({}_{1/2}\) = 106\(\pm\)6 \(\mu\)s was given. The excitation energy is settled at E\({}^{*}\) = 1085 keV.

## 7 Discussion

### 7.1 General considerations on the occurrence and the structure of K isomers in even-even nuclei

Generally speaking, for the occurrence of K isomers it is necessary that below states of 'high' K values only states of sufficiently low K values exist, so that the decay of the former is delayed by strong K hindrance. As according to the Gallagher rule (sect. 2.5) states of different spin projection tend to maximize their K values, i.e. states of K = \(\Omega_{1}\) + \(\Omega_{2}\) have the lowest excitation energies, such states can be regarded as ideal candidates for K isomers. Indeed, as seen in table 2, all 2-quasiparticle K isomers in even-even nuclei where the configuration is laid down with some certainty are configurations of spin-up (\(\uparrow\)) and spin-down (\(\downarrow\)) states. However, things may not be so straightforward. In cases of coupling states of low and high spins (e.g. \(\nu 1/2^{+}\)[620]\(\uparrow\)\(\otimes\)\(\nu 11/2^{-}\)[725]\(\uparrow\), with K = \(|\Omega_{1}\) - \(\Omega_{2}|\) = 5) also states with K = \(|\Omega_{1}\) - \(\Omega_{2}|\) may result in a K isomer, if no states of sufficiently low K difference lie between them and the K = 0 ground state. Calculated excitation energies of K = \(\Omega_{1}\) + \(\Omega_{2}\) and K = \(|\Omega_{1}\) - \(\Omega_{2}|\) states (in the following denoted as 'K+' and 'K-' for better presentation) in the region of the heaviest nuclei (Z = 104 - 110, N = 160 - 168) have been investigated by V. Prassa et al. [106]. In all cases the K+ states are located below the K- states, irrespective of the spin projections \(\uparrow\uparrow\), \(\downarrow\downarrow\) or \(\uparrow\downarrow\). The differences in excitation energy are, however, quite small. For cases with parallel spin projections (\(\uparrow\uparrow\), \(\downarrow\downarrow\)) one obtains values of \(\Delta\)E\({}^{*}\)(K+,K-) \(\leq\) 0.020 MeV, with a mean value \(\Delta\)E\({}^{*}\)(K+,K-) = 0.018\(\pm\)0.005 MeV. In the case of anti-parallel spin projections (\(\uparrow\downarrow\)) the straggling of the energy differences is larger; one obtains values of \(\Delta\)E\({}^{*}\)(K+,K-) = (0-0.13) MeV, resulting in a mean value \(\Delta\)E\({}^{*}\)(K+,K-) = 0.083\(\pm\)0.041 MeV. As seen in fig. 16, the energy differences depend strongly on the K difference \(\Delta\)K; a trend of increasing energy differences with increasing \(\Delta\)K values is evident. But fig. 16 also shows another trend: at the same \(\Delta\)K values the energy differences are larger for states with negative parity (full squares) than for states of positive parity (full dots).

### 7.2 Structure of K isomers in even-even nuclei - theoretical predictions

The identification of about two dozen new K isomers in the heavy actinide and transactinide region (Z \(\geq\) 100) within the last two decades asks for an investigation of similarities and systematics in their structure and decay as well as for a comparison with theoretical calculations as far as available.
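Since the K\({}^{\pi}\) bookkeeping of such couplings recurs throughout the following subsections, a minimal illustrative sketch may be helpful (Python; the function and its structure are ours, not taken from any cited reference): K = \(\Omega_{1}+\Omega_{2}\), K = \(|\Omega_{1}-\Omega_{2}|\) and the parity follow directly from the two coupled Nilsson orbitals.

```python
from fractions import Fraction as F

def couple(omega1, parity1, omega2, parity2):
    """K values and parity of a 2-quasiparticle coupling of two Nilsson orbitals.

    omega1, omega2: Omega quantum numbers (as Fractions); parity1, parity2: +1 or -1.
    Returns (K = Omega1 + Omega2, K = |Omega1 - Omega2|, product parity).
    """
    return omega1 + omega2, abs(omega1 - omega2), parity1 * parity2

# nu9/2-[734] (x) nu7/2+[624]: the configuration agreed upon for the
# K^pi = 8- isomers of the N = 150 isotones discussed in sect. 7.21.
kp, km, par = couple(F(9, 2), -1, F(7, 2), +1)
print(f"K = {kp} or {km}, parity {'-' if par < 0 else '+'}")   # K = 8 or 1, parity -

# nu7/2+[613] (x) nu11/2-[725] and nu9/2+[615] (x) nu11/2-[725]:
# the K^pi = 9- and 10- candidates considered for 270mDs (sect. 7.23).
for o1, p1, o2, p2 in [(F(7, 2), +1, F(11, 2), -1), (F(9, 2), +1, F(11, 2), -1)]:
    kp, km, par = couple(o1, p1, o2, p2)
    print(f"K = {kp} or {km}, parity {'-' if par < 0 else '+'}")
```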
As the 2-quasiparticle states in even-even nuclei are due to the breaking of a pair of nucleons and the excitation of the two nucleons into different levels, one may expect that they depend on the single particle structure (Nilsson levels) and thus that, in the case of 2-quasi-neutron states, the K isomers exhibit similarities in their structure along the isotone lines, similar to even-Z odd-mass nuclei, which show similarities in their nuclear structure along the isotone lines (N = const.). Indeed such a behavior is indicated by the similar decay patterns of the K isomers in the N = 150 isotones, as shown in fig. 8. But unfortunately detailed decay data are scarce for most of the cases. For odd-mass nuclei the situation is more difficult, as the K isomers are produced by coupling of a 2-quasiparticle state to a single particle state, so more possibilities are available. Due to these difficulties and the lack of theoretical calculations we will restrict the discussion to selected cases in even-even nuclei. Concerning theoretical results, we will confine ourselves to presenting them and comparing them with the experimental results, and omit a detailed discussion of the underlying theoretical framework, as this would go beyond the scope of this paper.

### 7.21 K isomers in N = 150 isotones

Detailed information that allows a systematic examination of the structure and decay of K isomers is available for the even-Z N = 150 isotones. The similar decay schemes of \({}^{246m}\)Cm, \({}^{250m}\)Fm and \({}^{252m}\)No are shown in fig. 8, where also differences in the intensities of the relevant transitions are pointed out. For the sake of the further discussion we present simplified decay schemes, showing only the 'most relevant' transitions, for \({}^{244m}\)Pu (for which a less detailed decay scheme was published), \({}^{246m}\)Cm, \({}^{250m}\)Fm and \({}^{252m}\)No in fig. 17. For all four cases the decay of the isomers occurs via two paths: a) decay of the isomer into the I\({}^{\pi}\) = 8\({}^{+}\) level of the ground state rotational band. It was, however, already remarked that in the case of \({}^{252m}\)No this transition is quite weak, contrary to the situation for \({}^{244m}\)Pu, \({}^{246m}\)Cm and \({}^{250m}\)Fm; b) decay of the isomer into the I\({}^{\pi}\) = 7\({}^{-}\) level of an octupole vibrational band with the bandhead I\({}^{\pi}\) = 2\({}^{-}\). Strong decay intensities are observed into the I\({}^{\pi}\) = 8\({}^{+}\), 6\({}^{+}\) members of the ground-state rotational band, but also strong intraband transitions followed by decays from lower members of the octupole band into lower members of the ground state rotational band are reported, such as the transitions 5\({}^{-}\)\(\rightarrow\) 4\({}^{+}\) and 2\({}^{-}\)\(\rightarrow\) 2\({}^{+}\) in \({}^{250m}\)Fm [50] and \({}^{252m}\)No [52]. All these findings hint at the same structure of the isomers, and presently there is common agreement on a 2-quasi-neutron state \(\nu\)9/2\({}^{-}\)[734]\(\uparrow\)\(\otimes\)\(\nu\)7/2\({}^{+}\)[624]\(\downarrow\) resulting in K\({}^{\pi}\) = 8\({}^{-}\).

Figure 16: Energy differences between states of K = \(\Omega_{1}+\Omega_{2}\) and K = \(|\Omega_{1}-\Omega_{2}|\) in 2-quasiparticle configurations with anti-parallel spin projections according to the calculations in [106]; full squares (black): states of negative parity; full dots (red): states of positive parity.

Figure 17: Simplified decay schemes of K isomers in the N = 150 isotones \({}^{244}\)Pu, \({}^{246}\)Cm, \({}^{250}\)Fm, and \({}^{252}\)No.
Figure 18: Comparison of predicted [107] excitation energies of 2-quasi-neutron states in N = 150 isotones with the experimental values for the K\({}^{\pi}\) = 8\({}^{-}\) isomers.

Figure 19: Comparison of predicted [107] excitation energies of 2-quasi-proton states in N = 150 isotones with the experimental values for the K\({}^{\pi}\) = 8\({}^{-}\) isomers.

Figure 20: Comparison of predicted excitation energies for the K\({}^{\pi}\) = 8\({}^{-}\) isomers in N = 150 isotones with the experimental values.

Calculations of low lying 2-quasiparticle states in even-Z N = 150 isotones have been performed by a couple of authors [107, 112, 111, 108, 73]. In fig. 18 the results from J.-P. Delaroche et al. [107] for 2-quasi-neutron states are compared with the experimental data for the N = 150 isotones in the range Z = 94 - 102. In all cases the K\({}^{\pi}\) = 8\({}^{-}\) state is the lowest lying one. Other states are predicted at E\({}^{*}\) \(>\) 1500 keV. The K\({}^{\pi}\) = 8\({}^{-}\) states, however, are predicted at somewhat lower excitation energies, with \(\Delta\)E(exp,theo) = 115-190 keV, except for \({}^{248}\)Cf, where the difference is \(\Delta\)E(exp,theo) = 250 keV. But here one should keep in mind that the excitation energy of the isomer is not well established (see sect. 6.3). In fig. 19 we compare the 2-quasi-proton states predicted by J.-P. Delaroche et al. [107] below E\({}^{*}\) = 2 MeV with the experimental results. In general the predicted 2-quasi-proton states have different configurations, so no 'common trend' is observed as in the case of the 2-quasi-neutron states. On the other hand, this behavior is not unexpected: the isotones differ in the number of protons, which leads to different proton single particle (Nilsson) levels at low excitation energies, whereas the neutron single particle levels, and hence the low-lying structure, are known to be similar along an isotone line. In this way the similar structure of the K isomers in the N = 150 isotones supports their interpretation as 2-quasi-neutron states. Calculations for K isomers in N = 150 isotones (but in general not for all nuclei) were also performed by F.R. Xu et al. [73], G.G. Adamian et al. [112], H.L. Liu et al. [111], and N. Minkov et al. [108]. F.R. Xu et al. consider as a possible configuration of the K isomer in \({}^{250}\)Fm a 2-quasi-proton state of I\({}^{\pi}\) = 7\({}^{-}\) (configuration \(\pi\) 7/2\({}^{+}\)[633]\(\uparrow\)\(\otimes\)\(\pi\) 7/2\({}^{-}\)[514]\(\downarrow\)) at a calculated excitation energy of E\({}^{*}\) = 1.01 MeV. (Note: the paper of F.R. Xu et al. was published before \({}^{252m}\)No was discovered and before detailed spectroscopic data for \({}^{250m}\)Fm were published.) In \({}^{252m}\)No the I\({}^{\pi}\) = 7\({}^{-}\) state was settled at a considerably higher excitation energy of E\({}^{*}\)\(\approx\) 1.5 MeV. F.R. Xu et al., however, remark that their calculations show that I\({}^{\pi}\) = 8\({}^{-}\) 2-quasi-neutron states (configuration \(\nu\) 9/2\({}^{-}\)[734]\(\uparrow\)\(\otimes\)\(\nu\) 7/2\({}^{+}\)[613]\(\downarrow\)) exist systematically in N = 150 isotones at excitation energies around 1 MeV; but only for \({}^{250}\)Fm a definite value of E\({}^{*}\) = 0.97 MeV is given. In fig. 20 the experimental excitation energies of the K isomers in the N = 150 isotones are compared with the results of the different calculations for K\({}^{\pi}\) = 8\({}^{-}\) 2-quasi-neutron states.
The calculations of J.-P. Delaroche et al. reproduce the trend of quite stable excitation energies quite well, but in general deliver values that are about 115 - 190 keV too low. The calculations of N. Minkov et al. deliver quite stable excitation energies for \({}^{244}\)Pu, \({}^{246}\)Cm and \({}^{248}\)Cf and reproduce the experimental values very well, within \(\Delta\)E \(\approx\) 20 keV, but for the isotones with Z \(>\) 98 the calculations result in steeply increasing excitation energies and deliver values too high by \(\approx\)175 keV for \({}^{250m}\)Fm and \(\approx\)250 keV for \({}^{252m}\)No. Quite satisfying agreement between the experimental and calculated values is obtained by G.G. Adamian et al. F.R. Xu et al. and H.L. Liu et al. obtain too low excitation energies for the cases they consider. The calculations of H.L. Liu et al. may hint at a change of the structure of the K isomers going from Z = 102 to Z = 104. The drastic change in the half-lives from \({}^{252m}\)No to \({}^{254m}\)Rf has been discussed in sect. 6.3. In [54] the authors gave as a possible reason a decrease of the hindrance of the M1 decay branch from the K\({}^{\pi}\) = 8\({}^{-}\) isomer into the I\({}^{\pi}\) = 7\({}^{-}\) state of the octupole band (indeed, in \({}^{252m}\)No we observe a strong increase of the M1 transition K\({}^{\pi}\) = 8\({}^{-}\)\(\rightarrow\) I\({}^{\pi}\) = 7\({}^{-}\) (or I\({}^{\pi}\) = 6\({}^{-}\)) relative to the E1 transition K\({}^{\pi}\) = 8\({}^{-}\)\(\rightarrow\) I\({}^{\pi}\) = 8\({}^{+}\) into the ground state rotational band, compared to the lighter isotones) and/or that the K\({}^{\pi}\) = 8\({}^{-}\) isomer and the I\({}^{\pi}\) = 8\({}^{-}\) state might be very close in energy, leading to an accidental configuration mixing resulting in a shorter half-life. We want to remind here that an indication for this may already be the drastic decrease of the isomeric half-life from \({}^{250m}\)Fm to \({}^{252m}\)No and also the low intensity of the E1 transition 8\({}^{-}\)\(\rightarrow\) 8\({}^{+}\) (ground-state rotational band) compared to the M1 transition 8\({}^{-}\)\(\rightarrow\) 7\({}^{-}\) (octupole vibrational band). The calculations of H.L. Liu et al. result in a clear separation of the K\({}^{\pi}\) = 8\({}^{-}\) isomeric state (there denoted as \(\nu^{2}\)8\({}_{1}^{-}\)) from other states, \(\pi^{2}\)7\({}^{-}\), \(\nu^{2}\)6\({}^{+}\) in \({}^{250m}\)Fm or \(\nu^{2}\)6\({}^{+}\), \(\pi^{2}\)5\({}^{-}\) in \({}^{252m}\)No. In contrast, in \({}^{254m}\)Rf the K\({}^{\pi}\) = 8\({}^{-}\) state lies close in energy to a K\({}^{\pi}\) = 5\({}^{-}\) 2-quasi-proton state (configuration \(\pi\) 1/2\({}^{-}\)[521]\(\downarrow\)\(\otimes\)\(\pi\) 9/2\({}^{+}\)[624]\(\uparrow\)) and a K\({}^{\pi}\) = 8\({}^{-}\) 2-quasi-proton state (configuration \(\pi\) 7/2\({}^{-}\)[514]\(\downarrow\)\(\otimes\)\(\pi\) 9/2\({}^{+}\)[624]\(\uparrow\)). Thus a change of the configuration of the isomeric state from \({}^{252m}\)No to \({}^{254m}\)Rf could also be the reason for the drastic change in the half-lives.

### 7.22 K isomers in N = 152 isotones

While for the 2-quasiparticle isomers in the N = 150 isotones \({}^{244}\)Pu, \({}^{246}\)Cm, \({}^{250}\)Fm, \({}^{252}\)No, and \({}^{254}\)Rf the isomeric state was attributed to the same configuration, the situation is completely different for the 2-quasiparticle isomers in the N = 152 isotones \({}^{248}\)Cm, \({}^{254}\)No and \({}^{256}\)Rf. The situation is shown in fig. 21.
For \({}^{248}\)Cm and \({}^{254}\)No one 2-quasiparticle isomer each was observed, with spin and parity assigned as K\({}^{\pi}\) = 8\({}^{-}\); while in \({}^{254m}\)No the 2-quasi-neutron configuration \(\nu\)7/2\({}^{+}\)[624]\(\downarrow\)\(\otimes\)\(\nu\) 9/2\({}^{-}\)[734]\(\uparrow\) is preferred, for \({}^{248}\)Cm also the 2-quasi-neutron state \(\nu\)7/2\({}^{+}\)[613]\(\uparrow\)\(\otimes\)\(\nu\) 9/2\({}^{-}\)[734]\(\uparrow\) is considered [47]. Also the decay paths are different. While \({}^{248m}\)Cm is interpreted to decay predominantly into the 8\({}^{+}\) and 6\({}^{+}\) members of a \(\gamma\) vibrational band with an I\({}^{\pi}\) = 2\({}^{+}\) bandhead at E\({}^{*}\) = 1049 keV, \({}^{254m}\)No decays predominantly into the 7\({}^{+}\) member of the rotational band built up on a 2-quasi-proton K\({}^{\pi}\) = 3\({}^{+}\) state with the configuration \(\pi\)1/2\({}^{-}\)[521]\(\downarrow\)\(\otimes\)\(\pi\) 7/2\({}^{-}\)[514]\(\downarrow\). Also the half-lives differ, by a factor of roughly 2000. In \({}^{256}\)Rf two isomeric states at E\({}^{*}\)\(<\) 1.5 MeV are reported: a lower lying tentative K\({}^{\pi}\) = 5\({}^{-}\) state (\(\pi\)1/2\({}^{-}\)[521]\(\downarrow\)\(\otimes\)\(\pi\)9/2\({}^{+}\)[624]\(\uparrow\) configuration) at E\({}^{*}\)\(\approx\) 1.120 MeV and a higher lying tentative K\({}^{\pi}\) = 8\({}^{-}\) state (\(\pi\)7/2\({}^{-}\)[514]\(\downarrow\)\(\otimes\)\(\pi\)9/2\({}^{+}\)[624]\(\uparrow\)) at E\({}^{*}\)\(\approx\) 1.4 MeV. While the K\({}^{\pi}\) = 8\({}^{-}\) isomer decays into the K\({}^{\pi}\) = 5\({}^{-}\) one (or members of the rotational band built up on it), the K\({}^{\pi}\) = 5\({}^{-}\) isomer is interpreted to decay into a band built up on an I\({}^{\pi}\) = 2\({}^{-}\) octupole vibrational state located at E\({}^{*}\)\(\approx\) 944 keV. Indeed such a state has been identified in the N = 152 isotone \({}^{250}\)Cf [21]. H.L. Liu et al. [111] predict for both isotopes \({}^{254}\)No and \({}^{256}\)Rf 2-quasi-proton and 2-quasi-neutron states at E\({}^{*}\)\(<\) 1.5 MeV, namely K\({}^{\pi}\) = 8\({}^{-}\)(\(\nu\)) (\(\nu\)7/2\({}^{+}\)[613]\(\downarrow\)\(\otimes\)\(\nu\)9/2\({}^{-}\)[734]\(\uparrow\)), K\({}^{\pi}\) = 8\({}^{-}\)(\(\pi\)) (\(\pi\)9/2\({}^{+}\)[624]\(\uparrow\)\(\otimes\)\(\pi\)7/2\({}^{-}\)[514]\(\downarrow\)), and K\({}^{\pi}\) = 5\({}^{-}\)(\(\pi\)) (\(\pi\)1/2\({}^{-}\)[521]\(\downarrow\)\(\otimes\)\(\pi\)9/2\({}^{+}\)[624]\(\uparrow\)) (the latter not shown in fig. 21 for better presentation). In \({}^{254}\)No all three states are predicted very close in energy, at E\({}^{*}\)\(\approx\) 1.38 MeV, and the authors prefer the K\({}^{\pi}\) = 8\({}^{-}\)(\(\pi\)) state as the isomeric one. In \({}^{256}\)Rf the K\({}^{\pi}\) = 5\({}^{-}\)(\(\pi\)) (E\({}^{*}\) = 1.05 MeV) and K\({}^{\pi}\) = 8\({}^{-}\)(\(\pi\)) (E\({}^{*}\) = 1.11 MeV) states are quite close in energy, while the K\({}^{\pi}\) = 8\({}^{-}\)(\(\nu\)) state is predicted at a significantly higher energy (E\({}^{*}\)\(\approx\) 1.5 MeV), leading to a preference for the K\({}^{\pi}\) = 5\({}^{-}\)(\(\pi\)) state as the lower isomeric one. The calculations of N. Minkov et al. [108] predict for \({}^{254}\)No the 2-quasi-proton state K\({}^{\pi}\) = 8\({}^{-}\)(\(\pi\)) (\(\pi\)9/2\({}^{+}\)[624]\(\uparrow\)\(\otimes\)\(\pi\)7/2\({}^{-}\)[514]\(\downarrow\)) at E\({}^{*}\) = 1.914 MeV, i.e. 639 keV above the experimental value.
For \({}^{256}\)Rf they predict the K\({}^{\pi}\) = 5\({}^{-}\) state at E\({}^{*}\) = 1.028 MeV and the K\({}^{\pi}\) = 8\({}^{-}\) state at E\({}^{*}\) = 1.748 MeV, about 350 keV above the assumed value. J.-P. Delaroche et al. [107], who predict the excitation energies of the K\({}^{\pi}\) = 8\({}^{-}\) 2-quasi-neutron isomers in the N = 150 isotones sufficiently well, predict neither a 2-quasi-neutron state nor a 2-quasi-proton state of K\({}^{\pi}\) = 8\({}^{-}\) in \({}^{254}\)No at an excitation energy E\({}^{*}\)\(<\) 2 MeV. The drastic decrease in the half-lives from \({}^{254m1}\)No to \({}^{256m1,256m2}\)Rf can thus be understood as due to the lower K difference of the initial and final states involved in the decay. But again we want to point here to the quite low half-life of \({}^{248m}\)Cm with respect to the high difference of \(\Delta\)K = 6 between the assumed isomeric state and the \(\gamma\) vibrational state. This could be a hint that the isomer might actually be a state of lower K value.

### 7.23 K isomer in \({}^{270}\)Ds

As mentioned in sect. 6.4, the K isomer \({}^{270m}\)Ds was identified by its \(\alpha\) decay [26]. The excitation energy (E\({}^{*}\) = 1.13 MeV) was estimated on the basis of the highest observed \(\alpha\) decay energy attributed to the isomer (12.147 MeV) and the mean \(\alpha\) decay energy of the ground state (11.03\(\pm\)0.05 MeV). On the basis of the hindrance factor for the \(\alpha\) decay, a spin difference between the ground state (0\({}^{+}\)) and the isomeric state of \(\Delta\)I = 10\(\pm\)2 \(\hbar\) was estimated. Calculations resulted in possible K isomeric states of K\({}^{\pi}\) = 9\({}^{-}\), 10\({}^{-}\), while K\({}^{\pi}\) = 9\({}^{-}\) was slightly preferred in [26]. Such a procedure, however, is not unambiguous, as the preferred K assignment was mainly based on the hindrance of the \(\alpha\) decay due to the angular momentum difference between the initial state and the final state, assumed to be the ground state. But even for the \(\alpha\) decays of highest energy one cannot assume a priori decay into the ground state. Even considering only the hindrance of \(\alpha\) decay due to the angular momentum difference (leaving structural hindrance and hindrance due to parity change aside), one has to weigh the decreasing hindrance due to a decreasing angular momentum difference against the increase of the \(\alpha\) decay half-lives due to the lower Q values for decays into excited members of the ground-state rotational band. The situation is shown in table 9. Here E\({}_{\alpha}\) denotes the \(\alpha\) decay energy into the considered level, taking excitation energies of the levels typical for heavy nuclei (see e.g. fig. 6). T\({}_{\alpha}\) is the theoretical \(\alpha\) decay half-life according to [76, 77]. HF(\(\Delta\)L) is the hindrance due to the angular momentum change \(\Delta\)L as suggested by J.O. Rasmussen [109]. T\({}_{\alpha}\)/T\({}_{\alpha}\)(gs) is the ratio of the partial \(\alpha\) decay half-lives for the decay into the considered level and into the ground state. HF(m) denotes the 'mean' hindrance factor, HF(m) = HF(\(\Delta\)L) \(\times\) (T\({}_{\alpha}\)/T\({}_{\alpha}\)(gs)), and T\({}_{\alpha}\)(m) the resulting partial \(\alpha\) half-life.
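The angular momentum part of this procedure can be made concrete with a small sketch evaluating the Rasmussen hindrance quoted in sect. 7.3, P\({}_{L}\)/P\({}_{0}\) = exp[-2.027L(L+1)Z\({}^{-1/2}\)A\({}^{-1/6}\)] [109]; the use of the parent values Z = 110, A = 270 for \({}^{270}\)Ds is our assumption:

```python
import math

def rasmussen_hf(dL, Z, A):
    """Angular momentum hindrance of alpha decay after J.O. Rasmussen [109]:
    P_L/P_0 = exp[-2.027 L(L+1) Z^(-1/2) A^(-1/6)], i.e. HF(dL) = (P_L/P_0)^-1."""
    return math.exp(2.027 * dL * (dL + 1) / (math.sqrt(Z) * A ** (1.0 / 6.0)))

# 270Ds (parent Z = 110, A = 270 assumed): the hindrance falls steeply with
# decreasing dL, so decays into low lying band members can compete with the
# ground-state decay despite their lower Q values.
for dL in (2, 4, 6, 8, 9, 10):
    print(f"dL = {dL:2d}: HF ~ {rasmussen_hf(dL, 110, 270):8.1f}")
# dL = 9 and dL = 10 give ~940 and ~4300, close to the 932 and 4260 of table 10
```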
Although this treatment is quite crude, evidently \(\alpha\) decay into the ground state is strongly hindered compared to \(\alpha\) decay into low lying members of the ground-state rotational band. This makes the estimation of the excitation energy of the isomeric state from the \(\alpha\) decay energy uncertain, and the given value of E\({}^{*}\) = 1.13 MeV is rather a lower limit. It should just be noted here that a small \(\alpha\) decay branch for \({}^{254m1}\)No is reported in [33], with an energy somewhat smaller than expected from the excitation energy of \({}^{254m1}\)No and its ground-state \(\alpha\)-decay energy. The discussion of the structure of the isomeric state is based on the coupling of low lying single particle levels. For the sake of the further discussion, calculated single particle levels in neighbouring odd-mass nuclei are shown in fig. 22. Data for the even-Z nuclei \({}^{269,271}\)Ds are taken from [81], those for the odd-Z nuclei \({}^{269}\)Mt and \({}^{271}\)Rg from [110].

Figure 22: Predicted low lying Nilsson levels; a) neutron single particle levels in \({}^{269,271}\)Ds [81]; b) proton single particle levels in \({}^{269}\)Mt and \({}^{271}\)Rg [110].

Evidently states of high K values can be formed by the couplings \(\nu 7/2^{+}[613]\uparrow\)\(\otimes\)\(\nu 11/2^{-}[725]\uparrow\) resulting in K\({}^{\pi}\) = 9\({}^{-}\), \(\nu 9/2^{+}[615]\downarrow\otimes\nu 11/2^{-}[725]\uparrow\) resulting in K\({}^{\pi}\) = 10\({}^{-}\), and \(\pi 9/2^{-}[505]\downarrow\otimes\pi 11/2^{+}[615]\uparrow\) resulting in K\({}^{\pi}\) = 10\({}^{-}\). Calculations of the possible structure of the isomer, besides those performed in [26], were later performed by F.R. Xu et al. [73], H.L. Liu et al. [111], V. Prassa et al. [106] and G.G. Adamian et al. [112]. The results are compared with those of [26] in fig. 23. Besides S. Hofmann et al. [26], also F.R. Xu et al. [73] and H.L. Liu et al. [111] predict 2-quasi-neutron states of K\({}^{\pi}\) = 9\({}^{-}\), 10\({}^{-}\) at excitation energies E\({}^{*}\) = (1.0-1.5) MeV, but contrary to [26] the K\({}^{\pi}\) = 10\({}^{-}\) state is predicted below the K\({}^{\pi}\) = 9\({}^{-}\) state in [73, 111]. V. Prassa et al. [106] predict only the K\({}^{\pi}\) = 9\({}^{-}\) state, G.G. Adamian et al. only the K\({}^{\pi}\) = 10\({}^{-}\) state below E\({}^{\ast}\) = 1.5 MeV. In addition F.R. Xu et al. [73], H.L. Liu et al. [111] and V. Prassa et al. [106] predict a K\({}^{\pi}\) = 10\({}^{-}\) 2-quasi-proton state in the range E\({}^{\ast}\) = (1.0-1.5) MeV. In [111] the K\({}^{\pi}\) = 10\({}^{-}\) 2-quasi-neutron state is favoured as the isomeric one due to its coupling being favoured in energy according to the Gallagher rule (\(\uparrow\downarrow\) configuration), in contrast to the unfavoured coupling of the K\({}^{\pi}\) = 9\({}^{-}\) state (\(\uparrow\uparrow\) configuration), while the K\({}^{\pi}\) = 10\({}^{-}\) 2-quasi-proton state is not considered as the isomeric one. Besides the K\({}^{\pi}\) = 10\({}^{-}\) 2-quasi-neutron state, G.G. Adamian et al. also consider a K\({}^{\pi}\) = 6\({}^{+}\) 2-quasi-neutron state (configuration \(\nu 1/2^{-}[761]\downarrow\)\(\otimes\)\(\nu 11/2^{-}[725]\uparrow\)) as a possible isomeric one.
In addition they obtain at E\({}^{\ast}\) \(<\) 1.5 MeV a K\({}^{\pi}\) = 8\({}^{+}\) 2-quasi-neutron state (configuration not given, but probably \(\nu 7/2^{+}[613]\uparrow\)\(\otimes\)\(\nu 9/2^{+}[604]\downarrow\) as expected in \({}^{268}\)Ds) and a K\({}^{\pi}\) = 6\({}^{-}\) 2-quasi-proton state; here also no configuration is given, but with respect to the calculated Nilsson levels shown in fig. 22 a possible configuration could be \(\pi 11/2^{+}[615]\uparrow\)\(\otimes\)\(\pi 1/2^{-}[510]\uparrow\).

### 7.3 \(\alpha\) decay of K isomers

\(\alpha\) decay of K isomers has been reported so far only for three cases: \({}^{254m1}\)No [32], \({}^{270m}\)Ds [26, 74, 75] and \({}^{266m}\)Hs [74, 75]. For \({}^{254m1}\)No two \(\alpha\) events with individual energies of E = 9369 keV and E = 9336 keV were observed, while for the \(\alpha\) transition \({}^{254m1}\)No \(\rightarrow\) \({}^{250g}\)Fm a value of E\({}_{\alpha}\) = 9370\(\pm\)10 keV was expected. The lower energy of the second decay may be regarded as a hint that not the ground state of \({}^{250}\)Fm was populated, but a low lying member of the ground-state rotational band. A branching ratio b\({}_{\alpha}\leq 1\times 10^{-4}\) and a partial \(\alpha\) decay half-life of \(\geq\)2750 s were obtained. The theoretical half-life [76, 77] for a 9370-keV transition is T\({}_{\alpha}\) = 3.63 ms, resulting in a hindrance factor HF \(\geq\) 7.6\(\times\)10\({}^{5}\), which is a factor of \(\geq\)2435 or \(\geq\)26600 higher than the hindrance factors expected for \(\Delta\)L = 8 or \(\Delta\)L = 6 transitions (P\({}_{L}\)/P\({}_{0}\) = exp[-2.027L(L+1)Z\({}^{-1/2}\)A\({}^{-1/6}\)]) [109]. But here we also have to consider that, besides structural hindrance, according to the selection rules for \(\alpha\) decay only odd values of the angular momentum change are allowed for transitions between states of different parities (e.g. 8\({}^{-}\)\(\rightarrow\) 0\({}^{+}\), 2\({}^{+}\)), which introduces strong hindrance between states with even values of the angular momentum difference [113].

Figure 23: Comparison of predicted low lying states of high K numbers in \({}^{270}\)Ds [26, 73, 111, 106, 112] with the tentative experimental assignment [26]. In the case of [111] levels of K\(\leq\)8 in the range of E\({}^{\ast}\) = (1.0-1.5) MeV are omitted for better presentation.

\({}^{270m}\)Ds is insofar a specific case of a K isomer as it decays to a large extent by \(\alpha\) emission. As the half-lives of the ground state (0.17 ms) and the isomeric state (3.81 ms) [78] differ by roughly a factor of 22, a rough estimate of the \(\alpha\) branching can be obtained from the ratio of \(\alpha\) particles in the energy range of the ground state \(\alpha\) decays that have correlation times a factor of five longer (\(>\)1 ms) than the half-life of the ground state, which results in a branching of b\({}_{\alpha}\) \(\approx\) 0.93 [78] and hence a partial \(\alpha\) decay half-life of T\({}_{\alpha}\) = 4.1 ms. Regarding events with energies E\({}_{\alpha}\)\(>\) 12 MeV as decays into the ground state (or a 'low' member of the ground-state rotational band), one obtains a branching (6 out of 15 decays) of 0.4 and hence a partial \(\alpha\) decay half-life of T\({}_{\alpha}\)\(\approx\) 4.1/0.4 ms \(\approx\) 10.25 ms. For the \(\alpha\) events of the highest energies, with a mean value of 12.136 MeV, one obtains a theoretical \(\alpha\) decay half-life of T\({}_{\alpha}\) = 3.9\(\times\)10\({}^{-7}\) s, resulting in a hindrance factor HF \(\approx\) 26300.
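As a cross-check of the numbers quoted above and collected in table 10, a short sketch with all inputs taken directly from the text (small rounding differences remain):

```python
# 254m1No: partial alpha half-life >= 2750 s vs. a theoretical 3.63 ms [76, 77]
hf_tot_no = 2750.0 / 3.63e-3
print(f"HF(tot, 254m1No) >= {hf_tot_no:.2e}")      # ~7.6e5, as quoted above

# 270mDs: T1/2 = 3.81 ms and b_alpha ~ 0.93 give a partial T_alpha ~ 4.1 ms;
# 6 of 15 decays in the highest energy group then give a partial ~10.25 ms
t_partial_ds = 3.81e-3 / 0.93 / (6.0 / 15.0)
hf_tot_ds = t_partial_ds / 3.9e-7                  # theoretical T_alpha = 0.39 us
print(f"HF(tot, 270mDs) ~ {hf_tot_ds:.0f}")        # ~26000, as quoted above

# splitting off the angular momentum part: HF(struct) = HF(tot) / HF(dL)
for label, hf_tot, hf_dl in [("254m1No, dL=6 ", hf_tot_no, 28.5),
                             ("254m1No, dL=8 ", hf_tot_no, 311.2),
                             ("270mDs,  dL=9 ", hf_tot_ds, 932.0),
                             ("270mDs,  dL=10", hf_tot_ds, 4260.0)]:
    print(label, "-> HF(struct) ~", round(hf_tot / hf_dl))
# reproduces the HF(struct) row of table 10: ~26600, ~2430, ~28, ~6
```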
Considering transitions 10\({}^{-}\), 9\({}^{-}\)\(\rightarrow\) 0\({}^{+}\), one expects hindrance factors of 4800 and 947 for \(\Delta\)L = 10 and \(\Delta\)L = 9 transitions [109]. Evidently the differences between the 'total' hindrance factors (from the ratio of the experimental and theoretical \(\alpha\) half-lives) and those due to the angular momentum change are much smaller than in the case of \({}^{254m1}\)No. If one now simply writes the total hindrance factor HF(tot) as the product of the angular momentum hindrance factor HF(\(\Delta\)L) and the 'nuclear structure based hindrance' HF(struct), i.e. HF(tot) = HF(\(\Delta\)L) \(\times\) HF(struct), one obtains HF(struct)(\({}^{270m}\)Ds) \(\ll\) HF(struct)(\({}^{254m1}\)No) (see table 10). The \(\alpha\) decay of \({}^{270m}\)Ds was investigated theoretically by R.M. Clark and D. Rudolph [114] using the 'superfluid tunneling model' (STM). They were able to reproduce the ground-state \(\alpha\) decay half-lives of all known even-even nuclei in the range Z = 100 - 118. Applying their model to the \(\alpha\) decay of \({}^{270m}\)Ds, they obtain a partial \(\alpha\) decay half-life of 36 ms for a \(\Delta\)L = 9 transition when reducing the pairing gap parameter to sixty percent of the value used to reproduce the half-lives for the ground-state \(\alpha\) decay of the even-even nuclei, which increases the half-life by a factor of \(\approx\)136. This value is not in severe disagreement with the experimental partial half-life of 11 ms. For the decay of \({}^{270m}\)Ds into the assumed isomer in \({}^{266}\)Hs they obtain a partial \(\alpha\) decay half-life of 31 ms, which can be compared with the experimental value of 63 ms, assuming a relative intensity of 0.07 (one out of fifteen decays) for that transition. Thus the agreement of the results of R.M. Clark and D. Rudolph with the experimental data is not so bad. It should be noted that a reduction of the pairing gap parameter does not simply mean a reduction of the pairing, but also includes a reduction of the decay probability due to nuclear structure 'effects' [114]. Consequently the authors conclude: 'we find that the effects of nuclear structure of the multi-quasiparticle isomer, which is accounted for in the STM model by a reduction of the pairing gap parameter must be included' [114]. In conclusion we have to state that the quite low values of the 'structural' hindrance factors are in disagreement with the 'very high' hindrance factors expected for transitions violating the selection rules, in this case for \(\alpha\) decay with an even value of angular momentum change and a change of parity, i.e. for the transitions 10\({}^{-}\)\(\rightarrow\) 0\({}^{+}\), 2\({}^{+}\). In other words, the \(\alpha\) decay hindrance factors do not support a K\({}^{\pi}\) = 10\({}^{-}\) configuration of the isomer, as seems preferred by the nuclear structure calculations and by applying the Gallagher rule, but rather suggest K\({}^{\pi}\) = 9\({}^{-}\). This statement, however, is quite vague, and further detailed investigations are required to clarify the situation.

**7.4 Stability of K isomers against fission**

It is well known that spontaneous fission of nuclei with odd numbers of protons and/or neutrons is hindered compared to that of even-even nuclei. This feature can be understood qualitatively from the conservation of angular momentum and parity.
While a pair of nucleons coupling their angular momenta to L = 0 may change the nuclear level at crossing points during deformation towards the fission configuration, thus following the energetically most favourable path [115], for an unpaired nucleon this is normally not possible, as it has to keep its angular momentum. This leads to an effective increase of the fission barrier, in the literature denoted as 'specialization energy', and thus to an increase of the half-life [116].

\begin{table} \begin{tabular}{c c c} \hline & \({}^{254m1}\)No & \({}^{270m}\)Ds \\ \hline T\({}_{\alpha}\)(calc) & 3.63 ms & 0.39 \(\mu\)s \\ T\({}_{\alpha}\)(exp) & \(\geq\)2750 s & 10.25 ms \\ HF & \(\geq\)7.58\(\times\)10\({}^{5}\) & 26148 \\ HF(\(\Delta\)L) & 28.5 (\(\Delta\)L = 6) & 932 (\(\Delta\)L = 9) \\ & 311.2 (\(\Delta\)L = 8) & 4260 (\(\Delta\)L = 10) \\ HF(struct) & 26581 (\(\Delta\)L = 6) & 28 (\(\Delta\)L = 9) \\ & 2435 (\(\Delta\)L = 8) & 6 (\(\Delta\)L = 10) \\ \end{tabular} \end{table} Table 10: Hindrance of \(\alpha\) decay for transitions from the K isomer into the ground state or low lying members of the ground-state rotational band of the daughter nucleus for \({}^{254m1}\)No and \({}^{270m}\)Ds.

Quantitatively this increase can be expressed by a hindrance factor HF = T\({}_{sf}\)(exp)/T\({}_{ee}\), where T\({}_{sf}\)(exp) is the experimental half-life and T\({}_{ee}\) the 'unhindered' half-life, usually taken as the geometric mean of the half-lives of the neighbouring even-even nuclei. There is no general rule for the estimation of such hindrance factors, since the increase of the effective fission barrier depends on the nuclear structure of the considered nuclei and on the increase in energy of the occupied single particle level at deformation. Nevertheless, a correlation between the hindrance and the steepness of the increase or decrease in energy at deformation seems evident [102]. In odd-odd nuclei, with an unpaired proton and an unpaired neutron, both nucleons have to keep their angular momenta, leading to higher hindrance factors than obtained for odd-mass nuclei. Consequently, only very few cases of spontaneous fission of odd-odd nuclei are known. It should be mentioned that the detection of spontaneous fission is complicated by the fact that the odd-odd nucleus may decay by \(\beta^{-}\), \(\beta^{+}\) or electron capture decay into an even-even nucleus of much lower half-life that undergoes spontaneous fission. So techniques to discriminate between 'direct' spontaneous fission and spontaneous fission after \(\beta\) or electron capture decay have to be applied. In K isomers, where pairs of nucleons are broken and the nucleons are excited into different levels, the situation resembles that in odd-odd nuclei, i.e. a strong hindrance of spontaneous fission can be expected. So far spontaneous fission has been observed for only two cases, \({}^{256m}\)Fm [25] and \({}^{254m}\)No (see sect. 6.2). It should be mentioned that here one is faced with a similar situation as in odd-odd nuclei. In some nuclei (\({}^{253,254}\)Rf, \({}^{250}\)No) two fission activities were observed. In these three cases, however, it was shown that the isomer decays by internal transitions into the ground state, which then undergoes spontaneous fission. To obtain information about the fission hindrance of K isomers, F.P. Hessberger et al. [33] performed for the case of \({}^{254m1,m2}\)No some basic calculations to estimate the fission half-lives of these isomers. The calculations were based on the empirical description of fission half-lives suggested by V.E.
Viola and B.D. Wilkins [117]. Similar to the description of \(\alpha\) decay, spontaneous fission was treated as tunneling through a one-dimensional fission barrier. The fission half-life was thus expressed as

(7.4.1) T\({}_{sf}\) = ln2 \(\times\) (nP)\({}^{-1}\)

where n denotes the frequency of barrier assaults and P the barrier penetration probability. The fission barrier was approximated by an inverted parabola, and the barrier transmission probability was calculated using the Hill-Wheeler approximation [118], resulting in

(7.4.2) P = [1 + exp((2\(\pi\)/\(\hbar\omega_{f}\))(B\({}_{f}\)-E))]\({}^{-1}\)

with \(\hbar\omega_{f}\) representing the barrier curvature energy, B\({}_{f}\) the fission barrier and E the excitation energy above the ground state. The fission half-life for the ground state (E\({}^{*}\) = 0) can thus be written as

(7.4.3) lg(T\({}_{sf}\)/years) = (2.73/\(\hbar\omega_{f}\)) \(\times\) B\({}_{sf}\) - 28.04

It should be noted, however, that this approximation is quite crude, as in the region of the actinides the fission barrier is known to be double-humped [119], as sketched in the inset of fig. 24.

Figure 24: Barrier curvature energy for even-even nuclei.

On the basis of the above relation for the fission half-lives, F.P. Hessberger et al. [120] analyzed the barrier curvature energies for the known even-even nuclei, using the fission barrier as the sum B\({}_{sf}\) = B\({}_{sf}^{LD}\) - \(\delta\)U\({}_{gs}\), with the liquid drop fission barrier B\({}_{sf}^{LD}\) as suggested by M. Dahlinger et al. [121] and \(\delta\)U\({}_{gs}\) being the ground-state shell correction energy as presented by P. Möller and J.R. Nix [122]. On this basis F.P. Hessberger et al. [120] obtained a quite smooth behaviour of \(\hbar\omega_{f}\) around 0.4 MeV as a function of the fissility parameter x\({}^{\dagger\dagger}\) for the isotopes of uranium to californium (region 1), \(\hbar\omega_{f}\) around 0.8 MeV for the isotopes of nobelium, rutherfordium and seaborgium, where the outer fission barrier was predicted to lie in energy below the ground state [123] (region 3), and a strong variation with x for the fermium and nobelium isotopes (region 2), which was explained by quite broad barriers in region 1, narrow barriers in region 3 and a 'transition region' (region 2). On this basis F.P. Hessberger et al. [33] calculated, however using liquid drop fission barriers according to A. Sierk [124] and ground-state shell correction energies from R. Smolanczuk and A. Sobiczewski [127], fission half-lives for \({}^{254m1,m2}\)No of 1 s (\({}^{254m1}\)No) and \(<\)0.5 \(\mu\)s (\({}^{254m2}\)No), and hence hindrance factors of \(\approx\)1375 (\({}^{254m1}\)No) and \(>\)3.3\(\times\)10\({}^{6}\) (\({}^{254m2}\)No). The hindrance factor HF = 1375 seems quite low, but one has to consider that it is very sensitive to the values of the barrier curvature \(\hbar\omega_{f}\) and of the effective fission barrier (B\({}_{sf}\)-E\({}^{*}\)). In [33] quite conservative values of (B\({}_{sf}\)-E\({}^{*}\)) = 5.643 MeV and \(\hbar\omega_{f}\) = 0.75 MeV were used; using a slightly higher value \(\hbar\omega_{f}\) = 0.8 MeV and (B\({}_{sf}\)-E\({}^{*}\)) = 5.47 MeV, with the fission barrier taken from [126], one obtains T\({}_{sf}\) = 0.013 s, hence a value about two orders of magnitude lower (see table 11). In fig. 24 the results for the barrier curvature energies \(\hbar\omega_{f}\) according to relation 7.4.3, using, however, fission barriers as published by P. Möller et al.
[126] (full squares), are presented for the elements thorium to hassium as a function of the fissility parameter x; for comparison, results for the isotopes of the elements rutherfordium, seaborgium and hassium using the dynamical fission barriers B\({}_{sf}^{dyn}\) as published by R. Smolanczuk et al. [127] are shown as open squares in fig. 24. Evidently an increase of the barrier curvature energies with increasing fissility is observed, with partly strong variations within an isotope line, which is seen as a signature for a change of the shape of the fission barrier. The values using the fission barriers of [127] are higher than those using the fission barriers of [126], since [127] predict higher fission barriers. For the case of \({}^{254m1}\)No one obtains a fission half-life of T\({}_{sf}\) = 0.0339 s using the value of \(\hbar\omega_{f}\) for the ground state. Considering, however, that the excitation energy of the isomer may already be above the height of the outer barrier, and thus taking the barrier curvature obtained for \({}^{250}\)No, which is expected to have a narrow single-humped barrier [123], one obtains T\({}_{sf}\) = 0.48 \(\mu\)s, and hence hindrance factors of HF = 40560 and HF = 2.86\(\times\)10\({}^{9}\), respectively. Recently the idea of parametrizing the barrier curvature energy was taken up again by J. Khuyagbaatar [128]. He obtained somewhat higher values of \(\hbar\omega_{f}\) and a fission half-life of 9.4\(\times\)10\({}^{-4}\) s for \({}^{254m1}\)No. The results are summarized in table 11; evidently the hindrance factor reported in [33] seems unrealistically low, apparently due to a too low value of the barrier curvature energy. But it is also seen that the hindrance factors resulting from the other analyses scatter over a range of more than four orders of magnitude, which can be ascribed to the rather simple way of calculating the fission half-lives. In the case of \({}^{256m}\)Fm, Khuyagbaatar [128] reports a calculated unhindered fission half-life of 2.2\(\times\)10\({}^{-7}\) s, while the analysis presented here delivers a value T\({}_{sf}\)(calc) = 3.6\(\times\)10\({}^{-8}\) s, and hence hindrance factors of 3.6\(\times\)10\({}^{3}\) [128] and 2.2\(\times\)10\({}^{5}\), respectively. A comparison for \({}^{254m1}\)No is shown in table 11. Evidently cases 3 and 4 exhibit hindrance factors of \(>\)1\(\times\)10\({}^{6}\), which is higher than the hindrance factors for odd-mass nuclei, which are typically (except for a few cases) \(<\)1\(\times\)10\({}^{6}\); i.e., one can conclude that fission of \({}^{254m1}\)No is more hindered than is typical for nuclei with one unpaired nucleon. A comparison with odd-odd nuclei is hardly possible, as there are only two cases of odd-odd nuclei for which direct fission is reported, \({}^{262}\)Db (a less certain case) and \({}^{260}\)Md. In the case of \({}^{262}\)Db a quite low hindrance factor (HF \(\approx\) 3250) is obtained, while for \({}^{260}\)Md a quite high value (HF \(\approx\) 9.2\(\times\)10\({}^{8}\)) is obtained (see [102]). Thus one may conclude that the fission hindrance of 2-quasiparticle K isomers (with two unpaired nucleons) indeed rather resembles the case of odd-odd nuclei, each with an unpaired proton and an unpaired neutron. At the present state such a conclusion is still somewhat speculative; more information on spontaneous fission of K isomers and odd-odd nuclei is required, as well as efforts from theory to calculate fission half-lives of K isomers.
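The T\({}_{sf}\)(calc) values of table 11 follow directly from relation 7.4.3. The following minimal Python sketch reproduces them, assuming an effective barrier (B\({}_{sf}\)-E\({}^{*}\)) = 5.643 MeV for the original parameters of [33] and 5.47 MeV (from [126]) for the remaining cases, as stated above; the barrier value attached to the 'this work' rows is our assumption, chosen because it reproduces the quoted half-lives:

```python
# Relation 7.4.3: lg(T_sf / years) = 2.73 * (B_sf - E*) / (hbar*omega_f) - 28.04
YEAR_S = 3.156e7  # seconds per year

def t_sf_seconds(barrier_mev, hbar_omega_mev):
    """Spontaneous fission half-life from the inverted-parabola systematics."""
    lg_t_years = 2.73 * barrier_mev / hbar_omega_mev - 28.04
    return 10.0**lg_t_years * YEAR_S

cases = [
    ("[33]                 ", 5.643, 0.75),   # -> ~1 s
    ("[33], hw = 0.8 MeV   ", 5.47,  0.80),   # -> ~0.013 s
    ("this work, hw = 0.783", 5.47,  0.783),  # -> ~0.034 s
    ("this work, hw = 1.05 ", 5.47,  1.05),   # -> ~0.48 us
]
for label, b, hw in cases:
    print(f"{label}: T_sf(calc) = {t_sf_seconds(b, hw):.3g} s")

# Hindrance factor relative to the experimental value T_sf(exp) = 1375 s:
print("HF =", 1375.0 / t_sf_seconds(5.47, 0.783))  # ~4.1e4, as in table 11
```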
Finally, one other item should be mentioned: besides \(\hbar\omega_{f}\), fission half-lives calculated using the above method are also quite sensitive to the fission barrier. To illustrate the situation, fission half-lives of \({}^{256}\)Rf for the ground state and at an excitation energy of 1 MeV were calculated, once using the fission barriers from [126] together with the analyzed barrier curvatures, and once using the dynamical fission barriers from [127]. Using in both cases the values of \(\hbar\omega_{f}\) extracted for the ground state, fission half-lives of T\({}_{sf}\)(E\({}^{*}\) = 1 MeV) = 7.6 \(\mu\)s (fission barriers from [126]) and T\({}_{sf}\)(E\({}^{*}\) = 1 MeV) = 30 \(\mu\)s (fission barriers from [127]), and hence half-life ratios T\({}_{sf}\)(gs)/T\({}_{sf}\)(E\({}^{*}\) = 1 MeV) = 860 and T\({}_{sf}\)(gs)/T\({}_{sf}\)(E\({}^{*}\) = 1 MeV) = 210, are obtained. Although these ratios vary within a factor of four, this difference is not of significant relevance for a qualitative discussion of the fission hindrance of K isomers.

**8. Outlook**

The occurrence of K isomers in strongly prolate deformed nuclei is a physically interesting phenomenon. Thanks to improved experimental techniques, the investigation of K isomers became a key aspect of the spectroscopy of transfermium nuclei during the past two decades. Although about thirty new K isomers were discovered in that region of nuclei, information on their structure and decay properties is scarce for most of the cases, due to low production rates that do not allow for detailed \(\gamma\) spectroscopic investigations. A specific feature pointed out quite often is the longer half-life of some isomers compared to the ground state, which was also discussed in the context of the stability of 'superheavy' nuclei in the vicinity of the closed proton and neutron shells expected at Z = 114, 120 or 126 and N = 172 or 184. Here one has to keep in mind, however, that K isomerism is a phenomenon of strongly deformed nuclei, while the nuclei around the closed proton and neutron shells are spherical. So K isomers are not expected in the region of spherical superheavy elements. Stability and decay properties of K isomers (also with respect to those of the ground state) are strongly related to the properties of the radioactive decay modes relevant for the decay of the isomers: internal transitions, \(\alpha\) decay and spontaneous fission. The rates (partial half-lives) for internal transitions are essentially defined by the K hindrance, those for \(\alpha\) decay by the Q-value, the 'angular momentum hindrance' and the hindrance due to changes in nuclear structure and parity. Spontaneous fission half-lives are essentially defined by the height and shape of the fission barrier and a fission hindrance due to unpaired nucleons. It is thus the interplay between the probabilities of these three decay modes which finally defines the decay of a K isomer. As long as the partial half-lives for \(\alpha\) decay and spontaneous fission are significantly larger than those for internal transitions, the latter will dominate, and \(\alpha\) decay and spontaneous fission will be exotic decay modes of low intensity, as in the case of \({}^{254m1}\)No. Half-lives of the K isomers are definitely shorter than those of the ground states in such cases. The situation changes when the partial half-lives for \(\alpha\) decay or spontaneous fission come into the same order of magnitude as those for internal transitions.
In those cases \(\alpha\) decay or spontaneous fission can become the essential decay modes of K isomers, and the half-life of the isomeric state may become longer than that of the ground state, due to strong hindrance of the isomeric decays while \(\alpha\) decay or spontaneous fission of the ground state is unhindered or at least less hindered. An example is the decay of \({}^{270,270m}\)Ds by \(\alpha\) emission, where \(\alpha\) decay of the isomeric state is strongly hindered, and thus its half-life is longer than that of the ground state despite the higher Q\({}_{\alpha}\) value of the isomer. Another phenomenon is the observation of two fission activities for \({}^{256}\)No and \({}^{254}\)Rf (for \({}^{254}\)Rf: T\({}_{1/2}\) = 247 \(\mu\)s), with the shorter-lived one attributed to the ground state and the longer-lived one to the isomeric state. Here it was shown that direct fission of the isomer is at best a decay mode of very low intensity, due to the strong hindrance caused by the unpaired nucleons, despite a significantly (\(\leq\)1 MeV) lower fission barrier of the isomer. The longer-lived fission activity attributed to the isomer rather represents spontaneous fission of the ground state delayed by internal transitions. Thus we can conclude, although details of the decays are theoretically not well understood, that the decay of the latter K isomers can be understood on the basis of the more or less well known properties of internal transitions, \(\alpha\) decay and spontaneous fission, and certainly does not contain some kind of 'new physics'. Although the properties of K isomers in the transfermium region will not have a direct impact on the expected properties of 'spherical' superheavy nuclei, the study of their properties will deliver valuable information on the energies and ordering of single particle levels (Nilsson levels), which certainly will have feedback on the prediction of the location of the spherical proton and neutron shells in the range Z = 114-126 and N = 172-184.

\begin{table} \begin{tabular}{c c c} \hline reference & T\({}_{sf}\)(calc) & HF \\ \hline [33] & 1 s & 1375 \\ [33], \(\hbar\omega_{f}\) = 0.8 MeV & 0.013 s & 1.06\(\times\)10\({}^{5}\) \\ this work, \(\hbar\omega_{f}\) = 0.783 MeV & 0.034 s & 4.1\(\times\)10\({}^{4}\) \\ this work, \(\hbar\omega_{f}\) = 1.05 MeV & 0.48 \(\mu\)s & 2.9\(\times\)10\({}^{9}\) \\ [128] & 0.94 ms & 1.4\(\times\)10\({}^{6}\) \\ \end{tabular} \end{table} Table 11: Comparison of fission hindrances for \({}^{254m1}\)No for different barrier curvature energies \(\hbar\omega_{f}\). T\({}_{sf}\)(exp) = 1375 s in all cases.
2309.08227
VERSE: Virtual-Gradient Aware Streaming Lifelong Learning with Anytime Inference
Lifelong learning or continual learning is the problem of training an AI agent continuously while also preventing it from forgetting its previously acquired knowledge. Streaming lifelong learning is a challenging setting of lifelong learning with the goal of continuous learning in a dynamic non-stationary environment without forgetting. We introduce a novel approach to lifelong learning, which is streaming (observes each training example only once), requires a single pass over the data, can learn in a class-incremental manner, and can be evaluated on-the-fly (anytime inference). To accomplish these, we propose a novel \emph{virtual gradients} based approach for continual representation learning which adapts to each new example while also generalizing well on past data to prevent catastrophic forgetting. Our approach also leverages an exponential-moving-average-based semantic memory to further enhance performance. Experiments on diverse datasets with temporally correlated observations demonstrate our method's efficacy and superior performance over existing methods.
Soumya Banerjee, Vinay K. Verma, Avideep Mukherjee, Deepak Gupta, Vinay P. Namboodiri, Piyush Rai
2023-09-15T07:54:49Z
http://arxiv.org/abs/2309.08227v2
# VERSE: Virtual-Gradient Aware Streaming Lifelong Learning with Anytime Inference

###### Abstract

Lifelong learning, also referred to as continual learning, is the problem of training an AI agent continuously while also preventing it from forgetting its previously acquired knowledge. Most existing methods primarily focus on lifelong learning within a static environment and lack the ability to mitigate forgetting in a quickly changing dynamic environment. Streaming lifelong learning is a challenging setting of lifelong learning, with the goal of continuous learning in a dynamic non-stationary environment without forgetting. We introduce a novel approach to lifelong learning which is streaming, requires a single pass over the data, can learn in a class-incremental manner, and can be evaluated on-the-fly (anytime inference). To accomplish this, we propose virtual gradients for continual representation learning to prevent catastrophic forgetting, and leverage an exponential-moving-average-based semantic memory to further enhance performance. Extensive experiments on diverse datasets demonstrate our method's efficacy and superior performance over existing methods.

## I Introduction

Continuous machine perception is crucial for AI agents that learn while interacting with the environment, and it requires preventing catastrophic forgetting [1]. Lifelong Learning (LL) or Continual Learning (CL) [2, 3] methods are designed to accomplish this. Recent CL research focuses mainly on static environments [4, 5, 6, 7, 2], assuming large batches of data, overlooking changing data distributions, and permitting multiple passes over the data to facilitate CL. However, these approaches are not suitable for rapidly changing dynamic environments. While there have been efforts to enable CL in online settings [8], these methods have various limitations, such as batch-data requirements, lack of anytime inference, and the need for large replay buffers, limiting their applicability in Streaming Lifelong Learning (SLL) [9, 10, 11]. In SLL, the goal is to learn by observing each training example only once, without forgetting. Below, we outline the key properties of SLL [9, 11]:

* The AI agent observes each training example only once, without storing it in memory (desirable).
* The agent is required to adapt to new sample(s) in a single pass (desirable).
* The input data stream may exhibit temporal correlations, deviating from the typical i.i.d. pattern (essential).
* The agent is required to be evaluated at any time (anytime inference), without fine-tuning its parameters (essential).
* The agent needs to perform class-incremental streaming lifelong learning (CISLL), i.e., predict a class label from all the previously observed classes (essential).
* To be practical, especially in resource-constrained environments, the agent should minimize its memory requirements (desirable).

Existing CL approaches often make strong assumptions that violate one or more key constraints required for SLL. Despite being desirable, and also closer to biological learning [10], SLL hasn't received much attention. The SLL setting is natural in real-world scenarios like home robots, smart appliances, and drones, where AI agents must adapt quickly and continuously without forgetting. Table I categorizes existing CL approaches based on their underlying assumptions, revealing that only a few non-SLL methods can be adapted to the SLL setting without violating the constraints.
Notably, ExStream [9], being an SLL method, violates subset replay in SLL by using all past samples for CL. Non-SLL methods like TinyER [17] and DER/DER++ [18] perform poorly when applied in the SLL setting. We introduce Virtual GradiEnt AwaRe Streaming LEarning (VERSE), a rehearsal-based CL model facilitating CISLL in deep neural networks (DNNs). VERSE _implicitly_ regularizes the network parameters by employing virtual gradient updates, fostering robust representations that require minimal changes to adapt to the new task/sample(s), preventing catastrophic forgetting. We also utilize a small episodic buffer to store past samples, which are used for both local/virtual and global parameter updates to perform a single-step virtual gradient regularization, enabling CL. VERSE first adapts to a new example with a _virtual_ (local) parameter update and then generalizes to past samples with a _global_ parameter update, promoting convergence between the two. This process facilitates SLL, allowing the AI agent to be evaluated on-the-fly (anytime inference) without fine-tuning with the stored samples.

Fig. 1: SLL involves continuous learning from non-i.i.d. labeled streams with multiple views, without forgetting. The figure shows temporally ordered cup frames from CoRe50 [20].

Moreover, VERSE utilizes an exponential-moving-average-based [21, 22, 23, 24] semantic memory akin to long-term memory in mammalian brains [25, 26, 27]. Semantic memory is updated intermittently, consolidating new knowledge within the agent's parameters. It interacts with episodic memory, interleaving the past predictions on stored buffer samples and minimizing a self-distillation loss [28, 29] to prevent forgetting and enhance the agent's performance. Experimental results on three temporally contiguous datasets show that VERSE is effective in challenging SLL scenarios. It outperforms recent SOTA methods, with ablations confirming the significance of its components.

**In summary, our contributions are as follows:**

* We present a novel approach, VERSE, a rehearsal-based virtual gradient regularization framework that incorporates both virtual and global parameter updates to mitigate catastrophic forgetting in CISLL, enabling 'anytime inference' without fine-tuning.
* We propose a semantic memory based on an exponential-moving-average approach, which enhances the agent's overall performance.
* Through empirical evaluations and ablation studies conducted on three benchmark datasets with temporal correlations, we affirm the superiority of VERSE over the existing SOTA methods.

## II Related Work

This section briefly summarizes different CL paradigms.

**Task Incremental Learning (TIL).** In TIL, the AI agent learns from task batches, observing samples related to specific tasks [30, 4, 12, 3, 2, 31, 13, 14, 32], each involving learning a few distinct classes. These methods rely on knowing the task identifier during inference; otherwise, they suffer severe catastrophic forgetting [12].

**Incremental Class Batch Learning (IBL).** IBL, also referred to as class incremental learning (CIL), assumes that the dataset is divided into batches, each containing samples from different classes [12, 17, 33, 5, 34, 18, 19, 35, 36, 37, 38]. The AI agent observes and can loop over these batches in each incremental session. During inference, the agent is not provided with task labels and is evaluated over all the observed classes.
**Online Continual Learning (OCL).** Unlike TIL and IBL, OCL involves an AI agent sequentially observing and adapting to samples in a _single pass_ through the entire dataset, avoiding catastrophic forgetting [8, 17, 39, 40, 16, 15]. While these methods enable continuous learning in dynamic environments, they have various limitations: (_i_) they require data in batches, assuming \(\forall t,|B_{t}|\gg 1\), where \(B_{t}\) is a batch of samples at time \(t\); (_ii_) they need fine-tuning before inference, lacking any anytime-inference ability (e.g., GDumb [8], a superior OCL method, requires fine-tuning model parameters with replay-buffer samples before each inference); (_iii_) they require large replay buffers [10].

**Streaming Lifelong Learning (SLL).** SLL, a challenging variant of LL, enables CL in a rapidly changing environment without forgetting [11, 9, 10]. It shares similarities with OCL but has additional constraints: (_i_) SLL limits the batch size to one datum per incremental step, while OCL requires \(|B_{t}|\gg 1\); (_ii_) it does not allow the AI agent to fine-tune its parameters during training or inference. Additionally, in SLL the input data stream can be temporally correlated in terms of class-instance and instance ordering. Detailed essential and desirable properties of SLL are discussed in Section I. To our knowledge, ExStream [9], REMIND [10], and BaSiL [11] are the three methods tackling the challenging SLL setting. However, it is important to note some key differences: ExStream [9] uses full-buffer replay, violating the subset-replay constraint; REMIND [10] stores a large number of past samples compared to other baselines (e.g., iCaRL [41] stores 10K past ImageNet [42, 43] samples, whereas REMIND stores 1M samples); BaSiL [11] focuses on Bayesian methods for SLL and relies entirely on pre-trained weights for visual features. It does not adapt convolutional layers to sequential data and only trains linear layers, potentially posing severe challenges with non-i.i.d. data. In contrast, our approach (VERSE) adheres to the SLL constraints, stores a limited number of past samples, replays only a subset of buffer samples, and trains both convolutional and fully connected (FC) layers for SLL.

Table I: Categorization of existing CL methods — LwF [3], EWC++ [12], MAS [4], SI [13], VCL [14], GEM [15], A-GEM [16], GDumb [8], TinyER [17], DER++ [18], ExStream [9], REMIND [10], CLS-ER [19] and VERSE (Ours) — by learning type, batch (B), online (O) or streaming (S), and by the CISLL constraints they satisfy.

In this paper, we introduce VERSE, adhering to CISLL constraints, enabling CL in challenging SLL settings.
We compare VERSE with the SLL frameworks REMIND [10] and ExStream [9], as well as with various OCL and IBL methods.

## III Proposed Approach

In this section, we introduce the proposed approach VERSE (Fig. 2), which trains a CNN architecture in the CISLL setup. The model, consisting of the parameters \(\Theta=\{\xi,\theta\}\), comprises two components: \((i)\) a non-plastic feature extractor \(G_{\xi}\) with parameters \(\xi\), and \((ii)\) a plastic neural network \(F_{\theta}\) with parameters \(\theta\). \(G_{\xi}\) includes the initial CNN layers, and \(F_{\theta}\) encompasses the final few layers, including the fully connected (FC) layers. The class label for a given input \(\mathbf{x}\) is predicted as \(y=F_{\theta}(G_{\xi}(\mathbf{x}))\). We focus on adapting the plastic network \(F_{\theta}(\cdot)\) while keeping the non-plastic parameters \(\xi\) frozen throughout. In each streaming incremental step of CISLL, data arrives sequentially, \(\mathscr{D}_{t}=(\mathbf{x}_{t},y_{t})\), one datum at a time, and the learner adapts without catastrophic forgetting [44, 45] by observing this datum only once. The following section provides brief details of the proposed model.

### _Virtual Gradient as Regularizer_

We denote by \(\mathscr{D}_{1},\mathscr{D}_{2},\ldots,\mathscr{D}_{T}\) the set of tasks, with \((\mathbf{z}_{t},y_{t})=(G_{\xi}(\mathbf{x}_{t}),y_{t})\sim\mathscr{D}_{t}\) and \(|\mathscr{D}_{t}|=1\), arriving one by one in each incremental step. During the training of the \(t^{th}\) task only the data \(\mathscr{D}_{t}\) is available; the previous tasks' data \(\mathscr{D}_{1},\mathscr{D}_{2},\ldots,\mathscr{D}_{t-1}\) are discarded, and we only keep a few samples in a small memory buffer \(\mathscr{M}\). Optimizing the network parameters with a single example is challenging; therefore, we select a subset of samples of size \(C\) from memory \(\mathscr{M}\), that is, \(\mathscr{D}_{v}^{t}\sim\mathscr{M}\) with \(|\mathscr{D}_{v}^{t}|=C\). We combine the subset \(\mathscr{D}_{v}^{t}\) with the new example \(\mathscr{D}_{t}\) to form a joint batch, \(\mathscr{D}_{v}^{t}\leftarrow\mathscr{D}_{t}\cup\mathscr{D}_{v}^{t}\), and compute the cross-entropy loss, as defined below:

\[\mathscr{L}_{v}^{t}=\mathbb{E}_{(\mathbf{z},y)\sim\mathscr{D}_{v}^{t}}[\mathscr{L}_{CE}(y,F_{\theta}(\mathbf{z}))] \tag{1}\]

Let \(\nabla_{\theta}\mathscr{L}_{v}^{t}(F_{\theta})\) be the gradient of the loss (Eq. 1) w.r.t. the plastic network parameters \(\theta\). With this gradient, we compute the updated _local parameters_ as follows:

\[\theta^{v}\leftarrow\theta-\alpha\nabla_{\theta}\mathscr{L}_{v}^{t}(F_{\theta}) \tag{2}\]

The optimization above is a virtual/local gradient update, as it does not alter the model parameters \(\theta\). However, \(\theta^{v}\) is optimized focusing on the novel sample and may not generalize well to the observed past samples, due to changes in the previously optimal network parameters, leading to forgetting. To address this, we perform a global optimization with rehearsal. For this, we choose two more sample subsets from memory, \(\mathscr{D}_{r}^{t},\mathscr{D}_{M}^{t}\sim\mathscr{M}\), each of size \(C\). Let \(H_{r}^{t}\gets F_{\Phi}(\mathscr{D}_{r}^{t})\) represent the logits obtained over the replay samples \(\mathscr{D}_{r}^{t}\), with \(F_{\Phi}(\cdot)\) denoting the semantic memory (see Sec. III-C).
Then, we compute the loss over the virtual parameters \(\theta^{v}\) using both subsets from the replay buffer, as the sum of a cross-entropy and a knowledge distillation loss [28], defined as:

\[\mathscr{L}^{t}=\mathbb{E}_{(\mathbf{z},y)\sim\mathscr{D}_{M}^{t}}[\mathscr{L}_{CE}(y,F_{\theta^{v}}(\mathbf{z}))]+\lambda\,\mathscr{L}_{MSE}(H_{r}^{t},F_{\theta^{v}}(\mathscr{D}_{r}^{t})) \tag{3}\]

Eq. 3 assesses the generalization loss over the rehearsal subsets using the virtual parameters optimized for the new streaming example. If the model exhibits forgetting, the virtual parameters will incur a high loss due to poor generalization; otherwise, the loss will be small. Let \(\nabla_{\theta^{v}}\mathscr{L}^{t}(F_{\theta^{v}})\) be the gradient of the loss in Eq. 3 w.r.t. the virtual model parameters \(\theta^{v}\). Then, we can compute the _global parameters_ of the plastic network \(\theta\) as follows:

\[\theta\leftarrow\theta-\beta\nabla_{\theta^{v}}\mathscr{L}^{t}(F_{\theta^{v}}) \tag{4}\]

Eq. 4 updates \(\theta\) using the gradient of the virtual parameters, which might appear counter-intuitive; however, the alternating competitive training between Eq. 2 and Eq. 4 is crucial. Below, we briefly discuss this training behavior. The updates in Eq. 2 and Eq. 4 occur alternately. Eq. 2 assesses how well \(\theta\) generalizes to new data, while Eq. 4 focuses on \(\theta^{v}\)'s generalization to past data. For a minimum loss in Eq. 1, \(\theta\) must adapt to new samples, which is more challenging than for \(\theta^{v}\), which is already optimized for the new samples. Conversely, \(\theta^{v}\) may struggle to generalize to past samples due to its emphasis on new data. Hence, both \(\theta\) and \(\theta^{v}\) need to generalize effectively to new and replayed samples, minimizing the losses in both Eq. 1 and Eq. 3. In this alternating learning, convergence occurs when these losses approach zero, implying mutual generalization and reduced forgetting.

Fig. 2: In VERSE, virtual-gradient regularization (VGR) enables CL by adapting to new sample(s) with a virtual model (\(\theta^{v}\)), which computes the final model (\(\theta\)) through rehearsal. Episodic memory (TEM) stores a few observed samples, while semantic memory (SEM) enforces consistency with a self-distillation loss, improving overall performance.

### _Tiny Episodic Memory (TEM)_

The model uses a fixed-size tiny episodic memory (TEM) that acts as short-term memory. In each incremental step it \((i)\) replays subsets of \(C\) samples uniformly selected from memory for continual learning, and \((ii)\) stores the new sample in memory. It employs Reservoir Sampling [46] and Class Balancing Random Sampling to maintain a fixed-size replay buffer. Reservoir Sampling selects a random buffer sample to be replaced by the new example, while Class Balancing Random Sampling picks a sample from the most populated class in the buffer to be replaced by the new one.

### _Semantic Memory (SEM)_

Semantic memory (SEM) retains long-term knowledge and combats forgetting through minimization of a self-distillation loss [28, 47, 48, 29, 49] (Eq. 3), aligning the current model's decision boundary with past memories. SEM, based on a DNN and initialized with the working model parameters, absorbs knowledge from the working network \(\theta\) in incremental steps. Inspired by mean-teacher approaches [21, 22, 23, 24], SEM is updated stochastically via an exponential moving average (EMA) rather than at every iteration. Given a value \(u\) sampled from a uniform distribution (\(u\sim\mathcal{U}(0,1)\)) and an acceptance probability \(r\), the update process for the SEM parameters \(\Phi\), given the working model parameters \(\theta\), is defined as follows:

\[\Phi=\begin{cases}\gamma\,\Phi+(1-\gamma)\,\theta&,u<r\\ \Phi&,\text{otherwise}\end{cases} \tag{5}\]

The acceptance probability is a hyper-parameter that regulates the frequency of SEM updates. A lower (higher) acceptance probability means less (more) frequent updates, retaining more (less) information from the remote model. This update resembles the mammalian brain, with information initially stored in short-term memory before transitioning to long-term memory. Algorithm 1 illustrates the different stages of the proposed model.

```
Require: Initialize \(\Phi=\theta\)
Require: Hyperparameters \(\lambda,r,\alpha,\beta,\gamma\); Memory \(\mathcal{M}\)
 1: for \(t\in 1,\ldots,T,\ldots\) do
 2:   \(\{(z_{t},y_{t})\}=\{(G_{\xi}(x_{t}),y_{t})\}\sim\mathcal{D}_{t}\)   ▹ \(|\mathcal{D}_{t}|=1\)
 3:   Select samples from \(\mathcal{M}\): \(\mathcal{D}_{v}^{t}\sim\mathcal{M}\)   ▹ \(|\mathcal{D}_{v}^{t}|=C\)
 4:   \(\mathcal{D}_{v}^{t}\leftarrow\mathcal{D}_{t}\cup\mathcal{D}_{v}^{t}\)
 5:   Compute \(\mathcal{L}_{v}^{t}\) using Eq. 1 and evaluate \(\nabla_{\theta}\mathcal{L}_{v}^{t}(F_{\theta})\)
 6:   \(\theta^{v}=\theta-\alpha\nabla_{\theta}\mathcal{L}_{v}^{t}(F_{\theta})\)   ▹ Virtual gradient update
 7:   Select samples from \(\mathcal{M}\): \(\mathcal{D}_{r}^{t},\mathcal{D}_{M}^{t}\sim\mathcal{M}\)   ▹ \(|\mathcal{D}_{r}^{t}|=|\mathcal{D}_{M}^{t}|=C\)
 8:   \(H_{r}^{t}\leftarrow F_{\Phi}(\mathcal{D}_{r}^{t})\)
 9:   Compute \(\mathcal{L}^{t}\) using Eq. 3 and evaluate \(\nabla_{\theta^{v}}\mathcal{L}^{t}(F_{\theta^{v}})\)
10:   \(\theta=\theta-\beta\nabla_{\theta^{v}}\mathcal{L}^{t}(F_{\theta^{v}})\)   ▹ Global update
11:   Sample \(u\sim\mathcal{U}(0,1)\)
12:   if \(u<r\) then
13:     \(\Phi\leftarrow\gamma\,\Phi+(1-\gamma)\,\theta\)
14:   UpdateMemory(\(\mathcal{M},\mathcal{D}_{t},t\))   ▹ Add sample to \(\mathcal{M}\)
15: return \(\theta,\Phi\)
```
**Algorithm 1** VERSE
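For concreteness, the update step of Algorithm 1 can be condensed into a few lines. The following PyTorch sketch is illustrative only: it assumes a plain linear plastic head over pre-extracted features \(z=G_{\xi}(x)\) and a simple reservoir buffer, and all names (`verse_step`, `update_memory`) and the toy stream are ours, not the authors'; the hyper-parameter values follow the implementation details given in Sec. IV.

```python
# Minimal sketch of one VERSE step (Eqs. 1-5) with a linear plastic head.
import random
import torch
import torch.nn.functional as F

torch.manual_seed(0); random.seed(0)
D, NUM_CLASSES, C, MEM_CAP = 32, 10, 16, 64
alpha, beta, lam, gamma, r = 5e-3, 1e-2, 0.3, 0.9, 0.4

theta = torch.zeros(NUM_CLASSES, D, requires_grad=True)  # plastic head F_theta
phi = theta.detach().clone()                             # semantic memory F_Phi
memory = []                                              # tiny episodic memory

def logits(w, z):                        # F(z) as a linear classifier
    return z @ w.t()

def replay(k):                           # uniform subset from TEM
    batch = random.sample(memory, min(k, len(memory)))
    return (torch.stack([z for z, _ in batch]),
            torch.tensor([y for _, y in batch]))

def update_memory(item, t):              # reservoir sampling [46]
    if len(memory) < MEM_CAP:
        memory.append(item)
    else:
        j = random.randint(0, t)
        if j < MEM_CAP:
            memory[j] = item

def verse_step(z_new, y_new, t):
    global phi
    if memory:
        # Virtual (local) update, Eqs. 1-2: joint batch of new datum + replay.
        z_r, y_r = replay(C)
        z_v = torch.cat([z_new[None], z_r]); y_v = torch.cat([y_new[None], y_r])
        loss_v = F.cross_entropy(logits(theta, z_v), y_v)
        (g,) = torch.autograd.grad(loss_v, theta)
        theta_v = (theta - alpha * g).detach().requires_grad_(True)
        # Global update, Eqs. 3-4: CE on one subset, distillation on another.
        z_m, y_m = replay(C)
        z_d, _ = replay(C)
        with torch.no_grad():
            h = logits(phi, z_d)                         # SEM logits H_r^t
        loss = (F.cross_entropy(logits(theta_v, z_m), y_m)
                + lam * F.mse_loss(logits(theta_v, z_d), h))
        (g_v,) = torch.autograd.grad(loss, theta_v)
        with torch.no_grad():
            theta -= beta * g_v                          # apply to working model
        # Stochastic EMA consolidation of SEM (Eq. 5).
        if random.random() < r:
            phi = gamma * phi + (1.0 - gamma) * theta.detach()
    update_memory((z_new, int(y_new)), t)

# Toy temporally correlated stream: one (feature, label) pair per step.
for t in range(200):
    y = torch.tensor((t // 20) % NUM_CLASSES)            # long runs of one class
    z = torch.randn(D); z[int(y)] += 2.0                 # class-dependent feature
    verse_step(z, y, t)
print("buffer size:", len(memory))
```

Note the design choice this makes explicit: the gradient applied to \(\theta\) in the global step is taken w.r.t. the detached virtual parameters \(\theta^{v}\) (a first-order update, as in Eq. 4), so no second-order backpropagation through the inner step is needed.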
## IV Experiments

### _Datasets, Data Orderings and Metrics_

**Datasets.** We evaluate our method (VERSE) through extensive experiments on three temporally coherent datasets: iCub1.0 [50], iCub28 [51], and CoRe50 [20]. iCub1.0 involves object recognition from video frame sequences, with each frame containing a single object instance, while iCub28 is similar but spans four days. CoRe50, like iCub1.0/28, includes temporally ordered images, divided into 11 sessions with varying backgrounds and lighting.

**Data Orderings.** To test VERSE's robustness in a challenging SLL setup, we assess its streaming learning capability using four data-ordering schemes, as in [10, 9, 11]: (i) streaming i.i.d., (ii) streaming class-i.i.d., (iii) streaming instance, and (iv) streaming class-instance ordering.

**Metrics.** To assess the learner's performance in a CISLL setup, we employ the \(\mathbf{\Omega}_{\text{all}}\) metric, following the approach in [7, 9, 10, 11].
This metric quantifies CL performance normalized against an _Offline_ baseline:

\[\mathbf{\Omega}_{\text{all}}=\frac{1}{T}\sum_{t=1}^{T}\frac{\mathbf{\alpha}_{t}}{\mathbf{\alpha}_{\text{Offline},t}} \tag{6}\]

where \((i)\) \(T\) is the total number of testing events, \((ii)\) \(\mathbf{\alpha}_{t}\) is the streaming learner's unnormalized performance at time \(t\), and \((iii)\) \(\mathbf{\alpha}_{\text{Offline},t}\) is the unnormalized performance of the Offline baseline at time \(t\).

### _Baselines and Compared Methods_

VERSE adheres to the challenging CISLL setting. We compare it with ExStream [9] and REMIND [10]. We also evaluate various IBL and OCL approaches (EWC++, MAS, AGEM, GDumb, TinyER, DER/DER++, CLS-ER, FIFO), as well as two additional baselines: \((i)\) offline training with full dataset access (Offline/Upper Bound), and \((ii)\) fine-tuning with one example at a time and no CL strategy (Fine-tuning/Lower Bound). All comparisons are performed under the SLL setup, except for _GDumb_, which fine-tunes with replay-buffer samples, giving it an unfair advantage.

\begin{table} \begin{tabular}{c|c c c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c|}{**iid**} & \multicolumn{3}{c|}{**Class-iid**} & \multicolumn{3}{c|}{**Instance**} & \multicolumn{3}{c}{**Class-instance**} \\ \cline{2-13} & **iCub1.0** & **iCub28** & **CoRe50** & **iCub1.0** & **iCub28** & **CoRe50** & **iCub1.0** & **iCub28** & **CoRe50** & **iCub1.0** & **iCub28** & **CoRe50** \\ \hline Fine-tune & 0.9550 & 0.8432 & 0.9681 & 0.3902 & 0.4265 & 0.4360 & 0.1981 & 0.2483 & 0.2468 & 0.3508 & 0.4810 & 0.3400 \\ EWC++ [2, 12] & - & - & - & 0.3747 & 0.4218 & 0.4307 & - & - & - & 0.3507 & 0.4805 & 0.3401 \\ MAS [4] & - & - & - & 0.3758 & 0.4334 & 0.4333 & - & - & - & 0.3509 & 0.4807 & 0.3401 \\ AGEM [16] & - & - & - & 0.4626 & 0.7507 & 0.5633 & - & - & - & 0.3510 & 0.4811 & 0.3399 \\ FIFO & 0.9269 & 0.9774 & 0.9943 & 0.4971 & 0.6550 & 0.4763 & 0.3257 & 0.2807 & 0.1481 & 0.3698 & 0.4811 & 0.3399 \\ _GDumb_ [8] & 0.9269 & 0.7076 & 0.9502 & 0.9683 & 0.8293 & 0.9767 & 0.6240 & 0.4704 & 0.6521 & 0.7734 & 0.6481 & 0.6628 \\ TinyER [17] & 0.9852 & 0.9752 & 1.0064 & 0.9766 & 0.8584 & 0.9723 & 0.9324 & 0.7959 & 0.9315 & 0.8825 & 0.7074 & 0.8525 \\ DER [18] & 0.5976 & 0.8625 & 0.9897 & 0.8727 & 0.8402 & 0.9734 & 0.7972 & 0.8397 & 0.8970 & 0.8293 & 0.8286 & 0.9630 \\ DER++ [18] & 0.9004 & 0.9020 & 0.9985 & 0.9398 & 0.8746 & 0.9786 & 0.8785 & 0.8547 & 0.9933 & 0.9125 & 0.8484 & 0.9696 \\ CLS-ER [19] & 0.9573 & 0.1837 & 0.1107 & 0.5010 & 0.6664 & 0.3182 & 0.6854 & 0.1837 & 0.1107 & 0.5007 & 0.5858 & 0.2580 \\ ExStream [9] & 0.9114 & 0.8053 & 0.9286 & 0.9035 & 0.8375 & 0.8884 & 0.8713 & 0.7389 & 0.8530 & 0.8806 & 0.8339 & 0.9091 \\ REMIND [10] & 0.9666 & 0.9483 & 0.9988 & 0.9544 & 0.8197 & 0.9507 & 0.9102 & 0.7764 & 0.8993 & 0.8453 & 0.6784 & 0.8259 \\ **Ours** & **1.0087** & **1.0045** & **1.0202** & **1.0069** & **0.8874** & **0.9918** & **0.9613** & **0.8555** & **0.9945** & **0.9985** & **0.8840** & **0.9851** \\ \hline Offline (\(\mathbf{\Omega}_{\text{all}}\)) & 1.0000 & 1.0000 & 1.0000 & 1.0000 & 1.0000 & 1.0000 & 1.0000 & 1.0000 & 1.0000 & 1.0000 & 1.0000 & 1.0000 \\ Offline (accuracy) & 0.8046 & 0.8726 & 0.9038 & 0.8785 & 0.9266 & 0.9268 & 0.8046 & 0.8726 & 0.9038 & 0.8785 & 0.9266 & 0.9268 \\ \hline \hline \end{tabular} \end{table} Table II: \(\mathbf{\Omega}_{\text{all}}\) of VERSE (Ours) and the baselines for the four data orderings on iCub1.0, iCub28 and CoRe50. The last two rows give the Offline baseline's normalized score and its absolute accuracy.
### _Implementation Details_

In all experiments, baselines are trained with one sample at a time using the same network architecture. We employ ResNet-18 [52] pretrained on ImageNet-1K [42, 43], available in the PyTorch [53] TorchVision package, using its first 15 convolutional (conv) layers and 3 downsampling layers as the feature extractor \(G\). The remaining 2 conv layers and 1 fully connected (FC) layer constitute the plastic network \(F\). For ExStream [9], all 17 conv and 3 downsampling layers are utilized for feature extraction \(G\), and the final FC layer serves as the plastic network \(F\). Feature embeddings are stored in memory for all baselines, including VERSE. Replay buffer capacities are specified in Table III. We employ reservoir sampling for class-instance and instance ordering, and class-balancing random sampling for class-iid and iid ordering. Experience replay and self-distillation consistently use \(C=16\) samples across all baselines. Hyperparameters are set as follows: \(\alpha=0.005,\beta=0.01,\lambda=0.3\), and \(\gamma=0.9\). The values of \(r\) are set as: \((i)\) \(r=0.4\) for iCub1.0, \((ii)\) \(r=0.1\) for iCub28, and \((iii)\) \(r=0.05\) for the CoRe50 dataset. Each experiment is repeated 10 times with different data permutations, and the average accuracy is reported.

### _Results_

Table II presents VERSE's performance in various experimental setups with different data orderings and datasets. We conducted 10 repetitions of each experiment, reporting average accuracy. Notably, VERSE consistently surpasses the baselines by a significant margin. It demonstrates robustness to different data-ordering schemes, which are known to induce catastrophic forgetting. In contrast, IBL methods like EWC++ [2, 12] and MAS [4] experience severe forgetting. Even _GDumb_ [8], which fine-tunes network parameters with buffer samples before each inference, fails to outperform VERSE. iCub1.0/28 and CoRe50 are temporally coherent datasets, offering a more realistic and challenging evaluation scenario. Class-instance and instance ordering require the agent to learn from temporally ordered video sequences, one frame at a time. Table II shows that VERSE achieves notable improvements: \((i)\) up to 8.6% and 2.89% on iCub1.0 for class-instance and instance ordering, \((ii)\) 3.56% on iCub28 for class-instance ordering, and \((iii)\) 1.55% on CoRe50 for class-instance ordering.

Fig. 3: Plots of \(\mathbf{\alpha}_{t}\) as a function of the streaming learning model and data ordering. VERSE outperforms other SLL models in both streaming class-iid (top row) and streaming class-instance (bottom row) orderings across datasets.

Fig. 4: Performance (\(\mathbf{\mu}_{\text{all}}\)) comparison between VERSE (Ours) and the other baselines on ImageNet100.

Fig. 5: Performance (\(\mathbf{\mu}_{\text{all}}\)) comparison between VERSE and Lifelong MAML [54] on iCub1.0.
\(\mathbf{\mu}_{\text{all}}\) represents the mean-absolute accuracy with: \[\mathbf{\mu}_{\text{all}}=\frac{1}{T}\sum_{t=1}^{T}\mathbf{\alpha}_{t} \tag{7}\] where \((i)\)\(T\) is the total number of testing events and \((ii)\)\(\mathbf{\alpha}_{t}\) is the accuracy of the streaming learner at time \(t\). Finally, we conduct a performance comparison between VERSE and Lifelong MAML [54], a continual learning variant of MAML [55], on the iCub1.0 dataset. Fig. 5 illustrates the performance comparison using \(\mathbf{\mu}_{\text{all}}\) metric with VERSE consistently outperforming LifeLong MAML across all data-orderings. ## V Ablations We perform extensive ablations to validate the importance of the various components of VERSE. **Choice Of Buffer Capacity.** Fig. 6 (left) shows the impact of different buffer capacities on iCub1.0. Increased buffer capacity leads to improved model performance. **Choice Of Hyper-parameter \((\lambda)\).** Fig. 6 (middle) shows the impact of changing the self-distillation hyper-parameter \((\lambda)\) on iCub1.0. The best performance is consistently achieved across all data orderings with \(\lambda=0.3\). **Significance Of Self-Distillation Loss.** Fig. 6 (middle) depicts the model's performance with \(\lambda=0.0\), indicating no self-distillation. While the best performance is achieved with \(\lambda=0.3\), self-distillation alone does not significantly improve performance. **Significance Of Acceptance-Rate \((r)\).** Fig. 6 (right) illustrates the impact of changing the acceptance-rate \((r)\) on iCub1.0. The best performance is achieved with \(r=0.4\). However, increasing \(r\) to \(0.50\) leads to performance degradation. For instance ordering, the model tends to perform best with \(r=0.0\). **Significance of Exponential Moving Average (EMA).** Fig. 7 highlights the importance of SEM \((\Phi)\) in the model's performance. Without using SEM and relying solely on the working model \((\theta)\) for computing logits in self-distillation loss (Eq. 3), the model's performance degrades. Additionally, for temporally coherent orderings (instance and class instance orderings), not using EMA to update SEM severely degrades VERSE's performance. **Significance of Buffer Replacement Policies.** Fig. 8 shows the model's performance with different buffer replacement policies used for TEM. For temporally ordered data (instance and class instance ordering), reservoir sampling yields the best performance. However, for i.i.d and class i.i.d ordering, class balancing random sampling or balanced sampling achieves the best results. ## VI Conclusion We address the challenging problem of streaming lifelong learning, where the learner is given only one sample at a time during training, the learned model is required to have anytime inference capability. Our replay-based virtual-gradient-regularization with global and virtual/local parameters generalization to both previous and novel task samples. Tiny episodic memory for rehearsal and semantic memory help align the decision boundary with past memories through self-distillation-loss. Extensive experiments and ablations on various datasets and data orderings demonstrate our approach's efficacy. Fig. 8: Plots of \(\mathbf{\mu}_{\text{all}}\) as a function of buffer replacement policies on iCub1.0. Fig. 
6: Plots of \(\mathbf{\Omega}_{\text{all}}\) as as function of \((i)\) replay buffer capacity \((|\mathcal{M}|)\), \((ii)\) knowledge-distillation hyper-parameter \((\lambda)\), and \((iii)\) acceptance-rate hyper-parameter \((r)\) on iCub1.0. Fig. 7: Plots of \(\mathbf{\mu}_{\text{all}}\) as a function of EMA \((\Phi)\) used to compute the semantic memory (Eq. 5) on CORe50.
2309.03372
Neutrinos and gamma-rays from Galaxy Clusters constrained by the upper limits of IceCube
Clusters of galaxies possess the capability to accelerate cosmic rays (CRs) to very high energy up to $\sim10^{18}$~eV due to their large size and magnetic field strength which favor CR confinement for cosmological times. During their confinement, they can produce neutrinos and $\gamma-$rays out of interactions with the background gas and photon fields. In recent work, \cite{hussain2021high, hussain2023diffuse} have conducted three-dimensional cosmological magnetohydrodynamical (MHD) simulations of the turbulent intracluster medium (ICM) combined with multi-dimensional Monte Carlo simulations of CR propagation for redshifts ranging from $z \sim 5$ to $z = 0$ to study the multi-messenger emission from these sources. They found that when CRs with a spectral index in the range $1.5 - 2.5$ and cutoff energy $E_\mathrm{max} = 10^{16} - 10^{17}$~eV are injected into the system, they make significant contributions to the diffuse background emission of both neutrinos and gamma-rays. In this work, we have revisited this model and undertaken further constraints on the parametric space. This was achieved by incorporating the recently established upper limits on neutrino emission from galaxy clusters, as obtained by the IceCube experiment. We find that for CRs injected with spectral indices in the range $2.0 - 2.5$, cutoff energy $E_\mathrm{max} = 10^{16} - 10^{17}$~eV, and power corresponding to $(0.1-1)\%$ of the cluster luminosity, our neutrino flux aligns with the upper limits estimated by IceCube. Additionally, the resulting contribution from clusters to the diffuse $\gamma$-ray background (DGRB) remains significant with values of the order of $ \sim 10^{-5}\, \mathrm{MeV} \, \mathrm{cm}^{-2} \,\mathrm{s}^{-1} \, \mathrm{sr}^{-1}$ at energies above $500$ GeV.
Saqib Hussain, Elisabete M. de Gouveia Dal Pino, Giulia Pagliaroli
2023-09-06T21:47:12Z
http://arxiv.org/abs/2309.03372v1
# Neutrinos and gamma-rays from Galaxy Clusters constrained by the upper limits of IceCube

###### Abstract

Clusters of galaxies possess the capability to accelerate cosmic rays (CRs) to very high energies, up to \(\sim 10^{18}\) eV, due to their large size and magnetic field strength, which favor CR confinement for cosmological times. During their confinement, CRs can produce neutrinos and \(\gamma\)-rays out of interactions with the background gas and photon fields. In recent work, Hussain et al. (2021, 2023) have conducted three-dimensional cosmological magnetohydrodynamical (MHD) simulations of the turbulent intracluster medium (ICM), combined with multi-dimensional Monte Carlo simulations of CR propagation for redshifts ranging from \(z\sim 5\) to \(z=0\), to study the multi-messenger emission from these sources. They found that when CRs with a spectral index in the range \(1.5-2.5\) and cutoff energy \(E_{\rm max}=10^{16}-10^{17}\) eV are injected into the system, they make significant contributions to the diffuse background emission of both neutrinos and gamma-rays. In this work, we have revisited this model and undertaken further constraints on the parametric space. This was achieved by incorporating the recently established upper limits on neutrino emission from galaxy clusters, as obtained by the IceCube experiment. We find that for CRs injected with spectral indices in the range \(2.0-2.5\), cutoff energy \(E_{\rm max}=10^{16}-10^{17}\) eV, and power corresponding to \((0.1-1)\%\) of the cluster luminosity, our neutrino flux aligns with the upper limits estimated by IceCube. Additionally, the resulting contribution from clusters to the diffuse \(\gamma\)-ray background (DGRB) remains significant, with values of the order of \(\sim 10^{-5}\,\rm MeV\,cm^{-2}\,s^{-1}\,sr^{-1}\) at energies above 500 GeV.

Keywords: Multi-messenger Astronomy; Galaxy Clusters

Saqib Hussain, Elisabete M. de Gouveia Dal Pino, Giulia Pagliaroli

## 1 Introduction

The diffuse neutrino (Aartsen et al., 2015) and gamma-ray (Ackermann et al., 2015) backgrounds provide a unique perspective on the high-energy Universe, but their origin is still debated. It is closely related to that of ultra-high-energy cosmic rays (UHECRs) (Abu-Zayyad et al., 2013; Aab et al., 2017). The most plausible scenario is that the neutrinos and gamma-rays are produced by interactions of UHECRs with the background gas and photon fields in astrophysical environments. Several sources embedded in galaxy clusters arise as candidates for the production of very high energy CRs, including active galactic nuclei and starburst galaxies. The turbulent intracluster medium (ICM) is also believed to be particularly suitable for accelerating and confining CRs up to energies \(\sim 10^{18}\) eV (Inoue et al., 2007; Wiener and Zweibel, 2019; Alves Batista et al., 2019). Recently, the IceCube collaboration (Abbasi et al., 2022) performed a stacking analysis of 1094 galaxy clusters with masses \(\gtrsim 10^{14}\,M_{\odot}\) and redshifts up to \(z\lesssim 1.0\), taken from the 2015 PLANCK survey (Ade et al., 2016). To complete the catalogue, they calculated the distribution of galaxy clusters by drawing \(\sim 10^{5}\) samples in the mass range \(10^{14}\lesssim M/M_{\odot}\lesssim 10^{15}\), extending to redshift \(z\lesssim 2.0\), using the Tinker-2010 halo mass function (Tinker et al., 2010).
Based on these samples, they estimated upper limits according to which the contribution from clusters to the observed diffuse neutrino background is at most \(\sim 4.6\%\) at 100 TeV for the most realistic scenario (see Raghunathan et al., 2022, for details). Several recent studies (see e.g., Murase et al., 2013; Fang and Olinto, 2016; Fang and Murase, 2018; Hussain et al., 2021, 2023) have predicted that clusters of galaxies can contribute a sizeable fraction of the diffuse neutrino and \(\gamma-\)ray backgrounds. In particular, Hussain et al. (2021, 2023) used the most detailed numerical approach to date, combining three-dimensional (3D) cosmological magneto-hydrodynamic (MHD) simulations with Monte Carlo simulations of CR propagation and cascading, and predicted that clusters can potentially contribute a fairly large fraction of the diffuse neutrino and gamma-ray backgrounds observed by IceCube (Aartsen et al., 2015) and Fermi-LAT (Ackermann et al., 2015), respectively. However, the aforementioned upper limit reported by IceCube (Abbasi et al., 2022) excludes part of the parametric space they considered in their analysis of the neutrino flux.

In this work, we constrain the parametric space considered in Hussain et al. (2021, 2023) employing the latest upper limits reported by IceCube for the neutrino flux of galaxy clusters, and derive new limits on the diffuse \(\gamma-\)ray flux. We find that these new constraints eliminate the harder CR spectral indices \(\alpha\lesssim 2.0\), but still predict a substantial contribution from clusters to the very high energy range of the \(\gamma-\)ray flux. In section 2 we describe the methodology for the calculation of the fluxes of neutrinos and \(\gamma-\)rays in galaxy clusters, in section 3 we present the results, and section 4 is dedicated to the discussion of our results and the conclusions.

## 2 Methodology

We have employed here the same set of simulations as in Hussain et al. (2021, 2023). The ICM was modeled with 3D-MHD cosmological simulations (Dolag et al., 2005) up to redshift \(z\lesssim 5.0\), taking into account the non-uniform distribution and evolution of the magnetic field, gas density, and temperature. CRs were propagated in the ICM and intergalactic medium (IGM) to produce neutrinos and \(\gamma-\)rays, employing the Monte Carlo code CRPropa (Batista et al., 2016). To calculate the neutrino flux, CRs were injected in the energy range \(10^{14}\leq E/\mathrm{eV}\leq 10^{19}\) because we are interested in neutrino energies above TeV. On the other hand, for \(\gamma-\)rays, since we are interested in energies above 10 GeV, CRs were injected with energies \(10^{11}\leq E/\mathrm{eV}\leq 10^{19}\). However, in order to normalize the total energy to the luminosity of the clusters, the whole energy range starting from 1 GeV was considered. Also, in order to account for different sources of acceleration, CRs were injected at different locations within the clusters: in the center, at a radius \(\sim 300\) kpc, and in the outskirts at \(\sim 1\) Mpc. As expected, the dominant contribution to the \(\gamma-\)ray and neutrino fluxes comes from CR sources located in the center (see Hussain et al., 2021, 2023, for details). In these previous studies, it was assumed that 1% of the cluster luminosity (\(L_{C}\)) goes into CRs. The simulations have two steps. In the first step, CRs are propagated inside the clusters, and \(\gamma-\)rays and neutrinos are collected at their edges.
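As a concrete illustration of this normalization, the sketch below fixes the amplitude of an injection spectrum \(dN/dE\propto E^{-\alpha}\,e^{-E/E_{\rm max}}\) so that the injected power, integrated from 1 GeV as described above, equals a chosen fraction of the cluster luminosity. This is a minimal stand-alone sketch, not part of the CRPropa pipeline; the luminosity value in the example is hypothetical.

```python
import numpy as np
from scipy.integrate import trapezoid

def injection_spectrum(E, alpha, E_max):
    """Unnormalized CR injection spectrum dN/dE ~ E^(-alpha) exp(-E/E_max)."""
    return E**(-alpha) * np.exp(-E / E_max)

def normalization(L_cr, alpha, E_max, E_min=1e9, E_hi=1e19, n=4096):
    """Constant A such that the injected power, the integral of A E dN/dE
    from E_min = 1 GeV (the normalization range used in the text), equals
    the CR luminosity L_cr = f * L_C. Energies in eV; L_cr in eV/s."""
    lnE = np.linspace(np.log(E_min), np.log(E_hi), n)
    E = np.exp(lnE)
    # dE = E dlnE, hence the extra factor of E in the log-space integrand
    power = trapezoid(E * injection_spectrum(E, alpha, E_max) * E, lnE)
    return L_cr / power

# Example: 1% of a hypothetical cluster luminosity (in eV/s) into CRs
A = normalization(L_cr=0.01 * 1e58, alpha=2.3, E_max=1e17)
```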
All relevant CR interactions were considered during their propagation inside clusters, namely, proton-proton (pp) interactions, photopion production, Bethe-Heitler pair production, pair production, and inverse Compton scattering (ICS). Energy losses due to the expansion of the universe and the synchrotron emission were also considered, but the photons produced in these processes are below the energy range of interest of this work. In the second step, the \(\gamma-\)rays collected at the boundary of the clusters were propagated through the intergalactic medium (IGM) across the redshift interval. During this propagation, the electromagnetic cascade processes, including (single, double, and triplet) pair production and ICS, were also accounted for. The effect of the IGM magnetic field was neglected in this propagation step since it does not significantly affect the \(\gamma-\)ray flux above 10 GeV (Hussain et al., 2023).

As in Hussain et al. (2021) and Hussain et al. (2023), the integrated neutrino and \(\gamma-\)ray fluxes (\(\Phi\)) from all clusters in the mass range \(10^{12}\lesssim M/M_{\odot}\lesssim 2\times 10^{15}\) and \(z\leq 5.0\) are obtained from: \[E_{\mathrm{obs}}^{2}\Phi(E_{\mathrm{obs}})=\int\limits_{z_{\mathrm{min}}}^{z_{\mathrm{max}}}dz\int\limits_{M_{\mathrm{min}}}^{M_{\mathrm{max}}}dM\frac{dN}{dM}E^{2}\frac{d\dot{N}(E/(1+z),M,z)}{dE}\times g(E_{\mathrm{obs}},E,z)\left(\frac{\psi_{\mathrm{ev}}(z)f(M)}{4\pi d_{L}^{2}(z)}\right) \tag{1}\] where \(dN/dM\) is the number of clusters per mass interval calculated from the MHD simulation, \(g(E_{\mathrm{obs}},E,z)\) accounts for the interactions of gamma rays in the ICM and the IGM, \(\psi_{\mathrm{ev}}(z)\) is a function that describes the cosmological evolution of the emissivity of the CR sources (see e.g., Alves Batista et al., 2019), the quantity \(E^{2}\,d\dot{N}/dE\) denotes the neutrino or \(\gamma-\)ray power spectrum obtained from the simulations, \(d_{L}\) is the luminosity distance, and \(f(M)\) is a factor that accounts for stellar and AGN feedback (Planelles et al., 2014). For detailed calculations, we refer to Hussain et al. (2023) and Hussain et al. (2021).
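As a schematic of how Eq. (1) can be evaluated numerically, the sketch below performs the nested mass and redshift integrals with trapezoid rules. All ingredient functions are hypothetical placeholders for quantities the text extracts from the simulations, and the redshifting of the spectrum argument is only indicative of the convention in Eq. (1).

```python
import numpy as np
from scipy.integrate import trapezoid

def diffuse_flux(E_obs, z_grid, M_grid, dN_dM, power_spec, g_att, psi_ev, f_M, d_L):
    """Nested trapezoid evaluation of Eq. (1). dN_dM, power_spec (= E^2 dNdot/dE),
    the attenuation g_att, the source evolution psi_ev, the feedback factor f_M,
    and the luminosity distance d_L are all placeholder callables standing in
    for quantities extracted from the simulations."""
    inner = np.empty_like(z_grid)
    for i, z in enumerate(z_grid):
        E = E_obs * (1.0 + z)  # emitted energy; the spectrum in Eq. (1) is
                               # sampled at E/(1+z) = E_obs (schematic convention)
        integrand = (dN_dM(M_grid, z) * power_spec(E, M_grid, z)
                     * g_att(E_obs, E, z) * psi_ev(z) * f_M(M_grid)
                     / (4.0 * np.pi * d_L(z)**2))
        inner[i] = trapezoid(integrand, M_grid)
    return trapezoid(inner, z_grid)  # = E_obs^2 Phi(E_obs)
```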
## 3 Results

In Fig. 1, we show the flux of neutrinos from clusters of galaxies, considering that their contribution is constrained by the upper limits recently reported by IceCube (Abbasi et al., 2022). As stressed above, the previous works Hussain et al. (2021, 2023) had assumed that 1% of each cluster luminosity (\(L_{C}\)) goes into CRs, and considered a range for the CR spectral index \(1.5\leq\alpha\leq 2.5\) and maximum energy \(10^{16}\leq E_{\rm max}/{\rm eV}\leq 5\times 10^{17}\). Here, we find that the CR parameters most suitable to fulfill the new IceCube limits are the following: \(2.0\leq\alpha\leq 2.5\), \(E_{\rm max}=10^{16}-10^{17}\) eV, and luminosities in the range \((0.1-1.0)\%\,L_{C}\). The figure shows three bands, all constrained by this interval of \(\alpha\) values, but with three different luminosities, \(1\%L_{C}\), \(0.5\%L_{C}\), and \(0.1\%L_{C}\), from light to dark blue, respectively. Also, while the light-blue and blue bands have \(E_{\rm max}=10^{17}\) eV, the dark-blue band shows the flux for \(E_{\rm max}\) in the range \(10^{16}-10^{17}\) eV. In each of these bands, the larger the value of the index \(\alpha\), the smaller the flux. We see that the band with \(0.1\%\)\(L_{C}\) is the most constrained one by the IceCube limits. In particular, for \(\alpha\geq 2.3\), this band falls entirely below the IceCube limits.

Also, decreasing the value of \(E_{\rm max}\) results in a larger flux at smaller neutrino energies. This explains why the light-blue and blue bands produce fluxes around or below the IceCube limits only for \(E_{\rm max}\simeq 10^{17}\) eV. For \(E_{\rm max}\simeq 10^{16}\) eV, the peak of the flux in the figure increases by almost an order of magnitude at neutrino energies \(\sim 10^{13}\) eV.

Figure 1: Neutrino flux from the entire population of clusters up to redshift \(z\leq 5.0\). From top to bottom, the blue bands correspond to \(1\%\), \(0.5\%\), and \(0.1\%\) of the cluster luminosity (\(L_{C}\)), respectively. Light-blue and blue bands represent the flux for \(2.0\leq\alpha\leq 2.5\) and \(E_{\rm max}=10^{17}\) eV. The dark-blue band shows the flux for the same spectral indices, but \(E_{\rm max}\) ranging from \(10^{16}\) to \(10^{17}\) eV. The IceCube upper limits for clusters (Abbasi et al., 2022) as well as the diffuse neutrino background (Aartsen et al., 2015, 2015) are also depicted.

Fig. 2 summarizes our results, showing both the \(\gamma-\)ray and the neutrino fluxes for the same parameters of Fig. 1. Besides the IceCube upper limits for clusters and the diffuse neutrino background, it also depicts the diffuse \(\gamma-\)ray background (DGRB) data from Fermi-LAT (Ackermann et al., 2015) and the upper limits from HAWC (Albert et al., 2022). The \(\gamma-\)ray flux is given by the light-green, dark-green and olive-green bands, which are the counterparts of the light-blue, blue and dark-blue neutrino bands, respectively. We see that the olive-green band, which is the most constrained by the upper limits of IceCube, falls below the Fermi data for \(\gamma-\)rays. In Figure 3 we compare the results obtained in Fig. 2 with the earlier results of Hussain et al. (2021) and Hussain et al. (2023). We note that the parametric space considered in these works cannot be excluded entirely by the upper limits of IceCube (Abbasi et al., 2022).

In Fig. 4, we show the constrained \(\gamma-\)ray flux obtained in Fig. 2 for the entire population of clusters compared with sensitivity curves of different experiments. It includes the sensitivity curves for point-like sources from the High Altitude Water Cherenkov Observatory (HAWC) (Abeysekara et al., 2013), the Large High Altitude Air Shower Observatory (LHAASO) (Lhaaso Collaboration et al., 2016), and the forthcoming Cherenkov Telescope Array (CTA) (CTA Consortium et al., 2018), as well as the upper limits for the DGRB from the HAWC (Albert et al., 2022) and CASA-MIA (Chantell et al., 1997) experiments (see also Hussain et al., 2023). Clearly, these observatories can potentially observe very high energy \(\gamma-\)rays from clusters.

## 4 Discussion and Conclusions

We have computed the flux of neutrinos and \(\gamma-\)rays from the entire population of galaxy clusters using the most detailed numerical simulations to date, considering 3D-MHD cosmological simulations of galaxy clusters up to redshift \(z\lesssim 5.0\), as in Hussain et al. (2021, 2023). According to these authors, clusters can contribute up to 100% of the diffuse neutrino background, depending on the parameters adopted for the CR spectrum. However, the IceCube collaboration (Abbasi et al., 2022) has reported that this contribution cannot exceed \((9-13)\%\). Evaluating upper limits for the neutrino flux of the clusters, this collaboration concluded that these new constraints would exclude the Hussain et al. (2021) models with hard CR spectral indices \(\alpha<2.0\).
Our present results indicate that, in fact, the new IceCube limits point to softer spectral indices, \(\alpha\geq 2.0\). Nevertheless, these new results are entirely compatible with the parametric space explored in Hussain et al. (2021), which also included values of \(\alpha\geq 2.0\) (see Figure 3). In particular, for \(\alpha=2.5\) and \(E_{\rm max}=10^{17}\) eV, the neutrino flux obtained in Hussain et al. (2021) is below the upper limits of IceCube and decreases by approximately an order of magnitude if we assume a CR luminosity of \(0.1\%\,L_{C}\) instead of \(1\%\,L_{C}\). We have also computed the \(\gamma-\)ray flux constrained by these IceCube upper limits and found that it decreases by roughly an order of magnitude in comparison with Hussain et al. (2023), for a CR luminosity \(\sim 0.1\%\,L_{C}\). Despite that, the contribution of clusters to the DGRB is still substantial above 500 GeV (Figure 3). Moreover, the flux falls within the sensitivity ranges of the HAWC, LHAASO, and the upcoming CTA observatories, suggesting the possibility of direct detection of \(\gamma-\)rays from clusters of galaxies (Figure 4). As in the case of the neutrinos, the \(\gamma-\)ray flux falls below the Fermi-LAT observations (Ackermann et al., 2015) for \(\alpha\sim 2.5\), and is reduced even further if we consider a CR luminosity of \(0.1\%\,L_{C}\) rather than \(1\%\,L_{C}\).

Figure 2: Multi-messenger emission from clusters of galaxies. Blue bands are the same as presented in Fig. 1 for neutrinos. The green bands give the diffuse flux of \(\gamma-\)rays obtained for the same CR parametric space as in Fig. 1 that suits the IceCube upper limits. The \(\gamma-\)ray fluxes in the light-green, dark-green and olive-green bands have the same parameters as the neutrino fluxes in the light-blue, blue and dark-blue bands, respectively. The diffuse neutrino background upper limits reported by IceCube (Aartsen et al., 2015, 2015), the DGRB observed by Fermi-LAT (Ackermann et al., 2015), and the upper limits for the DGRB from HAWC (Albert et al., 2022) are also depicted.

While discrete source categories such as AGNs (Di Mauro et al., 2013; Ajello et al., 2015) and star-forming galaxies (Roth et al., 2021) can make an important contribution to the DGRB for energy levels below the TeV range, our results highlight that the combined gamma-ray flux originating from clusters can surpass the combined impact of individual classes of unresolved sources for energies exceeding 500 GeV. This aligns with the findings of Hussain et al. (2023). We should emphasize that our aim here was not to fit either the IceCube upper limits (Abbasi et al., 2022) or the Fermi-LAT data (Ackermann et al., 2015). Instead, we have only compared our evaluation of the integrated flux of neutrinos and \(\gamma-\)rays for the entire population of clusters with those observations. Our estimations are dependent on the parametric space, and so are the IceCube results. To obtain their upper limits, Abbasi et al. (2022) considered sources with masses in the range \(10^{14}\leq M/M_{\odot}\leq 10^{15}\) up to redshift \(z\leq 2.0\). Our analysis, on the other hand, has considered the entire mass range of clusters, \(10^{12}\leq M/M_{\odot}\leq 2\times 10^{15}\), and redshifts \(z\leq 5.0\).
Though the major contribution to the neutrino and \(\gamma-\)ray backgrounds comes from nearby sources (\(z\leq 1\)) and from the more massive clusters, the contribution from clusters in the mass range \(10^{13}\leq M/M_{\odot}\leq 10^{14}\) is not negligible (Hussain et al., 2021, 2023). Therefore, including this mass interval might change the upper limits estimated by IceCube (Abbasi et al., 2022). Finally, we would like to stress that some specific parameters have the potential to influence our simulations. For instance, a mixed CR composition and the distribution of CR sources within the clusters could lead to variations in our results. If we were to consider a CR composition involving heavy elements like iron (Fe), it could potentially alter our conclusions. Similarly, slight adjustments in the assumed distributions of CR sources within the structures (see Section 2) might also have an impact on our outcomes. Exploring these effects further is a direction we plan to pursue in the future. Furthermore, it is worth noting that we have not accounted for the influence of the uncertain diffuse magnetic field outside the clusters, which could introduce further minor variations in the gamma-ray observations.

Figure 3: Neutrino (brown band) and \(\gamma-\)ray (gray band) fluxes from the entire population of galaxy clusters as obtained in Hussain et al. (2021, 2023). These are compared with the results of Figure 2, i.e., the flux of neutrinos (blue bands) obtained with the new parametric space constrained by the upper limits estimated by IceCube (Abbasi et al., 2022). The corresponding \(\gamma-\)ray flux (green bands) for the same parameters is also shown. The diffuse neutrino background reported earlier by IceCube (Aartsen et al., 2015, 2015) is also depicted. The DGRB observed by Fermi-LAT (Ackermann et al., 2015) and the upper limits for the DGRB from HAWC (Albert et al., 2022) and CASA-MIA (Chantell et al., 1997) are also depicted.

We are indebted to Rafael Alves Batista for his very insightful comments and suggestions on this work. The work of SH and GP is partially supported by the research grant number 2017W4HA7S "NAT-NET: Neutrino and Astroparticle Theory Network" under the program PRIN 2017 funded by the Italian Ministero dell'Istruzione, dell'Università e della Ricerca (MIUR). The work of EdGDP is partially supported by the Brazilian agencies FAPESP (grants \(2013/10559-5\) and \(2021/02120-0\)) and CNPq (grant \(308643/2017-8\)).
2309.08540
Non-equilibrium effects on stability of hybrid stars with first-order phase transitions
The stability of hybrid stars with first-order phase transitions as determined by calculating fundamental radial oscillation modes is known to differ from the predictions of the widely-used Bardeen--Thorne--Meltzer criterion. We consider the effects of out-of-chemical-equilibrium physics on the radial modes and hence stability of these objects. For a barotropic equation of state, this is done by allowing the adiabatic sound speed to differ from the equilibrium sound speed. We show that doing so extends the stable branches of stellar models, allowing stars with rapid phase transitions to support stable higher-order stellar multiplets similarly to stars with multiple slow phase transitions. We also derive a new junction condition to impose on the oscillation modes at the phase transition. Termed the reactive condition, it is physically motivated, consistent with the generalized junction conditions between two phases, and has the common rapid and slow conditions as limiting cases. Unlike the two common cases, it can only be applied to nonbarotropic stars. We apply this junction condition to hybrid stellar models generated using a two-phase equation of state consisting of nuclear matter with unpaired quark matter at high densities joined by a first-order phase transition and show that like in the slow limiting case, stars that are classically unstable are stabilized by a finite chemical reaction speed.
Peter B. Rau, Gabriela G. Salaben
2023-09-15T17:04:49Z
http://arxiv.org/abs/2309.08540v2
# Non-equilibrium effects on stability of hybrid stars with first-order phase transitions

###### Abstract

The stability of hybrid stars with first-order phase transitions as determined by calculating fundamental radial oscillation modes is known to differ from the predictions of the widely-used Bardeen-Thorne-Meltzer criterion. We consider the effects of out-of-chemical-equilibrium physics on the radial modes and hence stability of these objects. For a barotropic equation of state, this is done by allowing the adiabatic sound speed to differ from the equilibrium sound speed. We show that doing so extends the stable branches of stellar models, allowing stars with rapid phase transitions to support stable higher-order stellar multiplets similarly to stars with multiple slow phase transitions. We also derive a new junction condition to impose on the oscillation modes at the phase transition. Termed the _reactive condition_, it is physically motivated, consistent with the generalized junction conditions between two phases, and has the common rapid and slow conditions as limiting cases. Unlike the two common cases, it can only be applied to nonbarotropic stars. We apply this junction condition to hybrid stellar models generated using a two-phase equation of state consisting of nuclear matter with unpaired quark matter at high densities joined by a first-order phase transition and show that, like in the slow limiting case, stars that are classically unstable are stabilized by a finite chemical reaction speed.

## I Introduction

Understanding the equation of state (EoS) of dense matter is a fundamental outstanding goal in both nuclear physics and astrophysics. One important question in improving this understanding is the nature of the phase transition between nuclear matter and deconfined quark matter. Quark deconfinement may occur via a first-order phase transition with a density discontinuity [1; 2], a second-order transition from nuclear matter to quarkyonic matter [3; 4], or a smooth hadron-quark crossover [5; 6; 7]. Different quark phases may also exist, including color superconducting phases [8], with associated phase transitions between them. Neutron stars are the principal astrophysical laboratory for studying matter at densities where the transition from hadronic to quark matter may occur, with stars containing quark matter cores termed _hybrid stars_. The presence of a first-order phase transition in the dense matter EoS is of particular interest for its effect on the masses and radii of hybrid stars, since it is required for the existence of twin stars [9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19]. These are compact stars with identical gravitational masses but different radii and internal compositions. Additional first-order phase transitions in the dense matter EoS can give rise to higher-order stellar multiplets with identical masses but different radii. In the case of only classically stable stars (those with mass increasing as a function of central density, \(\partial M/\partial\rho_{c}>0\)), triplet stars have been proposed [20]. However, the presence of the phase transition modifies the definition of stellar stability away from the Bardeen-Thorne-Meltzer (BTM) criterion [21; 22], part of which is that \(\partial M/\partial\rho_{c}>0\). Taking this into account, numerous authors have demonstrated the existence of _slow stable_ stars [23; 24; 25]. These allow for many more pairs of twin stars consisting of a BTM-stable star plus a slow stable star.
Goncalves and Lazzari [26] extended the study of slow stable stars to EoS with two phase transitions, while Rau and Sedrakian [27] applied similar EoS which supported classical twin _and_ triplet stars to demonstrate the possible existence of higher-order slow-stable stellar multiplets: up to six stars with identical masses but different radii. Determining whether slow stable stars exist given an EoS requires computing the normal modes of oscillation of the star, specifically the fundamental (nodeless) radial mode. If the EoS has first-order phase transitions, junction conditions are needed to describe how the perturbation of the stellar fluid changes across the density discontinuity [28; 29]. The most common junction conditions are the _rapid_ and _slow_ junction conditions. Physically, these correspond to cases where the timescale of the phase transition is much shorter or longer than the oscillation period, such that a fluid element perturbed across the equilibrium phase boundary will either instantaneously undergo a phase transition or will retain its phase indefinitely. Stars with only rapid phase transitions have identical stability properties to those determined by the BTM stability criterion, while stars with a slow phase transition do not obey this criterion in its usual form and hence may permit stable stars which would be deemed unstable according to the BTM criterion [30].

Most studies of stability of hybrid stars with first-order phase transitions have made the simplifying assumption that the stellar fluid is always in chemical equilibrium. Since chemical equilibrium is restored through weak interactions in the bulk of the star, and the timescales for these interactions are much longer than typical oscillation periods at usual temperatures for all but the youngest neutron stars, this approximation should be re-examined. Additionally, the rapid and slow phase transitions and corresponding junction conditions are purely limiting cases, and alternative conditions require considering out-of-chemical-equilibrium effects. In this paper, we consider the stability of hybrid stars including non-equilibrium effects in both the bulk and at the phase transition. In the bulk, our work generalizes studies of white dwarfs [31] and neutron stars [32] without strong first-order phase transitions. At the phase transition, we introduce a novel junction condition termed the _reactive_ condition, showing that it generally stabilizes stars that are unstable according to the BTM criterion, but does not merely replicate the slow phase transition results. In fact, it interpolates between the slow and rapid phase transition cases, but unlike the interpolating model developed in Rau and Sedrakian [27] (henceforth RS23), it is physically motivated and consistent with the generalized junction conditions introduced in Karlovini _et al._ [33].

In Section II we review the calculation of radial modes of general-relativistic stars and how this is used to determine stellar stability, and discuss how non-equilibrium effects modify this calculation. The main result of this paper, the reactive junction condition, is derived in Section II.2. Section III describes the equations of state used to generate stellar models whose stability is examined when out-of-equilibrium effects are included. Section IV discusses the fundamental radial mode calculation with out-of-equilibrium effects and how these effects change the stability of stars with first-order phase transitions.
We also discuss how the reaction mode, the radial mode which has no corresponding mode in the single-phase star, is modified using the new junction condition. Our results are reviewed in Section V. We work in units where \(c=G=\hbar=1\).

## II Out-of-equilibrium physics and radial oscillation modes

Computing the radial oscillation modes of non-rotating stars in general relativity (see e.g., [34; 35; 31]) requires first computing equilibrium stellar models for a given equation of state by solving the Tolman-Oppenheimer-Volkoff (TOV) equation. The normal modes are then found by solving two coupled first-order differential equations in the dimensionless Lagrangian displacement field \(\xi\) and the Lagrangian pressure perturbation \(\Delta P\), for which the angular frequency squared of the oscillation modes \(\omega^{2}\) is an eigenvalue. This is a Sturm-Liouville problem, so stability is guaranteed if the fundamental (nodeless) mode has a positive eigenvalue: \(\omega_{0}^{2}>0\). The oscillation mode equations are [31; 35]: \[\frac{\mathrm{d}\xi}{\mathrm{d}r}=\left(\frac{\mathrm{d}\nu}{\mathrm{d}r}-\frac{3}{r}\right)\xi-\frac{\Delta P}{r\Gamma P}, \tag{1}\] \[\frac{\mathrm{d}\Delta P}{\mathrm{d}r}=\left[\mathrm{e}^{2\lambda}\left(\omega^{2}\mathrm{e}^{-2\nu}-8\pi P\right)+\frac{\mathrm{d}\nu}{\mathrm{d}r}\left(\frac{4}{r}+\frac{\mathrm{d}\nu}{\mathrm{d}r}\right)\right]\left(\rho+P\right)r\xi-\left[\frac{\mathrm{d}\nu}{\mathrm{d}r}+4\pi(\rho+P)r\mathrm{e}^{2\lambda}\right]\Delta P, \tag{2}\] where \(P\), \(\rho\) and \(\Gamma\) are the pressure, energy density and polytropic index [36] of the equilibrium background star and \(r\) is the radial coordinate. \(\nu\) and \(\lambda\) are radial functions appearing in the metric tensor of the background star, whose first fundamental form is \[\mathrm{d}s^{2}=-\mathrm{e}^{2\nu(r)}\mathrm{d}t^{2}+\mathrm{e}^{2\lambda(r)}\mathrm{d}r^{2}+r^{2}(\mathrm{d}\theta^{2}+\sin^{2}\theta\,\mathrm{d}\phi^{2}). \tag{3}\] The formalism to compute the radial normal modes of a general relativistic star is identical to that presented in RS23: Eqs. (1)-(2) are solved subject to the boundary conditions \[\Delta P(r=0)=-3\Gamma P\xi(r=0), \tag{4}\] \[\Delta P(r=R)=0, \tag{5}\] where \(R\) is the outer radius of the star. \(\xi\) is only determined up to an overall normalization factor; we take the conventional \(\xi(r=0)=1\).

At a density discontinuity at a first-order phase transition, junction conditions relating the values of \(\xi\) and \(\Delta P\) across the discontinuity must be imposed. Karlovini _et al._ [33] found that the most general junction conditions are \[\left[\xi-\mathcal{F}\right]_{-}^{+}=0, \tag{6a}\] \[\left[(\rho+P)\mathcal{F}\right]_{-}^{+}=0, \tag{6b}\] \[\left[\Delta P\right]_{-}^{+}=0, \tag{6c}\] where \[\mathcal{F}=\frac{\Delta F}{r}\left(\frac{\mathrm{d}F}{\mathrm{d}r}\right)^{-1}, \tag{7}\] for a function \(F=F(r)\) which defines the phase boundary. The \(+/-\) labels refer to the high/low density ends of the phase transition. The simplest cases of junction conditions are the rapid and slow cases, corresponding to choosing \(F=P\) or \(\Delta F=0\), respectively. When employing the rapid junction conditions in fundamental mode calculations, the stability as determined from \(\omega_{0}^{2}>0\) matches what is found using the BTM criterion. When the slow junction conditions are used, stars which are unstable according to the BTM criterion can be stable; these are the "slow stable" stars.
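To make the eigenvalue problem concrete, the following minimal sketch integrates Eqs. (1)-(2) from the center with the boundary condition Eq. (4) and bisects on \(\omega^{2}\) until the surface condition Eq. (5) is met. The `bg(r)` interface for the background star is a hypothetical stand-in, the star is assumed single-phase here (junction conditions are treated below), and the caller must supply a bracket over which the surface residual changes sign; units follow the text's \(c=G=\hbar=1\).

```python
import numpy as np
from scipy.integrate import solve_ivp

def surface_residual(omega2, bg, R, r0=1e-6):
    """Integrate Eqs. (1)-(2) outward and return Delta P(R); a root in
    omega2 is a mode. bg(r) -> (P, rho, Gamma, dnu/dr, lambda, nu) is a
    hypothetical interface to a precomputed TOV background."""
    def rhs(r, y):
        xi, dP = y
        P, rho, Gam, dnu, lam, nu = bg(r)
        dxi = (dnu - 3.0 / r) * xi - dP / (r * Gam * P)          # Eq. (1)
        ddP = ((np.exp(2*lam) * (omega2*np.exp(-2*nu) - 8*np.pi*P)
                + dnu * (4.0/r + dnu)) * (rho + P) * r * xi
               - (dnu + 4*np.pi*(rho + P)*r*np.exp(2*lam)) * dP)  # Eq. (2)
        return [dxi, ddP]
    P0, _, Gam0, *_ = bg(r0)
    y0 = [1.0, -3.0 * Gam0 * P0]       # Eq. (4) with the convention xi(0) = 1
    sol = solve_ivp(rhs, (r0, R), y0, rtol=1e-8)
    return sol.y[1, -1]                # Delta P at the surface, Eq. (5)

def fundamental_mode(bg, R, w2_lo, w2_hi, tol=1e-10):
    """Bisect on omega^2 over a bracket where the residual changes sign."""
    f_lo = surface_residual(w2_lo, bg, R)
    while w2_hi - w2_lo > tol * max(1.0, abs(w2_hi)):
        mid = 0.5 * (w2_lo + w2_hi)
        if surface_residual(mid, bg, R) * f_lo > 0:
            w2_lo, f_lo = mid, surface_residual(mid, bg, R)
        else:
            w2_hi = mid
    return 0.5 * (w2_lo + w2_hi)       # omega_0^2 > 0 implies stability
```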
### Polytropic index vs. adiabatic index

The first out-of-equilibrium effect to consider is the distinction between the polytropic index of the fluid \(\Gamma\) and the adiabatic index of the perturbation \(\Gamma_{1}\). These are defined by \[\Gamma=\frac{\rho+P}{P}\frac{\mathrm{d}P}{\mathrm{d}\rho},\qquad\Gamma_{1}\equiv\frac{\rho+P}{P}\left.\frac{\partial P}{\partial\rho}\right|_{s,\{Y_{i}\}}, \tag{8}\] where the variables held constant during partial differentiation are the entropy per particle \(s\) and chemical species fractions \(Y_{i}\). We work at zero temperature, so \(s=0\) and only the \(Y_{i}\) are relevant. The derivation of Eq. (1) has assumed that the fluid elements are always in chemical equilibrium with their surroundings, and hence the Lagrangian perturbations of \(P\) and \(\rho\) are related by \[\Delta P=\frac{\Gamma P}{\rho+P}\Delta\rho, \tag{9}\] which is why Eq. (1) depends on \(\Gamma\). If we do not assume that the fluid elements are always in chemical equilibrium with the background, we instead have \[\Delta P=\frac{\Gamma_{1}P}{\rho+P}\Delta\rho+P\sum_{i}\beta_{Y_{i}}\Delta Y_{i}, \tag{10}\] where \[\beta_{Y_{i}}\equiv\left.\frac{\partial\ln P}{\partial Y_{i}}\right|_{\rho,Y_{j}\neq Y_{i}}. \tag{11}\] The standard assumption is that the Lagrangian perturbations of the species fractions are zero, \(\Delta Y_{i}=0\), which corresponds to the chemical composition of fluid elements being fixed. In this case, \(\Gamma\) in Eq. (1) is replaced with \(\Gamma_{1}\). The \(r=0\) boundary condition is also changed, with \(\Gamma\) in Eq. (4) being replaced by \(\Gamma_{1}\).

The use of the polytropic index instead of the adiabatic index is the default assumption if the EoS used to generate the equilibrium stellar models is barotropic, \(P=P(\rho)\). This assumption is made by most papers which have studied stability of hybrid stars with first-order phase transitions [23; 24; 25; 26; 27], with the justification that \(\Gamma\) and \(\Gamma_{1}\) differ by \(\lesssim 15\%\) in the relevant range of densities in the nuclear phase [37]. However, in single-phase compact stars, allowing the stars to be out of equilibrium changes the stability compared to the assumption of chemical equilibrium [31; 32], allowing some BTM-unstable stars to be stable. The effect of \(\Gamma_{1}\neq\Gamma\) has not been examined in hybrid stars with strong first-order phase transitions prior to this work. Given a barotropic EoS, taking \(\Gamma_{1}\neq\Gamma\) is the only way to model out-of-equilibrium effects. The allowed values of \(\Gamma_{1}\) are limited by causality and the Ledoux criterion for convective stability [38]. Since \(\Gamma_{1}\) and \(\Gamma\) are related to the adiabatic sound speed \(c_{\mathrm{s}}\) and equilibrium sound speed \(c_{\mathrm{eq}}\) by \[c_{s}^{2}\equiv\frac{P\Gamma_{1}}{\rho+P},\qquad c_{\mathrm{eq}}^{2}\equiv\frac{P\Gamma}{\rho+P}, \tag{12}\] causality \(c_{s},c_{\mathrm{eq}}<1\) and the Ledoux criterion require that anywhere in the star \[\Gamma\leq\Gamma_{1}\leq 1+\frac{\rho}{P}. \tag{13}\]
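The admissible band Eq. (13) is easy to evaluate from a tabulated barotropic EoS; the short sketch below computes the equilibrium index by finite differences and returns the lower and upper bounds on \(\Gamma_{1}\). It assumes the tables are given as matched arrays of \(P\) and \(\rho\) in consistent units.

```python
import numpy as np

def gamma_eq(P, rho):
    """Equilibrium (polytropic) index Gamma = (rho+P)/P * dP/drho, Eq. (8),
    from tabulated P(rho) by finite differences."""
    dP_drho = np.gradient(P, rho)
    return (rho + P) / P * dP_drho

def gamma1_bounds(P, rho):
    """Allowed range of the adiabatic index per Eq. (13):
    Gamma <= Gamma_1 <= 1 + rho/P (Ledoux stability and causality)."""
    G = gamma_eq(P, rho)
    return G, 1.0 + rho / P
```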
### The reactive junction condition

Out-of-equilibrium effects beyond taking \(\Gamma_{1}\neq\Gamma\) require using a non-barotropic equation of state, which also allows for a new junction condition instead of the rapid and slow cases. RS23 posed the question of whether alternative choices of \(F\) in Eq. (7) could change the stability of hybrid stars with first-order phase transitions compared to that exhibited for the rapid and slow cases. They specifically considered an intermediate-speed phase transition with junction conditions \[[\Delta P]_{-}^{+}=0,\qquad\left[\xi-\alpha\frac{\Delta P}{r}\left(\frac{\mathrm{d}P}{\mathrm{d}r}\right)^{-1}\right]_{-}^{+}=0, \tag{14}\] where the value of \(0\leq\alpha\leq 1\) is varied. The slow and rapid conversion rate junction conditions are recovered when \(\alpha=0\) and \(\alpha=1\), respectively. For general \(\alpha\), these conditions do not satisfy Eq. (6a)-(6c), an obvious weakness of this choice. However, RS23 also pointed out that for a barotropic EoS \(P=P(\rho)\), \(F\) can only be a function of \(P\) or \(r\), and any attempt to write \(F\) as a more complicated function of \(P\) will give \(\mathcal{F}\) that is equal to the rapid conversion case. The obvious next step to obtain a more realistic intermediate junction condition is to allow the perturbed fluid elements to be out of chemical equilibrium.

For concreteness, we consider an EoS which can be described as a function of \(\rho\) and two chemical species fractions \(X\) and \(Y\). The EoS has a single phase transition at some density, below which we assume that \(Y=0\) and \(P=P(\rho,X)\), and above which \(X=0\) and \(P=P(\rho,Y)\). This allows us to take \(F=Y\) as the definition of the phase boundary, since it drops to zero here, and hence \[\mathcal{F}=\frac{\Delta Y}{r}\left(\frac{\mathrm{d}Y}{\mathrm{d}r}\right)^{-1}. \tag{15}\] Combining Eq. (6a-6b) to eliminate \(\mathcal{F}^{-}\), we obtain a junction condition for \(\xi\): \[[\xi]_{-}^{+}=\mathcal{F}^{+}\left(\frac{\rho^{+}-\rho^{-}}{\rho^{-}+P}\right). \tag{16}\] This and Eq. (6c) form a new set of junction conditions, but we still need to specify \(\Delta Y\), which if taken to be zero simply recovers the slow junction condition.

How does allowing the perturbed fluid elements to be out of chemical equilibrium modify Eq. (1-2)? In the high-density phase with \(P=P(\rho,Y)\), combining Eq. (10) with \[\Delta\rho=-(\rho+P)\frac{1}{r^{2}}\frac{\mathrm{d}}{\mathrm{d}r}\left(r^{3}\xi\right)-\frac{\mathrm{d}P}{\mathrm{d}r}r\xi, \tag{17}\] and _not_ assuming \(\Delta Y=0\), we find that Eq. (1) is replaced by \[\frac{\mathrm{d}\xi}{\mathrm{d}r}=\left(\frac{\mathrm{d}\nu}{\mathrm{d}r}-\frac{3}{r}\right)\xi-\frac{\Delta P}{r\Gamma_{1}P}+\frac{\beta_{Y}}{r\Gamma_{1}}\Delta Y. \tag{18}\] Compared to Eq. (1), Eq. (18) replaces the equilibrium value of \(\Gamma\) with the adiabatic index \(\Gamma_{1}\), and includes an additional term \(\propto\Delta Y\). In general there will be a term of this form for each chemical species fraction in a given phase of matter. Similar to Eq. (10), in terms of the Eulerian perturbations, using \(\delta Y=\Delta Y-r\xi(\mathrm{d}Y/\mathrm{d}r)\) for purely radial motion, \[\delta P=\frac{\Gamma_{1}P}{\rho+P}\delta\rho+\xi P\beta_{Y}\frac{\mathrm{d}Y}{\mathrm{d}r}-P\beta_{Y}\Delta Y. \tag{19}\] Using this in the derivation of the normal mode equation for \(\mathrm{d}\Delta P/\mathrm{d}r\) in e.g., [34], we obtain \[\frac{\mathrm{d}\Delta P}{\mathrm{d}r}=\left[\mathrm{e}^{2\lambda}\left(\omega^{2}\mathrm{e}^{-2\nu}-8\pi P\right)+\frac{\mathrm{d}\nu}{\mathrm{d}r}\left(\frac{4}{r}+\frac{\mathrm{d}\nu}{\mathrm{d}r}\right)-\frac{\beta_{Y}}{2\Gamma_{1}}\frac{\mathrm{d}\nu}{\mathrm{d}r}\frac{\mathrm{d}Y}{\mathrm{d}r}\right]\left(\rho+P\right)r\xi-\left[\frac{\mathrm{d}\nu}{\mathrm{d}r}+4\pi(\rho+P)r\mathrm{e}^{2\lambda}\right]\Delta P. \tag{20}\] We see that compared to Eq. (2) of RS23 there is an additional term in the coefficient of \(\xi\) proportional to the gradient of \(Y\), and that this term should be included even when \(\Delta Y=0\).
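For reference, a minimal sketch of the right-hand sides of Eqs. (18) and (20) is given below, highlighting the two modifications relative to Eqs. (1)-(2): the replacement \(\Gamma\to\Gamma_{1}\) with an extra \(\beta_{Y}\Delta Y\) term, and the composition-gradient term. The `bg(r)` interface is again a hypothetical stand-in; as argued in the following paragraphs, \(\Delta Y\approx 0\) in the bulk.

```python
import numpy as np

def rhs_nonbarotropic(r, y, bg, omega2, DeltaY=0.0):
    """RHS of Eqs. (18) and (20). bg(r) is a hypothetical interface returning
    (P, rho, Gamma_1, beta_Y, dY/dr, dnu/dr, lambda, nu) for the background.
    DeltaY is kept as a parameter; in the bulk it can be set to zero."""
    xi, dP = y
    P, rho, G1, betaY, dYdr, dnu, lam, nu = bg(r)
    # Eq. (18): Gamma -> Gamma_1 plus the beta_Y * DeltaY term
    dxi = (dnu - 3.0/r) * xi - dP / (r * G1 * P) + betaY * DeltaY / (r * G1)
    # Eq. (20): composition-gradient (Brunt-Vaisala-like) term in the xi coefficient
    ddP = ((np.exp(2*lam) * (omega2*np.exp(-2*nu) - 8*np.pi*P)
            + dnu * (4.0/r + dnu)
            - betaY / (2.0 * G1) * dnu * dYdr) * (rho + P) * r * xi
           - (dnu + 4*np.pi*(rho + P)*r*np.exp(2*lam)) * dP)
    return [dxi, ddP]
```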
This composition-gradient term is proportional to the Brunt-Väisälä frequency squared \(N^{2}\) term that appears in the nonradial oscillation mode equations [39], and could be included instead by replacing \(\omega^{2}\) with \(\omega^{2}-N^{2}\) in the first term on the right-hand side of Eq. (20). However, no purely radial \(g\)-modes exist, and the effect of this term is to shift the radial mode frequencies.

Starting with the equation for total baryon conservation \(\nabla_{\mu}n_{b}^{\mu}=0\) for baryon four-current \(n_{b}^{\mu}\), and assuming that we can write similar equations for the different fluid species in each phase, we obtain an equation describing the evolution of the chemical fractions. For fraction \(Y\), this is \[\mathrm{e}^{-\nu/2}\gamma(|v^{r}|)\left(\frac{\partial Y}{\partial t}+v^{r}\frac{\partial Y}{\partial r}\right)=\frac{\gamma_{Y}}{n_{b}}, \tag{21}\] where \(\gamma_{Y}\) is the volumetric creation rate of the particles with chemical fraction \(Y\), \(\gamma(|v^{r}|)\) is the Lorentz factor and \(v^{r}\) is the radial velocity (assuming radial motion only) of the fluid. \(n_{b}\) is the baryon number density. Taking the Lagrangian perturbation of this and retaining only terms to lowest order in the velocity gives \[\mathrm{e}^{-\nu/2}\frac{\partial\Delta Y}{\partial t}=\frac{\gamma_{Y}}{n_{b}^{2}}(Q_{b}-1)\Delta n_{b}+\frac{\gamma_{Y}}{n_{b}}Q_{Y}\Delta Y, \tag{22}\] where \[Q_{b}\equiv\frac{\partial\ln\gamma_{Y}}{\partial\ln n_{b}},\quad Q_{Y}\equiv\frac{\partial\ln\gamma_{Y}}{\partial Y}. \tag{23}\] Assuming harmonic time dependence \(\Delta Y\propto e^{-i\omega t}\), where \(\omega\) is the oscillation angular frequency, this equation can be rearranged as \[\Delta Y=\frac{\gamma_{Y}(1-Q_{b})\left(\gamma_{Y}Q_{Y}-i\omega\mathrm{e}^{-\nu/2}n_{b}\right)}{(\gamma_{Y}Q_{Y})^{2}+\omega^{2}\mathrm{e}^{-\nu}n_{b}^{2}}\frac{\Delta n_{b}}{n_{b}}. \tag{24}\] In the limit that the reaction rate is much larger than the oscillation frequency, we can take \(\omega\to 0\) in Eq. (24) and obtain \[\Delta Y\approx-\frac{Q_{b}-1}{Q_{Y}}\frac{\Delta n_{b}}{n_{b}}\equiv-Z\frac{\Delta n_{b}}{n_{b}}, \tag{25}\] where \(Z\) is a parameter characterizing the rate of restoration of equilibrium. It can in principle be calculated given a microscopically-computed \(\gamma_{Y}\). Not making the assumption \(\omega\to 0\) in Eq. (24) gives rise to dissipation in the form of bulk viscosity, which makes the \(\omega\) values complex and violates the assumptions of Sturm-Liouville theory [40].

The weak reactions that restore equilibrium in the bulk of the star occur on timescales much longer than typical oscillation periods, except at very high temperatures [41]. Hence we can take \(\Delta Y\approx 0\) in the bulk of the star to very good approximation, and can ignore the \(\Delta Y\) term in Eq. (18) and (20). But the reactions that occur at the phase transition, including quark (de)confinement, could be faster than the oscillation period. So we should retain \(\Delta Y\) near the phase transition. For a nuclear matter to deconfined quark matter phase transition, \(Y\) is the strange quark fraction, which is zero in nuclear matter and nonzero in quark matter. Combining Eq. (10), (25), and \[\frac{\Delta\rho}{\rho+P}=\frac{\Delta n_{b}}{n_{b}}, \tag{26}\] we obtain \[\Delta Y=-\frac{Z}{\Gamma_{1}-\beta_{Y}Z}\frac{\Delta P}{P}. \tag{27}\]
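The frequency dependence of Eq. (24) and its fast-reaction limit Eq. (25) are compact enough to transcribe directly; the sketch below is such a transcription, useful for checking the limit \(\omega\to 0\) numerically. All arguments correspond to symbols defined in Eqs. (21)-(23).

```python
import numpy as np

def deltaY_factor(omega, gammaY, Qb, QY, nb, nu):
    """Complex factor relating DeltaY to Delta n_b / n_b, Eq. (24)."""
    w = omega * np.exp(-nu / 2.0) * nb
    return gammaY * (1.0 - Qb) * (gammaY * QY - 1j * w) / ((gammaY * QY)**2 + w**2)

def Z_param(Qb, QY):
    """Fast-reaction (omega -> 0) limit, Eq. (25): DeltaY = -Z * Delta n_b / n_b."""
    return (Qb - 1.0) / QY
```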
Inserting this into Eq. (15) and then eliminating \(\mathcal{F}^{+}\) from Eq. (16) gives \[[\xi]^{+}_{-}=\frac{-Z}{\Gamma_{1}^{+}-\beta_{Y}^{+}Z}\frac{\Delta P}{rP}\left(\frac{\mathrm{d}Y}{\mathrm{d}r}\right)^{-1}_{+}\left(\frac{\rho^{-}-\rho^{+}}{\rho^{-}+P}\right). \tag{28}\] We have used that \(\Delta P\), \(P\) and \(r\) are continuous across the junction. This equation and Eq. (6c) form a new set of junction conditions, which we term the _reactive_ conditions. The reactive junction conditions are a physically-motivated set of junction conditions that are consistent with the generalized junction conditions Eq. (6a)-(6c) and which interpolate between the rapid and slow conditions. First, in the limit \(Z\to 0\), Eq. (28) clearly reduces to \([\xi]^{+}_{-}=0\), the junction condition for \(\xi\) in the slow case. To recover the rapid case, we note that \[\frac{\mathrm{d}P}{\mathrm{d}r}=\frac{\Gamma_{1}P}{n_{b}}\frac{\mathrm{d}n_{b}}{\mathrm{d}r}+P\beta_{Y}\frac{\mathrm{d}Y}{\mathrm{d}r}=P\left(\Gamma_{1}\left(\frac{\mathrm{d}Y}{\mathrm{d}\ln n_{b}}\right)^{-1}+\beta_{Y}\right)\frac{\mathrm{d}Y}{\mathrm{d}r}. \tag{29}\] Using this to eliminate \(\beta_{Y}\) from Eq. (28) gives \[\left[\xi\right]^{+}_{-}=\frac{-Z\frac{\Delta P}{rP}\left(\frac{\rho^{-}-\rho^{+}}{\rho^{-}+P}\right)}{\Gamma_{1}^{+}\left[1+Z\left(\frac{\mathrm{d}Y}{\mathrm{d}\ln n_{b}}\right)^{-1}_{+}\right]\left(\frac{\mathrm{d}Y}{\mathrm{d}r}\right)_{+}-Z\left(\frac{\mathrm{d}\ln P}{\mathrm{d}r}\right)_{+}}.\] In the case \(Z\to-(\mathrm{d}Y/\mathrm{d}\ln n_{b})_{+}\), Eq. (28) becomes \[\left[\xi\right]^{+}_{-}=\frac{\Delta P}{r}\left(\frac{\mathrm{d}P}{\mathrm{d}r}\right)^{-1}_{+}\left(1-\frac{\rho^{+}+P}{\rho^{-}+P}\right)=\frac{\Delta P}{r}(\rho^{+}+P)\left(\frac{\mathrm{d}P}{\mathrm{d}r}\right)^{-1}_{+}\left(\frac{1}{\rho^{+}+P}-\frac{1}{\rho^{-}+P}\right)=\left[\frac{\Delta P}{r}\left(\frac{\mathrm{d}P}{\mathrm{d}r}\right)^{-1}\right]^{+}_{-},\] which is the junction condition on \(\xi\) in the rapid case (the second equation in Eq. (14) with \(\alpha=1\)). To show this we used from the TOV equation that \[\frac{1}{\rho+P}\frac{\mathrm{d}P}{\mathrm{d}r}=-\frac{m+4\pi r^{3}P}{r^{2}(1-2m/r)}, \tag{30}\] is continuous across the phase transition, because \(P\), \(r\) and the enclosed mass \(m=m(r)\) are continuous across the phase transition. \(Z=-(\mathrm{d}Y/\mathrm{d}\ln n_{b})_{+}\) being the rapid limit is expected because this says that the relation between the perturbations of \(Y\) and \(n_{b}\) as given by Eq. (25) is identical to its value in chemical equilibrium, and the rapid case assumes that the fluid elements are always in chemical equilibrium.

The range of physically meaningful values of \(Z\) is \(-(\mathrm{d}Y/\mathrm{d}\ln n_{b})_{+}\leq Z\leq 0\). This is because as the reaction rate increases, \(Z\) decreases from \(0\) to \(-(\mathrm{d}Y/\mathrm{d}\ln n_{b})_{+}\). \(Z>0\) would correspond to a slower reaction rate than infinitely slow and is unphysical, and \(Z<-(\mathrm{d}Y/\mathrm{d}\ln n_{b})_{+}\) would be a faster reaction rate than infinitely fast.
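The reactive condition itself reduces to a single algebraic jump applied at the transition; a direct transcription of Eq. (28) follows. Setting \(Z=0\) returns the slow condition, and \(Z=-(\mathrm{d}Y/\mathrm{d}\ln n_{b})_{+}\) reproduces the rapid one, as shown above.

```python
def xi_jump_reactive(Z, DeltaP, r, P, Gamma1_p, betaY_p, dYdr_p, rho_m, rho_p):
    """Jump [xi]_-^+ across the transition from the reactive condition, Eq. (28).
    Quantities with suffix _p (_m) are evaluated on the high- (low-) density
    side; DeltaP, P and r are continuous across the junction, Eq. (6c)."""
    return (-Z / (Gamma1_p - betaY_p * Z)
            * DeltaP / (r * P) / dYdr_p
            * (rho_m - rho_p) / (rho_m + P))
```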
We note that permitting small values of \(Z\), which imply slow phase-changing reactions, is inconsistent with the assumption that underpinned the derivation of the reactive junction conditions. However, we will see that values of \(Z\) within the same order of magnitude as the rapid-case-recovering \(Z=-(\mathrm{d}Y/\mathrm{d}\ln n_{b})_{+}\) still give rise to fundamentally different behavior than the rapid case and stabilize stars similarly to what occurs in the slow case.

In deriving Eq. (28), we take a linear combination of Eq. (6a-6b) to eliminate \(\mathcal{F}^{-}\). This is not a unique choice, and we could alternatively eliminate \(\mathcal{F}^{+}\) from the equations, which would give a similar boundary condition to Eq. (28) but one which depends on a chemical species fraction and its gradient on the lower density side of the phase transition. This boundary condition would depend on a different parameter characterizing the equilibration rate in the low density phase, which we call \(Z^{-}\), and the physically allowed range of \(Z^{-}\) would be different from the allowed range of \(Z\). If we choose to satisfy Eq. (6a-6b) independently, the value of either \(Z^{-}\) or \(Z\) would be constrained, leaving the other as a free parameter (which should in principle be a quantity that can be computed). That \(Z\) and \(Z^{-}\) must be related to each other this way is not surprising, since the restoration of equilibrium involves reactions depending on the chemical species fractions on both sides of the transition.

## III Equations of state

### Three-phase barotropic EoS

To study slow-stable stars with high-order stellar multiplets, RS23 used the three-phase, chemically equilibrated (i.e., barotropic) EoS with constant equilibrium sound speeds for the quark phases described in Ref. [20]. The low-density nuclear phase is joined to two color-superconducting quark phases (termed 2SC and CFL, respectively) by the Maxwell construction with large density discontinuities. We use one parametrization of this EoS, given in Table 1, to study the simplest non-equilibrium effects on stability with a barotropic EoS. The definition of the EoS parameters matches Alford and Sedrakian [20] and RS23. This parametrization supports BTM-stable triplet stars, and stellar masses up to \(1.88M_{\odot}\), consistent with the neutron star maximum mass constraints within \(3\sigma\) confidence intervals [42; 43; 44; 45; 46; 47; 48; 49]. The EoS is plotted in FIG. 1. The non-equilibrium effects are modeled by choosing \(\Gamma_{1}\neq\Gamma\). Since the nuclear phase of the EoS is based on a tabulated model, \(\Gamma\) here is derived by finite differencing \(P(\rho)\). For simplicity, we let \(\Gamma_{1}=\Gamma\) in this region and restrict our study of non-equilibrium effects to the two quark phases. We do this by specifying constant values of \(c_{s}^{2}\) in the two quark phases subject to Eq. (13), i.e., \(c_{\mathrm{eq1}}^{2}<c_{\mathrm{s1}}^{2}<1\) and \(c_{\mathrm{eq2}}^{2}<c_{\mathrm{s2}}^{2}<1\). The different choices of the \(c_{s}^{2}\) values we examine are listed in Table 2.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \(\rho_{1}\) & \(P_{1}\) & \(\rho_{2}\) & \(\Delta\rho_{1}\) & \(\Delta\rho_{2}\) & \(c_{\mathrm{eq1}}^{2}\) & \(c_{\mathrm{eq2}}^{2}\) \\ \hline 420.7 & 77.7 & 774.1 & 263.6 & 168.3 & 0.75 & 0.95 \\ \hline \end{tabular} \end{table} Table 1: Parametrization for the nuclear plus two quark phase EoS used in this paper. \(\rho_{1}\) and \(\rho_{2}\) are the energy densities at the low end of the nuclear-2SC and 2SC-CFL phase transitions, \(P_{1}\) is the pressure at the nuclear-2SC phase transition, and \(\Delta\rho_{1}\) and \(\Delta\rho_{2}\) are the energy density discontinuities of the nuclear-2SC and 2SC-CFL phase transitions; all are given in MeV/fm\({}^{3}\). \(c_{\mathrm{eq1}}^{2}\) and \(c_{\mathrm{eq2}}^{2}\) are the equilibrium sound speeds squared in the 2SC and CFL quark phases in units of \(c^{2}\).
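For readers wishing to reproduce the background models, the sketch below assembles the piecewise \(P(\rho)\) implied by Table 1, assuming (as in the constant-sound-speed construction) that each quark phase is linear in \(\rho\) and that pressure is constant across each Maxwell density jump. The nuclear-phase table is a hypothetical placeholder.

```python
# Table 1 values (MeV/fm^3; sound speeds squared in units of c^2)
rho1, P1, rho2 = 420.7, 77.7, 774.1
drho1, drho2 = 263.6, 168.3
cs1_eq, cs2_eq = 0.75, 0.95

def pressure(rho, P_nuclear):
    """Piecewise P(rho): tabulated nuclear phase (placeholder P_nuclear),
    then constant-sound-speed 2SC and CFL phases joined by Maxwell
    constructions, i.e., P held constant across each density jump."""
    rho_2sc = rho1 + drho1               # low end of the 2SC phase
    rho_cfl = rho2 + drho2               # low end of the CFL phase
    P2 = P1 + cs1_eq * (rho2 - rho_2sc)  # pressure at the 2SC-CFL transition
    if rho < rho1:
        return P_nuclear(rho)
    if rho < rho_2sc:
        return P1                        # first density discontinuity
    if rho < rho2:
        return P1 + cs1_eq * (rho - rho_2sc)
    if rho < rho_cfl:
        return P2                        # second density discontinuity
    return P2 + cs2_eq * (rho - rho_cfl)
```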
### Two-phase nonbarotropic EoS

When applying the reactive junction condition, an EoS which includes information about chemical species fractions must be used. For this purpose we use a composite EoS with a single first-order phase transition between nuclear matter and deconfined, unpaired, three-flavor quark matter. The crust is taken from the DDME2 EoS [50], which is joined continuously to the Zhao-Lattimer EoS [51] in the form used in Ref. [52] for the nuclear (neutron-proton-electron matter) phase, though we do not include muons. The quark phase EoS is based on the vMIT model as used in Refs. [51; 52], but joined to the nuclear matter phase using a Maxwell construction. We use the nuclear phase parameters given by EoS XOA in Ref. [52]. Our treatment of the quark matter phase differs somewhat from the references and so is discussed in detail below. We also describe the calculation of \(\Gamma_{1}\) and the \(\beta_{Y}\).

In the quark matter phase, we start with a relativistic mean-field model with pressure \[P=\sum_{q}P_{q}+P_{e}-B+\frac{1}{2}m_{V}^{2}V^{2}, \tag{31}\] where the \(P_{q}\) are the quark pressure contributions, \(q=\{u,d,s\}\), \(P_{e}\) is the electron pressure contribution, \(B\) is the bag constant and \(V\) is the vector meson field with mass \(m_{V}\). We assume massless up and down quarks, \(m_{u}=m_{d}=0\), and electrons, \(m_{e}=0\), and ignore muons. \(P_{q}\) and \(P_{e}\) are \[P_{q}=\frac{\mu_{q}^{*4}}{4\pi^{2}},\quad q=u,d, \tag{32a}\] \[P_{s}=\frac{1}{8\pi^{2}}\left[\mu_{s}^{*}(2\mu_{s}^{*2}-5m_{s}^{2})\sqrt{\mu_{s}^{*2}-m_{s}^{2}}+3m_{s}^{4}\ln\left(\frac{\mu_{s}^{*}+\sqrt{\mu_{s}^{*2}-m_{s}^{2}}}{m_{s}}\right)\right], \tag{32b}\] \[P_{e}=\frac{\mu_{e}^{4}}{12\pi^{2}}, \tag{32c}\] for strange quark mass \(m_{s}\) and electron chemical potential \(\mu_{e}\), and where \[\mu_{q}^{*}=\mu_{q}-g_{V}V, \tag{33}\] is the effective chemical potential for each quark species, with \(g_{V}\) the vector meson coupling constant. \(\mu_{q}\) is the bare quark chemical potential. The number densities are then \[n_{q}=\left.\frac{\partial P}{\partial\mu_{q}}\right|_{\mu_{q}^{*}\neq\mu_{q},V}=\frac{\partial P}{\partial\mu_{q}^{*}}\left.\frac{\partial\mu_{q}^{*}}{\partial\mu_{q}}\right|_{V}=\frac{\mu_{q}^{*3}}{\pi^{2}},\quad q=u,d, \tag{34a}\] \[n_{s}=\frac{(\mu_{s}^{*2}-m_{s}^{2})^{3/2}}{\pi^{2}}, \tag{34b}\] \[n_{e}=\frac{\mu_{e}^{3}}{3\pi^{2}}. \tag{34c}\]

\begin{table} \begin{tabular}{|c|c|c|} \hline Name & \(c_{s1}^{2}\) & \(c_{s2}^{2}\) \\ \hline 1 & 0.75 & 0.95 \\ \hline 2 & 0.8 & 0.95 \\ \hline 3 & 0.9 & 0.95 \\ \hline 4 & 0.8 & 1 \\ \hline 5 & 0.9 & 1 \\ \hline 6 & 1 & 1 \\ \hline \end{tabular} \end{table} Table 2: Choices of constant adiabatic sound speeds squared \(c_{s}^{2}\) in units of \(c^{2}\) for the 2SC phase (\(c_{s1}^{2}\)) and CFL phase (\(c_{s2}^{2}\)). Note configuration 1 is equivalent to choosing \(c_{s}^{2}=c_{\rm eq}^{2}\) in both phases.
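The pressure and number-density expressions, Eqs. (32a)-(32c) and (34a)-(34c), transcribe directly into code; a minimal version in natural units (MeV, \(\hbar=c=1\)) is given below.

```python
import numpy as np

def P_light(mu_star):          # Eq. (32a): massless u, d quarks
    return mu_star**4 / (4.0 * np.pi**2)

def P_strange(mu_s, m_s):      # Eq. (32b)
    k = np.sqrt(mu_s**2 - m_s**2)
    return (mu_s * (2*mu_s**2 - 5*m_s**2) * k
            + 3*m_s**4 * np.log((mu_s + k) / m_s)) / (8.0 * np.pi**2)

def P_electron(mu_e):          # Eq. (32c): massless electrons
    return mu_e**4 / (12.0 * np.pi**2)

def n_light(mu_star):          # Eq. (34a)
    return mu_star**3 / np.pi**2

def n_strange(mu_s, m_s):      # Eq. (34b)
    return (mu_s**2 - m_s**2)**1.5 / np.pi**2

def n_electron(mu_e):          # Eq. (34c)
    return mu_e**3 / (3.0 * np.pi**2)
```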
Figure 1: Equations of state used in this paper: the three-phase EoS with a nuclear phase and two quark phases with constant equilibrium sound speeds as described in Section III.1 (solid line), and the two-phase EoS with nuclear plus quark phase as described in Section III.2 (dashed line).

\begin{table} \begin{tabular}{|c|c|} \hline Parameter & Value \\ \hline \(m_{s}\) & 100 MeV \\ \hline \(B\) & (190 MeV)\({}^{4}\) \\ \hline \(a_{V}\) & \(1.541\times 10^{-5}\) MeV\({}^{-2}\) \\ \hline \(P_{Q}\) & 167.6 MeV/fm\({}^{3}\) \\ \hline \(\rho_{Q}\) & 627.2 MeV/fm\({}^{3}\) \\ \hline \(\Delta\rho_{Q}\) & 480.2 MeV/fm\({}^{3}\) \\ \hline \end{tabular} \end{table} Table 3: Parametrization of the quark matter EoS used in this paper, and properties of the resulting first-order phase transition using the Maxwell construction between nuclear and quark matter. \(m_{s}\) is the strange quark mass, \(B\) is the bag constant, \(a_{V}\) is the vector meson coupling constant defined in Eq. (39), \(P_{Q}\) is the phase transition pressure, \(\rho_{Q}\) is the energy density at the low-density end of the phase transition, and \(\Delta\rho_{Q}\) is the energy density discontinuity across the phase transition. \(m_{s}\), \(B\) and \(a_{V}\) differ from the values chosen in Ref. [52].

We determine the value of \(V\) by maximizing the pressure with respect to it: \[\frac{\partial P}{\partial V}=0=\sum_{q}\left.\frac{\partial P}{\partial\mu_{q}^{*}}\frac{\partial\mu_{q}^{*}}{\partial V}\right|_{\mu_{q}}+m_{V}^{2}V=-g_{V}\sum_{q}n_{q}+m_{V}^{2}V. \tag{35}\] Since \(\sum_{q}n_{q}=3n_{b}\), in equilibrium we find \[V=3\left(\frac{g_{V}}{m_{V}^{2}}\right)n_{b}. \tag{36}\] Electrical charge neutrality and weak equilibrium require that \[\frac{2}{3}n_{u}=\frac{1}{3}(n_{d}+n_{s})+n_{e}, \tag{37a}\] \[\mu_{u}=\mu_{d}+\mu_{e}, \tag{37b}\] \[\mu_{s}=\mu_{d}. \tag{37c}\] Requiring that Eqs. (37a)-(37c) hold simultaneously allows us to calculate the chemical potentials and number densities in equilibrium. The energy density \(\rho\) is \[\rho=\sum_{i}\left.\frac{\partial P}{\partial\mu_{i}}\right|_{V}\mu_{i}-P=\sum_{i}n_{i}\mu_{i}^{*}+n_{e}\mu_{e}+a_{V}n_{b}^{2}-P, \tag{38}\] where we have inserted Eq. (36) and defined \[a_{V}\equiv\left(\frac{3g_{V}}{m_{V}}\right)^{2}. \tag{39}\]

The quark matter EoS parametrization is given in Table 3. We choose different parameters from those in Ref. [52]: this difference arises from requiring that the astrophysical constraint of a \(2M_{\odot}\) star be met while having a first-order phase transition, whereas Ref. [52] considers a crossover phase transition. The pressure \(P_{Q}\), energy density at the lower end of the phase transition \(\rho_{Q}\), and energy density discontinuity at the phase transition \(\Delta\rho_{Q}\) are also given in this table. This combined crust-nuclear-quark matter EoS supports a \(>2M_{\odot}\) hybrid star, and is plotted in FIG. 1. We have checked that choosing EoS parameters such that there is no stable hybrid star branch does not qualitatively change our findings.
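A minimal sketch of the equilibrium composition solve described above follows: at fixed \(n_{b}\), the two unknowns \(\mu_{d}^{*}\) and \(\mu_{e}\) are determined by the baryon-number definition and charge neutrality, with weak equilibrium Eqs. (37b)-(37c) built in since the vector-field shift in Eq. (33) is flavor blind, so that \(\mu_{s}^{*}=\mu_{d}^{*}\) and \(\mu_{u}^{*}=\mu_{d}^{*}+\mu_{e}\). The initial guess and the crude treatment of the strange-quark threshold are illustrative choices, not the production scheme.

```python
import numpy as np
from scipy.optimize import fsolve

m_s = 100.0   # MeV, Table 3 (natural units, hbar = c = 1)

n_light = lambda mu: mu**3 / np.pi**2                                # Eq. (34a)
n_s = lambda mu: (mu*mu - m_s*m_s)**1.5 / np.pi**2 if mu > m_s else 0.0  # Eq. (34b)
n_e = lambda mu: mu**3 / (3.0 * np.pi**2)                            # Eq. (34c)

def equilibrium_composition(nb):
    """Charge neutrality, Eq. (37a), plus baryon-number definition at fixed
    baryon density nb [MeV^3]. Unknowns: mu_d* and mu_e."""
    def eqs(x):
        mud, mue = x
        nu, nd, ns, ne = n_light(mud + mue), n_light(mud), n_s(mud), n_e(mue)
        return [(nu + nd + ns) / 3.0 - nb,        # n_b = (n_u + n_d + n_s)/3
                2*nu/3.0 - (nd + ns)/3.0 - ne]    # Eq. (37a)
    mud0 = (np.pi**2 * nb)**(1.0/3.0)             # free massless-quark guess
    return fsolve(eqs, [mud0, 10.0])
```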
To compute the adiabatic index, we express \(P\) as a function of \(\rho\) and the various particle species fractions. The total baryon number density is given in terms of the quark number densities by \[n_{b}=\frac{1}{3}(n_{u}+n_{d}+n_{s}). \tag{40}\] Define quark flavor fractions \(Y_{q}\equiv n_{q}/(3n_{b})\), and \(Y_{e}\equiv n_{e}/n_{b}\). Using \(1=Y_{u}+Y_{d}+Y_{s}\) and Eq. (37a), we find \[Y_{u}=\frac{1+Y_{e}}{3},\qquad Y_{d}=\frac{2-Y_{e}}{3}-Y_{s}, \tag{41}\] which allows us to express all quark and electron number densities in terms of \(n_{b}\), \(Y_{e}\) and \(Y_{s}\) only. Doing so, the resulting adiabatic index is \[\Gamma_{1}=\frac{\rho+P}{P}\left.\frac{\partial P}{\partial\rho}\right|_{Y_{e},Y_{s}}, \tag{42}\] where \[\left.\frac{\partial P}{\partial\rho}\right|_{Y_{e},Y_{s}}=\frac{1}{\mu_{n}}\left[\left(1+Y_{e}\right)n_{u}\left(\frac{\partial n_{u}}{\partial\mu_{u}^{*}}\right)^{-1}+\left(2-Y_{e}-3Y_{s}\right)n_{d}\left(\frac{\partial n_{d}}{\partial\mu_{d}^{*}}\right)^{-1}+3Y_{s}n_{s}\left(\frac{\partial n_{s}}{\partial\mu_{s}^{*}}\right)^{-1}+Y_{e}n_{e}\left(\frac{\partial n_{e}}{\partial\mu_{e}}\right)^{-1}\right]+\frac{a_{V}n_{b}}{\mu_{n}}, \tag{43}\] and \(\mu_{n}=\mu_{u}+2\mu_{d}\) is the neutron chemical potential (note that the bare quark chemical potentials appear here). The partial derivatives of the number densities are readily computed from Eq. (34a-34b). To compute \(\beta_{Y_{s}}\) and \(\beta_{Y_{e}}\) we also need \[\left.\frac{\partial P}{\partial Y_{s}}\right|_{\rho,Y_{e}}=3n_{b}\left[n_{s}\left(\frac{\partial n_{s}}{\partial\mu_{s}^{*}}\right)^{-1}-n_{d}\left(\frac{\partial n_{d}}{\partial\mu_{d}^{*}}\right)^{-1}\right], \tag{44}\] \[\left.\frac{\partial P}{\partial Y_{e}}\right|_{\rho,Y_{s}}=n_{b}\left[n_{u}\left(\frac{\partial n_{u}}{\partial\mu_{u}^{*}}\right)^{-1}+n_{e}\left(\frac{\partial n_{e}}{\partial\mu_{e}}\right)^{-1}-n_{d}\left(\frac{\partial n_{d}}{\partial\mu_{d}^{*}}\right)^{-1}\right]. \tag{45}\] Eq. (45) evaluates to zero for massless \(u\) and \(d\) quarks and electrons in weak equilibrium. A similar procedure is done for the nuclear phase, though with only one relevant chemical species fraction, the proton fraction \(Y=n_{p}/n_{b}\). Because of the \(\propto\mathrm{d}Y/\mathrm{d}r\) term in Eq. (20), the chemical fractions need to be computed in the entire background star, and not simply at the nuclear-quark phase transition as would suffice for the reactive junction condition alone.
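Eq. (43) likewise transcribes directly; the sketch below evaluates \(\partial P/\partial\rho\) at fixed composition and the corresponding \(\Gamma_{1}\) of Eq. (42), given the number densities and their chemical-potential derivatives (obtainable from Eqs. (34a)-(34b)) packed into dictionaries, a hypothetical interface.

```python
def dP_drho_fixed_composition(mu_n, nb, Ye, Ys, n, dn_dmu, aV):
    """Eq. (43): partial P / partial rho at fixed Y_e, Y_s in the quark phase.
    n and dn_dmu are dicts over {'u','d','s','e'} of number densities and
    d n_i / d mu_i* evaluated at the equilibrium point."""
    term = ((1 + Ye) * n['u'] / dn_dmu['u']
            + (2 - Ye - 3*Ys) * n['d'] / dn_dmu['d']
            + 3*Ys * n['s'] / dn_dmu['s']
            + Ye * n['e'] / dn_dmu['e'])
    return (term + aV * nb) / mu_n

def Gamma1(P, rho, dPdrho):
    """Adiabatic index, Eq. (42)."""
    return (rho + P) / P * dPdrho
```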
## IV Effects on stellar stability

### Three-phase barotropic EoS

We solved Eq. (1-2) subject to the boundary conditions Eq. (4-5), and with \(\Gamma\to\Gamma_{1}\), for stellar models constructed with the EoS presented in Section III.1. Slow and rapid junction conditions as described following Eq. (7) were imposed at the phase transitions. The calculation was performed using the shooting method. FIG. 2-5 show the fundamental radial mode frequency \(f_{0}=\omega_{0}/(2\pi)\) as a function of stellar central pressure \(P_{c}\) for this EoS and the six different choices of \(c_{s}^{2}\) in Table 2. The four different permutations of slow and rapid junction conditions at the two phase transitions are considered; the configurations are referenced with the phase transition rate at the lower density transition (nuclear-2SC) first. To show more detail of the \(f_{0}\) curves, the \(f_{0}\) values within the nuclear phase are only shown for the slow-slow and slow-rapid configurations (FIG. 2-3). The slow-slow and slow-rapid configurations both clearly support slow-stable quintuplets consisting of three BTM-stable stars and two slow-stable stars within the gray shaded band around \(M=1.77M_{\odot}\), and two triplets consisting of two BTM-stable and one slow-stable star at slightly higher and lower masses (shaded light blue).

In chemical equilibrium (case 1), the rapid-slow configuration supports slow-stable quadruplet stars, while the rapid-rapid configuration only supports triplet stars; this region is shown in greater detail in FIG. 6. However, the chosen values of \(c_{s}^{2}\) are unable to stabilize stars at sufficiently low \(P_{c}\) below the \(P_{c}=110\) MeV/fm\({}^{3}\) local minimum in \(M\) to support stars with masses in the quintuplet band at \(P_{c}\) between this local minimum and the \(P_{c}=82\) MeV/fm\({}^{3}\) local maximum in \(M\). Thus the rapid-rapid and rapid-slow configurations only support stable quadruplet stars. The additional range of stable \(P_{c}\) values above the local maximum in \(M\) at \(P_{c}=148\) MeV/fm\({}^{3}\) does allow a separate stable triplet of stars in the rapid-rapid configuration, matching a similar stable triplet which is present for the other three configurations at masses just above that for which the slow-slow and slow-rapid configurations support quintuplet stars. These observations show that non-equilibrium effects can mimic the slow-stabilization effects by extending the range of \(P_{c}\) corresponding to stable objects, though the differences in the ranges of \(P_{c}\) values stabilized by being out of chemical equilibrium mean that the different cases can still be distinguished.

### Two-phase nonbarotropic EoS

When computing the fundamental radial modes of the stars with the two-phase EoS, we solved Eqs. (18) and (20) while imposing the boundary conditions Eq. (5) and Eq. (4) with \(\Gamma\to\Gamma_{1}\). At the phase transition we imposed the slow, rapid and reactive junction conditions, with the latter given by Eq. (6c) and (28). In using Eq. (28) we took \(Y\to Y_{s}\), as it is the species fraction which is zero below the transition and nonzero above it. We considered the crust part of the EoS as barotropic, setting \(\Gamma_{1}=\Gamma\) and ignoring species fraction gradients there. The reactive junction condition requires a value for the parameter \(Z\). Since the physics of the deconfinement transition is not fully understood, instead of computing \(Z\) microphysically we chose different values for this parameter within the range \(-(\mathrm{d}Y_{s}/\mathrm{d}\ln n_{b})_{+}\leq Z\leq 0\). A typical value of \(-(\mathrm{d}Y_{s}/\mathrm{d}\ln n_{b})_{+}\) for our equation of state is \(\approx-0.0097\).

FIG. 7 shows the fundamental radial mode frequency \(f_{0}\) for the different choices of junction condition. This clearly demonstrates that the rapid and slow cases are the limiting cases of the reactive junction condition, which interpolates between these two cases depending on the value of \(Z\). As was found in RS23 for the intermediate junction condition Eq. (14) with \(0\leq\alpha<1\), any change of \(Z\) away from \(-(\mathrm{d}Y_{s}/\mathrm{d}\ln n_{b})_{+}\) results in a star with some previously unstable central pressure being stabilized. In this sense, the reactive junction condition is similar to the slow case, even if a smaller range of central pressures in the BTM-unstable range is stabilized compared to the slow case. Stars with a rapid phase transition will support a reaction mode [29], a radial mode that does not correspond to a mode of the single-phase star, resulting in a discontinuity in the mode frequency for fixed radial node number. The reaction mode is often, but not always, the fundamental mode. When using the reactive junction condition, the case \(Z\neq 0\) also supports a reaction mode.
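In practice, imposing any of these junction conditions in the shooting calculation amounts to splitting the integration at the transition radius and applying the jump there; a minimal sketch follows. The `rhs` function (closed over \(\omega^{2}\)) and the `xi_jump` callable, which returns \([\xi]_{-}^{+}\) given \(\Delta P\) (zero for slow, Eq. (14) with \(\alpha=1\) for rapid, Eq. (28) for reactive), are hypothetical interfaces.

```python
from scipy.integrate import solve_ivp

def residual_with_junction(rhs, r_t, R, y_center, xi_jump):
    """Integrate the mode equations to the transition radius r_t, apply
    [Delta P] = 0 (Eq. (6c)) and the chosen jump in xi, then continue to
    the surface and return Delta P(R); a root of this residual in omega^2
    is a mode of the two-phase star."""
    inner = solve_ivp(rhs, (1e-6, r_t), y_center, rtol=1e-8)
    xi_minus, dP = inner.y[:, -1]
    xi_plus = xi_minus + xi_jump(dP)     # Delta P is continuous across r_t
    outer = solve_ivp(rhs, (r_t, R), [xi_plus, dP], rtol=1e-8)
    return outer.y[1, -1]                # want Delta P(R) = 0, Eq. (5)
```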
and noting the discontinuities in \(f_{0}\) at the phase transition when the reactive junction condition is imposed: this is most pronounced for \(Z=-0.008,-0.006\). The fundamental modes are not the reaction mode for the other values of \(Z\), so for these values a higher-order harmonic is the reaction mode. To illustrate this point, in FIG. 8 we show the fundamental and first and second harmonic radial modes for the reactive junction condition with different values of \(Z\). The plot is zoomed in to central pressures near the phase transition, and the results using the rapid and slow junction conditions are also shown for comparison.

Figure 5: Same as FIG. 2 except for a rapid-rapid phase transition configuration.

Figure 6: Same as FIG. 5 except zoomed in to show detail around the first two local maxima in \(M\). The local minima in \(M\) are indicated with vertical teal lines.

Note that \(f_{0}\) for the rapid junction condition does not become imaginary at exactly the maximum mass because of the non-equilibrium effect \(\Gamma_{1}\neq\Gamma\), though the range of \(P_{c}\) with decreasing stellar mass that correspond to stable stars is so small that it is only visible in this zoomed-in plot and not in FIG. 7. The reaction mode is clearly the fundamental mode for the rapid junction condition and the reactive junction condition with \(Z=-0.008\), but it is the first harmonic for the reactive junction condition with \(Z=-0.004\) and the second harmonic for the reactive junction condition with \(Z=-0.001\). Since the reaction mode effectively slots into the usual mode spectrum and pushes the other modes to higher frequency, this suggests that in the slow limit the reaction mode is raised to an infinitely high harmonic. To clarify this point further, in FIG. 9 we plot the frequencies of the fundamental and first three harmonic modes as a function of \(Z\) for fixed \(P_{c}\) just above the phase transition. The avoided crossings between the frequency curves for constant radial node number \(n\) define the ranges of \(Z\) for which the reaction mode is a particular mode. For \(Z\) less than \(\approx-0.006\), the value corresponding to the avoided crossing between the \(n=0\) and \(n=1\) curves, the fundamental mode is the reaction mode; for \(Z\) between \(\approx-0.0059\) and \(\approx-0.0018\), the avoided crossing between the \(n=1\) and \(n=2\) curves, the first harmonic mode is the reaction mode, et cetera. ## V Conclusion We have studied the effects of non-equilibrium physics on the stability of hybrid stars with first-order phase transitions. In the first example, we used a three-phase barotropic EoS with the two denser quark phases modeled using constant equilibrium sound speeds. Here the out-of-equilibrium physics was modeled by choosing values for the adiabatic sound speed greater than the equilibrium sound speeds in the quark phases. This results in stable hybrid stars over an extended range of central pressures compared to the always-equilibrated case, a result consistent with studies of out-of-chemical-equilibrium stability of white dwarfs and neutron stars. This extended range of stability permits the existence of higher-order stellar multiplets than those that are BTM-stable even in the case of only rapid phase transitions, since stars with central pressures below/above those corresponding to the local minima/maxima of the stellar mass \(M\) are stabilized.
For the EoS we examined, this permitted stable quadruplet stars with rapid phase transitions when the BTM criterion would have predicted only stable triplet stars.

Figure 7: Fundamental radial mode frequency \(f_{0}\) (left) and stellar mass \(M\) (right) as functions of central pressure \(P_{c}\) for the nuclear plus quark matter EoS star. \(f_{0}<0\) corresponds to an imaginary (unstable) frequency. Mode frequencies are labeled by the junction conditions used: \(s\) for slow, \(r\) for rapid, and \(Z\) for the reactive junction condition with the value of the \(Z\) parameter used shown. The different values of \(-(\mathrm{d}Y_{s}/\mathrm{d}\ln n_{b})_{+}\leq Z\leq 0\) interpolate between the rapid and slow cases. A solid vertical line indicates the central pressure corresponding to the phase transition (gray), which occurs at almost the same central pressure as that for the maximum mass star. The phase in the center of the star is labeled at the top of the plot.

Figure 8: Radial mode frequency \(f\) for fundamental and first two harmonic modes as a function of central pressure \(P_{c}\) for the nuclear plus quark matter EoS star and the reactive junction condition. Mode frequencies are labeled \(r\) (rapid junction condition), \(s\) (slow junction condition) or the value of \(Z\) used with the reactive junction condition, with solid, dashed and dot-dashed lines for the fundamental, first harmonic and second harmonic modes respectively. Solid vertical lines are placed at the central pressures corresponding to the phase transition (gray) and the maximum mass (indigo). Note that some modes are nearly overlapping.

In the second part of this paper, we have introduced a new junction condition to be applied when computing the radial normal modes of a compact hybrid star with strong first-order phase transitions. We have shown that this reactive junction condition interpolates between the slow and rapid junction conditions as limiting cases, but also satisfies the generalized form of junction conditions for radial oscillations of a relativistic star. It can only be applied with an equation of state that does not assume chemical equilibration and includes explicit chemical fraction-dependence. We chose a two-phase EoS with nuclear and deconfined quark matter separated by a first-order phase transition to apply this new junction condition. For different values of the parameter \(Z\), which is a function of the rate of particle creation (for our model, the strange quarks), we showed that it interpolates between the slow and rapid limiting cases, providing a more physically reasonable junction condition. We also showed that the reaction mode which appears in the radial mode spectrum when using the rapid junction condition persists when using the reactive junction condition, and becomes a higher harmonic as the parameter \(Z\) is made smaller in magnitude (less negative). Extensions of this work include the application of the reactive junction condition to a hybrid star with multiple phase transitions; it could not be applied to the three-phase EoS we used in the first part of the paper because that EoS assumed chemical equilibrium. The parameter \(Z\) appearing in the reactive junction condition depends on the physics of the deconfinement transition which we did not examine in detail. Computing it microscopically and using this value would allow a realistic determination of the stabilized range of \(P_{c}\) for a given EoS.
The physics of deconfinement could thus be constrained via the observation of reactively-stabilized compact stars with lower masses and radii than those observed for the maximum mass star. ## Acknowledgements This work was supported by the Institute for Nuclear Theory's U.S. Department of Energy grant No. DE-FG02-00ER41132. G. G. S. participated in the National Science Foundation-funded LSAMP program at the University of Washington (Grant No. EES-1911026) while working on this paper. P. B. R. would like to thank S. Reddy, S. P. Harris and T. Zhao for helpful discussion. We also thank the anonymous referee for helpful comments. All plots were made using the Python package matplotlib[53].
2308.16457
Stack-sorting simplices: geometry and lattice-point enumeration
We study the polytopes that arise from the convex hulls of stack-sorting on particular permutations. We show that they are simplices and proceed to study their geometry and lattice-point enumeration. First, we prove some enumerative results on $Ln1$ permutations, i.e., permutations of length $n$ whose penultimate and last entries are $n$ and $1$, respectively. Additionally, we then focus on a specific permutation, which we call $L'n1$, and show that the convex hull of all its iterations through the stack-sorting algorithm share the same lattice-point enumerator as that of the $(n-1)$-dimensional unit cube and lecture-hall simplex. Lastly, we detail some results on the real lattice-point enumerator for variations of the simplices arising from stack-sorting $L'n1$ permutations. This then allows us to show that $L'n1$ simplices are Gorenstein of index $2$.
Eon Lee, Carson Mitchell, Andrés R. Vindas-Meléndez
2023-08-31T04:52:33Z
http://arxiv.org/abs/2308.16457v1
# Stack-sorting simplices: geometry and lattice-point enumeration ###### Abstract. We study the polytopes that arise from the convex hulls of stack-sorting on particular permutations. We show that they are simplices and proceed to study their geometry and lattice-point enumeration. First, we prove some enumerative results on \(Ln1\) permutations, i.e., permutations of length \(n\) whose penultimate and last entries are \(n\) and \(1\), respectively. Additionally, we then focus on a specific permutation, which we call \(L^{\prime}n1\), and show that the convex hull of all its iterations through the stack-sorting algorithm shares the same lattice-point enumerator as that of the \((n-1)\)-dimensional unit cube and lecture-hall simplex. Lastly, we detail some results on the real lattice-point enumerator for variations of the simplices arising from stack-sorting \(L^{\prime}n1\) permutations. This then allows us to show that \(L^{\prime}n1\) simplices are Gorenstein of index \(2\). ## 1. Introduction The study of sorting a permutation using stacks was first introduced by Knuth in the 1960s [7]. Classically, the aim of the stack-sorting problem is to sort a permutation using a last-in/first-out algorithm. In its simplest form, a stack is used to rearrange a permutation \(\boldsymbol{\pi}=\pi_{1}\pi_{2}\cdots\pi_{n-1}\pi_{n}\) as follows. The elements of \(\boldsymbol{\pi}\) are pushed onto an originally empty stack and an output permutation is formed by popping elements from the stack. In this paper, we consider the convex hull of all the permutations that arise in each step of the stack-sorting algorithm, given a particular kind of input permutation. This paper is organized as follows. All undefined terms or notation will be introduced in the relevant sections. In Section 2, we present necessary preliminaries and background. Subsections 2.1 and 2.2 present the stack-sorting algorithm and a brief overview of polyhedral geometry and Ehrhart theory, respectively. We continue with background on integral and unimodular equivalence and lecture-hall simplices, in Subsections 2.3 and 2.4, respectively. We conclude this section with a notation table in Subsection 2.5. Section 3 presents some enumerative and structural results on \(Ln1\) permutations. For \(S_{n}\), the group of all permutations on \([n]\), we label a permutation \(\boldsymbol{\pi}\in S_{n}\) as a \(Ln1\) permutation if it has the form \(Ln1\), where \(L\) is a permutation of \(\{2,3,...\,,n-1\}\). We denote the set of all permutations in \(S_{n}\) of this form as \(\mathcal{L}n1\). The main theorem of this section is the following: **Theorem 3.12**.: A permutation \(\boldsymbol{\pi}\in\mathcal{L}n1\) if and only if \(\boldsymbol{\pi}\) is exactly \((n-1)\)-stack-sortable. In Section 4, we initiate the study of the geometry of \(Ln1\) families. We take a \(Ln1\) _family_, \(\mathcal{F}_{\boldsymbol{\pi}}\), to be the set of \(n\) permutations obtained from a fixed \(\boldsymbol{\pi}\in\mathcal{L}n1\) and all its iterations under the stack-sorting algorithm. We denote the special \(\mathcal{L}n1\) permutation where \(L=23\,...\,(n-1)\) as \(L^{\prime}n1\). The main results of this section are the following proposition and theorem: **Proposition 4.2**.: The convex hull of any \(Ln1\) family forms an \((n-1)\)-simplex in \(\mathbb{R}^{n}\). **Theorem 4.3**.: The \(L^{\prime}n1\) simplex is hollow.
Specifically, all non-vertex integer points in \(\triangle_{n}\) lie on the facet formed from the convex hull of \(\mathcal{F}_{L^{\prime}n1}-\{12\,...\,n\}\). Lastly, Section 5 deals with the Ehrhart theory of \(L^{\prime}n1\) simplices. The main result of this section is the following: **Theorem 5.2**.: The \(L^{\prime}n1\) simplex \(\triangle_{n}\) and \((n-1)\)-dimensional lecture-hall simplex \(P_{n-1}\) are integrally equivalent. In particular, \[L_{\mathbb{Z}}(\triangle_{n};t)=L_{\mathbb{Z}}(P_{n-1};t)=(t+1)^{n-1}.\] We conclude by exploring the following additional results, including developing a recursive relationship and proving the Gorenstein property for translates of the \(L^{\prime}n1\) simplex. In what follows, allow \(\boldsymbol{\pi}_{n}\) to denote \(L^{\prime}n1\). **Theorem 5.9**.: For all \(\lambda\in\mathbb{R}_{\geq 0}\), \[L_{\mathbb{R}}(\triangle_{n+1}-\boldsymbol{\pi}_{n+1};\lambda)=\sum_{k=0}^{\lfloor n\lambda\rfloor}\,L_{\mathbb{R}}\left(\triangle_{n}-\boldsymbol{\pi}_{n};\frac{k}{n}\right).\] **Theorem 5.12**.: For all \(\lambda\in\mathbb{R}_{\geq 0}\), \[L_{\mathbb{R}}(\triangle_{n}-\boldsymbol{\pi}_{n};\lambda)=L_{\mathbb{R}}((\triangle_{n}-\boldsymbol{\pi}_{n})^{\circ};\lambda+2).\] Hence, any integer translate of \(\triangle_{n}-\boldsymbol{\pi}_{n}\), in particular \(\triangle_{n}\), is Gorenstein of index \(2\). ## 2. Background & Preliminaries ### The stack-sorting algorithm The following algorithm for sorting an input sequence was first outlined in Donald Knuth's influential work, _The Art of Computer Programming_ [7]. **Definition 2.1**.: Given a permutation \(\boldsymbol{\pi}\) on an ordered set, the _stack-sorting algorithm_ is defined as follows:

1. Initialize an empty _stack_ (i.e., a last-in, first-out collection of elements).
2. For each input value \(x\in\boldsymbol{\pi}\), starting with the first element, \(\pi_{1}\):
   * If the stack is non-empty and \(x\) is greater than the _top_ (most-recently added) element \(y\) of the stack, then _pop \(y\) from the stack_ (i.e., move \(y\) to the output) and repeat.
   * Otherwise, _push \(x\) onto the stack_ (move \(x\) to the top of the stack).
3. Once every element has been pushed and there is no input left to consider, pop all elements remaining in the stack from last-in to first-in.

For a permutation \(\boldsymbol{\pi}\), we denote the output of the stack-sorting algorithm as \(s(\boldsymbol{\pi})\). **Example 2.2**.: The algorithm would sort the permutation \(\boldsymbol{\pi}=213\) as follows:

1. We begin with an empty stack \(S=\{\}\).
2. The first element in the permutation is \(2\). The stack is empty; thus, we push \(2\) onto the stack and obtain \(S=\{2\}\).
   * The next element is \(1\). The stack is non-empty but \(1\) is not greater than the top of the stack, \(2\). Therefore, we push \(1\) onto the stack and obtain \(S=\{2,1\}\).
   * The last element is \(3\). The stack is non-empty and \(3\) is greater than the top, \(1\). Thus, we pop \(1\) from the stack and obtain the first element of \(s(\boldsymbol{\pi})\) to be \(1\). We repeat and the exact same thing occurs with \(2\). We deduce that \(s(\boldsymbol{\pi})\) begins with \(12\).
3. We push \(3\) onto an empty stack and then pop it to obtain that \(s(\boldsymbol{\pi})=123\).

Figure 1. A visualization of step \(2\) of the stack-sorting algorithm as presented in Example 2.2.
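Definition 2.1 translates directly into code; the following short Python sketch mirrors the push/pop rules above (the helper names are ours, not the paper's).

```python
def stack_sort(perm):
    """One pass of the stack-sorting algorithm of Definition 2.1."""
    stack, output = [], []
    for x in perm:
        # Pop every top element smaller than x before pushing x.
        while stack and x > stack[-1]:
            output.append(stack.pop())
        stack.append(x)
    while stack:  # pop whatever remains, last-in first-out
        output.append(stack.pop())
    return output

def sorting_index(perm):
    """Smallest t with s^t(perm) equal to the identity (the t of Definition 2.3 below)."""
    t, target = 0, sorted(perm)
    while perm != target:
        perm, t = stack_sort(perm), t + 1
    return t

print(stack_sort([2, 1, 3]))           # [1, 2, 3], as in Example 2.2
print(sorting_index([2, 3, 4, 5, 1]))  # 4: four passes sort 23451
```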
**Definition 2.3**.: A permutation \(\pi\) on \([n]:=\{1,2,...\,,n\}\) is _\(t\)-stack-sortable_ if \(t\) iterations of the stack-sorting algorithm yield the identity permutation, i.e., \(s^{t}(\pi)=12\cdots n\). Furthermore, \(\pi\) is _exactly_ \(t\)-stack-sortable if \(s^{t}(\pi)=12\cdots n\) and \(m<t\) implies \(s^{m}(\pi)\neq 12\cdots n\). The identity permutation itself is defined to be \(0\)-stack-sortable, and \(s(12\cdots n)=12\cdots n\) for all \(n\). All permutations on \([n]\) will reach the identity after at most \(n-1\) iterations of the algorithm. If \(\pi\) is exactly \((n-1)\)-stack-sortable, then \(\pi\) is said to be _maximal_ with respect to the algorithm. Thus, revisiting Example 2.2, \(\pi=213\) is exactly \(1\)-stack-sortable, is also \(t\)-stack-sortable for any \(t\geq 1\), and thus is not maximal. ### Polyhedral geometry & Ehrhart theory In this paper, we investigate the properties of a family of _convex polytopes_, in particular, a family of lattice simplices. We present some preliminaries on polyhedral geometry and Ehrhart theory, i.e., the study of lattice points in dilations of polytopes. A _convex polytope_ \(P\) is the _convex hull_ of a set of points \(\mathcal{C}=\{\mathbf{x}_{1},...\,,\mathbf{x}_{k}\}\), that is, \[P=\mathsf{conv}(\mathcal{C}):=\left\{\sum_{i=1}^{k}\lambda_{i}\mathbf{x}_{i}:\lambda_{i}\in\mathbb{R}_{\geq 0}\text{ and }\sum_{i=1}^{k}\lambda_{i}=1\right\}. \tag{1}\] Any expression of the form \(\sum_{i=1}^{k}\lambda_{i}\mathbf{x}_{i}\), with the same conditions as imposed on the \(\lambda_{i}\) in (1), is called a _convex combination_ of the points \(\mathbf{x}_{i}\). The convex hull of \(\mathcal{C}\) is the set of all convex combinations of points in \(\mathcal{C}\). An _affine subspace_ or affine space is a linear subspace with a possible translation by a fixed vector to its elements. In other words, it is not required that the origin be part of a set for it to be an affine subspace. Every linear subspace is an affine subspace. **Example 2.4**.: Any line in \(\mathbb{R}^{2}\) is an affine subspace of \(\mathbb{R}^{2}\). The same applies for any plane in \(\mathbb{R}^{3}\). Notice these objects partition their respective spaces into two parts, those on one side of the line (plane) and those on the other. A line in \(\mathbb{R}^{2}\) is defined by a linear equation \(ax+by+c=0\) and a plane in \(\mathbb{R}^{3}\) is defined by \(ax+by+cz+d=0\), where \(a,b,c,d\in\mathbb{R}\) and \(x,y,z\) are variables. At least one coefficient attached to a variable must be nonzero. \(\diamondsuit\) The concept from Example 2.4 can be generalized to define a _hyperplane_ in \(\mathbb{R}^{n}\) as an \((n-1)\)-dimensional affine subspace determined by the solution space of a linear equation \[a_{0}+a_{1}x_{1}+\cdots+a_{n}x_{n}=0,\] where each \(a_{i}\in\mathbb{R}\) and at least one of \(a_{1}\) through \(a_{n}\) is nonzero.
Any hyperplane of \(\mathbb{R}^{n}\) partitions \(\mathbb{R}^{n}\) into two parts called _halfspaces_; namely, without loss of generality, \[a_{0}+a_{1}x_{1}+\cdots+a_{n}x_{n}\geq 0\quad\text{ and }\quad a_{0}+a_{1}x_{1}+\cdots+a_{n}x_{n}<0.\] A polytope \(P\) can also be defined as the bounded intersection of finitely many _halfspaces_; equivalently, this is the bounded solution set of a system of linear inequalities. A polytope \(P\) is contained in the _affine hull_ of \(\mathcal{C}\), denoted by \(\mathsf{aff}(\mathcal{C})\), which is defined similarly to (1), but with the looser condition that \(\lambda_{i}\in\mathbb{R}\). We can characterize \(\mathsf{aff}(\mathcal{C})\) as the smallest affine subspace containing \(\mathsf{conv}(\mathcal{C})\). Moreover, the set \(\mathcal{C}=\{\mathbf{x}_{1},...\,,\mathbf{x}_{k}\}\) is _affinely independent_ if \(\mathsf{aff}(\mathcal{C})\) is a \((k-1)\)-dimensional space. Equivalently, \(\mathcal{C}\) is affinely independent if and only if the vectors \[\mathbf{x}_{1}-\mathbf{x}_{i},...\,,\mathbf{x}_{i-1}-\mathbf{x}_{i},\mathbf{x}_{i+1}-\mathbf{x}_{i},...\,,\mathbf{x}_{k}-\mathbf{x}_{i}\ \in\mathbb{R}^{n}\] are linearly independent in \(\mathbb{R}^{n}\) for any \(i\in[k]\). The _dimension_ of a polytope \(P\), denoted \(\mathsf{dim}(P)\), is the dimension of its affine hull. We say a polytope \(P\subset\mathbb{R}^{n}\) is _full-dimensional_ if \(\mathsf{dim}(P)=n\). The convex hull of \(n+1\) affinely independent points forms a special polytope called a _simplex_ of dimension \(n\). A simplex generalizes the concept of a triangle in \(\mathbb{R}^{2}\) or the tetrahedron in \(\mathbb{R}^{3}\). Changing every inequality to a strict inequality in the minimal halfspace description of \(P\) yields the _interior_ of \(P\), denoted \(P^{\circ}\). Changing every inequality to an equality in a minimal halfspace description yields the _boundary_ of \(P\), denoted \(\partial P\). Note that \(P=P^{\circ}\uplus\partial P\), where \(\uplus\) denotes a disjoint union. A polytope is _hollow_ if all its integer points lie on its boundary or, equivalently, it has no integer points in its interior. A polytope is said to be a _lattice polytope_ if all of its vertices have integral coordinates. The _Ehrhart function_ (or _lattice-point enumerator_ or _discrete volume_) of any polytope \(P\) is defined as \[L_{\mathbb{Z}}(P;t):=|tP\cap\mathbb{Z}^{n}|,\] where \(tP=\{t\mathbf{x}:\ \mathbf{x}\in P\}\) and \(t\) is an integer. When \(P\) is a lattice polytope, the Ehrhart function is a polynomial in \(t\) of degree equal to the dimension of \(P\), which we refer to as the _Ehrhart polynomial_. We can encode the information of an Ehrhart polynomial in a generating series to obtain the _Ehrhart series_ of a polytope: \[\mathsf{Ehr}_{\mathbb{Z}}(P;z):=1+\sum_{t\in\mathbb{Z}_{+}}L_{\mathbb{Z}}(P;t)z^{t}=\frac{h_{\mathbb{Z}}^{*}(P;z)}{(1-z)^{n+1}},\] where \(n\) is the dimension of \(P\) and \(h^{*}_{\mathbb{Z}}(P;z)=1+h^{*}_{1}z+\cdots+h^{*}_{n}z^{n}\) is a polynomial in \(z\) of degree at most \(n\) called the \(h^{*}\)_-polynomial_ of \(P\). The coefficients \(h^{*}_{i}\) of this polynomial are nonnegative integers and have relevant combinatorial interpretations. The interested reader can consult [2] for more information. A fundamental result in Ehrhart theory is one of _reciprocity_, known as _Ehrhart-Macdonald reciprocity_.
For any convex lattice polytope \(P\) of dimension \(n\): \[L_{\mathbb{Z}}(P;-t)=(-1)^{n}L_{\mathbb{Z}}(P^{\circ};t).\] This not only provides a relationship between a polytope and its interior, but also provides an interpretation for negative values of \(t\). Evaluating an Ehrhart polynomial at \(-t\) gives the lattice count for the \(t^{\text{th}}\) dilate of its interior up to a sign. A lattice polytope \(P\subset\mathbb{R}^{n}\) is said to be _Gorenstein of index \(k\)_ if there exists a positive integer \(k\) such that \[L_{\mathbb{Z}}(P^{\circ};k-1)=0,\quad L_{\mathbb{Z}}(P^{\circ};k)=1,\quad\text{ and }L_{\mathbb{Z}}(P^{\circ};t)=L_{\mathbb{Z}}(P;t-k) \tag{2}\] for all integers \(t>k\) [2]. More generally, for any convex polytope \(P\subseteq\mathbb{R}^{n}\) (i.e., the vertices of \(P\) need not be lattice points), the real Ehrhart counting function for all dilates \(\lambda\in\mathbb{R}_{\geq 0}\) is defined as: \[L_{\mathbb{R}}(P;\lambda):=|\lambda P\cap\mathbb{Z}^{n}|.\] For more on Ehrhart theory, one can consult [2] for an in-depth treatment of the material. A larger survey on the combinatorics of polytopes can be found in [8]. Additionally, one can learn more about rational Ehrhart theory from [1]. **Example 2.5**.: We define the \(n\)_-dimensional unit cube_ as the convex hull of all binary strings of length \(n\), i.e., the binary strings form the vertex set. The two-dimensional unit cube \(\square^{2}\) contains \(4\) points in its first dilate, \(9\) in its second, \(16\) in its third, etc. Without much difficulty, one can show directly or inductively that: \[L_{\mathbb{Z}}(\square^{2};t)=\left|t\square^{2}\cap\mathbb{Z}^{2}\right|=(t+1)^{2}.\]

Figure 2. The first three integral dilates of the two-dimensional unit cube \(\square^{2}\).

Analogous argumentation can be used to show \(L_{\mathbb{Z}}(\square^{3};t)=(t+1)^{3}\); and, in fact, that \[L_{\mathbb{Z}}(\square^{n};t)=(t+1)^{n}.\] We can encode the lattice-point enumerator in a generating function to obtain the Ehrhart series of \(\square^{n}\): \[\mathsf{Ehr}_{\mathbb{Z}}(\square^{n};z)=\sum_{t\in\mathbb{N}}L_{\mathbb{Z}}(\square^{n};t)z^{t}=\frac{\sum_{k=0}^{n-1}A(n,k)z^{k}}{(1-z)^{n+1}},\] where \(A(n,k)\) denotes the Eulerian number, i.e., the number of permutations of \(1\) through \(n\) having exactly \(k\) descents. Further, by reciprocity, we have: \[L_{\mathbb{Z}}(\square^{n};-t)=(1-t)^{n}=(-1)^{n}(t-1)^{n}=(-1)^{n}L_{\mathbb{Z}}((\square^{n})^{\circ};t),\] and can conclude the interior of the \(t\)-dilate has exactly as many lattice points as the entirety of the \((t-2)\)-dilate. For example, the interior of \(3\square^{2}\) has \(4\) lattice points, the same number as all of \(\square^{2}\). We also notice that the interior of the first dilate of \(\square^{n}\) is empty and the interior of the second dilate has exactly one integer point. These are the conditions from (2) for \(k=2\), which implies \(\square^{n}\) is Gorenstein of index \(2\), an important property that will appear again later in the paper. \(\diamondsuit\)
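The counts in Example 2.5 are easy to confirm by brute force. The Python sketch below (helper names are ours) enumerates lattice points of dilates of \(\square^{n}\) and checks both the Ehrhart polynomial and the Gorenstein-of-index-\(2\) property:

```python
from itertools import product

def cube_count(n, t, interior=False):
    """Lattice points of the t-th dilate of the unit cube [0,1]^n.
    With interior=True, count only points with 0 < x_i < t for all i."""
    coords = range(1, t) if interior else range(t + 1)
    return sum(1 for _ in product(coords, repeat=n))

for n in range(1, 4):
    for t in range(1, 6):
        assert cube_count(n, t) == (t + 1) ** n  # Ehrhart polynomial of the cube
        # Reciprocity: interior of the (t+2)-dilate matches the whole t-dilate.
        assert cube_count(n, t + 2, interior=True) == cube_count(n, t)
    assert cube_count(n, 1, interior=True) == 0  # conditions from (2) with k = 2
    assert cube_count(n, 2, interior=True) == 1
```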
### Integral and unimodular equivalence We recall the definition of a unimodular transformation. **Definition 2.6**.: A _unimodular transformation_ in \(\mathbb{R}^{n}\) is a linear transformation \(U\), i.e., an \(n\) by \(n\) matrix, with coefficients in \(\mathbb{Z}\) such that \(\mathsf{det}(U)=\pm 1\). The following result from [6] provides a heuristic for determining whether two polytopes have the same lattice-point count. **Proposition 2.7**.: _If a linear transformation on a lattice polytope \(P\) is unimodular, then it preserves the lattice._ In other words, the resulting polytope will have the same integer-point count as \(P\). However, not all polytopes that have equivalent lattice counts have a unimodular transformation between them. For example, if the two polytopes have square matrix representations (vertices as columns of a matrix), they must have the same determinant up to a sign for a unimodular transformation to exist between them. **Definition 2.8**.: Two lattice polytopes \(P\subset\mathbb{R}^{m}\) and \(Q\subset\mathbb{R}^{n}\) are _integrally equivalent_ if there exists an affine transformation \(\varphi:\mathbb{R}^{m}\to\mathbb{R}^{n}\) whose restriction to \(P\) preserves the lattice. In other words, \(\varphi\) is a bijection \[P\cap\mathbb{Z}^{m}\longleftrightarrow Q\cap\mathbb{Z}^{n}.\] In particular, \(L_{\mathbb{Z}}(P;t)=L_{\mathbb{Z}}(Q;t)\) for all \(t\in\mathbb{N}\). We now have an effective method to show two \(n\)-polytopes are integrally equivalent: we can search for a unimodular transformation from the lattice points of one polytope to the other. ### Lecture-hall simplices We begin this subsection by recalling the definition of lecture-hall partitions, which were studied in [5]. They were further studied in the context of lecture-hall simplices in [4]. **Definition 2.9**.: A _lecture-hall partition_ of length \(n\) is a sequence \(\{\alpha_{1},...\,,\alpha_{n}\}\in\mathbb{Z}^{n}\) such that \[0\leq\alpha_{1}\leq\frac{\alpha_{2}}{2}\leq\cdots\leq\frac{\alpha_{n}}{n}.\] We can construct the _lecture-hall simplex_ of dimension \(n\) as \[P_{n}:=\mathsf{conv}\left\{\boldsymbol{\alpha}\in\mathbb{Z}^{n}:0\leq\alpha_{1}\leq\frac{\alpha_{2}}{2}\leq\cdots\leq\frac{\alpha_{n}}{n}\leq 1\right\}.\] Dilating \(P_{n}\) yields \(tP_{n}=\mathsf{conv}\left\{\boldsymbol{\alpha}\in\mathbb{Z}^{n}:0\leq\alpha_{1}\leq\frac{\alpha_{2}}{2}\leq\cdots\leq\frac{\alpha_{n}}{n}\leq t\right\}\). We also have that \(P_{n}\subset 2P_{n}\subset\cdots\subset tP_{n}\) because \((0,0,...\,,0)\in P_{n}\). This fixes a vertex and thus avoids any translations of the polytope away from previous dilates, an observation that will become useful later on. **Example 2.10**.: For \(n=4\), we must have \(0\leq\alpha_{1}\leq\cdots\leq\frac{\alpha_{4}}{4}\leq 1\). The first points we can identify in this simplex \(P_{4}\) are \((0,0,0,0)\) as well as \((1,2,3,4)\). We can also take the four points \((0,0,0,k)\) for \(k\in[4]\). We have \((0,0,1,2)\), \((0,0,1,3)\), and \((0,0,1,4)\); \((0,0,2,3)\) and \((0,0,2,4)\); as well as \((0,0,3,4)\). Finally, we can take \((0,1,2,3)\), \((0,1,2,4)\), \((0,1,3,4)\), and \((0,2,3,4)\). This consists of a total of \(16\) lattice points and these are the only points in the simplex in the first dilate. Note that not all of these points are vertices; for example, \((0,0,0,2)\) will be on the line between the origin and \((0,0,0,4)\). This gives \(L_{\mathbb{Z}}(P_{4};1)=16=2^{4}=(1+1)^{4}\). In fact, we will later find the distribution of points in the lecture-hall simplex and our \(L^{\prime}n1\) simplex to be remarkably similar. \(\diamondsuit\) In general, the vertex set for \(P_{n}\) consists of exactly the \(n+1\) affinely independent points: \[(0,0,...\,,0),(0,0,...\,,n),...\,(0,2,...\,,n),\text{ and }(1,2,...\,,n).\]
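The pattern of Example 2.10 persists in every dilate, which one can verify computationally before stating the general result (Proposition 2.11 below). A small Python sketch (names ours), using cross-multiplication to keep the chain of inequalities in exact integer arithmetic:

```python
from itertools import product

def lecture_hall_count(n, t):
    """Lattice points of t*P_n: integer (a_1,...,a_n) with
    0 <= a_1/1 <= a_2/2 <= ... <= a_n/n <= t (Definition 2.9)."""
    boxes = [range(k * t + 1) for k in range(1, n + 1)]  # 0 <= a_k <= k*t
    count = 0
    for a in product(*boxes):
        # a_k/k <= a_{k+1}/(k+1)  <=>  (k+1)*a_k <= k*a_{k+1}; a is 0-indexed here.
        if all((k + 2) * a[k] <= (k + 1) * a[k + 1] for k in range(n - 1)):
            count += 1
    return count

for n in range(1, 5):
    for t in range(4):
        assert lecture_hall_count(n, t) == (t + 1) ** n  # matches (t+1)^n
```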
More information about \(P_{n}\), including the proof of the following proposition, can be found in [4]. **Proposition 2.11** (Theorems 1 and 2.2, [4]).: _Let \(P_{n}\) be the lecture-hall simplex of dimension \(n\). Then_ \[L_{\mathbb{Z}}(P_{n};t)=(t+1)^{n}.\] Observe that the lattice-point enumerator is the same for both the lecture-hall simplex \(P_{n}\) and the \(n\)-dimensional unit cube \(\square^{n}\) from Example 2.5. We will show that our \(L^{\prime}n1\) simplices (defined in Section 4) are also in bijection with these polytopes, and hence have the same lattice-point enumerator. ### Notation We conclude our preliminaries with a notation table for reference. Note that it includes notation for terms that will be defined in later sections.

\begin{table} \begin{tabular}{|c|c|} \hline **Symbol** & **Meaning** \\ \hline \([n]\) & the set \(\{1,2,...\,,n\}\) \\ \hline \(S_{n}\) & the set of all permutations on \([n]\) \\ \hline \(s(\pi)\) & the output from stack-sorting a permutation \(\pi\) \\ \hline conv & convex hull \\ \hline aff & affine hull \\ \hline \(P^{\circ}\) & the interior of the polytope \(P\) \\ \hline \(\partial P\) & the boundary of the polytope \(P\) \\ \hline \(L_{\mathbb{Z}}(P;t)\) & the number of lattice points in \(tP\) where \(t\in\mathbb{Z}\) \\ \hline \(\operatorname{\mathsf{Ehr}}_{\mathbb{Z}}(P;z)\) & the ordinary generating function for \(L_{\mathbb{Z}}(P;t)\) \\ \hline \(h_{\mathbb{Z}}^{*}(P;z)\) & the polynomial numerator in the rational form of \(\operatorname{\mathsf{Ehr}}_{\mathbb{Z}}(P;z)\) \\ \hline \(\square^{n}\) & the \(n\)-dimensional unit cube \\ \hline \(P_{n}\) & the \(n\)-dimensional lecture-hall simplex \\ \hline \(\mathcal{L}n1\) & the set of permutations on \([n]\) ending in \(n1\) \\ \hline \(L^{\prime}n1\) or \(\pi_{n}\) & the permutation \((2,3,...\,,n,1)\) \\ \hline \(\mathcal{F}_{L^{\prime}n1}\) & the \(L^{\prime}n1\) family \\ \hline \(\triangle_{n}\) & the \((n-1)\)-dimensional \(L^{\prime}n1\) simplex \\ \hline \(\mathbf{e}\) & the permutation \((1,2,...\,,n)\) \\ \hline \(P^{\prime}\) & the projection of \(P\subset\mathbb{R}^{n}\) onto the hyperplane defined by \(x_{n}=0\) \\ \hline \(\bar{P}\) & the projection of \(P\subset\mathbb{R}^{n}\) into \(\mathbb{R}^{n-1}\) by removing each point’s last coordinate \\ \hline \(\hat{P}\) & the lift of \(P\subset\mathbb{R}^{n}\) into \(\mathbb{R}^{n+1}\) by appending a last coordinate of \(0\) to each point \\ \hline \end{tabular} \end{table} Table 1. Notation Table

## 3. Stack-Sorting on \(Ln1\) permutations In this section we explore the behavior of the stack-sorting algorithm on a special family of permutations, one we will find to act in an interesting way with respect to the algorithm. **Definition 3.1**.: A permutation \(\pi\in S_{n}\) is a \(Ln1\) _permutation_ if \(\pi\) has the form \(Ln1\), such that \(L\) is a permutation of \(\{2,3,...\,,n-1\}\), i.e., a permutation that ends exactly with \(n\) and then \(1\). We denote the set of all permutations of such form as \(\mathcal{L}n1\) and write \(\pi\in\mathcal{L}n1\) if it takes the appropriate form. Notice that there are \((n-2)!\) distinct permutations in \(\mathcal{L}n1\), as they are counted by the number of different possibilities for \(L\), the number of permutations on a set of \(n-2\) elements. We denote the special \(Ln1\) permutation where \(L=23\,...\,(n-1)\), i.e., where \(L\) is already sorted in ascending order, as \(L^{\prime}n1\). **Example 3.2**.: For \(n=4\), \(2341\) and \(3241\) are the two permutations that comprise the set \(\mathcal{L}41\). \(L^{\prime}n1\) will always refer to a unique permutation; as an example, \(L^{\prime}51=23451\). We have that \(L^{\prime}51\) is one of \((5-2)!=6\) permutations in the set \(\mathcal{L}51\). **Lemma 3.3**.: _Let \(\pi=\pi_{1}\pi_{2}\,...\,\pi_{n}\) be a permutation of \(n\) distinct ordered elements and let \(x=\max\{\pi_{1},\pi_{2},...\,,\pi_{n}\}\). If \(L\) and \(R\) are the sub-permutations of \(\pi\) such that \(\pi=LxR\), then the stack-sorting algorithm yields \(s(\pi)=s(L)s(R)x\)._ This result is a generalization of [3, Lemma 1.2]. In other words, the algorithm sorts everything to the left of \(x\) (the maximum element) first, regardless of the contents to its right. Then, everything to the right of \(x\) is sorted and \(x\) takes the \(n^{\text{th}}\) and final place in the output permutation. Proof.: Consider an arbitrary \(\pi_{i}\in L\). Since \(x\not\in L\) and thus \(\pi_{i}\neq x\), we must have \(\pi_{i}<x\). This implies that when all of \(L\) has been pushed on the stack and \(x\) is taken as input by the algorithm, all of \(L\) will be popped _before_ \(x\) is pushed on the stack. After all of \(L\) has been popped, \(x\) will be pushed onto an empty stack. Similarly, it can be determined that if \(\pi_{j}\in R\), then \(\pi_{j}<x\). Because there is no element greater than \(x\) that can pop it from the stack, all of \(R\) will be pushed and then subsequently popped in some order before \(x\) (at the bottom) is finally popped. Hence, the algorithm will sort \(R\) entirely into \(s(R)\) before popping \(x\); by combining these two results, we obtain that \(s(\pi)=s(L)s(R)x\). As a consequence of this lemma, we obtain the following two corollaries. **Corollary 3.4**.: _If \(\pi\in\mathcal{L}n1\), then \(s(\pi)=s(L)1n\)._ **Corollary 3.5**.: _If \(\pi\in\mathcal{L}n1\), then \(s(\pi)\) exactly ends with \(n-1\), \(1\), then \(n\)._ Note that Corollary 3.5 is simply an extra application of Lemma 3.3 to Corollary 3.4, specifically to \(s(L)\). Both corollaries can be readily extended to a biconditional. **Theorem 3.6**.: _Let \(\pi\in\mathcal{L}n1\)._
For all \(i\in[n-2]\), \(s^{i}(\pi)\) exactly ends with_ \[(n-i)1(n-i+1)\,...\,(n-2)(n-1)n.\] Proof.: The base case when \(i=1\) is the statement of Corollary 3.5. Assume the theorem holds for \(s^{i-1}(\pi)\), i.e., it begins with some permutation \(\rho\) of \(\{2,3,...\,,n-i\}\) and ends exactly with \((n-i+1)1(n-i+2)\,...\,(n-1)n\). Consider another application of the stack-sorting algorithm to \(s^{i-1}(\pi)\). Notice that \((n-i+1)\) is greater than all elements in \(\rho\). Thus, when \((n-i+1)\) is considered by the algorithm, all elements in \(\rho\) will be popped before \((n-i+1)\) is pushed. By Lemma 3.3, we know the final element in \(s(\rho)\) will be \(n-i\). Now, the algorithm sorts \((n-i+1)1(n-i+2)\,...\,(n-1)n\) into \(1(n-i+1)(n-i+2)\,...\,(n-1)n\). Therefore, we obtain that \(s^{i}(\pi)\) exactly ends with \((n-i)1(n-i+1)\,...\,(n-1)n\) as desired. Further, note that when \(i=n-1\) we finally obtain the identity permutation after the maximal number of iterations. **Example 3.7**.: For \(n=6\), Theorem 3.6 tells us that \(\pi\in\mathcal{L}n1\) ends with \(61\), as expected; \(s(\pi)\) ends with \(516\), as stated in Corollary 3.5. Further, \(s^{2}(\pi)\) ends with \(4156\), \(s^{3}(\pi)\) ends with \(31456\), and thus \(s^{3}(\pi)=231456\). We conclude \(s^{4}(\pi)=213456\) and \(s^{5}(\pi)=123456\). **Remark 3.8**.: For all \(n\geq 4\), if \(\pi\in\mathcal{L}n1\), then \(s^{n-3}(\pi)=2314\cdots n\), \(s^{n-2}(\pi)=213\cdots n\), and of course \(s^{n-1}(\pi)=123\cdots n\). This tells us that regardless of what \(\pi\in\mathcal{L}n1\) is, the final three iterations of the algorithm are all the same. The permutations "meet" at this point. However, before those last few iterations, we cannot explicitly describe what \(s^{i}(\pi)\) will start with without specifying \(L\). Specifying \(L=L^{\prime}\) yields the following useful result. **Corollary 3.9**.: _Let \(\pi=L^{\prime}n1\). For \(i\in[n-1]\),_ \[s^{i}(\pi)=23\cdots(n-i)1(n-i+1)\cdots(n-1)n.\] Inputting any sequence in ascending order will have the stack-sorting algorithm output the original sequence. Thus, the initial segment \(\rho\) of the permutation will always be \(23\,...\,(n-i-1)\). We effectively learn, not only about the maximality of \(Ln1\), but also enumerate the general form of each algorithm iteration in the special case of \(L^{\prime}n1\). **Definition 3.10**.: We define the \(L^{\prime}n1\)_family_ to be the set of \(n\) permutations obtained from \(L^{\prime}n1\) and its iterations under the stack-sorting algorithm. We denote this family by \(\mathcal{F}_{L^{\prime}n1}\). Generally, any \(Ln1\) permutation with its iterations up to and including the identity form a \(Ln1\)_family_. **Example 3.11**.: Consider \(L^{\prime}51=23451\). By Corollary 3.9 we obtain that \(s(L^{\prime}51)=23415\), \(s^{2}(L^{\prime}51)=23145\), \(s^{3}(L^{\prime}51)=21345\), and \(s^{4}(L^{\prime}51)=12345\). Notice that \(L^{\prime}51\) is maximal and \[\mathcal{F}_{L^{\prime}51}=\{23451,23415,23145,21345,12345\}.\] We are essentially just moving the '1' one place to the left each time. The following theorem tells us that maximal permutations and \(Ln1\) permutations are equivalent. **Theorem 3.12**.: _A permutation \(\pi\in\mathcal{L}n1\) if and only if \(\pi\) is exactly \((n-1)\)-stack sortable._ Proof.: If \(\pi\in\mathcal{L}n1\), the desired consequence is a result of Theorem 3.6, which dictates that \(s^{n-2}(\pi)=213\cdots n\). 
It follows that \(s^{n-1}(\pi)=123\cdots n\); therefore, \(\pi\) is exactly \((n-1)\)-stack-sortable. If \(\pi\not\in\mathcal{L}n1\), then \(1\) is not in the last place of \(\pi\) _or_ some \(k<n\) is directly left of \(1\) in \(\pi\). We consider these two cases. _Case 1_.: If \(1\) is not the last digit in \(\pi\), then \(1\) will reach the first position of the output in less than \(n-1\) iterations of the algorithm. This is because the \(1\) will always move at least one place to the left if it is not already leftmost; the top of the stack will always be greater than \(1\), so we will always push it onto the stack where it will be immediately popped before the previous top element. The remaining \(n-1\) elements will also have been sorted in less than \(n-1\) iterations because the maximal possible number of iterations it takes to sort the remaining sequence of \(n-1\) elements is \(n-2\). Thus, \(\pi\) can be stack-sorted in less than \(n-1\) iterations and is not exactly \((n-1)\)-stack-sortable. _Case 2_.: If \(\pi=Lk1\) such that \(k<n\) and \(L\) is a permutation of \(\{1,2,...\,,k-1,k+1,...\,,n\}\), then \(1\) will move more than one space left after the first iteration of the algorithm. This is because \(n\) will remain on the bottom of the stack (possibly with more elements) while \(k\) and \(1\) are both pushed and then cleared first; therefore, \(s(\pi)\) will contain \(1\) to the left of both \(k\) and \(n\). Similar to the reasoning in the prior case, \(1\) will reach the first position of the output in less than \(n-2\) more iterations, while the remaining elements will also be sorted in ascending order in less than \(n-2\) more iterations by maximality. As before, \(\pi\) can be stack-sorted in less than \(n-1\) iterations and is not exactly \((n-1)\)-stack-sortable. ## 4. Geometry of \(Ln1\) families In this section we interpret our permutations as points in \(\mathbb{R}^{n}\), which leads us to ask: What geometry arises from taking the convex hull of stack-sorting iterations? **Proposition 4.1**.: _Every \(Ln1\) family forms an affinely independent set in \(\mathbb{R}^{n}\)._ Proof.: Consider \(\boldsymbol{\pi}\in\mathcal{L}n1\). Let \(\mathbf{e}=123\cdots n=\mathbf{s}^{n-1}(\boldsymbol{\pi})\). Subtracting \(\mathbf{e}\) component-wise from each of the other family members and utilizing Theorem 3.6, we deduce that \(\mathbf{s}^{i}(\boldsymbol{\pi})-\mathbf{e}\) must end with \(1(1-n+i)0\dots 000\), where the \(1\) is in the \((n-i-1)^{\text{th}}\) position. Note that \[\mathbf{s}^{n-2}(\boldsymbol{\pi})-\mathbf{e}=1(-1)00\cdots 00\text{ and }\mathbf{s}^{n-3}(\boldsymbol{\pi})-\mathbf{e}=11(-2)0\cdots 00\] by Remark 3.8. As vectors in \(\mathbb{R}^{n}\), these two are linearly independent, as no scalar multiple of one vector can produce the other; there is no way to obtain a nonzero number in the third coordinate of \(\mathbf{s}^{n-2}(\boldsymbol{\pi})-\mathbf{e}\). Assume \(\mathcal{S}=\{\mathbf{s}^{n-2}(\boldsymbol{\pi})-\mathbf{e},\mathbf{s}^{n-3}(\boldsymbol{\pi})-\mathbf{e},...,\mathbf{s}^{n-k+1}(\boldsymbol{\pi})-\mathbf{e}\}\) is a linearly independent set in \(\mathbb{R}^{n}\) and consider \(\mathcal{S}\cup\{\mathbf{s}^{n-k}(\boldsymbol{\pi})-\mathbf{e}\}\). We know that \(\mathbf{s}^{n-k}(\boldsymbol{\pi})-\mathbf{e}\) is the only vector in this set with a nonzero number in the \(k^{\text{th}}\) coordinate.
Therefore, no linear combination of the vectors in \(\mathcal{S}\) yields \(\mathbf{s}^{n-k}(\boldsymbol{\pi})-\mathbf{e}\) and \(\mathcal{S}\cup\{\mathbf{s}^{n-k}(\boldsymbol{\pi})-\mathbf{e}\}\) is linearly independent. We can conclude the original \(Ln1\) family is an affinely independent set in \(\mathbb{R}^{n}\). As a corollary we obtain the following proposition. **Proposition 4.2**.: _The convex hull of any \(Ln1\) family forms an \((n-1)\)-simplex in \(\mathbb{R}^{n}\)._ When the family is \(\mathcal{F}_{L^{\prime}n1}\), we will denote its convex hull as \(\triangle_{n}\) and refer to it as the \(L^{\prime}n1\) _simplex_. Notice that the simplex formed is one dimension less than the dimension of its ambient space. This is not a trivial detail; it will necessitate some caution as we proceed to develop the Ehrhart theory of \(\triangle_{n}\), especially when comparing it to the unit cube and lecture-hall simplex, which are both full-dimensional. **Theorem 4.3**.: _The \(L^{\prime}n1\) simplex is hollow. Specifically, all non-vertex integer points in \(\triangle_{n}\) lie on the facet formed from the convex hull of \(\mathcal{F}_{L^{\prime}n1}\setminus\{\mathbf{e}\}\)._ Proof.: Note that all elements of \(\mathcal{F}_{L^{\prime}n1}\), treated as points in \(\mathbb{R}^{n}\), have \(2\) as their first coordinate entry except for \(\mathbf{e}=(1,2,...\,,n)\). This implies that if we take a convex combination of points in \(\mathcal{F}_{L^{\prime}n1}\), the first coordinate entry is of the form \[\lambda_{1}+2(\lambda_{2}+\dots+\lambda_{n})=k,\] where \(k\) is a real number and \(0<k\leq 2\) because of the constraints on \(\lambda\) from Equation (1). Further note that \(\lambda_{2}+\dots+\lambda_{n}=1-\lambda_{1}\). Hence, \(2-\lambda_{1}=k\), which we can rewrite as \(\lambda_{1}=2-k\). For our first-coordinate entry to be an integer, we can only consider \(k=1\) and \(2\). If \(k=1\), then \(\lambda_{1}=1\) and we obtain the vertex point \(\mathbf{e}\). If \(k=2\), then \(\lambda_{1}=0\) and the integer point exists on the facet containing all vertices except \(\mathbf{e}\). Hollowness is not a property special to \(\triangle_{n}\). The unit cube and lecture-hall simplex also satisfy this property. Because we know their Ehrhart polynomials, we can argue this much more efficiently using reciprocity: \[(-1)^{n}L_{\mathbb{Z}}(P_{n}^{\circ};1)=L_{\mathbb{Z}}(P_{n};-1)=(-1+1)^{n}=0.\] **Remark 4.4**.: The lecture-hall simplex is even more so like \(\triangle_{n}\) because it is hollow in the exact same way: all non-vertex points are on the facet formed from the convex hull of its vertices excluding \((1,2,...,n)\). An almost identical argument to Theorem 4.3 can show this to be the case. **Remark 4.5**.: A lattice polytope \(P\) is hollow if and only if \(-1\) is a root of its Ehrhart polynomial. More generally, the dilate \(tP\) is hollow if and only if \(-t\) is a root of the Ehrhart polynomial of \(P\). ## 5. Ehrhart theory of \(L^{\prime}n1\) simplices We now develop the Ehrhart theory of our family of \(L^{\prime}n1\) simplices. There is a slight caveat that prevents us from constructing unimodular transformations directly between \(\triangle_{n}\) and \(\square^{n-1}\) or \(P_{n-1}\): \(\triangle_{n}\), unlike the other two objects, which are full-dimensional in \(\mathbb{R}^{n-1}\), is \((n-1)\)-dimensional in \(\mathbb{R}^{n}\).
This means a matrix representation of \(\triangle_{n}\) will be \(n\) by \(n\), while that of, say, \(P_{n-1}\) will be \(n-1\) by \(n\) or \(\square^{n-1}\) will be \(n-1\) by \(2^{n-1}\). Any transformation between them is a non-square matrix and thus cannot be unimodular. We transform from \(\triangle_{n}\) to \(P_{n-1}\) instead of \(\square^{n-1}\) because the former is also a simplex of the same dimension, which implies they both have \(n\) vertices. We then need to make a correction for the difference in their ambient spaces. One option, the one we will proceed with, is to lift \(P_{n-1}\) to \(\mathbb{R}^{n}\) by appending a new last coordinate that does not affect the lattice-point count. We will establish a unimodular transformation from this new construction to \(\triangle_{n}\). Another option is to project \(\triangle_{n}\) onto a hyperplane, ensuring the lattice count remains preserved, and then transform this construction in \(\mathbb{R}^{n-1}\) to \(P_{n-1}\). Regardless, to construct a unimodular transformation we need to compare two polytopes living in the same ambient space. Before we proceed, a quick observation about the recursive structure of the lecture-hall simplex will be insightful. Let \(\mathbf{0_{n}}\) be the zero vector in \(\mathbb{R}^{n}\). **Proposition 5.1**.: _Let \(V_{n}\) denote the vertex set of \(P_{n}\) and \(Q_{n}:=\mathsf{conv}(V_{n}\setminus\{\mathbf{0_{n}}\})\). Then_ \[L_{\mathbb{Z}}(P_{n};t)=L_{\mathbb{Z}}(Q_{n+1};t).\] _In other terms, the polytopes \(\mathsf{conv}(V_{n})\) and \(\mathsf{conv}(V_{n+1}\setminus\{\mathbf{0_{n+1}}\})\) are integrally equivalent._ Proof.: Consider \(V_{n}\), which consists of the points \(\mathbf{0_{n}},(0,0,...,n),...,(0,2,...,n),(1,2,...,n)\). Append a last coordinate of \(n+1\) to each vertex to obtain a polytope in \(\mathbb{R}^{n+1}\) that is integrally equivalent to the original. Notice this new set is \(V_{n+1}\) without \(\mathbf{0_{n+1}}\). **Theorem 5.2**.: _The \(L^{\prime}n1\) simplex \(\triangle_{n}\) and the \((n-1)\)-dimensional lecture-hall simplex \(P_{n-1}\) are integrally equivalent. In particular,_ \[L_{\mathbb{Z}}(\triangle_{n};t)=L_{\mathbb{Z}}(P_{n-1};t).\] We employ the notation \(S+\mathbf{v}\) when appropriate to denote adding the point \(\mathbf{v}\) to each point in the set \(S\). If \(M\) is a matrix, \(M+\mathbf{v}\) adds \(\mathbf{v}\) to each column of \(M\). Proof.: Let \(M\) be the \(n\) by \(n\) matrix representation of \(\triangle_{n}\) with column \(i\) as \(s^{n-i}(L^{\prime}n1)\). Further, let \(L\) be the \(n\) by \(n\) lower-triangular matrix of \(-1\)'s, let \(\nu_{k}=\sum_{i=2}^{k}i\), and let \(\boldsymbol{\nu}_{k}=(\nu_{2},\nu_{3},...,\nu_{k+1})\). We have that \(L\cdot M+\boldsymbol{\nu}_{n}\) has columns \((1,2,...\,,n),(0,2,...\,,n),...\,(0,0,...\,,n)\). This is the vertex set \(V_{n}\) of \(P_{n}\) without \(\mathbf{0_{n}}\). Recall from Proposition 5.1 that \(Q_{n}=\mathsf{conv}(V_{n}\setminus\{\mathbf{0_{n}}\})\) yields a polytope integrally equivalent to \(P_{n-1}\). We now have a unimodular transformation from \(\triangle_{n}\) to a polytope integrally equivalent to \(P_{n-1}\), from which we obtain: \[L_{\mathbb{Z}}(\triangle_{n};t)=L_{\mathbb{Z}}(Q_{n}-\boldsymbol{\nu}_{n};t)=L_{\mathbb{Z}}(Q_{n};t)=L_{\mathbb{Z}}(P_{n-1};t).\]
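The affine map in the proof above can be checked mechanically. The following numpy sketch (helper names are ours) builds \(M\) from Corollary 3.9, applies \(L\cdot M+\boldsymbol{\nu}_{n}\), and compares the columns against \(V_{n}\setminus\{\mathbf{0_{n}}\}\):

```python
import numpy as np

def s_iter(n, i):
    """s^i(L'n1) = 2 3 ... (n-i) 1 (n-i+1) ... n, by Corollary 3.9."""
    return list(range(2, n - i + 1)) + [1] + list(range(n - i + 1, n + 1))

def check_theorem_5_2(n):
    # Column i of M is s^{n-i}(L'n1), as in the proof of Theorem 5.2.
    M = np.column_stack([s_iter(n, n - i) for i in range(1, n + 1)])
    L = -np.tril(np.ones((n, n), dtype=int))  # lower-triangular matrix of -1's
    assert round(abs(np.linalg.det(L))) == 1  # L is unimodular
    nu = np.array([sum(range(2, k + 2)) for k in range(1, n + 1)])  # (nu_2,...,nu_{n+1})
    T = L @ M + nu[:, None]                   # add nu to every column
    # Expected: (1,2,...,n), (0,2,...,n), ..., (0,...,0,n) = V_n minus the origin.
    expected = np.column_stack([[0] * j + list(range(j + 1, n + 1))
                                for j in range(n)])
    assert np.array_equal(T, expected)

check_theorem_5_2(6)  # passes for any n >= 2
```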
**Example 5.3**.: Observe in Figure 3 that the three polytopes, the \(2341\) simplex \(\triangle_{4}\subseteq\mathbb{R}^{4}\), the \(3\)-unit cube \(\square^{3}\), and the \(3\)-dimensional lecture-hall simplex \(P_{3}\), all share the same lattice-point count in their first dilate. Also note that \(\triangle_{4}\) is a \(3\)-dimensional polytope in \(\mathbb{R}^{4}\), whereas the other two polytopes are full-dimensional in their ambient space. Notice the similarity between how points are distributed in \(P_{3}\) and \(\triangle_{4}\). Since the Ehrhart polynomial for the unit cube and the lecture-hall simplex is known, we quickly obtain the following result as a corollary. **Corollary 5.4**.: _The lattice-point enumerator of the \(L^{\prime}n1\) simplex is_ \[L_{\mathbb{Z}}(\triangle_{n};t)=(t+1)^{n-1}.\] **Corollary 5.5**.: _The Ehrhart series of the \(L^{\prime}n1\) simplex is_ \[\mathsf{Ehr}_{\mathbb{Z}}(\triangle_{n};z)=\frac{\sum_{k=0}^{n-2}A(n-1,k)z^{k}}{(1-z)^{n}},\] _where \(A(n-1,k)\) is the number of permutations of \([n-1]\) with \(k\) descents._ **Remark 5.6**.: By Ehrhart-Macdonald reciprocity, the following holds: \[L_{\mathbb{Z}}(\triangle_{n}^{\circ};t+2)=(-1)^{n-1}L_{\mathbb{Z}}(\triangle_{n};-t-2)=(-1)^{n-1}(-t-1)^{n-1}=(t+1)^{n-1}=L_{\mathbb{Z}}(\triangle_{n};t).\] Also note that \(L_{\mathbb{Z}}(\triangle_{n}^{\circ};1)=0\) and \(L_{\mathbb{Z}}(\triangle_{n}^{\circ};2)=1\). Therefore, \(\triangle_{n}\) is Gorenstein of index \(2\).

Figure 3. From left to right: the \(2341\)-simplex \(\triangle_{4}\subseteq\mathbb{R}^{4}\), the \(3\)-unit cube \(\square^{3}\), and the \(3\)-dimensional lecture-hall simplex \(P_{3}\).

### Using real lattice-point enumerators for recursive structures Initial efforts to obtain the lattice-point enumerator for \(L^{\prime}n1\) simplices resulted in techniques differing from those used previously in this section. While a less direct method, the approach used in the following results yielded some connections to _real_ lattice-point enumerators of various translations of our simplices, which we present now. Note that an integral equivalence of two polytopes, as defined in Definition 2.8, does not imply that the two polytopes have the same _real_ lattice-point count. For example, \(\triangle_{n}\) and \(\triangle_{n}-\boldsymbol{\pi}_{n}\) are integrally equivalent, but have different real lattice-point enumerators, as we will later see. This is in notable contrast to the fact that integer translations are lattice-invariant for lattice polytopes. Taking the last coordinate in each lattice point of a polytope \(P\subset\mathbb{R}^{n}\) and setting it equal to \(0\) is the _projection_ of \(P\) to the hyperplane defined by \(x_{n}=0\). We denote this operation as \(\mathsf{proj}_{x_{n}=0}(P)\) and use a prime \(P^{\prime}\) as shorthand. **Lemma 5.7**.: _Let \(\mathbf{p}=(\rho_{1},...\,,\rho_{n})\in\mathsf{aff}(\triangle_{n})\), and \(\mathbf{p}^{\prime}=(\rho_{1},...\,,\rho_{n-1},0)\). Consider the \(L^{\prime}n1\) simplex \(\triangle_{n}\) and let \(\triangle^{\prime}_{n}:=\mathsf{proj}_{(x_{n}=0)}(\triangle_{n})\). Then, for any \(\lambda\in\mathbb{R}_{\geq 0}\):_ \[L_{\mathbb{R}}(\triangle_{n}-\mathbf{p};\lambda)=L_{\mathbb{R}}(\triangle^{\prime}_{n}-\mathbf{p}^{\prime};\lambda).\] We shift \(\triangle_{n}\) by \(\mathbf{p}\) (and thus \(\triangle^{\prime}_{n}\) by \(\mathbf{p}^{\prime}\)) as a slight extra step in order to show the much stronger result that these two shifted polytopes are in bijection for _all_ real dilates \(\lambda\) instead of just integer dilates. Our desired result for the unshifted polytopes will come as a brief corollary.
Proof.: Let \(\mathcal{X}:=\lambda(\triangle_{n}-\mathbf{p})\cap\mathbb{Z}^{n}\) and \(\mathcal{X}^{\prime}:=\lambda(\triangle^{\prime}_{n}-\mathbf{p}^{\prime})\cap\mathbb{Z}^{n}\). Define \[\varphi:\mathcal{X}\to\mathcal{X}^{\prime}\text{ such that }\varphi(x_{1},...\,,x_{n-1},x_{n})=(x_{1},...\,,x_{n-1},0).\] This map is well-defined because of our construction of \(\mathcal{X}^{\prime}\) from \(\mathcal{X}\); it is immediate that if \((x_{1},...\,,x_{n})\in\mathcal{X}\) then \((x_{1},...\,,x_{n-1},0)\in\mathcal{X}^{\prime}\) because \(\mathcal{X}^{\prime}\) is the projection of each point in \(\mathcal{X}\) to the hyperplane \(x_{n}=0\). To prove injectivity, suppose \(\mathbf{x},\mathbf{y}\in\mathcal{X}\) such that \(\varphi(\mathbf{x})=\varphi(\mathbf{y})\). Thus, \[(x_{1},...\,,x_{n-1},0)=(y_{1},...\,,y_{n-1},0)\text{ and }x_{i}=y_{i}\text{ for }i\in[n-1].\] Since the vertices of \(\triangle_{n}\) are permutations of \([n]\), points \(\mathbf{z}=(z_{1},...\,,z_{n})\) in \(\triangle_{n}\) lie on the hyperplane defined by \[\sum_{k=1}^{n}z_{k}=\sum_{k=1}^{n}k=\frac{n(n+1)}{2}.\] Therefore, any point \(\mathbf{z}\) in \(\triangle_{n}-\mathbf{p}\) lives in \(\sum_{k=1}^{n}z_{k}=0\), as does any nonnegative real dilate: \(\sum_{k=1}^{n}\lambda z_{k}=\lambda\sum_{k=1}^{n}z_{k}=0\). Hence, \(\sum_{k=1}^{n}x_{k}=\sum_{k=1}^{n}y_{k}\) and further \(x_{n}=y_{n}\). We conclude that \(\mathbf{x}=\mathbf{y}\) and \(\varphi\) is injective. For surjectivity, consider \(\mathbf{x}^{\prime}=(x_{1},...\,,x_{n-1},0)\in\mathcal{X}^{\prime}\). We know there exists a unique convex combination of vertices of \(\lambda(\triangle^{\prime}_{n}-\mathbf{p}^{\prime})\) such that \(\sum_{k=1}^{n}c_{k}\mathbf{v}^{\prime}_{k}=\mathbf{x}^{\prime}\), where each \(c_{i}\in\mathbb{R}_{\geq 0}\) and \(\sum_{k=1}^{n}c_{k}=1\). An integral point in \(\lambda(\triangle_{n}-\mathbf{p})\) is a convex combination of the vertices of \(\lambda(\triangle_{n}-\mathbf{p})\) and similarly for \(\lambda(\triangle^{\prime}_{n}-\mathbf{p}^{\prime})\). Note that the corresponding vertices of both polytopes share the exact same first \(n-1\) coordinates; thus, the convex combination will yield \(x_{1},...\,,x_{n-1}\) as the first \(n-1\) coordinates of the resulting point \(\mathbf{x}\in\lambda(\triangle_{n}-\mathbf{p})\). Further, this point lies in the affine span of \(\lambda(\triangle_{n}-\mathbf{p})\) and thus we know \(\sum_{i=1}^{n}x_{i}=0\). This implies \(x_{n}=-\sum_{i=1}^{n-1}x_{i}\) and therefore \(x_{n}\in\mathbb{Z}\) because \(x_{1},...\,,x_{n-1}\in\mathbb{Z}\). We conclude \(\mathbf{x}\in\mathcal{X}\) and \(\varphi\) is surjective. Hence, we have established a bijection between \(\mathcal{X}\) and \(\mathcal{X}^{\prime}\) for all nonnegative real dilates \(\lambda\) and can now deduce that the polytopes \(\lambda(\triangle_{n}-\mathbf{p})\) and \(\lambda(\triangle_{n}^{\prime}-\mathbf{p}^{\prime})\) will always have the same number of lattice points. **Lemma 5.8**.: _Let \(\triangle_{n}\) be the \(L^{\prime}n1\) simplex, \(\boldsymbol{\pi}_{n}=L^{\prime}n1\), and \(\overline{\boldsymbol{\pi}}_{n}=L^{\prime}n=(\pi_{1},...\,,\pi_{n-1})\), the permutation \(\boldsymbol{\pi}_{n}\) with its final entry removed._
_For all \(t\in\mathbb{N}\),_ \[L_{\mathbb{R}}\left(\triangle_{n}-\overline{\boldsymbol{\pi}}_{n+1};\frac{t}{n}\right)=L_{\mathbb{R}}\left(\triangle_{n}-\boldsymbol{\pi}_{n};\frac{t}{n}\right).\] Proof.: Recall integer translations preserve lattice count and note that \(\boldsymbol{\pi}_{n}=\overline{\boldsymbol{\pi}}_{n+1}-(0,...\,,0,n)\). Thus, translating \(\triangle_{n}-\boldsymbol{\pi}_{n}\) by \((0,...\,,0,n)\) yields \(\triangle_{n}-\overline{\boldsymbol{\pi}}_{n+1}\). We must conclude they have equivalent lattice counts for all integer dilates \(t\). Further, translation by \((0,...\,,0,\lambda n)\) will remain integral after dilation for all rational dilates of the form \(\lambda=\frac{t}{n}\), making it lattice-preserving in these special cases as well. The following result presents a recursive relationship that allows for the enumeration of lattice points of _real_ dilates of \(\triangle_{n+1}\) from _rational_ dilates of \(\triangle_{n}\). **Theorem 5.9**.: _For all \(\lambda\in\mathbb{R}_{\geq 0}\),_ \[L_{\mathbb{R}}(\triangle_{n+1}-\boldsymbol{\pi}_{n+1};\lambda)=\left|\lambda(\triangle_{n+1}-\boldsymbol{\pi}_{n+1})\cap\mathbb{Z}^{n+1}\right|=\sum_{k=0}^{\lfloor n\lambda\rfloor}\,\left|\frac{k}{n}(\triangle_{n}-\boldsymbol{\pi}_{n})\cap\mathbb{Z}^{n}\right|=\sum_{k=0}^{\lfloor n\lambda\rfloor}\,L_{\mathbb{R}}\left(\triangle_{n}-\boldsymbol{\pi}_{n};\frac{k}{n}\right).\] Before we present the proof of this result, we provide an example of the theorem. **Example 5.10**.: Take \(n=2\), then \(\boldsymbol{\pi}_{3}=231\) and \(\boldsymbol{\pi}_{2}=21\), and consider \(\lambda=\frac{5}{2}\). Note that \[\left|\frac{5}{2}(\triangle_{3}-\boldsymbol{\pi}_{3})\cap\mathbb{Z}^{3}\right|=12,\] which we can directly count from the schematic in Figure 4. Observe that \(\triangle_{3}-\boldsymbol{\pi}_{3}\) lives in \(\mathbb{R}^{3}\), but is a \(2\)-dimensional simplex. By Theorem 5.9, we can alternatively count the number of lattice points of \(\frac{5}{2}(\triangle_{3}-\boldsymbol{\pi}_{3})\) by counting the number of lattice points in the five dilates \(\frac{k}{2}(\triangle_{2}-\boldsymbol{\pi}_{2})\) for \(k=1,2,3,4\), and \(5\). In the schematic, consider the polytopes on their affine span; we can treat the polytopes as being projected onto the hyperplane \(x_{3}=0\) (hence, the quotation marks in some of the figure labels). Counting the number of lattice points in the five dilates \(\frac{k}{2}(\triangle_{2}-\boldsymbol{\pi}_{2})\) for \(k=1,2,3,4\), and \(5\), as shown in orange, we also obtain \(12\) lattice points. Proof of Theorem 5.9.: Let \(\triangle_{n}^{\prime}=\mathsf{proj}_{(x_{n}=0)}(\triangle_{n})\), and \(\boldsymbol{\pi}_{n}^{\prime}=(\pi_{1},...,\pi_{n-1},0)\) from \(\boldsymbol{\pi}_{n}=(\pi_{1},...,\pi_{n})=L^{\prime}n1\). By Lemma 5.7, \[\left|\lambda(\triangle_{n+1}-\boldsymbol{\pi}_{n+1})\cap\mathbb{Z}^{n+1}\right|=\left|\lambda(\triangle_{n+1}^{\prime}-\boldsymbol{\pi}_{n+1}^{\prime})\cap\mathbb{Z}^{n+1}\right|. \tag{3}\] Obtain \(\hat{\triangle}_{n}\) by lifting \(\triangle_{n}\) to \(\mathbb{R}^{n+1}\) by appending \(0\) to the end of each lattice point of \(\triangle_{n}\).
By Claim 6.3, for all \(\lambda\in\mathbb{R}_{\geq 0}\), \[\lambda(\triangle_{n+1}^{\prime}-\boldsymbol{\pi}_{n+1}^{\prime})\cap\mathbb{Z}^{n+1}=\biguplus_{k=0}^{\lfloor\lambda n\rfloor}\left(\frac{k}{n}(\hat{\triangle}_{n}-\boldsymbol{\pi}_{n+1}^{\prime})\cap\mathbb{Z}^{n+1}\right),\] that is, the lattice points in the real dilates of \(\triangle_{n+1}^{\prime}-\boldsymbol{\pi}_{n+1}^{\prime}\) correspond to the disjoint union of rational dilates of \(\hat{\triangle}_{n}-\boldsymbol{\pi}_{n+1}^{\prime}\). Thus, \[\left|\lambda(\triangle_{n+1}^{\prime}-\boldsymbol{\pi}_{n+1}^{\prime})\cap\mathbb{Z}^{n+1}\right|=\left|\biguplus_{k=0}^{\lfloor\lambda n\rfloor}\left(\frac{k}{n}(\hat{\triangle}_{n}-\boldsymbol{\pi}_{n+1}^{\prime})\cap\mathbb{Z}^{n+1}\right)\right|=\sum_{k=0}^{\lfloor\lambda n\rfloor}\left|\frac{k}{n}(\hat{\triangle}_{n}-\boldsymbol{\pi}_{n+1}^{\prime})\cap\mathbb{Z}^{n+1}\right|.\] Combining this with (3), and identifying each \(\frac{k}{n}(\hat{\triangle}_{n}-\boldsymbol{\pi}_{n+1}^{\prime})\subset\{x_{n+1}=0\}\) with \(\frac{k}{n}(\triangle_{n}-\overline{\boldsymbol{\pi}_{n+1}})\subset\mathbb{R}^{n}\) by dropping the final zero coordinate, we have \[\left|\lambda(\triangle_{n+1}-\boldsymbol{\pi}_{n+1})\cap\mathbb{Z}^{n+1}\right|=\sum_{k=0}^{\lfloor n\lambda\rfloor}\left|\frac{k}{n}(\triangle_{n}-\overline{\boldsymbol{\pi}_{n+1}})\cap\mathbb{Z}^{n}\right|,\] and by Lemma 5.8, \[=\sum_{k=0}^{\lfloor n\lambda\rfloor}\left|\frac{k}{n}(\triangle_{n}-\boldsymbol{\pi}_{n})\cap\mathbb{Z}^{n}\right|.\] We continue by presenting a recursive relationship that deals with the enumeration of relative interior lattice points of real dilates of \(\triangle_{n+1}\) from _rational_ dilates of \(\triangle_{n}\). **Proposition 5.11**.: _For all \(\lambda\in\mathbb{R}_{\geq 0}\),_ \[\begin{split}\mathcal{L}_{\mathbb{R}}((\triangle_{n+1}-\boldsymbol{\pi}_{n+1})^{\circ};\lambda)&=\left|\left(\lambda(\triangle_{n+1}-\boldsymbol{\pi}_{n+1})\right)^{\circ}\cap\mathbb{Z}^{n+1}\right|\\ &=\sum_{k=1}^{\lceil n\lambda\rceil-1}\left|\left(\frac{k}{n}(\triangle_{n}-\boldsymbol{\pi}_{n})\right)^{\circ}\cap\mathbb{Z}^{n}\right|=\sum_{k=1}^{\lceil n\lambda\rceil-1}\mathcal{L}_{\mathbb{R}}\left((\triangle_{n}-\boldsymbol{\pi}_{n})^{\circ};\frac{k}{n}\right).\end{split}\] Proof.: The proof follows the same arguments as the proof of Theorem 5.9; modifying that argument to count relative interior lattice points yields the claim. We conclude this section by making use of the similarities in the recurrence relations of Theorem 5.9 and Proposition 5.11 to prove the following result by induction. The result shows that the real lattice-point count of the \(\lambda\)-th dilate of \(\triangle_{n}-\boldsymbol{\pi}_{n}\) coincides with the relative interior lattice-point count of the \((\lambda+2)\)-th dilate of \(\triangle_{n}-\boldsymbol{\pi}_{n}\) for every real number \(\lambda\). Furthermore, the result shows that \(\triangle_{n}\) is Gorenstein of index \(2\). This alternative proof is included because it yields the Gorenstein result without knowledge of the entire lattice-point enumerator. 
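As a sanity check on Theorem 5.9, the following short Python sketch reproduces the count of Example 5.10 by brute force. It assumes, consistently with Example 5.10 and with the \(n=2\) segment used in the proof of Theorem 5.12 below, that \(\triangle_{n}\) is the simplex whose vertices are the \(n\) cyclic rotations of \(12\cdots n\); that vertex set is our reading of the construction, not restated here.

```python
from fractions import Fraction as F

# Check of Theorem 5.9 via Example 5.10 (n = 2, lambda = 5/2), assuming
# triangle_3 - pi_3 has vertices (0,0,0), (1,-2,1), (-1,-1,2) (differences of
# the cyclic rotations of 123 from pi_3 = 231) and triangle_2 - pi_2 is the
# segment from (0,0) to (-1,1).

def count3(lam):
    """Lattice points in lam*(triangle_3 - pi_3): points a*(1,-2,1) + b*(-1,-1,2)
    with a, b >= 0, a + b <= lam.  An integer point forces s = a - b and
    t = a + 2b to be integers, whence a = (2s+t)/3 and b = (t-s)/3."""
    cnt, B = 0, int(3 * lam) + 3          # crude but sufficient search window
    for s in range(-B, B + 1):
        for t in range(-B, B + 1):
            a, b = F(2 * s + t, 3), F(t - s, 3)
            if a >= 0 and b >= 0 and a + b <= lam:
                cnt += 1
    return cnt

def count2(lam):
    """Lattice points in lam*(triangle_2 - pi_2): the points (-j, j), 0 <= j <= lam."""
    return int(lam) + 1

lam = F(5, 2)
lhs = count3(lam)
rhs = sum(count2(F(k, 2)) for k in range(int(2 * lam) + 1))   # Theorem 5.9 sum
print(lhs, rhs)   # both print 12, matching Example 5.10
```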
**Theorem 5.12**.: _For \(\lambda\in\mathbb{R}\),_ \[\mathcal{L}_{\mathbb{R}}(\triangle_{n}-\boldsymbol{\pi}_{n};\lambda)=\mathcal{L}_{\mathbb{R}}((\triangle_{n}-\boldsymbol{\pi}_{n})^{\circ};\lambda+2).\] _Hence, any integer translate of \(\triangle_{n}-\boldsymbol{\pi}_{n}\), in particular \(\triangle_{n}\), is Gorenstein of index \(2\)._ Proof.: First, we prove the claim: for all \(t\in\mathbb{Z}\), \[\left|\left(\left(\frac{t}{n}+\frac{2n-1}{n}\right)(\triangle_{n}-\boldsymbol{\pi}_{n})\right)^{\circ}\cap\mathbb{Z}^{n}\right|=\left|\frac{t}{n}(\triangle_{n}-\boldsymbol{\pi}_{n})\cap\mathbb{Z}^{n}\right|.\] In words, the relative interior lattice-point count of the \(\left(\frac{t}{n}+\frac{2n-1}{n}\right)\)-dilate of \(\triangle_{n}-\boldsymbol{\pi}_{n}\) matches the lattice-point count of the \(\frac{t}{n}\)-dilate of \(\triangle_{n}-\boldsymbol{\pi}_{n}\). We proceed by induction on \(n\), which relies on manipulations of summations. Consider \(n=2\). Observe that \(\triangle_{2}-\boldsymbol{\pi}_{2}\) is simply a line segment from \((0,0)\) to \((-1,1)\), \((\frac{t}{2}+\frac{3}{2})(\triangle_{2}-\boldsymbol{\pi}_{2})\) is a line segment from \((0,0)\) to \((-\frac{t}{2}-\frac{3}{2},\frac{t}{2}+\frac{3}{2})\), and \(\frac{t}{2}(\triangle_{2}-\boldsymbol{\pi}_{2})\) is a line segment from \((0,0)\) to \((-\frac{t}{2},\frac{t}{2})\). A direct count shows that the first of these segments has exactly as many interior lattice points as the second has lattice points, giving the following relation: \[\mathcal{L}_{\mathbb{R}}\left((\triangle_{2}-\boldsymbol{\pi}_{2})^{\circ};\frac{t}{2}+\frac{3}{2}\right)=\mathcal{L}_{\mathbb{R}}\left(\triangle_{2}-\boldsymbol{\pi}_{2};\frac{t}{2}\right)\] for all \(t\in\mathbb{Z}\). Assume the statement holds for some \(m\in\mathbb{Z}_{\geq 2}\). We then have: \[\begin{split}\left|\frac{t}{m+1}(\triangle_{m+1}-\boldsymbol{\pi}_{m+1})\cap\mathbb{Z}^{m+1}\right|&=\sum_{k=0}^{\lfloor m\cdot\frac{t}{m+1}\rfloor}\left|\frac{k}{m}(\triangle_{m}-\boldsymbol{\pi}_{m})\cap\mathbb{Z}^{m}\right|\\ &=\underbrace{\sum_{k=0}^{\lfloor m\cdot\frac{t}{m+1}\rfloor}\left|\left(\left(\frac{k}{m}+\frac{2m-1}{m}\right)(\triangle_{m}-\boldsymbol{\pi}_{m})\right)^{\circ}\cap\mathbb{Z}^{m}\right|}_{(\bigstar)}.\end{split}\] By Claim 6.4 and Proposition 5.11, we can derive the following: \[\begin{split}(\bigstar)&=\sum_{k=2m-1}^{\lceil m\left(\frac{t}{m+1}+\frac{2m+1}{m+1}\right)\rceil-1}\left|\left(\frac{k}{m}(\triangle_{m}-\boldsymbol{\pi}_{m})\right)^{\circ}\cap\mathbb{Z}^{m}\right|\\ &=\sum_{k=0}^{\lceil m\left(\frac{t}{m+1}+\frac{2m+1}{m+1}\right)\rceil-1}\left|\left(\frac{k}{m}(\triangle_{m}-\boldsymbol{\pi}_{m})\right)^{\circ}\cap\mathbb{Z}^{m}\right|-\sum_{k=0}^{2m-2}\left|\left(\frac{k}{m}(\triangle_{m}-\boldsymbol{\pi}_{m})\right)^{\circ}\cap\mathbb{Z}^{m}\right|\\ &=\left|\left(\left(\frac{t}{m+1}+\frac{2m+1}{m+1}\right)(\triangle_{m+1}-\boldsymbol{\pi}_{m+1})\right)^{\circ}\cap\mathbb{Z}^{m+1}\right|-\sum_{k=0}^{2m-2}\left|\left(\frac{k}{m}(\triangle_{m}-\boldsymbol{\pi}_{m})\right)^{\circ}\cap\mathbb{Z}^{m}\right|.\end{split}\] Here, note that by Proposition 5.11, \[\sum_{k=0}^{2m-2}\left|\left(\frac{k}{m}(\triangle_{m}-\boldsymbol{\pi}_{m})\right)^{\circ}\cap\mathbb{Z}^{m}\right|=\left|\left(\frac{2m-1}{m}(\triangle_{m+1}-\boldsymbol{\pi}_{m+1})\right)^{\circ}\cap\mathbb{Z}^{m+1}\right|=\left|(\triangle_{m+1}-\boldsymbol{\pi}_{m+1})^{\circ}\cap\mathbb{Z}^{m+1}\right|=0,\] where the last equality holds by Theorem 4.3. 
Hence, \[(\bigstar)=\left|\left(\left(\frac{t}{m+1}+\frac{2m+1}{m+1}\right)\left(\triangle_{m+1}-\boldsymbol{\pi}_{m+1}\right)\right)^{\circ}\cap\mathbb{Z}^{m+1}\right|.\] Therefore, by induction, we have proved that for all \(t\in\mathbb{Z}_{\geq 0}\) and \(n\in\mathbb{Z}_{\geq 2}\), \[\left|\left(\left(\frac{t}{n}+\frac{2n-1}{n}\right)(\triangle_{n}-\boldsymbol{\pi}_{n})\right)^{\circ}\cap\mathbb{Z}^{n}\right|=\left|\frac{t}{n}(\triangle_{n}-\boldsymbol{\pi}_{n})\cap\mathbb{Z}^{n}\right|.\] Now, by Theorem 5.9, for all \(n\in\mathbb{Z}_{\geq 3}\), \[\mathcal{L}_{\mathbb{R}}(\triangle_{n}-\boldsymbol{\pi}_{n};\lambda)=\sum_{k=0}^{\lfloor(n-1)\lambda\rfloor}\mathcal{L}_{\mathbb{R}}\left(\triangle_{n-1}-\boldsymbol{\pi}_{n-1};\frac{k}{n-1}\right).\] By our recently proven claim, \[=\sum_{k=0}^{\lfloor(n-1)\lambda\rfloor}\left|\left(\left(\frac{k}{n-1}+\frac{2n-3}{n-1}\right)(\triangle_{n-1}-\boldsymbol{\pi}_{n-1})\right)^{\circ}\cap\mathbb{Z}^{n-1}\right|\] \[=\sum_{k=0}^{\lceil(n-1)(\lambda+2)\rceil-1}\left|\left(\left(\frac{k}{n-1}\right)(\triangle_{n-1}-\boldsymbol{\pi}_{n-1})\right)^{\circ}\cap\mathbb{Z}^{n-1}\right|-\sum_{k=0}^{2n-4}\left|\left(\left(\frac{k}{n-1}\right)(\triangle_{n-1}-\boldsymbol{\pi}_{n-1})\right)^{\circ}\cap\mathbb{Z}^{n-1}\right|,\] and by Proposition 5.11 and Theorem 4.3, \[=|((\lambda+2)(\triangle_{n}-\boldsymbol{\pi}_{n}))^{\circ}\cap\mathbb{Z}^{n}|-0=\mathcal{L}_{\mathbb{R}}((\triangle_{n}-\boldsymbol{\pi}_{n})^{\circ};\lambda+2).\] One can also show by solving inequalities that the result holds for \(n=2\), but we omit the technicalities here. Therefore, for all \(n\in\mathbb{Z}_{\geq 2}\), \[\mathcal{L}_{\mathbb{R}}(\triangle_{n}-\boldsymbol{\pi}_{n};\lambda)=\mathcal{L}_{\mathbb{R}}((\triangle_{n}-\boldsymbol{\pi}_{n})^{\circ};\lambda+2).\] In particular, for all \(n\in\mathbb{Z}_{\geq 2}\) and \(t\in\mathbb{Z}_{\geq 0}\), \(\triangle_{n}\) is Gorenstein of index \(2\), i.e., \[\mathcal{L}_{\mathbb{Z}}(\triangle_{n};t)=\mathcal{L}_{\mathbb{Z}}(\triangle_{n}^{\circ};t+2).\] **Remark 5.13**.: If we extend the previous definition and call a polytope _real Gorenstein of index \(k\)_, for an integer \(k\), when the relations from Equation 2 hold for all real dilates \(t>k\), then we believe this matches the rational/real Gorenstein definitions given in [1], and we can say that \(\triangle_{n}-\boldsymbol{\pi}_{n}\) is "real Gorenstein" of index \(2\). ## Dedication & Acknowledgments In loving memory of a dear friend, Luca Elghanayan. The authors thank Matthias Beck, Simon Rubenstein-Salzedo, and Julie Vega for fruitful conversation. Andres R. Vindas-Melendez also thanks Cameron Lowe, Justin McClung, Emily Muller-Foster, and Heather Willis for conversations at the start of this project. Andres R. Vindas-Melendez is partially supported by the National Science Foundation under Award DMS-2102921. ## References * [1] Matthias Beck, Sophia Elia, and Sophie Rehberg, _Rational Ehrhart theory_, preprint (arXiv:2110.10204). * [2] Matthias Beck and Sinai Robins, _Computing the continuous discretely_, second ed., Undergraduate Texts in Mathematics, Springer, New York, 2015, Integer-point enumeration in polyhedra. * [3] Miklos Bona, _A survey of stack-sorting disciplines_, Electron. J. Combin. **9** (2002/03), no. 2, Article 1, 16. * [4] Katie L. Bright and Carla D. Savage, _The geometry of lecture hall partitions and quadratic permutation statistics_, 22nd International Conference on Formal Power Series and Algebraic Combinatorics (FPSAC 2010), Discrete Math. Theor. Comput. Sci. Proc., vol. AN, Assoc. Discrete Math. Theor. 
Comput. Sci., Nancy, 2010, pp. 569-580. * [5] Sylvie Corteel, Sunyoung Lee, and Carla D. Savage, _Enumeration of sequences constrained by the ratio of consecutive parts_, Sem. Lothar. Combin. **54A** (2005/07), Art. B54Aa, 12. * [6] Alan J. Hoffman and Joseph B. Kruskal, _Integral boundary points of convex polyhedra_, pp. 49-76, Springer Berlin Heidelberg, Berlin, Heidelberg, 2010. * [7] Donald E. Knuth, _The art of computer programming_, vol. 1, Addison-Wesley, 1968. * [8] Gunter M. Ziegler, _Lectures on polytopes_, Graduate Texts in Mathematics, vol. 152, Springer-Verlag, New York, 1995. ## 6. Appendix Due to the length and technicalities of the proofs of the following claims, we have decided to include them in an appendix. Before beginning, it is necessary to outline a few more terms. Given a collection of points \(\mathcal{C}=\{\mathbf{x}_{1},\ldots,\mathbf{x}_{k}\}\) in \(\mathbb{R}^{n}\), we define a _pointed cone_ with apex \(\mathbf{v}\in\mathbb{R}^{n}\) as a set of the form \[\mathsf{cone}_{\mathbf{v}}(\mathcal{C}):=\left\{\mathbf{v}+\sum_{i=1}^{k}\lambda_{i}(\mathbf{x}_{i}-\mathbf{v}):\lambda_{i}\in\mathbb{R}_{\geq 0}\right\}. \tag{4}\] This is a cone with apex \(\mathbf{v}\) consisting of the infinite region bounded by the \(k\) rays from \(\mathbf{v}\) through the points \(\mathbf{x}_{i}\). Similarly, for \(\mathcal{P}=\mathsf{conv}(\mathcal{C})\) we define _the pyramid over \(\mathcal{P}\) from \(\mathbf{v}\)_ as \[\mathsf{pyr}_{\mathbf{v}}(\mathcal{P}):=\left\{\mathbf{v}+\sum_{i=1}^{k}\lambda_{i}(\mathbf{x}_{i}-\mathbf{v}):\lambda_{i}\in\mathbb{R}_{\geq 0}\text{ and }\sum_{i=1}^{k}\lambda_{i}\leq 1\right\}.\] Note that the choice of the point \(\mathbf{v}\) affects the information that is encoded in the cone/pyramid. For a polytope \(\mathcal{P}\) that is not full-dimensional, it is a careful selection of \(\mathbf{v}\not\in\mathsf{aff}(\mathcal{P})\) that allows dilates of \(\mathcal{P}\) to perfectly encode themselves in 'slices' of the cone or pyramid with apex \(\mathbf{v}\). **Definition 6.1**.: Let \(\mathcal{S}\subseteq\mathbb{R}^{n}\) be an \((n-1)\)-dimensional affine subspace and take \(\mathbf{x}\in\mathbb{R}^{n}-\mathcal{S}\). Then the _\(\mathcal{S}\)-halfspace with \(\mathbf{x}\)_, denoted \(\mathcal{H}_{\mathbf{x}}(\mathcal{S})\), is defined as the unique halfspace in \(\mathbb{R}^{n}\) bounded by the hyperplane \(\mathcal{S}\) that includes the point \(\mathbf{x}\). **Remark 6.2**.: Note that \(\mathsf{pyr}_{\mathbf{v}}(\mathcal{P})=\mathsf{cone}_{\mathbf{v}}(\mathcal{P})\cap\mathcal{H}_{\mathbf{v}}(\mathsf{aff}(\mathcal{P}))\) in our case, where \(\mathcal{P}\) is \((n-1)\)-dimensional in \(n\)-space and \(\mathbf{v}\not\in\mathsf{aff}(\mathcal{P})\). The first claim shows that the lattice points in the real dilates of \(\triangle_{n+1}^{\prime}-\boldsymbol{\pi}_{n+1}^{\prime}\) correspond to the disjoint union of rational dilates of \(\hat{\triangle}_{n}-\boldsymbol{\pi}_{n+1}^{\prime}\). 
**Claim 6.3**.: _The following equality holds for all \(\lambda\in\mathbb{R}_{\geq 0}\):_ \[\lambda(\triangle_{n+1}^{\prime}-\boldsymbol{\pi}_{n+1}^{\prime})\cap\mathbb{Z}^{n+1}=\biguplus_{k=0}^{\lfloor\lambda n\rfloor}\left(\frac{k}{n}(\hat{\triangle}_{n}-\boldsymbol{\pi}_{n+1}^{\prime})\cap\mathbb{Z}^{n+1}\right).\] Proof.: Let \(\mathbf{0}_{n}\) be the origin in \(\mathbb{R}^{n}\), let \(d\) be the distance between \(\mathbf{0}_{n+1}\) and \(\mathsf{aff}(\hat{\triangle}_{n}-\boldsymbol{\pi}_{n+1}^{\prime})\), and let \(\mathbf{v}\) be any fixed vector on the plane \(x_{n+1}=0\) that is orthogonal to \(\mathsf{aff}(\hat{\triangle}_{n}-\boldsymbol{\pi}_{n+1}^{\prime})\) such that \(\mathbf{v}\cdot\boldsymbol{\pi}_{n+1}^{\prime}\geq 0\) (see Figure 5 for a concrete visual example). We will show that the lattice points in the \(\lambda\)-th dilate of \(\triangle_{n+1}^{\prime}-\boldsymbol{\pi}_{n+1}^{\prime}\) correspond to the lattice points in the union of slices of \(\lambda(\triangle_{n+1}^{\prime}-\boldsymbol{\pi}_{n+1}^{\prime})\) obtained by intersecting the polytope with the infinite family of hyperplanes \(\mathsf{aff}(\lambda(\hat{\triangle}_{n}-\boldsymbol{\pi}_{n+1}^{\prime}))+k\mathbf{v}\) for real numbers \(k\in[0,\frac{\lambda d}{|\mathbf{v}|}]\). Take \(\mathcal{H}_{\mathbf{0}_{n+1}}(\lambda(\hat{\triangle}_{n}-\boldsymbol{\pi}^{\prime}_{n+1}))\) as defined using Definition 6.1. Then \[\begin{split}\lambda\big{(}\triangle_{n+1}^{\prime}-\boldsymbol{\pi}^{\prime}_{n+1}\big{)}\cap\mathbb{Z}^{n+1}&=\lambda\big{(}\triangle_{n+1}^{\prime}-\boldsymbol{\pi}^{\prime}_{n+1}\big{)}\cap\mathcal{H}_{\mathbf{0}_{n+1}}\big{(}\lambda(\hat{\triangle}_{n}-\boldsymbol{\pi}^{\prime}_{n+1})\big{)}\cap\mathbb{Z}^{n+1}\\ &=\lambda\big{(}\triangle_{n+1}^{\prime}-\boldsymbol{\pi}^{\prime}_{n+1}\big{)}\cap\left(\bigcup_{0\leq k\leq\frac{\lambda d}{|\mathbf{v}|}}(\mathsf{aff}(\lambda(\hat{\triangle}_{n}-\boldsymbol{\pi}^{\prime}_{n+1}))+k\mathbf{v})\right)\cap\mathbb{Z}^{n+1}\\ &=\bigcup_{0\leq k\leq\frac{\lambda d}{|\mathbf{v}|}}\left(\lambda(\triangle_{n+1}^{\prime}-\boldsymbol{\pi}^{\prime}_{n+1})\cap(\mathsf{aff}(\lambda(\hat{\triangle}_{n}-\boldsymbol{\pi}^{\prime}_{n+1}))+k\mathbf{v})\cap\mathbb{Z}^{n+1}\right).\end{split} \tag{5}\] Since \[\mathsf{aff}(\hat{\triangle}_{n}-\boldsymbol{\pi}^{\prime}_{n+1})=\{\mathbf{x}\in\mathbb{R}^{n+1}\,|\,x_{1}+\cdots+x_{n}=-n,\ x_{n+1}=0\},\] note that \(\mathbf{v}=c(1,\ldots,1,0)\in\mathbb{R}^{n+1}\) for some \(c\in\mathbb{R}\). Without loss of generality, take \(c=1\) and consider \(\mathbf{v}=(1,\ldots,1,0)\in\mathbb{R}^{n+1}\). Note that \(d=|\mathbf{v}|\), and thus we can proceed with (5) as follows: \[\lambda(\triangle_{n+1}^{\prime}-\boldsymbol{\pi}_{n+1}^{\prime})\cap\mathbb{Z}^{n+1}=\mathsf{cone}_{\mathbf{0}_{n+1}}(\lambda(\hat{\triangle}_{n}-\boldsymbol{\pi}_{n+1}^{\prime}))\cap\bigcup_{0\leq k\leq\lambda}\left(\left(\mathsf{aff}(\lambda(\hat{\triangle}_{n}-\boldsymbol{\pi}_{n+1}^{\prime}))+k\mathbf{v}\right)\cap\mathbb{Z}^{n+1}\right)\] with \(\mathbf{v}=(1,\ldots,1,0)\in\mathbb{R}^{n+1}\). Now, we show that lattice points can only exist on the slices of \(\lambda(\triangle_{n+1}^{\prime}-\boldsymbol{\pi}_{n+1}^{\prime})\) obtained from the hyperplanes \(\mathsf{aff}(\lambda(\hat{\triangle}_{n}-\boldsymbol{\pi}_{n+1}^{\prime}))+k\mathbf{v}\) for \(k\in[0,\lambda]\) such that \(kn\) is an integer. Equivalently, we show that if \(kn\notin[0,\lambda n]\cap\mathbb{Z}\), then \((\lambda(\triangle_{n+1}^{\prime}-\boldsymbol{\pi}_{n+1}^{\prime})\cap\mathbb{Z}^{n+1})\cap\left(\mathsf{aff}(\lambda(\hat{\triangle}_{n}-\boldsymbol{\pi}_{n+1}^{\prime}))+k\mathbf{v}\right)=\emptyset\). 
So, we first consider \(\lambda(\triangle_{n+1}-\boldsymbol{\pi}_{n+1})\), which is \(\lambda(\triangle_{n+1}^{\prime}-\boldsymbol{\pi}_{n+1}^{\prime})\) before the projection to \(x_{n+1}=0\). Notice that the slice of \(\lambda(\triangle_{n+1}-\boldsymbol{\pi}_{n+1})\) obtained by intersecting it with the hyperplane \(\mathsf{H}_{h}:=\{\mathbf{x}\in\mathbb{R}^{n+1}\,|\,x_{n+1}=h\}\) only contains lattice points if \(h\) is an integer. Thus, \[h\not\in\mathbb{Z}\cap[-\lambda n,0]\Longrightarrow(\lambda(\triangle_{n+1}-\boldsymbol{\pi}_{n+1})\cap\mathbb{Z}^{n+1})\cap\mathsf{H}_{h}=\emptyset.\] Next, we show that the slice of \(\lambda(\triangle_{n+1}-\boldsymbol{\pi}_{n+1})\) obtained by intersecting it with \(\mathsf{H}_{h}\) is equivalent to the slice obtained by intersecting it with \(\mathsf{H}^{\mathrm{vert}}+\left(1-\frac{h}{n}\right)\mathbf{v}\), where \[\mathsf{H}^{\mathrm{vert}}:=\{\mathbf{x}\in\mathbb{R}^{n+1}\,|\,x_{1}+\cdots+x_{n}=-n\}.\] Note that \[\mathsf{aff}(\lambda(\triangle_{n+1}-\boldsymbol{\pi}_{n+1}))=\{\mathbf{x}\in\mathbb{R}^{n+1}\,|\,x_{1}+\cdots+x_{n+1}=0\}.\] Now, we can derive the following: \[\begin{split}\mathsf{H}_{h}\cap\mathsf{aff}(\lambda(\triangle_{n+1}-\boldsymbol{\pi}_{n+1}))&=\{\mathbf{x}\in\mathbb{R}^{n+1}\,|\,x_{n+1}=h\}\cap\{\mathbf{x}\in\mathbb{R}^{n+1}\,|\,x_{1}+\cdots+x_{n+1}=0\}\\ &=\{\mathbf{x}\in\mathbb{R}^{n+1}\,|\,x_{1}+\cdots+x_{n}=-h,\ x_{n+1}=h\}\\ &=\{\mathbf{x}\in\mathbb{R}^{n+1}\,|\,x_{1}+\cdots+x_{n}=-h\}\cap\{\mathbf{x}\in\mathbb{R}^{n+1}\,|\,x_{1}+\cdots+x_{n+1}=0\}\\ &=\left(\{\mathbf{x}\in\mathbb{R}^{n+1}\,|\,x_{1}+\cdots+x_{n}=-n\}+\left(1-\frac{h}{n}\right)\mathbf{v}\right)\cap\mathsf{aff}(\lambda(\triangle_{n+1}-\boldsymbol{\pi}_{n+1}))\\ &=\left(\mathsf{H}^{\mathrm{vert}}+\left(1-\frac{h}{n}\right)\mathbf{v}\right)\cap\mathsf{aff}(\lambda(\triangle_{n+1}-\boldsymbol{\pi}_{n+1})).\end{split}\] Hence, \[(\lambda(\triangle_{n+1}-\boldsymbol{\pi}_{n+1})\cap\mathbb{Z}^{n+1})\cap\mathsf{H}_{h}=(\lambda(\triangle_{n+1}-\boldsymbol{\pi}_{n+1})\cap\mathbb{Z}^{n+1})\cap\left(\mathsf{H}^{\mathrm{vert}}+\left(1-\frac{h}{n}\right)\mathbf{v}\right).\] Further, note that the slice of \(\lambda(\triangle_{n+1}-\boldsymbol{\pi}_{n+1})\) obtained by intersecting it with \(\mathsf{H}^{\mathrm{vert}}+\left(1-\frac{h}{n}\right)\mathbf{v}\) has no lattice points on it if \(h\) is not an integer, i.e., \[h\not\in\mathbb{Z}\cap[-\lambda n,0]\Longrightarrow(\lambda(\triangle_{n+1}-\boldsymbol{\pi}_{n+1})\cap\mathbb{Z}^{n+1})\cap\left(\mathsf{H}^{\mathrm{vert}}+\left(1-\frac{h}{n}\right)\mathbf{v}\right)=\emptyset.\] Lastly, we show that the projection onto \(x_{n+1}=0\) of the slice of \(\lambda(\triangle_{n+1}-\boldsymbol{\pi}_{n+1})\) obtained by intersecting it with \(\mathsf{H}^{\mathrm{vert}}+\left(1-\frac{h}{n}\right)\mathbf{v}\) is equivalent to the slice of \(\lambda(\hat{\triangle}_{n}-\boldsymbol{\pi}^{\prime}_{n+1})\) obtained by intersecting it with \(\mathsf{aff}(\lambda(\hat{\triangle}_{n}-\boldsymbol{\pi}^{\prime}_{n+1}))+(1-\frac{h}{n})\mathbf{v}\). 
So, consider \[\mathsf{proj}_{x_{n+1}=0}\left(\left(\lambda(\triangle_{n+1}-\boldsymbol{\pi}_{n+1})\cap\mathbb{Z}^{n+1}\right)\cap\left(\mathsf{H}^{\mathrm{vert}}+\left(1-\frac{h}{n}\right)\mathbf{v}\right)\right).\] Then \[\begin{split}&\mathsf{proj}_{x_{n+1}=0}\left(\left(\lambda(\triangle_{n+1}-\boldsymbol{\pi}_{n+1})\cap\mathbb{Z}^{n+1}\right)\cap\left(\mathsf{H}^{\mathrm{vert}}+\left(1-\frac{h}{n}\right)\mathbf{v}\right)\right)\\ &\qquad=\mathsf{proj}_{x_{n+1}=0}\left(\lambda(\triangle_{n+1}-\boldsymbol{\pi}_{n+1})\cap\mathbb{Z}^{n+1}\right)\cap\mathsf{proj}_{x_{n+1}=0}\left(\mathsf{H}^{\mathrm{vert}}+\left(1-\frac{h}{n}\right)\mathbf{v}\right)\\ &\qquad=\left(\lambda(\triangle_{n+1}^{\prime}-\boldsymbol{\pi}^{\prime}_{n+1})\cap\mathbb{Z}^{n+1}\right)\cap\left(\{\mathbf{x}\in\mathbb{R}^{n+1}\,|\,x_{1}+\cdots+x_{n}=-n,\ x_{n+1}=0\}+\left(1-\frac{h}{n}\right)\mathbf{v}\right)\\ &\qquad=\left(\lambda(\triangle_{n+1}^{\prime}-\boldsymbol{\pi}^{\prime}_{n+1})\cap\mathbb{Z}^{n+1}\right)\cap\left(\mathsf{aff}(\lambda(\hat{\triangle}_{n}-\boldsymbol{\pi}^{\prime}_{n+1}))+\left(1-\frac{h}{n}\right)\mathbf{v}\right).\end{split}\] Thus, \[h\not\in\mathbb{Z}\cap[-\lambda n,0]\Longrightarrow(\lambda(\triangle_{n+1}^{\prime}-\boldsymbol{\pi}^{\prime}_{n+1})\cap\mathbb{Z}^{n+1})\cap\left(\mathsf{aff}(\lambda(\hat{\triangle}_{n}-\boldsymbol{\pi}^{\prime}_{n+1}))+\left(1-\frac{h}{n}\right)\mathbf{v}\right)=\emptyset.\] Therefore, \[kn\not\in\mathbb{Z}\cap[0,\lambda n]\Longrightarrow(\lambda(\triangle_{n+1}^{\prime}-\boldsymbol{\pi}^{\prime}_{n+1})\cap\mathbb{Z}^{n+1})\cap\left(\mathsf{aff}(\lambda(\hat{\triangle}_{n}-\boldsymbol{\pi}^{\prime}_{n+1}))+k\mathbf{v}\right)=\emptyset.\] To conclude, by _"flossing"_ out all the slices of \(\lambda(\triangle_{n+1}^{\prime}-\boldsymbol{\pi}_{n+1}^{\prime})\) that do not contain lattice points, we can derive the following: \[\begin{split}&\bigcup_{0\leq k\leq\lambda}\left(\lambda(\triangle_{n+1}^{\prime}-\boldsymbol{\pi}_{n+1}^{\prime})\cap(\mathsf{aff}(\lambda(\hat{\triangle}_{n}-\boldsymbol{\pi}_{n+1}^{\prime}))+k\mathbf{v})\cap\mathbb{Z}^{n+1}\right)\\ &\qquad=\biguplus_{\begin{subarray}{c}0\leq k\leq\lambda:\\ kn\in\mathbb{Z}\end{subarray}}\left(\lambda(\triangle_{n+1}^{\prime}-\boldsymbol{\pi}_{n+1}^{\prime})\cap(\mathsf{aff}(\lambda(\hat{\triangle}_{n}-\boldsymbol{\pi}_{n+1}^{\prime}))+k\mathbf{v})\cap\mathbb{Z}^{n+1}\right)\\ &\qquad=\biguplus_{k=0}^{\lfloor\lambda n\rfloor}\left(\lambda(\triangle_{n+1}^{\prime}-\boldsymbol{\pi}_{n+1}^{\prime})\cap(\mathsf{aff}(\lambda(\hat{\triangle}_{n}-\boldsymbol{\pi}_{n+1}^{\prime}))+\frac{k}{n}\mathbf{v})\cap\mathbb{Z}^{n+1}\right)\\ &\qquad=\biguplus_{k=0}^{\lfloor\lambda n\rfloor}\left(\mathsf{pyr}_{\mathbf{0}_{n+1}}(\lambda(\hat{\triangle}_{n}-\boldsymbol{\pi}_{n+1}^{\prime}))\cap(\mathsf{aff}(\lambda(\hat{\triangle}_{n}-\boldsymbol{\pi}_{n+1}^{\prime}))+\frac{k}{n}\mathbf{v})\cap\mathbb{Z}^{n+1}\right)\\ &\qquad=\biguplus_{k=0}^{\lfloor\lambda n\rfloor}\left(\mathsf{cone}_{\mathbf{0}_{n+1}}(\hat{\triangle}_{n}-\boldsymbol{\pi}_{n+1}^{\prime})\cap(\mathsf{aff}(\lambda(\hat{\triangle}_{n}-\boldsymbol{\pi}_{n+1}^{\prime}))+\frac{k}{n}\mathbf{v})\cap\mathbb{Z}^{n+1}\right)\\ &\qquad=\biguplus_{k=0}^{\lfloor\lambda n\rfloor}\left(\frac{k}{n}(\hat{\triangle}_{n}-\boldsymbol{\pi}_{n+1}^{\prime})\cap\mathbb{Z}^{n+1}\right).\end{split}\] Hence, the claim holds. 
**Claim 6.4**.: _For \(\boldsymbol{\pi}_{n}:=23\cdots n1\), we have_ \[\sum_{k=0}^{\lfloor n\frac{t}{n+1}\rfloor}\left|\left(\left(\frac{k}{n}+\frac{2n-1}{n}\right)(\triangle_{n}-\boldsymbol{\pi}_{n})\right)^{\circ}\cap\mathbb{Z}^{n}\right|=\sum_{k=2n-1}^{\lceil n\left(\frac{t}{n+1}+\frac{2n+1}{n+1}\right)\rceil-1}\left|\left(\frac{k}{n}(\triangle_{n}-\boldsymbol{\pi}_{n})\right)^{\circ}\cap\mathbb{Z}^{n}\right|.\] Proof.: Splitting into the cases \(n+1\mid t\) and \(n+1\nmid t\), we can derive the following: \[\sum_{k=0}^{\lfloor n\frac{t}{n+1}\rfloor}\left|\left(\left(\frac{k}{n}+\frac{2n-1}{n}\right)(\triangle_{n}-\boldsymbol{\pi}_{n})\right)^{\circ}\cap\mathbb{Z}^{n}\right|=\begin{cases}\sum_{k=2n-1}^{\lceil n\frac{t}{n+1}+2n\rceil-1}\left|\left(\frac{k}{n}(\triangle_{n}-\boldsymbol{\pi}_{n})\right)^{\circ}\cap\mathbb{Z}^{n}\right|&\text{if }n+1\mid t,\\ \sum_{k=2n-1}^{\lceil n\frac{t}{n+1}+2n-1\rceil-1}\left|\left(\frac{k}{n}(\triangle_{n}-\boldsymbol{\pi}_{n})\right)^{\circ}\cap\mathbb{Z}^{n}\right|&\text{if }n+1\nmid t.\end{cases}\] Note that if \(n+1\mid t\), then \(n\cdot\frac{t}{n+1}+2n\in\mathbb{Z}\), thus \[\left\lceil n\cdot\frac{t}{n+1}+2n\right\rceil=\left\lceil n\cdot\frac{t}{n+1}+2n-\frac{n}{n+1}\right\rceil.\] Alternatively, if \(n+1\nmid t\), then \(0<\left\lceil n\cdot\frac{t}{n+1}+2n-1\right\rceil-\left(n\cdot\frac{t}{n+1}+2n-1\right)\leq\frac{n}{n+1}\), thus \[\left\lceil n\cdot\frac{t}{n+1}+2n-1\right\rceil=\left\lceil n\cdot\frac{t}{n+1}+2n-1+\frac{1}{n+1}\right\rceil.\] In both cases the upper summation limit equals \(\left\lceil n\left(\frac{t}{n+1}+\frac{2n+1}{n+1}\right)\right\rceil-1\). Therefore, \[\sum_{k=0}^{\lfloor n\cdot\frac{t}{n+1}\rfloor}\left|\left(\left(\frac{k}{n}+\frac{2n-1}{n}\right)\big{(}\triangle_{n}-\boldsymbol{\pi}_{n}\big{)}\right)^{\circ}\cap\mathbb{Z}^{n}\right|=\sum_{k=2n-1}^{\lceil n\left(\frac{t}{n+1}+\frac{2n+1}{n+1}\right)\rceil-1}\left|\left(\frac{k}{n}\big{(}\triangle_{n}-\boldsymbol{\pi}_{n}\big{)}\right)^{\circ}\cap\mathbb{Z}^{n}\right|.\]
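Since both sides of Claim 6.4 sum the same interior lattice-point counts, the claim reduces, after re-indexing \(k\mapsto k+2n-1\) on the left, to an identity between summation limits. The following quick check verifies that identity exactly, under the parenthesization of the upper limit adopted above (our reading of the statement, since the extracted source was ambiguous):

```python
from fractions import Fraction as F
from math import ceil, floor

# Summation-limit identity behind Claim 6.4:
#   floor(n*t/(n+1)) + 2n - 1 == ceil(n*(t/(n+1) + (2n+1)/(n+1))) - 1,
# verified with exact rational arithmetic over a range of (n, t).
for n in range(2, 40):
    for t in range(0, 200):
        lhs = floor(F(n * t, n + 1)) + 2 * n - 1
        rhs = ceil(n * (F(t, n + 1) + F(2 * n + 1, n + 1))) - 1
        assert lhs == rhs, (n, t)
print("summation-limit identity verified")
```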
2309.05064
Power laws of natural swarms are fingerprints of an extended critical region
Collective biological systems display power laws for macroscopic quantities and are fertile probing grounds for statistical physics. Besides power laws, natural insect swarms present strong scale-free correlations, suggesting closeness to phase transitions. Swarms exhibit {\em imperfect} dynamic scaling: their dynamical correlation functions collapse into single curves when written as functions of the scaled time $t\xi^{-z}$ ($\xi$: correlation length, $z$: dynamic exponent), but only for short times. Triggered by markers, natural swarms are not invariant under space translations. Measured static and dynamic critical exponents differ from those of equilibrium and many nonequilibrium phase transitions. Here, we show that: (i) the recently discovered scale-free-chaos phase transition of the harmonically confined Vicsek model has a novel extended critical region for $N$ (finite) insects that contains several critical lines. (ii) As alignment noise vanishes, there are power laws connecting critical confinement and noise that allow calculating static critical exponents for fixed $N$. These power laws imply that the unmeasurable confinement strength is proportional to the perception range measured in natural swarms. (iii) Observations of natural swarms occur at different times and under different atmospheric conditions, which we mimic by considering mixtures of data on different critical lines and $N$. Unlike results of other theoretical approaches, our numerical simulations reproduce the previously described features of natural swarms and yield static and dynamic critical exponents that agree with observations.
R. González-Albaladejo, L. L. Bonilla
2023-09-10T16:11:56Z
http://arxiv.org/abs/2309.05064v2
# Power laws of natural swarms are fingerprints of an extended critical region ###### Abstract Collective biological systems display power laws for macroscopic quantities and are fertile probing grounds for statistical physics. Besides power laws, natural insect swarms present strong scale-free correlations, suggesting closeness to phase transitions. Swarms exhibit _imperfect_ dynamic scaling: their dynamical correlation functions collapse into single curves when written as functions of the scaled time \(t\xi^{-z}\) (\(\xi\): correlation length, \(z\): dynamic exponent), but only for short times. Triggered by markers, natural swarms are not invariant under space translations. Measured static and dynamic critical exponents differ from those of equilibrium and many nonequilibrium phase transitions. Here, we show that the recently discovered scale-free-chaos phase transition of the harmonically confined Vicsek model has a novel extended critical region for finitely many insects. Unlike results of other theoretical approaches, our numerical simulations of the critical region reproduce the previously described features of natural swarms and yield static and dynamic critical exponents that agree with observations. The formation of animal flocks presents common features irrespective of biological details [1; 2; 3; 4; 5; 6; 7; 8; 9] and is a precursor of the major transitions in the evolution of complexity [10; 11] (e.g., changes from single cell protists to multicellular organisms, changes from individual ants, bees and other insects to their society [10; 11]). In particular, many macroscopic observables of biological systems obey power laws and their critical exponents have been measured [12; 14; 15; 16; 17; 18; 22]. Mitochondrial networks [16], bacterial colonies [19], bird flocks [20; 21; 22] and insect swarms [23; 24] provide examples of scale free behavior as their correlation length increases with the size of the flock, thereby rendering irrelevant intrinsic length scales associated with individuals [18]. Since the scale free property accompanies phase transitions, there have been many theoretical studies on the possible phase transitions responsible for flocking and other collective behavior in active matter, starting with the works by Vicsek _et al._ [25] and Toner and Tu [26]. See [27]. The origin of power laws in observations of insect swarms is puzzling. Swarms of midges in the wild exhibit long range correlations [23; 24; 28; 29], which are absent in laboratory conditions without background noise and atmospheric variability [30]. Measurements of the dynamic critical exponent \(z\) relating the correlation time \(\tau\) to the correlation length \(\xi\) produce values between \(z=1.16\) and \(1.37\) depending on sampling and fitting procedures [24; 29]. When written in terms of the scaled time \(t/\xi^{z}\) and measured on natural swarms, the normalized dynamic connected correlation function (NDCCF) collapses into a single curve only on a finite interval (approximately \(0<t/\xi^{z}<4\)) [24]. However, the same function collapses for all scaled times according to theories based on standard renormalization group (RG) ideas [31] for the ordering phase transition of a complex system of stochastic partial differential equations (PDEs) [29]. Measured and predicted \(z\) values are very close but static critical exponents are not [29]. Homogeneous phases in the ordering transition are invariant under space translations. Triggered by markers, natural swarms are not. What is going on? 
Simply put, natural swarms are not close to an ordering phase transition. Recently, we have discovered a phase transition in the harmonically confined Vicsek model (HCVM) characterized by scale-free chaos and an extended criticality region [32; 33]. The static critical exponents are close to measured ones and the NDCCF collapses into a single curve for \(0<t/\xi^{z}<4\) but \(z\approx 1\) [32]. Here we mimic the measurements performed on natural swarms using numerical simulations of the HCVM. On the noise-confinement phase plane \((\eta,\beta)\), three scale-free critical lines having \(\xi\sim N^{\frac{1}{3}}\) collapse at the same rate to the \(\beta=0\) axis as the number of insects \(N\rightarrow\infty\): the single-to-multicluster chaos line, \(\beta_{c}(N,\eta)\), the line of maximal largest Lyapunov exponents (LLE), \(\beta_{i}(N,\eta)\), and the onset of chaos line (zero LLE), \(\beta_{0}(N,\eta)\) [32; 33]. For finite \(N\), the region comprising these lines is an extended criticality region. We simulate the HCVM for different \(N\), \(\eta\) and \(\beta\) within the criticality region and calculate \(z\). We find \(z=1.15\) using least squares (LS) fitting and \(z=1.33\) by reduced major axis (RMA) regression [34]. Furthermore, the NDCCF collapses into a single curve for \(0<t/\xi^{z}<4\) and the swarm exhibits a condensed core and a surrounding vapor in experiments [35] and in our simulations [32]. Thus, varying insect number and noise within an _extended criticality region_ explains the observed \(z\) and limited NDCCF collapse. _Phase diagrams._ For finite \(N\), Figure 1 depicts the phase diagram of the three dimensional HCVM on the plane \((\eta,\beta)\): \[\mathbf{x}_{i}(t+1) = \mathbf{x}_{i}(t)+\mathbf{v}_{i}(t+1),\quad i=1,\ldots,N,\] \[\mathbf{v}_{i}(t+1) = v_{0}\mathcal{R}_{\eta}\!\left[\Theta\!\left(\sum_{|\mathbf{x}_{j}-\mathbf{x}_{i}|<R_{0}}\mathbf{v}_{j}(t)-\beta\mathbf{x}_{i}(t)\right)\right] \tag{1}\] Here \(\Theta(\mathbf{x})=\mathbf{x}/|\mathbf{x}|\), \(R_{0}\) is the radius of the sphere of influence about particles, \(\beta\) is the confining spring constant, and \(\mathcal{R}_{\eta}(\mathbf{w})\) performs a random rotation uniformly distributed around \(\mathbf{w}\) with maximum amplitude of \(\eta\) [32]. Fig. 1(a) resembles the phase diagram of the mean-field (MF) equation for the swarm center of mass, given by Eq. (1) with \(N=1\) [33]. Fig. 1(b) shows three scale-free lines where swarm size and correlation length are proportional, \(\beta_{0}<\beta_{c}<\beta_{i}\). \(\beta_{0}(\eta;N)\) separates regions of chaotic attractors (LLE \(\lambda_{1}>0\) in [III]) from nonchaotic regions (negative LLE) [33; 34]; \(\beta_{c}(\eta;N)\) (line [II]) separates chaotic single from multicluster swarms in region [I], whereas the LLE are maximal on the line \(\beta_{i}(\eta;N)\) [32]. The extended criticality region between \(\beta_{0}(\eta;N)\) and \(\beta_{i}(\eta;N)\) shrinks with increasing \(N\); see Fig. 1(c). The three critical lines collapse into the noise axis at the same rate as \(N\rightarrow\infty\); see Fig. 1(g). Finite-size and dynamical scaling imply \(\xi\sim\beta^{-\nu}\), \(\chi\sim\beta^{-\gamma}\) (susceptibility), \(\tau\sim\xi^{z}\) (correlation time) [32]. Fig. 1(d) and 1(e) illustrate how \(\beta_{c}\) and \(\beta_{0}\), respectively, depend on \(\eta\) for different \(N\). 
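For concreteness, a minimal Python sketch of one update of (1) follows. The exact distribution of the noise \(\mathcal{R}_{\eta}\) is specified in [32], so the spherical-cap convention used below (maximum polar angle \(\eta\pi\)) is an illustrative assumption, and the \(O(N^{2})\) neighbor search is kept for clarity:

```python
import numpy as np

def hcvm_step(x, v, v0=0.5, beta=0.01, eta=0.3, R0=1.0, rng=None):
    """One discrete-time update of the harmonically confined Vicsek model (1).
    x, v: (N, 3) arrays of positions and velocities; returns updated (x, v)."""
    rng = rng or np.random.default_rng()
    v_new = np.empty_like(v)
    for i in range(len(x)):
        nbrs = np.linalg.norm(x - x[i], axis=1) < R0   # metric neighbors, incl. i
        w = v[nbrs].sum(axis=0) - beta * x[i]          # alignment + confinement
        w = w / np.linalg.norm(w)                      # Theta(w) = w / |w|
        # R_eta: random rotation about w, sampled here uniformly in a spherical
        # cap of half-angle eta*pi (an assumed convention for "amplitude eta").
        ct = rng.uniform(np.cos(eta * np.pi), 1.0)
        st, phi = np.sqrt(1.0 - ct**2), rng.uniform(0.0, 2.0 * np.pi)
        a = np.array([1.0, 0.0, 0.0]) if abs(w[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        e1 = np.cross(w, a); e1 /= np.linalg.norm(e1)  # orthonormal frame (e1, e2, w)
        e2 = np.cross(w, e1)
        v_new[i] = v0 * (ct * w + st * (np.cos(phi) * e1 + np.sin(phi) * e2))
    return x + v_new, v_new
```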
In rescaled coordinates, these curves collapse for fixed \(\eta\) as shown in Fig. 1(f): \[\beta_{j}(N;\eta)=C_{j}N^{-\frac{1}{3\nu}}\eta^{m_{j}},\quad j=0,c, \tag{2}\] where \(C_{c}=1.5\pm 0.2\), \(C_{0}=0.92\pm 0.22\), \(m_{c}=1.20\pm 0.04\), and \(m_{0}=m_{c}+a_{2}N^{-n_{2}}\) with \(a_{2}=2.36\pm 0.07\), \(n_{2}=0.24\pm 0.01\). As the static critical exponents _are independent of \(\eta\)_ [32], the power laws (2) allow calculating \(\nu=0.43\pm 0.03\) and \(\gamma=0.92\pm 0.13\) using one or several values of \(N\) [34]. This is _a first major result_ because the critical exponents are found from power laws in \(\eta\) without resorting to numerical simulations for ever increasing particle numbers. _Mixtures of simulation data in extended criticality regions._ Natural swarms experience background noise and variable atmospheric conditions [30] that may account for their strong correlations [18; 23; 24]. The measured power laws for macroscopic quantities indicate that swarms are close to criticality. To interpret measurements using the HCVM, we need to recreate a mixture of results of numerical simulations that resemble measurements taken from swarms of different \(N\), \(\eta\) and \(\beta\), all within the extended criticality region [III] of Fig. 1(a) [34], to determine the dynamical critical exponent \(z\). Figure 2(a) depicts correlation time vs correlation length for points on the scale-free curves \(\beta_{0}\) and \(\beta_{c}\). Using LS regression, \(z\approx 1\) for them, whereas \(z\approx 2\) for \(\beta_{i}\) (not shown; see [34]). The standard deviation \(\sigma\) is larger for \(0.1\leq\eta\leq 0.5\) than for larger noise values and so is the difference \(\Delta\beta=\beta_{c}-\beta_{0}\); see Fig. 2(b). Natural swarms have relatively small sizes (the largest observed swarm has \(N=781\)) and data are inevitably noisy [24; 29]. Therefore \(N\) is smaller than required to obtain accurate values of the critical exponents [32]. Thus, we select data points on the scale-free curves \(\beta_{0}(N;\eta)\) and \(\beta_{c}(N;\eta)\) for \(0.1\leq\eta\leq 0.5\) (critical region [III] in Fig. 1) to calculate the dynamical critical exponent in Fig. 2(c). As explained in [29], fitting a straight line by RMA regression takes into consideration both the errors in \(\tau\) and \(\xi\), whereas LS regression considers only the error in \(\tau\), thereby underestimating \(z\). We find \(z_{\text{LS}}=1.15\pm 0.11\) and \(z_{\text{RMA}}=1.33\pm 0.10\) with probability distributions shown in Fig. 2(d). These values are very close to measurements on natural swarms: \(z_{\text{LS}}=1.16\pm 0.12\) and \(z_{\text{RMA}}=1.37\pm 0.11\) [29]. They are also close to the RG prediction \(z=1.35\) for the active version of models E/F and G of [37], which holds for the inertial spin model (ISM [36]) ordering transition [29]. 

Figure 2: Mixtures of simulation data. **(a)** Correlation time vs length for data on scale-free curves \(\beta_{0}(N;\eta)\) and \(\beta_{c}(N;\eta)\) for \(0.1<\eta<1\) and \(100<N<2500\). Note that the standard deviation \(\sigma\) is larger for smaller noise values. **(b)** Scale-free curves for \(N=500\) showing the noise intervals with smaller and larger \(\sigma\). \(\Delta\beta=\beta_{c}-\beta_{0}\) increases as \(\eta\) decreases. **(c)** Same as panel (a) for \(0.1<\eta<0.5\) showing LS and RMA fittings to straight lines for \(\beta_{0}\) and \(\beta_{c}\) data: the corresponding dynamical critical exponents are \(z_{\text{LS}}=1.15\pm 0.11\) and \(z_{\text{RMA}}=1.33\pm 0.10\). **(d)** Probability distribution of the LS (blue) and RMA (orange) critical exponent \(z\) from the resampling method consisting of randomly drawing \(10^{7}\) subsets with half the number of points from numerical simulations. Then we determine \(z\) in each subset using LS and RMA [34]. 

_Dynamic correlation function._ Figure 3 shows the NDCCF \(g(t)=\hat{C}(k_{c},t)/\hat{C}(k_{c},0)\) with \(k_{c}=1/\xi\) for \(\beta=\beta_{c}\), small \(N\in(100,300)\), and \(0.1\leq\eta\leq 1\) [34]. We observe that the NDCCF collapses to a single curve only for \(0<k_{c}^{z}t<4\), which is similar to observations in natural swarms (Fig. 2 of [24]), to HCVM data for \(\eta=0.5\) and \(100\leq N\leq 5000\) (Fig. 4 of [32]), and to HCVM data on \(\beta_{0}(N;\eta)\) (Fig. S7 of Ref. [34]). In contrast, numerical simulations near ordering scale-free transitions produce collapse of the NDCCF for all scaled times \(k_{c}^{z}t\); see Figures 2(c) and 2(d) of [24] for the periodic VM and [29] (Fig. 20 of the Supplementary Material) for the ISM, also with periodic boundary conditions. Collapse of the NDCCF for all scaled times indicates that a single correlation time is involved in the ordering phase transition of these models. Collapse of \(g(t)\) only for short scaled times suggests that several time scales are involved in HCVM simulation data near the scale-free-chaos phase transition and in measurements on natural swarms [32]. _Perception range and static critical exponents._ While confinement or noise are not measurable control parameters, the perception range \(x\) (time averaged arithmetic mean of the minimal distance between each particle and its closest neighbor [23; 28]) is. For \(\beta_{0}(N;\eta)\) and finite \(N\), \(x>x_{c}(\eta)\) (the value at \(N=\infty\)), whereas \(x<x_{c}(\eta)\) on \(\beta_{c}(N;\eta)\); see Figs. 4(a) and 4(b). Static critical exponents are defined by \(\xi\sim(x-x_{c})^{-\nu}\), \(\chi\sim(x-x_{c})^{-\gamma}\). As shown in Fig. 4(c), they are the same if \(\beta_{0}(N;\eta)\) or \(\beta_{c}(N;\eta)\) replace \(|x-x_{c}|\) because [34] \[\eta^{m_{x0}}x=A_{0}+B_{0}\beta_{0}\eta^{-m_{0}},\quad\eta^{m_{xc}}x=A_{c}-B_{c}\beta_{c}\eta^{-m_{c}}, \tag{3}\] where \(x_{c}(\eta)=A_{j}\eta^{-m_{xj}}\), \(m_{xc}=0.50\pm 0.03\), \(A_{c}=2.00\pm 0.02\), \(B_{c}=13.00\pm 0.03\), \(m_{x0}=1.6\pm 0.2\), \(A_{0}=2.0\pm 0.2\), \(B_{0}=219.8\pm 0.2\). Eqs. (2) and (3) imply that \(\xi\sim N^{\frac{1}{3}}\sim\eta^{\nu m_{j}}\beta_{j}(N;\eta)^{-\nu}\sim\eta^{-\nu m_{xj}}|x-x_{c}|^{-\nu}\), \(j=0,c\). Using a mixture of data, we find \(\nu=0.43\pm 0.03\), \(\gamma=0.92\pm 0.13\), which are close to the observed values \(\nu=0.35\pm 0.1\), \(\gamma=0.9\pm 0.2\) [23]. _Discussion._ For finite \(N\), the HCVM scale-free-chaos phase transition has an extended criticality region in the noise-confinement phase plane bounded by two critical scale-free lines \(\beta_{0}(N;\eta)\) and \(\beta_{c}(N;\eta)\), which separate nonchaotic-chaotic attractors and single-multicluster chaos, respectively. On these lines, macroscopic quantities exhibit power laws in the control parameter (confinement or perception range) and in noise. We compute critical exponents by exploiting these power laws, either by fixing noise and increasing \(N\) or by decreasing noise at fixed \(N\). Power laws for natural midge swarms are obtained using data from different numbers of insects and species under variable environmental conditions [23; 29; 24]. 
We mimic these conditions by using a mixture of \(\eta\) and \(N\) values on the critical lines \(\beta_{0,c}(N;\eta)\) of the extended criticality region of the HCVM scale-free-chaos phase transition (Region III of Fig. 1(c)). Applying to our numerical simulations the same tests used to extract critical exponents from observations, we predict static and dynamic critical exponents in agreement with those observed in natural swarms within the uncertainty range of the data. Furthermore, observed _qualitative features_ such as the collapse of the NDCCF only at short scaled times \(t/\xi^{z}\) [24] or the swarm shape (condensed core surrounded by insect vapor [35]) agree with HCVM simulations and the HCVM mean-field theory [33]. In contrast with these qualitative and quantitative agreements, RG theories based on the ordering phase transition predict accurately the dynamical critical exponent (\(z=1.35\)), but not the static critical exponents (\(\nu=0.748\), \(\gamma=1.171\)) [29], the limited collapse of the NDCCF [24], or the shape of the swarm [23; 35]. To check whether natural swarms are close to a scale-free-chaos phase transition, time series from measurements should be used to calculate the largest Lyapunov exponent [32]. Random motion observed in experiments [38; 39] may point to midge swarms being in the vicinity of chaotic attractors. From a theoretical standpoint, a future RG theory of the HCVM would ascertain the universality class of its scale-free-chaos phase transition. Numerical simulations indicate that the RG flow should include a line of critical points comprising the point of zero noise and confinement in Fig. 1. This feature is absent from the RG flow of the ordering transition of active stochastic PDEs [29]. The interaction between midges is acoustic and metric alignment with neighbors within a sphere of influence is reasonable, and so is a confining harmonic potential [1; 40]. For starling flocks, neighbors are topologically defined, bird rotations propagate swiftly as linear waves [18], and a reasonable extension of the continuous-time Vicsek model is the inertial spin model [36]. It would be interesting to find out whether scale-free-chaos transitions exist for a harmonically confined ISM or its discrete-time version, which are extensions of the Vicsek model to the underdamped regime. Yet Vicsek-type models based on social interactions do not account for features of bird flocks and fish schools based on vortices shed by flapping. Hydrodynamic interactions theoretically studied in Ref. [41] are important for observed ordered schools and bird flocks. Other authors have argued that noise induces schooling in finitely many fish experiencing binary interactions [42]. These points may warrant further investigation. _Acknowledgments._ This work has been supported by the FEDER/Ministerio de Ciencia, Innovacion y Universidades - Agencia Estatal de Investigacion grants PID2020-112796RB-C21 (RGA) and PID2020-112796RB-C22 (LLB), by the Madrid Government (Comunidad de Madrid-Spain) under the Multiannual Agreement with UC3M in the line of Excellence of University Professors (EPUC3M23), and in the context of the V PRICIT (Regional Programme of Research and Technological Innovation). RGA acknowledges support from the Ministerio de Economia y Competitividad of Spain through the Formacion de Doctores program Grant PRE2018-083807 cofinanced by the European Social Fund.
2309.14169
Extrapolated regularization of nearly singular integrals on surfaces
We present a method for computing nearly singular integrals that occur when single or double layer surface integrals, for harmonic potentials or Stokes flow, are evaluated at points nearby. Such values could be needed in solving an integral equation when one surface is close to another or to obtain values at grid points. We replace the singular kernel with a regularized version having a length parameter $\delta$ in order to control discretization error. Analysis near the singularity leads to an expression for the error due to regularization which has terms with unknown coefficients multiplying known quantities. By computing the integral with three choices of $\delta$ we can solve for an extrapolated value that has regularization error reduced to $O(\delta^5)$, uniformly for target points on or near the surface. In examples with $\delta/h$ constant and moderate resolution we observe total error about $O(h^5)$ close to the surface. For convergence as $h \to 0$ we can choose $\delta$ proportional to $h^q$ with $q < 1$ to ensure the discretization error is dominated by the regularization error. With $q = 4/5$ we find errors about $O(h^4)$. For harmonic potentials we extend the approach to a version with $O(\delta^7)$ regularization; it typically has smaller errors but the order of accuracy is less predictable.
J. Thomas Beale, Svetlana Tlupova
2023-09-25T14:24:29Z
http://arxiv.org/abs/2309.14169v2
# Extrapolated Regularization of Nearly Singular Integrals on Surfaces ###### Abstract We present a method for computing nearly singular integrals that occur when single or double layer surface integrals, for harmonic potentials or Stokes flow, are evaluated at points nearby. Such values could be needed in solving an integral equation when one surface is close to another or to obtain values at grid points. We replace the singular kernel with a regularized version having a length parameter \(\delta\) in order to control discretization error. Analysis near the singularity leads to an expression for the error due to regularization which has terms with unknown coefficients multiplying known quantities. By computing the integral with three choices of \(\delta\) we can solve for an extrapolated value that has regularization error reduced to \(O(\delta^{5})\). In examples with \(\delta/h\) constant and moderate resolution we observe total error about \(O(h^{5})\). For convergence as \(h\to 0\) we can choose \(\delta\) proportional to \(h^{q}\) with \(q<1\) to ensure the discretization error is dominated by the regularization error. With \(q=4/5\) we find errors about \(O(h^{4})\). For harmonic potentials we extend the approach to a version with \(O(\delta^{7})\) regularization; it typically has smaller errors but the order of accuracy is less predictable. **Keywords:** boundary integral method, nearly singular integral, layer potential, Stokes flow **Mathematics Subject Classifications:** 65R20, 65D30, 31B10, 76D07 ## 1 Introduction The evaluation of singular or nearly singular surface integrals, on or near the surface, requires special care. Here we are concerned with single and double layer integrals for harmonic potentials or for Stokes flow. One of several possible approaches is to regularize the singular kernel in order to control the discretization error. A natural choice is to replace the \(1/r\) singularity in the single layer potential with \(\mathrm{erf}(r/\delta)/r\), where \(\mathrm{erf}\) is the error function and \(\delta\) is a numerical parameter setting the length scale of the regularization. This replacement introduces an additional error due to smoothing. For the singular case, evaluating at points on the surface, we can modify the choice of regularization so that the new error is \(O(\delta^{5})\); see [2, 5, 21]. The nearly singular case, evaluation at points near the surface, could be needed e.g. when surfaces are close together or to find values at grid points. For this case, in the previous work, we used analysis near the singularity to derive corrections which leave a remaining error of \(O(\delta^{3})\). It does not seem practical to extend the corrections to higher order. In the present work we show by local analysis that the simpler regularization can be used with extrapolation, rather than corrections, to improve the error to \(O(\delta^{5})\) in the nearly singular case. For \({\bf y}\) near the surface, at signed distance \(b\), if \({\cal S}\) is the single layer potential with some density function and \({\cal S}_{\delta}\) is the regularized integral, we show that \[{\cal S}_{\delta}({\bf y})={\cal S}({\bf y})+C_{1}\delta I_{0}(b/\delta)+C_{2}\delta^{3}I_{2}(b/\delta)+O(\delta^{5}) \tag{1}\] where \(I_{0}\) and \(I_{2}\) are certain integrals, known explicitly, and \(C_{1}\), \(C_{2}\) are coefficients which depend on \({\bf y}\), \(b\), the surface, and the density function. We can regard \({\cal S}\), \(C_{1}\), \(C_{2}\) as unknowns at one point \({\bf y}\). 
Our strategy is to calculate the regularized integrals \({\cal S}_{\delta}\) for three different choices of \(\delta\) and then solve for \({\cal S}\), within \(O(\delta^{5})\), from the system of three equations. We treat the double layer potential in a similar way, as well as the single and double layer integrals for Stokes flow. For the harmonic potentials we extend the approach to a method with \(O(\delta^{7})\) regularization error; it requires four choices of \(\delta\) rather than three. To compute the integrals we use a quadrature rule for surface integrals for which the quadrature points are points where the surface intersects lines in a three-dimensional grid and the weights are determined by the normal vector to the surface. In our examples with moderate resolution we take \(\delta/h\) constant where \(h\) is the grid spacing. With the fifth order method we observe errors about \(O(h^{5})\). For convergence as \(h\to 0\) we need to increase \(\delta/h\) to ensure that the discretization error is dominated by the regularization error. To do this we choose \(\delta\) proportional to \(h^{q}\), e.g. with \(q=4/5\), resulting in an error about \(O(h^{4})\). With the seventh order regularization we typically see smaller errors in both versions but the order in \(h\) is less predictable. Considerable work has been devoted to the computation of singular integrals such as layer potentials. Only a portion of this work has concerned nearly singular integrals on surfaces. Often values close to the surface are obtained by extrapolating from values further away [26], sometimes as part of the quadrature by expansion method [1, 10, 11, 14, 19]. In [20] sources are placed on the opposite side of the surface to produce a kernel independent method. With the singularity subtraction technique [9] a most singular part is evaluated analytically leaving a more regular remainder. In [15], for the nearly singular axisymmetric case, the error in computing the most singular part provides a correction. In [16] an approximation to the density function is used to reduce the singularity. Regularization has been used extensively to model Stokes flow in biology [7, 8]; see also [21]. While the choice of numerical method depends on context, the present approach is simple and direct. The work required is similar to that for a surface integral with smooth integrand, except that three (or four) related integrals must be computed rather than one. No special gridding or separate treatment of the singularity is needed. The surface must be moderately smooth, without corners or edges. Geometric information about the surface is not needed other than normal vectors; further geometry was needed for the corrections of [2, 5, 21] and in some other methods. It would be enough for the surface to be known through values of a level set function at grid points nearby. For efficiency fast summation methods suitable for regularized kernels [25, 18, 22] could be used. The approach here is general enough that it should apply to other singular kernels; however, a limitation is discussed at the end of the next section. Results are described more specifically in Sect. 2. The analysis leading to (1) is carried out in Sect. 3. In Sect. 4 we discuss the quadrature rule and the discretization error. In Sect. 5 we present numerical examples which illustrate the behavior of the method. 
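Before summarizing the results, a minimal sketch of the extrapolation step may help fix ideas. It uses the explicit integrals \(I_{0}\), \(I_{2}\) given in (6)-(7) of Sect. 2 below, assumes the three regularized values \({\cal S}_{\delta_{i}}\) have already been computed by quadrature, and assumes SciPy for the complementary error function:

```python
import numpy as np
from scipy.special import erfc

def I0(lam):                                   # Eq. (6)
    l = abs(lam)
    return np.exp(-l**2) / np.sqrt(np.pi) - l * erfc(l)

def I2(lam):                                   # Eq. (7)
    l = abs(lam)
    return (2.0 / 3.0) * ((0.5 - l**2) * np.exp(-l**2) / np.sqrt(np.pi)
                          + l**3 * erfc(l))

def extrapolate(S_delta, b, h, rhos=(2.0, 3.0, 4.0)):
    """Recover S(y) from three regularized values S_delta[i], computed with
    delta_i = rhos[i]*h at signed distance b, by solving the 3x3 system (8)."""
    A = np.array([[1.0, r * I0(b / (r * h)), r**3 * I2(b / (r * h))] for r in rhos])
    return np.linalg.solve(A, np.asarray(S_delta))[0]   # other unknowns: c1, c2

# Consistency check: at lambda = 0 the weights a_i of (9) for rho = 2, 3, 4
# should be (14/3, -16/3, 5/3).
M = np.array([[1.0, r * I0(0.0), r**3 * I2(0.0)] for r in (2.0, 3.0, 4.0)])
print(3 * np.linalg.solve(M.T, np.array([1.0, 0.0, 0.0])))   # ~[14, -16, 5]
```

The printed weights reproduce the values quoted in Sect. 2 for \(\lambda=0\) and \(\rho_{i}=2,3,4\).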
## 2 Summary of results For a single layer potential \[{\cal S}({\bf y})=\int_{\Gamma}G({\bf x}-{\bf y})f({\bf x})\,dS({\bf x})\,,\quad G({\bf r})=-\frac{1}{4\pi|{\bf r}|} \tag{2}\] on a closed surface \(\Gamma\), with given density function \(f\), we define the regularized version \[{\cal S}_{\delta}({\bf y})=\int_{\Gamma}G_{\delta}({\bf x}-{\bf y})f({\bf x})\,dS({\bf x})\,,\quad G_{\delta}({\bf r})=G({\bf r})s_{1}(|{\bf r}|/\delta) \tag{3}\] with \[s_{1}(r)={\rm erf}(r)=\frac{2}{\sqrt{\pi}}\int_{0}^{r}e^{-s^{2}}\,ds \tag{4}\] Then \(G_{\delta}\) is smooth, with \(G_{\delta}(0)=-1/(2\pi^{3/2}\delta)\), and \({\rm erf}(r/\delta)\to 1\) rapidly as \(r/\delta\) increases. Typically \({\cal S}_{\delta}-{\cal S}=O(\delta)\). If \({\bf y}\) is near the surface, then \({\bf y}={\bf x}_{0}+b{\bf n}\), where \({\bf x}_{0}\) is the closest point on \(\Gamma\), \({\bf n}\) is the outward normal vector at \({\bf x}_{0}\), and \(b\) is the signed distance. From a series expansion for \({\bf x}\) near \({\bf x}_{0}\) and \(b\) near \(0\) we show in Sect. 3 that \[{\cal S}({\bf y})+C_{1}\delta I_{0}(\lambda)+C_{2}\delta^{3}I_{2}(\lambda)={\cal S}_{\delta}({\bf y})+O(\delta^{5}) \tag{5}\] where \(\lambda=b/\delta\); \(C_{1}\), \(C_{2}\) are unknown coefficients; and \(I_{0}\) and \(I_{2}\) are integrals occurring in the derivation that are found to be \[I_{0}(\lambda)=e^{-\lambda^{2}}/\sqrt{\pi}-|\lambda|\,{\rm erfc}\,|\lambda| \tag{6}\] \[I_{2}(\lambda)=\frac{2}{3}\left((\frac{1}{2}-\lambda^{2})e^{-\lambda^{2}}/\sqrt{\pi}+|\lambda|^{3}\,{\rm erfc}\,|\lambda|\right) \tag{7}\] Here \({\rm erfc}=1-{\rm erf}\). To obtain an accurate value of \({\cal S}\), we calculate the regularized integrals \({\cal S}_{\delta}\) for three different choices of \(\delta\), with the same grid size \(h\), resulting in a system of three equations with three unknowns. We can then solve for the exact integral \({\cal S}\) within error \(\delta^{5}\). We typically choose \(\delta_{i}=\rho_{i}h\) with \(\rho_{i}=2,3,4\) or \(3,4,5\). To improve the conditioning we write three versions of (5) in terms of \(\rho\) rather than \(\delta\), \[{\cal S}({\bf y})+c_{1}\rho_{i}I_{0}(\lambda_{i})+c_{2}\rho_{i}^{3}I_{2}(\lambda_{i})={\cal S}_{\delta_{i}}({\bf y})+O(\delta^{5})\,,\quad i=1,2,3 \tag{8}\] with \(\lambda_{i}=b/\delta_{i}\). It is important that \(c_{1},c_{2}\) do not depend on \(\delta\) or \(\lambda\). We solve this \(3\times 3\) system for \({\cal S}\). The \(i\)th row is \([1,\ \rho_{i}I_{0}(\lambda_{i}),\ \rho_{i}^{3}I_{2}(\lambda_{i})]\); the entries depend only on \(\lambda_{i}\) as well as \(\rho_{i}\). The value obtained for \({\cal S}\) has the form \[{\cal S}({\bf y})=\sum_{i=1}^{3}a_{i}(\lambda_{i}){\cal S}_{\delta_{i}}({\bf y}) \tag{9}\] For each \(\lambda\), \(a_{1}+a_{2}+a_{3}=1\). At \(\lambda=0\), \(a_{1}=14/3\), \(a_{2}=-16/3\), \(a_{3}=5/3\) provided \(\rho_{i}=2,3,4\). As \(\lambda\) increases, the coefficients approach \((1,0,0)\), allowing a gradual transition to the region far enough from \(\Gamma\) to omit the regularization. To ensure the smoothing error is dominant as \(h\to 0\) we may choose \(\delta=\rho h^{q}\) with \(q<1\), rather than \(q=1\), to obtain convergence \(O(h^{5q})\); see Sect. 4. For the double layer potential \[{\cal D}({\bf y})=\int_{\Gamma}\frac{\partial G({\bf x}-{\bf y})}{\partial{\bf n}({\bf x})}g({\bf x})\,dS({\bf x}) \tag{10}\] the treatment is similar after a subtraction. 
Using Green's identities we rewrite (10) as \[\mathcal{D}(\mathbf{y})=\int_{\Gamma}\frac{\partial G(\mathbf{x}-\mathbf{y})}{ \partial\mathbf{n}(\mathbf{x})}[g(\mathbf{x})-g(\mathbf{x}_{0})]\,dS(\mathbf{ x})+\chi(\mathbf{y})g(\mathbf{x}_{0}) \tag{11}\] where again \(\mathbf{x}_{0}\) is the closest point on \(\Gamma\) and \(\chi=1\) for \(\mathbf{y}\) inside, \(\chi=0\) for \(\mathbf{y}\) outside, and \(\chi=\frac{1}{2}\) on \(\Gamma\). To regularize we replace \(\nabla G\) with the gradient of the smooth function \(G_{\delta}\), obtaining \[\nabla G_{\delta}(\mathbf{r})=\nabla G(\mathbf{r})s_{2}(|\mathbf{r}|/\delta) =\frac{\mathbf{r}}{4\pi|\mathbf{r}|^{3}}s_{2}(|\mathbf{r}|/\delta) \tag{12}\] with \[s_{2}(r)=\mathrm{erf}(r)-(2/\sqrt{\pi})re^{-r^{2}} \tag{13}\] Thus \[\mathcal{D}_{\delta}(\mathbf{y})=\int_{\Gamma}\frac{\mathbf{r}\cdot\mathbf{ n}(\mathbf{x})}{4\pi|\mathbf{r}|^{3}}s_{2}(|\mathbf{r}|/\delta)[g(\mathbf{x})-g( \mathbf{x}_{0})]\,dS(\mathbf{x})+\chi(\mathbf{y})g(\mathbf{x}_{0})\,,\quad \mathbf{r}=\mathbf{x}-\mathbf{y} \tag{14}\] The expansion for \(\mathcal{D}_{\delta}-\mathcal{D}\) near \(\mathbf{x}_{0}\) is somewhat different but coincidentally leads to the same relation as in (8) with \(\mathcal{S}\) and \(\mathcal{S}_{\delta}\) replaced by \(\mathcal{D}\) and \(\mathcal{D}_{\delta}\). Thus we can solve for \(\mathcal{D}\) to \(O(\delta^{5})\) in the same way as for \(\mathcal{S}\). There is a straightforward extension to a method with \(O(\delta^{7})\) regularization error. In equation (8) there is now an additional term \(C_{3}\delta^{5}I_{4}\). There are four unknowns, so that four choices of \(\delta\) are needed. Otherwise this version is similar to the original one. On the other hand, we could use only two choices of \(\delta\), omitting the \(\delta^{3}\) term in (8), obtaining a version with error \(O(\delta^{3})\). The special case of evaluation at points \(\mathbf{y}\) on the surface is important because it is used to solve integral equations for problems such as the Dirichlet or Neumann problem for harmonic functions. We could use the procedure described with \(b=0\) and \(\lambda=0\). However in this case we can modify the regularization to obtain \(O(\delta^{5})\) error more directly [2, 5]. For the single layer integral, in place of (3) we use \[G_{\delta}(\mathbf{r})=-\frac{s_{1}^{\sharp}(|\mathbf{r}|/\delta)}{4\pi| \mathbf{r}|}\,,\quad s_{1}^{\sharp}(r)=\mathrm{erf}(r)+\frac{2}{3\sqrt{\pi}}( 5r-2r^{3})e^{-r^{2}} \tag{15}\] For the double layer we use (14) with \(\chi=\frac{1}{2}\) and (13) replaced by \[s_{2}^{\sharp}(r)=\mathrm{erf}(r)-\frac{2}{\sqrt{\pi}}\left(r-\frac{2r^{3}}{3 }\right)e^{-r^{2}} \tag{16}\] We typically use \(\delta=3h\) with these formulas for evaluation on the surface [5, 21]. They were derived by imposing conditions to eliminate the leading error [2], and the error can be checked using the analysis in the next section. Formulas with \(O(\delta^{7})\) error could be produced with the same approach. The equations of Stokes flow represent the motion of incompressible fluid in the limit of zero Reynolds number; e.g. see [17]. In the simplest form they are \[\Delta\mathbf{u}-\nabla p=0\,,\quad\nabla\cdot\mathbf{u}=0 \tag{17}\] where \(\mathbf{u}\) is the fluid velocity and \(p\) is the pressure. 
The primary fundamental solutions for the velocity are the Stokeslet and stresslet, \[S_{ij}(\mathbf{y},\mathbf{x}) =\frac{\delta_{ij}}{|\mathbf{y}-\mathbf{x}|}+\frac{(y_{i}-x_{i})( y_{j}-x_{j})}{|\mathbf{y}-\mathbf{x}|^{3}} \tag{18a}\] \[T_{ijk}(\mathbf{y},\mathbf{x}) =-\frac{6(y_{i}-x_{i})(y_{j}-x_{j})(y_{k}-x_{k})}{|\mathbf{y}- \mathbf{x}|^{5}} \tag{18b}\] where \(\delta_{ij}\) is the Kronecker delta and \(i,j,k=1,2,3\). They are the kernels for the single and double layer integrals \[u_{i}({\bf y}) = \frac{1}{8\pi}\int_{\Gamma}S_{ij}({\bf y},{\bf x})f_{j}({\bf x})dS( {\bf x}) \tag{19a}\] \[v_{i}({\bf y}) = \frac{1}{8\pi}\int_{\Gamma}T_{ijk}({\bf y},{\bf x})q_{j}({\bf x})n _{k}({\bf x})dS({\bf x}) \tag{19b}\] where \(f_{j}\) and \(q_{j}\) are components of vector quantities \({\bf f}\) and \({\bf q}\) on the surface and \(n_{k}\) is a component of the normal vector \({\bf n}\). A subtraction can be used in both cases; e.g., see [17], Sect. 6.4. With \({\bf x}_{0}\) as before we rewrite (19a) as \[u_{i}({\bf y})=\frac{1}{8\pi}\int_{\Gamma}S_{ij}({\bf y},{\bf x})[f_{j}({\bf x })-f_{k}({\bf x}_{0})n_{k}({\bf x}_{0})n_{j}({\bf x})]\,dS({\bf x}) \tag{20}\] The subtracted form of (19b) is \[v_{i}({\bf y})=\frac{1}{8\pi}\int_{\Gamma}T_{ijk}({\bf y},{\bf x})[q_{j}({\bf x })-q_{j}({\bf x}_{0})]n_{k}({\bf x})dS({\bf x})+\chi({\bf y})q_{i}({\bf x}_{0}) \tag{21}\] To compute (20) we replace \(S_{ij}\) with the regularized version \[S_{ij}^{\delta}({\bf y},{\bf x})=\frac{\delta_{ij}}{r}s_{1}(r/\delta)+\frac{( y_{i}-x_{i})(y_{j}-x_{j})}{r^{3}}s_{2}(r/\delta)\,,\quad r=|{\bf y}-{\bf x}| \tag{22}\] with \(s_{1}\) and \(s_{2}\) as in (4),(13), resulting in a smooth kernel. For the Stokes double layer integral we need to rewrite the kernel for a reason described below. For \({\bf y}\) near the surface we have \({\bf y}={\bf x}_{0}+b{\bf n}_{0}\) with \({\bf x}_{0}\in\Gamma\) and \({\bf n}_{0}={\bf n}({\bf x}_{0})\). In \(T_{ijk}\) we substitute \(y_{i}-x_{i}=bn_{i}-\hat{x}_{i}\) where \(n_{i}\) and \(\hat{x}_{i}\) are the \(i\)th components of \({\bf n}_{0}\) and \(\hat{\bf x}={\bf x}-{\bf x}_{0}\). With \(r=|{\bf y}-{\bf x}|\), where possible, we replace \(b^{2}/r^{2}\) with \(1-(r^{2}-b^{2})/r^{2}\) to eliminate factors in the numerator which are nonzero at \({\bf x}_{0}\). We obtain \[T_{ijk}=T_{ijk}^{(1)}+T_{ijk}^{(2)}=-6\left(\frac{t_{ijk}^{(1)}}{r^{3}}+\frac{ t_{ijk}^{(2)}-(r^{2}-b^{2})t_{ijk}^{(1)}}{r^{5}}\right) \tag{23}\] where \[t_{ijk}^{(1)}=bn_{i}n_{j}n_{k}-(\hat{x}_{i}n_{j}n_{k}+n_{i}\hat{x}_{j}n_{k}+n_ {i}n_{j}\hat{x}_{k}) \tag{24}\] \[t_{ijk}^{(2)}=b(\hat{x}_{i}\hat{x}_{j}n_{k}+\hat{x}_{i}n_{j}\hat{x}_{k}+n_{i} \hat{x}_{j}\hat{x}_{k})-\hat{x}_{i}\hat{x}_{j}\hat{x}_{k} \tag{25}\] and we substitute \(r^{2}-b^{2}=|\hat{\bf x}|^{2}-2b\hat{\bf x}\cdot{\bf n}_{0}\). We compute (21) with \(T_{ijk}\) replaced with the regularized version of (23) \[T_{ijk}^{\delta}=T_{ijk}^{(1)}s_{2}(r/\delta)+T_{ijk}^{(2)}s_{3}(r/\delta) \tag{26}\] where \[s_{3}(r)={\rm erf}(r)-\frac{2}{\sqrt{\pi}}\left(\frac{2}{3}r^{3}+r\right)e^{- r^{2}} \tag{27}\] For both Stokes integrals, calculated in the manner described, we find in Sect. 3 that the error has a form equivalent to (8), and we extrapolate with three choices of \(\delta\) as before. Again for the special case of evaluation on the surface we can obtain an \(O(\delta^{5})\) regularization directly. Formulas were given in [21] and an improved formula for the stresslet case was given in [4]. 
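For illustration, the regularized Stokeslet (22) and the factor \(s_{3}\) of (27) can be coded directly. This sketch reuses \(s_{1},s_{2}\) from the previous listing and assumes \({\bf y}\neq{\bf x}\), which holds for the off-surface evaluations considered here:

```python
def s3(r):            # eq. (27): used with the second stresslet piece in (26)
    return erf(r) - (2.0 / SQRT_PI) * ((2.0/3.0)*r**3 + r) * np.exp(-r**2)

def stokeslet_reg(y, x, delta):
    """Regularized Stokeslet S^delta_ij(y, x) of eq. (22), as a 3x3 matrix."""
    d = np.asarray(y) - np.asarray(x)
    r = np.linalg.norm(d)
    return (np.eye(3) * s1(r / delta) / r
            + np.outer(d, d) * s2(r / delta) / r**3)
```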
It appears this method would not be successful if applied directly to the double layer potential or the Stokeslet integral without the subtraction. There would be a term in the integrand proportional to \(1/r^{3}\). The equation (5) for the regularization error would then have an additional term which, to first approximation, does not change as \(\delta\) is varied. As a result the extrapolated value of the integral becomes unstable as \(b\to 0\); i.e., the coefficients in the linear combination replacing (9) become large as \(b\to 0\). A similar consideration motivates the expression for \(T_{ijk}\) above. For other kernels general techniques to reduce the singularity could be used if necessary, e.g. [16]. ## 3 Local analysis near the singularity We derive an expansion for the error due to regularizing a singular integral, when evaluated at a point \({\bf y}\) near the surface \(\Gamma\). The expression obtained leads to the formula (5) and the extrapolation strategy used here. The first few terms of the expansion were used in [2, 5, 21] to find corrections to \(O(\delta^{3})\). We begin with the single layer potential (2). The error \(\epsilon\) is the difference between (3) and (2). Given \({\bf y}\) near \(\Gamma\), we assume for convenience that the closest point on \(\Gamma\) is \({\bf x}=0\). Then \({\bf y}=b{\bf n}_{0}\), where \({\bf n}_{0}\) is the outward normal at \({\bf x}=0\) and \(b\) is the signed distance from the surface. We choose coordinates \(\alpha=(\alpha_{1},\alpha_{2})\) on \(\Gamma\) near \({\bf x}=0\) so that \({\bf x}(0)=0\), the metric tensor \(g_{ij}=\delta_{ij}\) at \(\alpha=0\), and the second derivatives \({\bf x}_{ij}\) are normal at \(\alpha=0\). E.g., if the tangent plane at \({\bf x}=0\) is \(\{x_{3}=0\}\), we could use \((\alpha_{1},\alpha_{2})=(x_{1},x_{2})\). Since the error in the integral is negligible for \({\bf x}\) away from \(0\) we can assume the density \(f\) is zero outside this coordinate patch, regard it as a function of \(\alpha\), and write the regularization error as \[\epsilon=\int[G_{\delta}({\bf x}(\alpha)-{\bf y})-G({\bf x}-{\bf y})]f(\alpha) \,dS(\alpha) \tag{28}\] Then \[\epsilon=\frac{1}{4\pi}\int\frac{\mbox{erfc}(r/\delta)}{r}f(\alpha)\,dS( \alpha)\,,\qquad r=|{\bf x}(\alpha)-{\bf y}| \tag{29}\] We can expand \({\bf x}\) near \(0\) as \[{\bf x}(\alpha)={\bf T}_{1}(0)\alpha_{1}+{\bf T}_{2}(0)\alpha_{2}+\sum_{2 \leq|\nu|\leq 4}c_{\nu}\alpha^{\nu}D^{\nu}{\bf x}(0)+O(|\alpha|^{5}) \tag{30}\] Here \({\bf T}_{i}=\partial{\bf x}/\partial\alpha_{i}\), the tangent vector at \({\bf x}(\alpha)\), and we use multi-index notation: \(\nu=(\nu_{1},\nu_{2})\), \(\alpha^{\nu}=\alpha_{1}^{\nu_{1}}\alpha_{2}^{\nu_{2}}\), \(D^{\nu}\) is mixed partial derivative of order \((\nu_{1},\nu_{2})\), and \(|\nu|=\nu_{1}+\nu_{2}\). We will use the notation \(c_{\nu}\) for generic constants whose value will not be needed. We first get an expression for \(r^{2}\). We start with \[|{\bf x}(\alpha)|^{2}=\alpha_{1}^{2}+\alpha_{2}^{2}+\sum_{|\nu|=4,5}c_{\nu} \alpha^{\nu}+O(|\alpha|^{6}) \tag{31}\] There is no term with \(|\nu|=3\) since the first and second order terms in \({\bf x}\) are orthogonal. 
Also \[{\bf x}(\alpha)\cdot{\bf n}_{0}=\sum_{2\leq|\nu|\leq 4}c_{\nu}^{\prime}\alpha^{\nu}+O(|\alpha|^{5}) \tag{32}\] Then \[r^{2}=|{\bf x}(\alpha)-b{\bf n}_{0}|^{2}=|\alpha|^{2}+b^{2}+\sum_{|\nu|=4,5}c_{\nu}\alpha^{\nu}+b\sum_{2\leq|\nu|\leq 4}c_{\nu}^{\prime}\alpha^{\nu}+O(|\alpha|^{6}+|b||\alpha|^{5}) \tag{33}\] We will make a change of variables \(\alpha=(\alpha_{1},\alpha_{2})\to\xi=(\xi_{1},\xi_{2})\) defined by \[|\xi|^{2}+b^{2}=r^{2}\,,\quad\xi/|\xi|=\alpha/|\alpha| \tag{34}\] This allows us to write the error as \[\epsilon=\frac{1}{4\pi}\int\frac{\mbox{erfc}(\sqrt{|\xi|^{2}+b^{2}}/\delta)}{\sqrt{|\xi|^{2}+b^{2}}}w(\xi,b)\,d\xi \tag{35}\] where \[w(\xi,b)=f(\alpha)\left|\frac{\partial\alpha}{\partial\xi}\right|\left|{\bf T}_{1}\times{\bf T}_{2}\right| \tag{36}\] The mapping \(\xi=\xi(\alpha)\) is close to the identity but it is not smooth at \(\alpha=0\), so that we cannot write \(w\) directly in a power series in \((\xi,b)\). We will see that \(w\) is a sum of terms of the form \(b^{m}\xi^{\nu}/|\xi|^{2p}\) with \(|\nu|\geq 2p\), and such a term makes a contribution to the error \(\epsilon\) of order \(\delta^{m+|\nu|-2p+1}\). To obtain a suitable series we need to express \(\alpha\) as a function of \(\xi\). From above we have \[|\xi|^{2}/|\alpha|^{2}=1+\sum_{|\nu|=4,5}c_{\nu}\alpha^{\nu}/|\alpha|^{2}+b\sum_{2\leq|\nu|\leq 4}c^{\prime}_{\nu}\alpha^{\nu}/|\alpha|^{2}+O(|(\alpha,b)|^{4}) \tag{37}\] Here \(O(|(\alpha,b)|^{4})\) means \(O(|\alpha|^{4}+b^{4})\). With \(u=\alpha/|\alpha|=\xi/|\xi|\), we can substitute \(\alpha^{\nu}=u^{\nu}|\alpha|^{|\nu|}\) in (37). We then regard (37) as a power series in \(|\alpha|\) in which the coefficient of \(|\alpha|^{n}\) is a polynomial in \(b\) and \(u\) with terms \(u^{\nu}\) such that \(|\nu|-n\) is even. It is important that this form is preserved by multiplication, and that the \(k\)th term in a product series depends only on the first \(k\) terms in the factors. Using the power series for \((1+x)^{-1/2}\) we can write a similar expression for \(|\alpha|/|\xi|\) with terms as in (37) and their products. The coefficient of \(|\alpha|^{n}\) is a polynomial in \(b\) and \(u\), again with terms \(u^{\nu}\), \(|\nu|-n\) even; the same is true for powers of \(|\alpha|/|\xi|\). We will invert the function \(|\alpha|\to|\xi|\) using the Lagrange Inversion Theorem [23, 12]; the theorem is usually stated for analytic functions, but for \(C^{N}\) functions it can be applied to the Taylor polynomial. We conclude from the theorem that \(|\alpha|\) can be expressed as a power series in \(|\xi|\), with remainder, such that the coefficient of \(|\xi|^{n}\) is proportional to the coefficient of \(|\alpha|^{n-1}\) in the series for \((|\alpha|/|\xi|)^{n}\) as a function of \(|\alpha|\). This quantity has factors \(u^{\nu}\) with \(|\nu|-(n-1)\) even. We now divide this expression for \(|\alpha|\) by \(|\xi|\) so that the earlier parity is restored. Finally we rewrite \(u^{\nu}|\xi|^{n}\) as \(\xi^{\nu}|\xi|^{n-|\nu|}\), and in summary we have shown that \[\frac{|\alpha|}{|\xi|}=\sum c_{m\nu}b^{m}\frac{\xi^{\nu}}{|\xi|^{2p}}+O(|(\xi,b)|^{4}) \tag{38}\] where \(m\geq 0\), \(|\nu|\geq 2p\), and \(m+|\nu|-2p\leq 3\). With \(\alpha_{j}=(|\alpha|/|\xi|)\xi_{j}\) we get a similar expression for \(\alpha\) as a function of \(\xi\). The function \(f(\alpha)\) and the factor \(|{\bf T}_{1}\times{\bf T}_{2}|\) in \(w\) have series in \(\alpha\) which can be converted to \(\xi\).
The Jacobian is \[\left|\frac{\partial\alpha}{\partial\xi}\right|=\mu^{2}+\mu|\xi|\frac{\partial\mu}{\partial|\xi|}\,,\quad\mu=\frac{|\alpha|}{|\xi|} \tag{39}\] It has terms of the same type as those in \(|\alpha|/|\xi|\). The Jacobian has leading term \(1\) and is bounded but not smooth as \(\xi\to 0\). We conclude that \(w\) has the expression \[w(\xi,b)=\sum c_{m\nu}b^{m}\frac{\xi^{\nu}}{|\xi|^{2p}}+R(\xi,b) \tag{40}\] where \(m\geq 0\), \(|\nu|\geq 2p\), \(m+|\nu|-2p\leq 3\), and \(R(\xi,b)=O(|(\xi,b)|^{4})\). To find the contribution \(\epsilon_{m\nu p}\) to the error (35) from a term in (40) with a particular \((m,\nu,p)\) we will integrate in polar coordinates. The angular integral is zero by symmetry unless \(\nu_{1},\,\nu_{2}\) are both even. Let \(n=|\nu|-2p\), the degree of \(\xi\). With the restriction \(m+n\leq 3\) the possible nonzero terms have \(n=0\) and \(0\leq m\leq 3\) or \(n=2\) with \(m=0,1\). To carry out the integration, we rescale variables to \(\xi=\delta\zeta\), \(b=\delta\lambda\), and write \(\zeta\) in polar coordinates. With \(s=|\zeta|\) we obtain \[\epsilon_{m\nu p}=c_{m\nu p}b^{m}\delta^{n+1}I_{n}(\lambda)=c_{m\nu p}\lambda^{m}\delta^{m+n+1}I_{n}(\lambda) \tag{41}\] where \[I_{n}(\lambda)=\int_{0}^{\infty}\frac{\mbox{erfc}(\sqrt{s^{2}+\lambda^{2}})}{\sqrt{s^{2}+\lambda^{2}}}s^{n+1}\,ds \tag{42}\] In a similar way we see that the remainder \(R\) leads to an error which is \(O(\delta^{5})\). In summary we can express the error as \[\epsilon=\delta p_{0}(b)I_{0}(\lambda)+\delta^{3}p_{2}(b)I_{2}(\lambda)+O(\delta^{5}) \tag{43}\] where \(p_{0},p_{2}\) are polynomials in \(b\) with \(\deg p_{0}=3\), \(\deg p_{2}=1\). They depend only on the surface and \(b\), not \(\delta\) or \(\lambda\). For fixed \(b\) and \(h\) they are unknown coefficients. To normalize the equation we set \(\delta=\rho h\) and rewrite it as \[\epsilon=c_{1}\rho I_{0}(\lambda)+c_{2}\rho^{3}I_{2}(\lambda)+O(\delta^{5}) \tag{44}\] This conclusion is equivalent to (8), which we use with three choices of \(\delta\) to solve for the single layer potential within \(O(\delta^{5})\). For the double layer potential, in view of (14) and (11), we can write the error from regularizing as \[\epsilon=\frac{1}{4\pi}\int\phi(r/\delta)\frac{({\bf x}(\alpha)-{\bf y})\cdot{\bf n}(\alpha)}{r^{3}}(g(\alpha)-g(0))\,dS(\alpha) \tag{45}\] where \[\phi(r)=-\,\mbox{erfc}(r)-(2/\sqrt{\pi})re^{-r^{2}} \tag{46}\] and after changing from \(\alpha\) to \(\xi\), \[\epsilon=\frac{1}{4\pi}\int\frac{\phi(\sqrt{|\xi|^{2}+b^{2}}/\delta)}{(|\xi|^{2}+b^{2})^{3/2}}w(\xi,b)\,d\xi \tag{47}\] where now \[w(\xi,b)=[({\bf x}-{\bf y})\cdot{\bf n}][g(\alpha)-g(0)]\left|\frac{\partial\alpha}{\partial\xi}\right|\left|{\bf T}_{1}\times{\bf T}_{2}\right| \tag{48}\] We find \[({\bf x}-{\bf y})\cdot{\bf n}=-b+O(|(\xi,b)|^{2}) \tag{49}\] and note \(g(\alpha)-g(0)=O(|\xi|)\). Thus each term in \(w\) now has at least two additional factors. We expand \(w\) as in (40) but now include terms with \(m+n\leq 5\), where again \(n=|\nu|-2p\). The term \((m,\nu,p)\) now contributes an error of order \(\delta^{m+n-1}\), rather than \(\delta^{m+n+1}\) as before. From the last remark, each nonzero term must have \(m\geq 1\) and \(n\geq 1\) or \(m=0\) and \(n\geq 3\). By symmetry a term that contributes a nonzero error must have \(m\geq 1\) and \(n\geq 2\) or \(m=0\) and \(n\geq 4\). The possible terms with \(m+n\leq 5\) are \((m,2)\) with \(m=1,2,3\) and \((m,4)\) with \(m=0,1\).
Rescaling the integrals we find \[\epsilon=\delta p_{0}(b)J_{0}(\lambda)+\delta^{3}p_{2}(b)J_{2}(\lambda)+O( \delta^{5}) \tag{50}\] with \(\deg p_{0}=3\), \(\deg p_{2}=1\), and \[J_{n}(\lambda)=\int_{0}^{\infty}\frac{\phi(\sqrt{s^{2}+\lambda^{2}})}{(s^{2}+ \lambda^{2})^{3/2}}s^{n+3}\,ds \tag{51}\] In fact \[J_{0}=-2I_{0}\,,\qquad J_{2}=-4I_{2} \tag{52}\] so that (50) is equivalent to (43), and we can solve for the double layer as in (8). The expansions can be carried further in the same manner. For the single layer integral we can refine the error expression (43) to \[\epsilon=\delta p_{0}(b)I_{0}(\lambda)+\delta^{3}p_{2}(b)I_{2}(\lambda)+\delta ^{5}p_{4}(b)I_{4}(\lambda)+O(\delta^{7}) \tag{53}\] For the double layer (50) is replaced by \[\epsilon=\delta p_{0}(b)J_{0}(\lambda)+\delta^{3}p_{2}(b)J_{2}(\lambda)+ \delta^{5}p_{4}(b)J_{4}(\lambda)+O(\delta^{7}) \tag{54}\] Each of these expressions leads to a system of four equations in four unknowns, using four different choices of \(\delta\). In fact \(J_{4}=-6I_{4}\), so that again we may use the same equations for both cases. For the Stokes single layer integral, calculated in the form (20), (22), the first term is equivalent to the single layer potential (2). The second term resembles the double layer (10). We note the integrand has a factor \(\tilde{\bf f}({\bf x})\cdot({\bf y}-{\bf x})\) with \(\tilde{\bf f}({\bf x})={\bf f}({\bf x})-({\bf f}({\bf x}_{0})\cdot{\bf n}({\bf x }_{0})){\bf n}({\bf x})\). Thus \(\tilde{\bf f}\cdot{\bf n}=0\) at \({\bf x}={\bf x}_{0}\), and since \({\bf y}-{\bf x}=b{\bf n}({\bf x}_{0})+O(\xi)\), the numerator of the integrand is \(O(\xi)\). The discussion above for the double layer now applies to this second term, leading to the same expression for the error. For the Stokes double layer integral, with the subtraction (21) and the kernel rewritten as in (26), the first term is again like the harmonic double layer. For the second term, regularized with \(s_{3}\), the numerator in the expansion will have terms \(O(\xi^{3})\) or higher. By symmetry the terms that contribute nonzero error have \(O(\xi^{4})\) or higher. We get an expansion for the error in the second term in the form \[\epsilon=\delta p_{0}(b)K_{0}(\lambda)+\delta^{3}p_{2}(b)K_{2}(\lambda)+O( \delta^{5}) \tag{55}\] with \[K_{n}(\lambda)=\int_{0}^{\infty}\frac{\phi_{3}(\sqrt{s^{2}+\lambda^{2}})}{(s^ {2}+\lambda^{2})^{5/2}}s^{n+5}\,ds \tag{56}\] and \(\phi_{3}=1-s_{3}\). We find that \(K_{0}=(8/3)I_{0}\) and \(K_{2}=8I_{2}\), so that once again we can use (8) for extrapolation. ## 4 Surface quadrature and the discretization error We use a quadrature rule for surface integrals introduced in [24] and used in [2, 5, 21]. We cover the surface with a three-dimensional grid with spacing \(h\). The quadrature points have the form \({\bf x}=(ih,jh,x_{3})\), i.e., points on the surface \(\Gamma\) whose projections on the \((x_{1},x_{2})\) plane are grid points, and similarly for the other two directions. We only use points for which the component of the normal vector in the distinguished direction is no smaller than \(\cos\theta\) for a chosen angle \(\theta\). In our case we take \(\theta=70^{o}\). The weights are determined by a partition of unity \(\psi_{1},\psi_{2},\psi_{3}\) on the unit sphere; it is applied to the normal vector at each point. 
We define three sets of quadrature points \(\Gamma_{1},\Gamma_{2},\Gamma_{3}\) as \[\Gamma_{3}=\{{\bf x}=(ih,jh,x_{3})\in\Gamma:|n_{3}({\bf x})|\geq\cos\theta\} \tag{57}\] where \(n_{3}\) means the third component of the normal vector, and similarly for \(\Gamma_{1},\Gamma_{2}\). To construct the partition of unity we start with the bump function \[b(r)=\exp(ar^{2}/(r^{2}-1))\,,\quad|r|<1\,;\qquad b(r)=0\,,\quad|r|\geq 1 \tag{58}\] Here \(a\) is a parameter. For a unit vector \({\bf n}=(n_{1},n_{2},n_{3})\) we define \[\beta_{i}({\bf n})=b(\cos^{-1}(|n_{i}|)/\theta)\,,\quad\psi_{i}({\bf n})=\beta_{i}({\bf n})/\left(\sum_{j=1}^{3}\beta_{j}({\bf n})\right) \tag{59}\] The quadrature rule for a surface integral with integrand \(f\) is \[\int_{\Gamma}f({\bf x})\,dS({\bf x})\,\approx\,\sum_{i=1}^{3}\sum_{{\bf x}\in\Gamma_{i}}f({\bf x})w_{i}({\bf x})\,,\qquad w_{i}({\bf x})=\frac{\psi_{i}({\bf n}({\bf x}))}{|n_{i}({\bf x})|}\,h^{2} \tag{60}\] It has high order accuracy as allowed by the smoothness of the surface and the integrand. In earlier work we chose the parameter \(a\) to be \(1\). Here we use \(a=2\). We have found from error estimates in [3], discussed below, as well as numerical experiments, that the discretization error is controlled better with this choice. We do not recommend using \(a>2\) because of increased derivatives. The full error in this method consists of the regularization error plus the discretization error; symbolically \[\sum_{\delta}\,-\int\,=\,\left(\int_{\delta}\,-\,\int\right)\,+\,\left(\sum_{\delta}\,-\int_{\delta}\right) \tag{61}\] For either the single layer potential (2) or the double layer (11) the discretization error can be written as \[c_{1}h\,+\,C_{2}h^{2}\exp(-c_{0}\delta^{2}/h^{2})\,+\,O(\delta^{5})\,,\quad c_{1}=c_{1}(\delta/h) \tag{62}\] which at first sight appears to be only first-order accurate. Formulas for the first term were given in [2, 5], based on approximating the surface locally as a plane. They can be used as corrections. Estimates for these formulas were given in [3]. With the parameter choices here, in particular with \(\delta/h\geq 2\), it was shown that \[c_{1}^{(S)}\leq 2.1\cdot 10^{-7}\max|f|\,,\quad c_{1}^{(D)}\leq 8.3\cdot 10^{-7}\max|\nabla g| \tag{63}\] for the single and double layer respectively, and they decrease rapidly as \(\delta/h\) increases. Here \(\nabla g\) means the tangential gradient. The \(h^{2}\) term in (62) evidently decreases rapidly as \(\delta/h\) increases, as does \(c_{1}\). With \(\theta=70^{o}\), \(c_{0}\approx 1.15\); see [5], Sect. 3.4. However \(C_{2}\) depends on the surface and integrand and could be large. With moderate resolution we expect that the discretization error is controlled by the regularization. If desired the formulas for \(c_{1}h\) in [5] could be used as corrections with the present method; they are infinite series, but only the first few terms are significant. To ensure that the regularization error dominates the discretization error for small \(h\) we can choose \(\delta\) proportional to \(h^{q}\), with \(q<1\), so that \(\delta/h\) increases as \(h\to 0\).

## 5 Numerical examples

We present examples computing single and double layer integrals at grid points within \(O(h)\) of a surface, for harmonic potentials and for Stokes flow. The points are selected from the three-dimensional grid with spacing \(h\) which determines the quadrature points on the surface, as described in Sect. 4. With the fifth order regularization the results are in general agreement with the theoretical predictions.
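To help in reproducing the examples that follow, here is a minimal sketch of the quadrature pieces (58)-(60) from Sect. 4, with \(\theta=70^{o}\) and \(a=2\); the surface points and unit normals are assumed to be available from some surface representation:

```python
import numpy as np

THETA = np.deg2rad(70.0)
A_BUMP = 2.0                              # bump parameter a in eq. (58)

def bump(r):                              # eq. (58)
    r = np.abs(np.asarray(r, dtype=float))
    out = np.zeros_like(r)
    inside = r < 1.0
    out[inside] = np.exp(A_BUMP * r[inside]**2 / (r[inside]**2 - 1.0))
    return out

def psi(n):                               # eq. (59): partition of unity at unit normal n
    beta = bump(np.arccos(np.abs(n)) / THETA)
    return beta / beta.sum()

def weight(n, i, h):                      # eq. (60): weight for a point of Gamma_i
    return psi(n)[i] / abs(n[i]) * h**2
```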
With moderate resolution and \(\delta/h\) constant the errors are about \(O(h^{5})\). With \(\delta\) proportional to \(h^{4/5}\) the error is about \(O(h^{4})\). For the harmonic potentials we also test the seventh order method; the errors are typically smaller but the order of accuracy is less predictable. It is likely that the discretization error is relatively more significant with the smaller errors of the seventh order case. We report maximum errors and \(L^{2}\) errors, defined as \[\|e\|_{L^{2}}=\left(\sum_{\bf y}|e({\bf y})|^{2}/N\right)^{1/2} \tag{64}\] where \(e({\bf y})\) is the error at \({\bf y}\) and \(N\) is the number of points. We present absolute errors; for comparison we give approximate norms of the exact solution.

**Harmonic Potentials.** We begin with known solutions on the unit sphere. We test the single and double layer separately. We compute the integrals at grid points first within distance \(h\) and then on shells at increasing distance. In the latter case we also find values computed without regularization. We then compute known harmonic functions on three other surfaces which combine single and double layers. The single and double layer potentials, (2) and (10), are harmonic inside and outside the surface \(\Gamma\). They are characterized by the jump conditions \[[{\cal S}({\bf x})]=0\,,\quad[\partial{\cal S}({\bf x})/\partial{\bf n}]=f({\bf x}) \tag{65}\] \[[{\cal D}({\bf x})]=-g({\bf x})\,,\quad[\partial{\cal D}({\bf x})/\partial{\bf n}]=0 \tag{66}\] where \([\cdot]\) means the value outside \(\Gamma\) minus the value inside. For the unit sphere we use the spherical harmonic function \[f({\bf x})=1.75(x_{1}-2x_{2})(7.5x_{3}^{2}-1.5)\,,\quad|{\bf x}|=1 \tag{67}\] for both the single and double layer integrals. The functions \[u_{-}({\bf y})=r^{3}f({\bf y}/r)\,,\quad u_{+}({\bf y})=r^{-4}f({\bf y}/r)\,,\quad r=|{\bf y}| \tag{68}\] are both harmonic. We define \({\cal S}({\bf y})\) by (2) and \({\cal D}({\bf y})\) by (10) with \(g=f\). They are determined by the jump conditions, \[{\cal S}({\bf y})=-(1/7)u_{-}({\bf y})\,,\quad|{\bf y}|<1\,;\quad\ {\cal S}({\bf y})=-(1/7)u_{+}({\bf y})\,,\quad|{\bf y}|>1 \tag{69}\] \[{\cal D}({\bf y})=(4/7)u_{-}({\bf y})\,,\quad|{\bf y}|<1\,;\quad\ {\cal D}({\bf y})=-(3/7)u_{+}({\bf y})\,,\quad|{\bf y}|>1 \tag{70}\] We present errors for the single and double layer potentials at grid points at various distances from the sphere. We begin with the single layer. We compute the integral as in (3) and extrapolate as in (8). Near the sphere the maximum of \(|{\cal S}|\) is about 1.15 and the \(L^{2}\) norm is about 0.50. Table 1 shows the \(L^{2}\) and maximum errors for grid points within distance \(h\) of the sphere, using fifth or seventh order extrapolation. For the fifth order we take \(\delta/h=2,3,4\) as previously described, and for the seventh order we take \(\delta/h=2,3,4,5\). The expected order of accuracy is evident in the fifth order case; the seventh order method has somewhat smaller errors but does not have a discernible order of accuracy, probably because the discretization error is significant. In subsequent tables we display the errors at nearby grid points at distance between \(mh\) and \((m+1)h\) from the sphere, both inside and outside, for \(m=1,2,3\). We compute the integral with no regularization as well as the fifth and seventh order methods. Table 2 shows errors for \(m=1\) and Table 3 for \(m=2\) and 3. The values without regularization in Table 2 appear to be about \(O(h)\) accurate.
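The exact reference values (67)-(70) used for these sphere errors are immediate to code; a small sketch (ours):

```python
import numpy as np

def f_harm(x):                       # eq. (67), defined for |x| = 1
    return 1.75 * (x[0] - 2.0*x[1]) * (7.5*x[2]**2 - 1.5)

def S_exact(y):                      # eq. (69)
    r = np.linalg.norm(y)
    val = f_harm(np.asarray(y) / r)
    return -(1.0/7.0) * (r**3 if r < 1.0 else r**-4) * val

def D_exact(y):                      # eq. (70), with g = f
    r = np.linalg.norm(y)
    val = f_harm(np.asarray(y) / r)
    return (4.0/7.0) * r**3 * val if r < 1.0 else -(3.0/7.0) * r**-4 * val
```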
The fifth order method again has the expected order of accuracy at least for \(m=1\) but becomes less steady with distance. The errors become smaller overall as the distance increases. Beyond \(4h\) the error without regularization is quite small, suggesting that we can discontinue the regularization for points at least \(4h\) from the surface. In Tables 4, 5, 6 we present results of the same type for the double layer potential, computed as in (14). They are similar in behavior to those for the single layer. The maximum of \(|\mathcal{D}|\) is about 4.6 and \(\|\mathcal{D}\|_{L^{2}}\approx 1.8\). For the remaining tests on other surfaces we use a procedure as in [5] which allows us to have known solutions with an arbitrary surface \(\Gamma\). This provides a test of the single and double layer combined, rather than separately. We choose harmonic functions \(u_{+}\) outside and \(u_{-}\) inside. We set \(f=[\partial u/\partial n]\) and \(g=-[u]\), the jumps across \(\Gamma\) as in (65), (66) above. Then assuming \(u_{+}\) decays at infinity, \(u(\mathbf{y})=\mathcal{S}(\mathbf{y})+\mathcal{D}(\mathbf{y})\) on both sides, where \(\mathcal{S}\) and \(\mathcal{D}\) are defined in (2), (10). We choose \[u_{-}(\mathbf{y})=(\sin y_{1}+\sin y_{2})\exp y_{3}\,,\quad u_{+}(\mathbf{y})=0 \tag{71}\] In these tests we again use \(\delta/h=2,3,4\) with the fifth order method and \(\delta/h=2,3,4,5\) with seventh order. We also choose \(\delta\) proportional to \(h^{4/5}\) with the fifth order method and \(h^{4/7}\) with the seventh order method, so that the predicted order of error is \(O(h^{4})\). We choose constants so that \(\delta\) agrees with the earlier choice at \(1/h=64\). Our first surface with this procedure is a rotated ellipsoid \[\frac{z_{1}^{2}}{a^{2}}+\frac{z_{2}^{2}}{b^{2}}+\frac{z_{3}^{2}}{c^{2}}=1 \tag{72}\] where \(a=1\), \(b=.8\), \(c=.6\) and \(\mathbf{z}=M\mathbf{x}\), where \(M\) is the orthogonal matrix \[M=\frac{1}{\sqrt{6}}\begin{pmatrix}\sqrt{2}&0&-2\\ \sqrt{2}&\sqrt{3}&1\\ \sqrt{2}&-\sqrt{3}&1\end{pmatrix} \tag{73}\] We present results in two tables. In Table 7 we evaluate at all grid points within distance \(h\) with both regularizations.

\begin{table} \begin{tabular}{|c||c|c|c|c||c|c|c|} \hline \multirow{2}{*}{\(1/h\)} & \multicolumn{3}{c||}{\(2h<\) distance \(<3h\)} & \multicolumn{3}{c|}{\(3h<\) distance \(<4h\)} \\ & no reg’n & 5th order & 7th order & no reg’n & 5th order & 7th order \\ \hline 32 & 5.04e-7 & 4.13e-7 & 6.28e-8 & 4.51e-8 & 8.64e-8 & 4.16e-8 \\ \hline 64 & 1.77e-7 & 1.18e-8 & 6.10e-10 & 2.90e-9 & 1.74e-9 & 2.35e-10 \\ \hline 128 & 8.89e-8 & 3.77e-10 & 5.43e-11 & 1.24e-9 & 5.83e-11 & 1.29e-11 \\ \hline \end{tabular}
\end{table} Table 3: \(L^{2}\) errors in the single layer potential on the unit sphere, evaluated at distance between \(2h\) and \(3h\) or \(3h\) and \(4h\).

\begin{table} \begin{tabular}{|c||c|c|c|c||c|c|c|c|} \hline \multirow{2}{*}{\(1/h\)} & \multicolumn{3}{c||}{\(5\)th order} & \multicolumn{3}{c|}{\(7\)th order} \\ & \(L^{2}\) err & ratio & max err & ratio & \(L^{2}\) err & ratio & max err & ratio \\ \hline 32 & 7.60e-6 & & 2.83e-5 & & 2.43e-7 & & 3.57e-6 & \\ \hline 64 & 2.39e-7 & 31.8 & 8.84e-7 & 32.0 & 5.20e-9 & 46.7 & 8.27e-8 & 43.2 \\ \hline 128 & 7.48e-9 & 31.9 & 2.98e-8 & 29.7 & 5.20e-10 & 10.0 & 8.30e-9 & 10.0 \\ \hline \end{tabular}
\end{table} Table 1: Errors for the single layer potential on the unit sphere, at grid points within distance \(h\), computed with the 5th and 7th order regularization.
Table 8 has values at points \({\bf y}\) within distance \(h\) in the first octant, i.e., those with \(y_{1},y_{2},y_{3}\geq 0\). The accuracy of the fifth order version is close to the prediction; the seventh order version has smaller errors in Table 8 and perhaps approximates the predicted order \(O(h^{4})\) but not clearly so. For the first table the \(L^{2}\) norm of the exact solution is about 0.5 and the maximum about 1.7. For the second, within the first octant, they are about 0.76 and 1.4. The next example is a surface obtained by revolving a Cassini oval about the \(x_{3}\) axis, \[(x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+a^{2})^{2}-4a^{2}(x_{1}^{2}+x_{2}^{2})=b^{2} \tag{74}\] with \(a=.65\) and \(b=.7\). The final surface represents a molecule with four atoms, \[\sum_{k=1}^{4}\exp(-|\mathbf{x}-\mathbf{x}_{k}|^{2}/r^{2})=c \tag{75}\] with \(r=.5\), \(c=.6\), and \(\mathbf{x}_{k}\) given by \[(\sqrt{3}/3,0,-\sqrt{6}/12)\,,\;(-\sqrt{3}/6,\pm.5,-\sqrt{6}/12)\,,\;(0,0,\sqrt{6}/4) \tag{76}\] We compute the solution for grid points in the first octant as before for the ellipsoid, with \(\delta\) related to \(h\) in the same way. We present errors with fifth or seventh order regularization, with \(\delta\) proportional to \(h\) or fractional. The results, reported in Tables 9 and 10, are generally similar to those for the rotated ellipsoid.

\begin{table} \begin{tabular}{|c||c|c|c||c|c|c|} \hline \multirow{2}{*}{\(1/h\)} & \multicolumn{2}{c|}{\(2h<\) distance \(<3h\)} & \multicolumn{2}{c|}{\(3h<\) distance \(<4h\)} \\ & no reg’n & 5th order & 7th order & no reg’n & 5th order & 7th order \\ \hline 32 & 7.99e-6 & 1.08e-6 & 2.31e-7 & 2.75e-7 & 1.48e-7 & 1.36e-7 \\ \hline 64 & 3.31e-6 & 3.62e-8 & 6.45e-9 & 6.76e-8 & 6.14e-9 & 2.52e-9 \\ \hline 128 & 1.58e-6 & 1.37e-9 & 8.32e-10 & 2.97e-8 & 3.04e-10 & 2.60e-10 \\ \hline \end{tabular}
\end{table} Table 6: \(L^{2}\) errors in the double layer potential on the unit sphere, evaluated at distance between \(2h\) and \(3h\) or \(3h\) and \(4h\).

\begin{table} \begin{tabular}{|c||c|c|c|c||c|c|c|} \hline \multirow{2}{*}{\(1/h\)} & \multicolumn{3}{c||}{\(5\)th order} & \multicolumn{3}{c|}{\(7\)th order} \\ & \(L^{2}\) err & ratio & max err & ratio & \(L^{2}\) err & ratio & max err & ratio \\ \hline 32 & 2.25e-5 & & 8.13e-5 & & 3.88e-7 & & 5.22e-6 & \\ \hline 64 & 7.17e-7 & 31.4 & 2.61e-6 & 31.1 & 1.31e-8 & 29.7 & 2.79e-7 & 18.7 \\ \hline 128 & 2.25e-8 & 31.9 & 8.29e-8 & 31.5 & 1.62e-9 & 8.1 & 3.57e-8 & 7.8 \\ \hline \end{tabular}
\end{table} Table 4: Errors for the double layer potential on the unit sphere, at grid points within distance \(h\), computed with the 5th and 7th order regularization.

\begin{table} \begin{tabular}{|c||c|c||c|c||c|c|} \hline \multirow{2}{*}{\(1/h\)} & \multicolumn{3}{c||}{no regularization} & \multicolumn{3}{c||}{\(5\)th order} & \multicolumn{3}{c|}{\(7\)th order} \\ & \(L^{2}\) err & max err & \(L^{2}\) err & max err & \(L^{2}\) err & max err \\ \hline 32 & 2.78e-4 & 2.70e-3 & 5.46e-6 & 2.17e-5 & 3.76e-7 & 5.05e-6 \\ \hline 64 & 1.39e-4 & 1.45e-3 & 1.79e-7 & 7.11e-7 & 1.27e-8 & 2.43e-7 \\ \hline 128 & 6.87e-5 & 7.27e-4 & 5.77e-9 & 2.53e-8 & 1.66e-9 & 3.18e-8 \\ \hline \end{tabular}
\end{table} Table 5: Errors for the double layer potential on the unit sphere, evaluated at distance between \(h\) and \(2h\), without regularization and with the 5th and 7th order methods.
For seventh order the errors are smaller, but the accuracy in the fractional case is somewhat less than fourth order in \(h\). For the Cassini surface the \(L^{2}\) norm for the exact values is about 0.78 and the maximum is about 1.45. For the molecular surface they are about 0.57 and 1.0.

\begin{table} \begin{tabular}{|c||c|c|c|c||c|c|c|c|} \hline & \multicolumn{6}{|c||}{5th order} & \multicolumn{6}{|c|}{7th order} \\ \(1/h\) & \multicolumn{2}{|c|}{\(\delta=\rho h\)} & \multicolumn{2}{|c||}{\(\delta=\rho h^{4/5}\)} & \multicolumn{2}{|c|}{\(\delta=\rho h\)} & \multicolumn{2}{|c|}{\(\delta=\rho h^{4/7}\)} \\ & \(L^{2}\) err & max err & \(L^{2}\) err & max err & \(L^{2}\) err & max err & \(L^{2}\) err & max err \\ \hline 32 & 5.80e-5 & 2.51e-4 & 3.09e-5 & 1.81e-4 & 2.10e-5 & 1.45e-4 & 1.58e-5 & 1.17e-4 \\ \hline 64 & 2.40e-6 & 1.33e-5 & 2.40e-6 & 1.33e-5 & 4.24e-7 & 3.92e-6 & 4.24e-7 & 3.92e-6 \\ \hline 128 & 8.40e-8 & 4.98e-7 & 1.75e-7 & 9.77e-7 & 5.57e-9 & 7.86e-8 & 3.92e-8 & 3.03e-7 \\ \hline 256 & 2.73e-9 & 1.82e-8 & 1.20e-8 & 7.15e-8 & 1.55e-10 & 2.51e-9 & 3.22e-9 & 2.60e-8 \\ \hline \end{tabular}
\end{table} Table 10: Errors for the molecular surface, at grid points within distance \(h\) in the first octant; 5th and 7th order method with \(\delta\) proportional to \(h\) or corresponding to \(h^{4}\).

\begin{table} \begin{tabular}{|c||c|c|c|c||c|c|c|c|} \hline \multirow{2}{*}{\(1/h\)} & \multicolumn{6}{|c||}{5th order} & \multicolumn{6}{|c|}{7th order} \\ & \(L^{2}\) err & ratio & max err & ratio & \(L^{2}\) err & ratio & max err & ratio \\ \hline 32 & 1.47e-5 & & 1.88e-4 & & 2.60e-6 & & 3.28e-5 & \\ \hline 64 & 5.02e-7 & 29.3 & 7.10e-6 & 26.5 & 3.01e-8 & 86.4 & 7.08e-7 & 46.4 \\ \hline 128 & 1.61e-8 & 31.2 & 2.40e-7 & 29.6 & 5.26e-10 & 57.3 & 1.53e-8 & 46.4 \\ \hline \end{tabular}
\end{table} Table 7: Errors for the single and double layers on a rotated ellipsoid at grid points within distance \(h\), with the 5th order and 7th order methods, \(\delta\) proportional to \(h\).

\begin{table} \begin{tabular}{|c||c|c|c|c||c|c|c|c|} \hline & \multicolumn{6}{|c||}{5th order} & \multicolumn{6}{|c|}{7th order} \\ \(1/h\) & \multicolumn{2}{|c|}{\(\delta=\rho h\)} & \multicolumn{2}{|c||}{\(\delta=\rho h^{4/5}\)} & \multicolumn{2}{|c|}{\(\delta=\rho h\)} & \multicolumn{2}{|c|}{\(\delta=\rho h^{4/7}\)} \\ & \(L^{2}\) err & max err & \(L^{2}\) err & max err & \(L^{2}\) err & max err & \(L^{2}\) err & max err \\ \hline 32 & 5.16e-5 & 5.44e-4 & 2.75e-5 & 3.00e-4 & 1.98e-5 & 2.24e-4 & 8.06e-6 & 4.47e-5 \\ \hline 64 & 2.46e-6 & 3.98e-5 & 2.46e-6 & 3.98e-5 & 4.82e-7 & 9.99e-6 & 4.82e-7 & 9.99e-6 \\ \hline 128 & 8.64e-8 & 1.57e-6 & 1.79e-7 & 3.07e-6 & 5.59e-9 & 1.22e-7 & 4.18e-8 & 8.67e-7 \\ \hline 256 & 2.81e-9 & 5.15e-8 & 1.23e-8 & 2.05e-7 & 1.14e-10 & 1.34e-9 & 3.39e-9 & 7.44e-8 \\ \hline \end{tabular}
\end{table} Table 9: Errors for the Cassini oval surface, at grid points within distance \(h\) in the first octant; 5th and 7th order method with \(\delta\) proportional to \(h\) or corresponding to \(h^{4}\).

**Stokes Flow.** We present examples of three types. First we calculate the velocity near a translating spheroid in Stokes flow, given as a single layer integral. We then compute a standard identity for the double layer integral. Finally we compute a velocity that combines single and double layer integrals on an arbitrary surface, as in the examples above with harmonic potentials.
We have increased \(\rho\) to \((3,4,5)\) to make the order of accuracy more evident, even though errors are typically smaller with \((2,3,4)\). In each case we report errors at grid points within distance \(h\) of the surface. In our first example we compare the single layer or Stokeslet integral with an exact solution. We compute the Stokes flow around a prolate spheroid \[x_{1}^{2}+4x_{2}^{2}+4x_{3}^{2}=1 \tag{77}\] with semi-axes 1, .5, .5, translating with velocity \((1,0,0)\). The fluid velocity is determined by the integral (19a) from the surface traction \({\bf f}\). Formulas for the solution are given in [6, 13, 21]. The surface traction is \[{\bf f}({\bf x})=\left(f_{1}({\bf x}),0,0\right),\quad f_{1}({\bf x})=\frac{F_{0}}{\sqrt{1-3x_{1}^{2}/4}}\] where \(F_{0}\) is a constant. We compute the fluid velocity \({\bf u}\) as in (20),(22) and extrapolate as before. Results are presented in Table 11. The exact solution has maximum amplitude 1 and \(L^{2}\) norm about 1.

\begin{table} \begin{tabular}{|c|c|c|c|c||c|c|c|c|} \hline \multirow{2}{*}{\(1/h\)} & \multicolumn{3}{|c|}{\(\delta=\rho h\), \(\rho=(3,4,5)\)} & \multicolumn{3}{|c|}{\(\delta=\rho h^{4/5}\), \(\rho=(3,4,5)/(64)^{1/5}\)} \\ & \(L^{2}\) err & ratio & max err & ratio & \(L^{2}\) err & ratio & max err & ratio \\ \hline 32 & 3.35e-4 & & 3.27e-3 & & 1.85e-4 & & 1.99e-3 & \\ \hline 64 & 1.65e-5 & 20.3 & 2.03e-4 & 16.1 & 1.65e-5 & 11.2 & 2.03e-4 & 9.8 \\ \hline 128 & 6.02e-7 & 27.4 & 8.09e-6 & 25.1 & 1.23e-6 & 13.4 & 1.57e-5 & 12.9 \\ \hline 256 & 1.95e-8 & 30.9 & 2.71e-7 & 29.9 & 8.34e-8 & 14.7 & 1.06e-6 & 14.8 \\ \hline \end{tabular}
\end{table} Table 11: Errors for the Stokes single layer on a prolate spheroid, at grid points within distance \(h\) outside the spheroid.

Next we test the double layer integral (19b) using the identity (2.3.19) from [17] \[\frac{1}{8\pi}\epsilon_{jlm}\int_{\Gamma}x_{m}T_{ijk}({\bf x_{0}},{\bf x})n_{k}({\bf x})dS({\bf x})=\chi({\bf x_{0}})\epsilon_{ilm}x_{0,m} \tag{78}\] where \(\chi=1\), \(1/2\), \(0\) when \({\bf x}_{0}\) is inside, on, and outside the boundary. We set \(l=1\) and define \(q_{j}({\bf x})=\epsilon_{j1m}x_{m}=(0,-x_{3},x_{2})\). We compute the integral according to (21), (23), (26) and extrapolate. We report errors for a sphere and for the spheroid (77) in Tables 12 and 13. For the sphere the maximum value is 1 and the \(L^{2}\) norm is about 0.57. For the spheroid the maximum is \(\approx.5\) and the \(L^{2}\) norm is \(\approx.3\). In order to test integrals on general surfaces we again use a formula combining the single and double layer integrals. If \({\bf u}\) is the velocity of Stokes flow outside and inside a surface \(\Gamma\), with suitable decay at infinity, then \[u_{i}({\bf y})=-\frac{1}{8\pi}\int_{\Gamma}S_{ij}({\bf y},{\bf x})[f]_{j}({\bf x})dS({\bf x})-\frac{1}{8\pi}\int_{\Gamma}T_{ijk}({\bf y},{\bf x})[u]_{j}({\bf x})n_{k}({\bf x})dS({\bf x}) \tag{79}\] Here \([f]=f^{+}-f^{-}\) is the jump in surface force, outside minus inside, and \([u]\) is the jump in velocity. The surface force is the normal stress, \(f^{\pm}=\sigma^{\pm}\cdot{\bf n}\), where \({\bf n}\) is the outward normal. The jump conditions are derived e.g. in [17]. As a test problem we take the inside velocity to be the Stokeslet due to a point force singularity of strength \({\bf b}=(4\pi,0,0)\), placed at \({\bf y}_{0}=(2,0,0)\).
The velocity is \[u_{i}^{-}({\bf y})=\frac{1}{8\pi}S_{ij}b_{j}=\frac{1}{8\pi}\Big{(}\frac{\delta_{ij}}{r}+\frac{\hat{y}_{i}\hat{y}_{j}}{r^{3}}\Big{)}b_{j} \tag{80}\] and the stress tensor is \[\sigma_{ik}^{-}({\bf y})=\frac{1}{8\pi}T_{ijk}b_{j}=\frac{-6}{8\pi}\frac{\hat{y}_{i}\hat{y}_{j}\hat{y}_{k}}{r^{5}}b_{j} \tag{81}\] where \(\hat{\bf y}={\bf y}-{\bf y}_{0}\), \(r=|\hat{\bf y}|\). We choose the outside velocity and stress to be zero. We compute the two integrals in the same manner as above. We present results for three surfaces: the unit sphere, Table 14; an ellipsoid with semi-axes 1, .8, .6, Table 15; and the molecular surface (75), Table 16. For the first two surfaces, the errors are at all grid points within \(h\), but for the molecular surface the points are in the first octant only. For the sphere or ellipsoid the maximum velocity magnitude is \(\approx 1\) and the \(L^{2}\) norms are \(\approx.35\) and \(.37\), respectively. For the molecular surface they are \(\approx.9\) and \(\approx.4\).

## Declarations

### Conflict of interest

The authors declare no competing interests.

## Acknowledgment

The work of ST was supported by the National Science Foundation grant DMS-2012371.
2309.03415
The influence of black holes on the binary population of the globular cluster Palomar 5
The discovery of stellar-mass black holes (BHs) in globular clusters (GCs) raises the possibility of long-term retention of BHs within GCs. These BHs influence various astrophysical processes, including merger-driven gravitational waves and the formation of X-ray binaries. They also impact cluster dynamics by heating and creating low-density cores. Previous N-body models suggested that Palomar 5, a low-density GC with long tidal tails, may contain more than 100 BHs. To test this scenario, we conduct N-body simulations of Palomar 5 with primordial binaries to explore the influence of BHs on binary populations and the stellar mass function. Our results show that primordial binaries have minimal effect on the long-term evolution. In dense clusters with BHs, the fraction of wide binaries with periods >$10^5$ days decreases, and the disruption rate is independent of the initial period distribution. Multi-epoch spectroscopic observations of line-of-sight velocity changes can detect most bright binaries with periods below $10^4$ days, significantly improving velocity dispersion measurements. Four BH-MS binaries in the model with BHs suggests their possible detection through the same observation method. Including primordial binaries leads to a flatter inferred mass function because of spatially unresolved binaries, leading to a better match of the observations than models without binaries, particularly in Palomar 5's inner region. Future observations should focus on the cluster velocity dispersion and binaries with periods of $10^4-10^5$ days in Palomar 5's inner and tail regions to constrain BH existence.
Long Wang, Mark Gieles, Holger Baumgardt, Chengyuan Li, Xiaoying Pang, Baitian Tang
2023-09-07T00:24:31Z
http://arxiv.org/abs/2309.03415v1
# The influence of black holes on the binary population of the globular cluster Palomar 5

###### Abstract

The discovery of stellar-mass black holes (BHs) in globular clusters (GCs) raises the possibility of long-term retention of BHs within GCs. These BHs influence various astrophysical processes, including merger-driven gravitational waves and the formation of X-ray binaries. They also impact cluster dynamics by heating and creating low-density cores. Previous \(N\)-body models suggested that Palomar 5, a low-density GC with long tidal tails, may contain more than 100 BHs. To test this scenario, we conduct \(N\)-body simulations of Palomar 5 with primordial binaries to explore the influence of BHs on binary populations and the stellar mass function. Our results show that primordial binaries have minimal effect on the long-term evolution. In dense clusters with BHs, the fraction of wide binaries with periods \(>\)10\({}^{5}\) days decreases, and the disruption rate is independent of the initial period distribution. Multi-epoch spectroscopic observations of line-of-sight velocity changes can detect most bright binaries with periods below 10\({}^{4}\) days, significantly improving velocity dispersion measurements. The four BH-MS binaries in the model with BHs suggest their possible detection through the same observational method. Including primordial binaries leads to a flatter inferred mass function because of spatially unresolved binaries, leading to a better match of the observations than models without binaries, particularly in Palomar 5's inner region. Future observations should focus on the cluster velocity dispersion and binaries with periods of 10\({}^{4}\) - 10\({}^{5}\) days in Palomar 5's inner and tail regions to constrain BH existence.

## 1 Introduction

Following several detections of stellar-mass black hole (BH) candidates through X-ray and radio observations (Strader et al., 2012; Chomiuk et al., 2013; Miller-Jones et al., 2015; Bahramian et al., 2017) and via radial velocity measurements (Giesers et al., 2018, 2019) in globular clusters (GCs), the long-term dynamical impact of BHs in GCs has been extensively studied (e.g. Breen and Heggie, 2013; Morscher et al., 2013, 2015; Sippel and Hurley, 2013; Heggie and Giersz, 2014; Wang et al., 2016; Sollima et al., 2016; Peuten et al., 2016; Rodriguez et al., 2016; Askar et al., 2018; Weatherford et al., 2020; Wang, 2020; Weatherford et al., 2021; Wang et al., 2021; Gieles and Gnedin, 2023). Investigating the BH population is also crucial for constraining the massive end of the initial mass function (IMF) (e.g. Shanahan and Gieles, 2015; Chatterjee et al., 2017; Henault-Brunet et al., 2020; Baumgardt et al., 2023; Dickson et al., 2023). Breen and Heggie (2013) demonstrated that the presence of BH subsystems significantly impacts the evolution of star clusters, with BHs forming binary BHs (BBHs) and controlling the central energy flow. Wang (2020) further showed that a large fraction of BHs would accelerate the relaxation process and lead to faster tidal disruption of GCs. In the case of a top-heavy IMF in GCs, a prominent core of bright stars tends to emerge (Chatterjee et al., 2017; Giersz et al., 2019; Weatherford et al., 2021; Wang et al., 2021). Therefore, to constrain the massive end of the IMF, comparisons between dynamical models and observations of GCs are required. Palomar 5 (Pal 5) is a Galactic GC renowned for its long tidal streams and unusually low central density (e.g.
Rockosi et al., 2002; Odenkirchen et al., 2001, 2002, 2003; Koch et al., 2004; Odenkirchen et al., 2009; Carlberg et al., 2012; Kuzma et al., 2015; Ishigaki et al., 2016; Price-Whelan et al., 2019; Bonaca et al., 2020; Starkman et al., 2020), which suggests the possible presence of a substantial number of BHs in the cluster (Gieles et al., 2021, hereafter G21). Understanding the properties of the BH population in Pal 5 is also crucial for explaining the pronounced nature of its stream. G21 employed self-consistent \(N\)-body models that resolve individual stars to propose the existence of a large population of BHs in the cluster core (20% of the total mass), enhancing tidal disruption. However, the BH hypothesis needs further confirmation, because the observed density profiles of the cluster and the stream could also be reproduced by an \(N\)-body model of a BH-free cluster with a low initial density. The binary population of Pal 5 plays a crucial role in resolving this degeneracy. According to the Heggie (1975)-Hills (1975) law, close encounters with binaries can result in two opposing evolutionary trends: wide/soft binaries become less bound and are disrupted after a few close encounters, while tight/hard binaries become tighter due to the increased kinetic energy of the intruder and the centre-of-mass of the binary. The boundary between these two types depends on the local kinetic energy of particles where the binary resides. G21 argue that the kinetic energy of BHs is higher than that of stars in a BH-free cluster with a similar half-light radius. It is therefore expected that fewer soft binaries survive if the cluster contains BHs, which is a prediction that can be tested with observations. Furthermore, due to the large distance of Pal 5, most binaries cannot be resolved spatially by current state-of-the-art observational instruments. Because unresolved binaries might influence the determination of the velocity dispersion and the present-day mass function, it is worthwhile to investigate how primordial binaries and BHs collectively affect the line-of-sight velocity measurements and the mass function, and whether this can be used to indirectly constrain the existence of BHs. In this study, we perform \(N\)-body simulations of several Pal 5-like clusters, with and without BHs and incorporating a large number of binaries, to examine the impact of BHs on binary disruption and the long-term evolution of Pal 5 and its tidal tails. Section 2 describes the \(N\)-body simulation method, data analysis tools, and the observational data of Pal 5 utilized in this study. Section 3 presents the results of our \(N\)-body models, comparing the structural evolution, surface number density, binary properties, and present-day mass function with models from G21 and observational data. Section 4 discusses the limitations of our models and outlines prospects for future observations. Finally, Section 5 concludes this work.

## 2 Methods

### \(N\)-body code

We conducted simulations of Pal 5-like clusters using the high-performance \(N\)-body code petar (Wang et al., 2020). To achieve high parallel performance, the framework for developing particle simulators (fdps) is implemented in petar (Iwasawa et al., 2016, 2020). The code incorporates the particle-tree and particle-particle method (P\({}^{3}\)T) (Oshino et al., 2011), which enables the separate integration of long-range and short-range interactions between particles.
For accurate integration of the weak long-range interactions, the code uses a Barnes & Hut (1986) particle-tree method with a 2nd-order Leap-frog integrator, which has a computational cost of \(O(N\log(N))\). To accurately follow the orbital motions of binaries, hyperbolic encounters, and the evolution of hierarchical few-body systems, the 4th-order Hermite method along with the slowdown-algorithmic regularization (SDAR) method is used (Wang et al., 2020). One of the major advantages of the petar code is its capability to include a large fraction of binaries, up to 100%, in the simulation of stellar systems without significant performance loss. This feature enables us to carry out the models presented in this work. In our simulations, we included binaries with a wide period distribution (see Section 2.5), requiring the use of the Leap-frog, Hermite, and SDAR integrators for integrating binary orbits. While Leap-frog and SDAR are symplectic methods that conserve energy and angular momentum, the Hermite integrator is not. We therefore employ sufficiently small time steps for the Hermite integrator to ensure that the artificial drift of semi-major axes and eccentricities remains insignificant throughout the entire evolutionary time of all our models. The key parameters for switching the integrator and controlling the accuracy of one simulation in this work are provided below:

* Changeover inner radius: 0.0027 pc
* Changeover outer radius: 0.027 pc
* SDAR separation criterion: 0.000216 pc
* Tree time step: 0.0009765625 Myr
* Hermite time step coefficient \(\eta\): 0.1

See Wang et al. (2020) for the details on the definition of these parameters. The population synthesis codes for single and binary stellar evolution, sse and bse, are implemented in petar (Hurley et al., 2000, 2002). Furthermore, the code utilizes an updated version from Banerjee et al. (2020) that incorporates semi-empirical stellar wind prescriptions from Belczynski et al. (2010); Vink et al. (2011), a "rapid" supernova model for remnant formation and material fallback from Fryer et al. (2012), and the pulsation pair-instability supernova (PPSN) model from Belczynski et al. (2016). By including or excluding fallback we control the retention of BHs in our simulations.

### Milky Way potential

The Milky Way potential is modeled by combining the galpy code (Bovy, 2015) with petar. We adopt the setup of a three-component (bulge, disc, and halo) Milky Way model from G21. The present position of Pal 5 obtained in G21 is [5.733, 0.2069, 14.34] kpc and [-41.33, -111.8, -16.85] km s\({}^{-1}\) in the Cartesian Galactocentric frame. G21 derived the initial position and velocity of Pal 5 (\(\sim\)11.5 Gyr ago) by backward integration of the orbit, but due to the different implementations of the Galactic potential in nbody6 (used in G21) and in galpy, we could not use their values directly. Using galpy, we trace back the orbital motion of Pal 5 in a similar way and obtain the initial position and velocity as \([-5.339,-1.602,-14.27]\) kpc and \([-21.78,111.9,-45.52]\) km s\({}^{-1}\), respectively. The orbit of Pal 5 calculated by galpy is shown in Figure 1.

### Mock photometry

To convert snapshots from the \(N\)-body models to photometric data for different filters used in observations, we use the code galevnb (Pang et al., 2016), which selects corresponding spectral templates from the library of Lejeune et al.
(1997, 1998) according to the fundamental stellar properties, such as stellar mass, temperature, luminosity, and metallicity from the \(N\)-body simulations. By convolving the spectra with the filter response curve of a given filter, we obtain the observational magnitudes in specific filters of mainstream telescopes, such as the Hubble Space Telescope (HST) and the future Chinese Survey Space Telescope (CSST), for individual stars in the \(N\)-body models. In this way, we produce mock observations for the \(N\)-body models, which allows a direct comparison with observational data. This is useful to compare the density or surface brightness profiles, unresolved binaries, and stellar mass functions between observations and the models. In this study, the line-of-sight velocity of unresolved binaries is calculated using the Johnson I-band filter (as described in Section 3.2.4). For creating the color-magnitude diagram, we employ the HST F555W and F814W filters, along with the CSST g and i filters. To convert luminosity to mass for unresolved binaries, we utilize the HST F555W filter. Further details can be found in Section 3.4.

### Observational data

To validate our \(N\)-body model and ensure its accuracy in reproducing the surface number density \(\Sigma(R)\) and mass function of Pal 5, we compare it with observational data. We utilize the data from Ibata et al. (2017) for the surface number density and the masses of stars obtained from two HST observations with Program IDs 6788 (PI: Smith; Grillmair & Smith, 2001) and 14535 (PI: Kuepper) as reported in Baumgardt et al. (2023). The observed surface number density \(\Sigma(R)\) encompasses stars with g-band magnitudes ranging from 19 to 23, with photometry obtained from the Canada-France-Hawaii Telescope. The corresponding mass range of these stars is 0.625 to 0.815 \(M_{\odot}\), determined using the magnitude-mass conversion provided by G21. Regarding the masses of stars derived from the HST data, Baumgardt et al. (2023) employed Dartmouth isochrones to fit the CMDs of the clusters and used them to convert magnitudes into masses. Further details can be found in their work.

### Star cluster models

To reproduce Pal 5's observed surface density and present-day position in the Galaxy, we generate the initial conditions of the \(N\)-body models by referring to the wBH-1 and noBH-1 models in G21, which best match the observational data under the assumptions that Pal 5 contains a population of BHs or no BHs, respectively. For the wBH-1 model, natal kick velocities of BHs after supernovae are reduced by the material fallback from Fryer et al. (2012). A large fraction of BHs are retained in the clusters and finally sink to the centre via dynamical friction. The existence of a BH subsystem can significantly affect the structure and evolution of star clusters. As a result, the cluster has a loose core of luminous stars. The wBH-1 model has an initial half-mass radius, \(r_{\rm h,0}=5.85\) pc, and an initial number of stars, \(N_{0}=2.1\times 10^{5}\). In contrast, the noBH-1 model assumes BHs have the same high kick velocities as neutron stars, and almost none are retained after supernova explosions. Without BHs, the core collapse of luminous stars results in a dense core. In order to reproduce the observed surface brightness profile, G21 find that the cluster must therefore have had a much lower initial density. Thus, for the noBH-1 model, \(r_{\rm h,0}=14\) pc and \(N_{0}=3.5\times 10^{5}\).
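As an aside on the mock photometry described above, the essential effect of spatial blending is that the component fluxes add; a minimal sketch of the standard magnitude arithmetic (ours, not part of galevnb):

```python
import numpy as np

def unresolved_mag(m1, m2):
    """Combined apparent magnitude of two blended components m1, m2,
    valid in any common filter: fluxes add, F = 10**(-0.4*m)."""
    return -2.5 * np.log10(10.0**(-0.4 * m1) + 10.0**(-0.4 * m2))

print(unresolved_mag(20.0, 20.0))  # ~19.25: an equal pair is 0.75 mag brighter
```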
Figure 1: The orbit of Pal 5 in the Galactocentric frame. The upper and lower panels show the projected trajectory in the \(x_{\rm G}-y_{\rm G}\) and \(R_{\rm G}-z_{\rm G}\) planes, respectively. \(R_{\rm G}\) is the projected radial coordinate in the \(x_{\rm G}-y_{\rm G}\) plane. The symbols ‘+’ and ‘x’ represent the zero-age and present-day positions, respectively.

We conducted five \(N\)-body models with varying setups of primordial binaries and the presence of BHs. The initial conditions for these five models are summarized in Table 1. We assigned labels to the models to indicate the existence of primordial binaries and BHs. For the BH treatment, models with the label "BH" refer to the wBH-1 model from G21, where the mass fallback scaling for kick velocities is applied so that a part of the BHs has low kick velocities and stays in the clusters. They also have the same \(N_{0}\) and \(r_{\rm h,0}\) as that model. Models with the label "noBH" refer to the noBH-1 model from G21. In these models, all BHs have high kick velocities similar to those of neutron stars after asymmetric supernovae. The velocity distribution follows a (1D) Maxwellian distribution with a dispersion of 265 km/s. As a result, we found no BHs are retained in our noBH models. The prefixes "noBin" and "Bin" represent models without and with primordial binaries, respectively. For "Bin" models, all stars are in binaries initially. For massive binaries with component masses \(>5\) M\({}_{\odot}\), all "Bin" models except the Bin-noBH-F model have period and mass-ratio distributions that follow the observational constraints on OB binaries from Sana et al. (2012). For low-mass binaries, all "Bin" models except the Bin-BH-Alt model assume the properties of primordial binaries following the model of Kroupa (1995a,b) and Belloni et al. (2017) (hereafter the Kroupa binary model). The orbital parameters of this model are derived from the inverse dynamical population synthesis of binaries in the Galactic field. This model assumes universal properties of primordial binaries and that all stars form in star clusters. In addition, a correction of the period and eccentricity distributions from Belloni et al. (2017) is included to better fit the observational data of GCs. For the Bin-BH-Alt model, we assume a different setup of low-mass primordial binaries (referred to as the FlatLog model) as a comparison with the Kroupa binary model. The semi-major axes follow a flat distribution in logarithmic scale, with minimum and maximum values of 3 solar radii and 2 pc, respectively. The eccentricity and mass-ratio distributions are the same as those of the Kroupa binary model. The period and eccentricity distributions are shown in Figure 2, and an illustrative sampler is sketched below. For both binary models, the initial period distribution covers a wide range of 9 orders of magnitude. The initial eccentricities exhibit a sharp peak at \(e=0\) and a broader peak at \(e=0.8\). All binaries with peri-centre separation less than the sum of the stellar radii of the two components are excluded. Thus, an empty region is visible in the period-eccentricity distribution of Figure 2. In addition, the eccentricity distributions of the Kroupa and FlatLog models differ after this adjustment. These binary setups cover a wide range of binary orbital periods, and a large fraction of the binaries are unstable in the cluster environment. After a short time (about one crossing time), the binary fraction therefore significantly decreases.
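To make the FlatLog setup concrete, here is a small illustrative sampler (our sketch, not the generator actually used for the models; the flat eccentricity law below is a placeholder, whereas the models adopt the adjusted Kroupa eccentricity distribution):

```python
import numpy as np

RSUN_PC = 2.2546e-8                     # 1 R_sun in pc (approximate)

def sample_flatlog(n, r1, r2, rng=np.random.default_rng(1)):
    """Semi-major axes flat in log(a) between 3 R_sun and 2 pc; reject pairs
    whose peri-centre a(1-e) is below the sum of stellar radii r1+r2 [R_sun]."""
    a_min, a_max = 3.0 * RSUN_PC, 2.0
    a_list, e_list = [], []
    while len(a_list) < n:
        a = 10.0**rng.uniform(np.log10(a_min), np.log10(a_max))
        e = rng.uniform()               # placeholder eccentricity law
        if a * (1.0 - e) >= (r1 + r2) * RSUN_PC:
            a_list.append(a); e_list.append(e)
    return np.array(a_list), np.array(e_list)
```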
Compared to Pal 5, the binary fraction of our setup may be overestimated. The benefit is that we can investigate how the long-term dynamical evolution of clusters with and without BHs affects both tight and wide binaries. The Bin-noBH-F model has the same \(N_{0}\) and \(r_{\rm h,0}\) as the noBH-1 model. However, after finishing the simulation, we found that the Bin-noBH-F model cannot reproduce the final structure of the noBH-1 model at 11.5 Gyr; it suffers complete tidal disruption before 10 Gyr. The suffix "F" in the name of the model indicates that this is a failed model. Thus, we conducted another model, "Bin-noBH", with \(r_{\rm h,0}\) reduced to 13.2 pc. This small modification results in a cluster similar to Pal 5 after 11.5 Gyr. In addition, we had excluded massive binaries in the Bin-noBH-F model to prevent non-supernova BH formation in binaries, but we observed that such events did not occur. Therefore, in the Bin-noBH model, we applied the Sana distribution to massive binaries to ensure consistency with the Bin-BH models.

The common setup for all models is also summarized in Table 1. All models were evolved for a duration of 12.0 Gyr. At 11.5 Gyr, the clusters are located at the same Galactic position as Pal 5. However, since the models do not precisely reproduce the surface number density of Pal 5 at that time, we continue to evolve each cluster further to determine the age (referred to as \(T_{\rm mat}\)) at which the model matches the observation more closely, as detailed in Section 3.1.5. We assumed a spherically symmetric Plummer profile (Plummer, 1911) with no primordial mass segregation. The initial mass function (IMF) of stars follows the two-component power-law shape of Kroupa (2001). We adopted the same mass range of \(0.1-100\,M_{\odot}\) as used in G21, and the power-law indices (\(\alpha\)) and mass ranges are: \[\alpha=\begin{cases}-1.3&(0.1<m<0.5\ M_{\odot})\\ -2.3&(0.5<m<100\ M_{\odot})\end{cases} \tag{1}\] In this study, we adopted a cluster metallicity of \(Z=0.0006\), consistent with the value of \(\rm[Fe/H]\approx-1.4\) dex reported for Pal 5 by Smith et al. (2002). The initial star cluster models were generated using the updated version (Wang et al., 2019) of the mcluster code (Kupper et al., 2011). This update includes the implementation of the Kroupa binary model generator, as shown in Figure 2.

## 3 Results

### Structural evolution

First, we present the evolution of the cluster structure and compare our results to the models from G21 and to the observational data. Generally, although the existence of binaries does not significantly affect the structural evolution, the small differences it introduces can be amplified by the Galactic tidal field, resulting in the early dissolution of the Bin-noBH-F model. In addition, the existence of primordial binaries reduces the BH population and results in shorter relaxation times in the early evolution. The stochastic formation of BBHs also affects the expansion of the cluster and eventually influences its disruption. The surface number density profiles of the \(N\)-body models roughly agree with the observations, albeit with a higher central density.

Figure 2: Initial periods (\(P\)) vs. eccentricities (\(e\)) of primordial binaries for the Kroupa binary model and the FlatLog model. The central plot of each panel shows \(P\)-\(e\) of individual binaries. The upper and the right histograms show the normalized distributions of \(P\) and \(e\), respectively. The distribution of massive binaries is shown by blue lines.
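To make the IMF of Equation (1) concrete, the following sketch draws stellar masses from the two-part power law by inverse-CDF sampling, with the upper segment scaled for continuity at 0.5 \(M_{\odot}\). The function name and the quoted mean mass are illustrative; in the actual pipeline this step is performed by mcluster.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_kroupa_imf(n, m_min=0.1, m_break=0.5, m_max=100.0, a1=1.3, a2=2.3):
    """Inverse-CDF sampling of dN/dm ~ m^-alpha with alpha = 1.3 below the
    0.5 Msun break and 2.3 above it (Equation 1, written there with the
    exponents quoted as negative values)."""
    def seg_int(lo, hi, a):                 # integral of m^-a over [lo, hi]
        return (hi ** (1 - a) - lo ** (1 - a)) / (1 - a)
    w1 = seg_int(m_min, m_break, a1)
    # upper segment scaled by m_break^(a2-a1) so dN/dm is continuous at the break
    w2 = m_break ** (a2 - a1) * seg_int(m_break, m_max, a2)
    u = rng.uniform(0.0, 1.0, n) * (w1 + w2)
    m = np.empty(n)
    low = u < w1
    m[low] = (m_min ** (1 - a1) + (1 - a1) * u[low]) ** (1 / (1 - a1))
    v = (u[~low] - w1) / m_break ** (a2 - a1)
    m[~low] = (m_break ** (1 - a2) + (1 - a2) * v) ** (1 / (1 - a2))
    return m

m = sample_kroupa_imf(200000)
print(f"mean stellar mass = {m.mean():.2f} Msun")   # ~0.64 Msun for this IMF
```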
#### 3.1.1 Half-mass relaxation time

The two-body relaxation time is an important timescale of stellar dynamics, which reflects the speed of changes in the density and mass segregation of a cluster and its tidal dissolution. The one-component half-mass relaxation time (\(t_{\rm rh1}\)) defined in Spitzer (1987) has the form \[t_{\rm rh1}=0.138\frac{N^{1/2}r_{\rm h}^{3/2}}{m^{1/2}G^{1/2}\ln\Lambda}, \tag{2}\] where \(N\) is the number of stars, \(r_{\rm h}\) is the half-mass radius, \(m\) is the average mass of stars, \(G\) is the gravitational constant, and \(\ln\Lambda\) is the Coulomb logarithm. When BHs exist and binary heating is dominated by BBHs, \(t_{\rm rh1}\) underestimates the relaxation timescale of the system. Wang (2020) found that a proper two-component relaxation time (\(t_{\rm rh}\)) can be obtained by dividing \(t_{\rm rh1}\) by a correction factor \(\psi\), defined as \[\psi=\frac{n_{1}m_{1}^{2}/\sigma_{1}+n_{2}m_{2}^{2}/\sigma_{2}}{nm^{2}/\sigma}, \tag{3}\] so that \[t_{\rm rh}=\frac{t_{\rm rh1}}{\psi}, \tag{4}\] where the suffixes 1 and 2 represent the quantities of the non-BH and BH components, respectively.

Figure 3 illustrates the evolution of \(t_{\rm rh}\) and \(\psi\). The three BH models exhibit significantly shorter \(t_{\rm rh}\) compared to the noBH models. During the first 100 Myr, the noBin-BH model displays a longer \(t_{\rm rh}\) compared to the Bin-BH and Bin-BH-Alt models because the Bin models treat binaries as single objects when calculating \(t_{\rm rh}\). Consequently, the Bin-BH and Bin-BH-Alt models experience relatively faster expansion of \(r_{\rm h}\) and faster mass segregation of BHs (see Section 3.1.2). Subsequently, the trend reverses, and the \(t_{\rm rh}\) of the noBin-BH model becomes shorter than that of the Bin-BH and Bin-BH-Alt models due to the difference in the number of BHs (see Section 3.1.3). As a result, the \(r_{\rm h}\) of the noBin-BH model expands faster than that of the other two models. After 8 Gyr, the \(t_{\rm rh}\) of all three BH models starts to decrease due to mass loss via tidal evaporation. The values of \(\psi\) for the BH models exceed 5, indicating that BHs significantly impact the relaxation process of the clusters. Further discussion of \(r_{\rm h}\) is provided in Section 3.1.2.

In contrast, the two noBH models exhibit much longer \(t_{\rm rh}\). There is a rapid increase in \(t_{\rm rh}\) during the first 100 Myr, primarily due to the strong stellar winds from massive stars and the escape of BHs. Consequently, although the morphology appears similar at 11.5 Gyr for models with and without BHs, the relaxation processes differ significantly. These differences can lead to variations in the properties of binaries; in Section 3.2, we analyze their impact and discuss the implications for binary systems. It is important to note that assuming \(\psi=1\) for the noBH models is not accurate, as there is still an order of magnitude difference between the minimum and maximum masses of stars.
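A minimal numerical sketch of Equations (2)-(4) is given below. The Coulomb logarithm choice \(\ln\Lambda=\ln(0.11N)\), the crude global averages used in the denominator of \(\psi\), and the example star/BH numbers are illustrative assumptions; in practice the factor is evaluated from the component quantities of the snapshot.

```python
import numpy as np

G = 0.004301  # gravitational constant [pc (km/s)^2 / Msun]

def t_rh1(N, r_h_pc, m_mean, gamma=0.11):
    """One-component half-mass relaxation time of Equation (2), in Myr;
    ln(Lambda) = ln(gamma*N) is a common choice for the Coulomb logarithm."""
    t = 0.138 * np.sqrt(N * r_h_pc**3 / (m_mean * G)) / np.log(gamma * N)
    return t * 0.978   # 1 pc/(km/s) is about 0.978 Myr

def psi_factor(n1, m1, s1, n2, m2, s2):
    """Correction factor of Equation (3); component 2 denotes the BHs.
    The global n, m, sigma are crude number-weighted averages (assumption)."""
    n = n1 + n2
    m = (n1 * m1 + n2 * m2) / n
    s = (n1 * s1 + n2 * s2) / n
    return (n1 * m1**2 / s1 + n2 * m2**2 / s2) / (n * m**2 / s)

# Illustrative numbers only: 2e5 stars of 0.6 Msun with sigma ~ 5 km/s,
# plus 500 BHs of 15 Msun with a colder sigma ~ 2 km/s after segregation.
psi = psi_factor(2e5, 0.6, 5.0, 500, 15.0, 2.0)
print(f"psi = {psi:.1f}, t_rh = {t_rh1(2e5, 5.85, 0.6) / psi:.0f} Myr")  # Equation (4)
```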
#### 3.1.2 Half-mass radius

Figure 4 illustrates the evolution of \(r_{\rm h}\) for all models, including the ones from G21 for comparison. We observe that the presence of primordial binaries has a weak impact on the evolution of \(r_{\rm h}\), consistent with the theoretical findings of Wang et al. (2022). When BHs exist, the long-term structural evolution of star clusters is primarily controlled by binary heating driven by the dynamical interactions between BBHs and the surrounding objects at the cluster center. The majority of primordial binaries have much smaller masses than the BBHs and therefore have a negligible impact on the binary heating until most BHs have escaped from the cluster. A small subset of massive primordial binaries can eventually evolve into BBHs. However, even in the absence of these massive binaries, a star cluster can generate BBHs through chaotic three-body interactions once the central density of the cluster reaches a threshold after the core collapse of BHs (see Section 3.1.4). Consequently, we only observe minor differences in \(r_{\rm h}\) between the Bin-BH, Bin-BH-Alt, and wBH-1 models during the first 10 Gyr of evolution. This can be explained by the differences in relaxation times (\(t_{\rm rh}\)) discussed in Section 3.1.1. The Galactic potential also affects \(r_{\rm h}\), but since all models share the same orbit, its influence is similar.

Figure 3: The evolution of the two-component half-mass relaxation time (\(t_{\rm rh}\); upper two panels) for all models and of the \(\psi\) factor (lower panel) for the BH models.

\begin{table}
\begin{tabular}{l c c c c c}
\hline
Models & noBin-BH & Bin-BH & Bin-BH-Alt & Bin-noBH & Bin-noBH-F \\
\hline
\(r_{\rm h,0}\) [pc] & 5.85 & 5.85 & 5.85 & 13.2 & 14 \\
\(N_{0}\) & 210000 & 210000 & 210000 & 350000 & 350000 \\
Binary fraction & no & 100\% & 100\% & 100\% & 100\% \\
Low-mass binary & no & Kroupa & FlatLog & Kroupa & Kroupa \\
Massive binary & no & Sana & Sana & Sana & no \\
Retained BHs & fallback-scaled & fallback-scaled & fallback-scaled & no & no \\
\(T_{\rm mat}\) [Gyr] & 11.8 & 12.0 & 11.0 & 12.0 & - \\
\hline
\end{tabular}
\end{table} Table 1: Initial conditions of the \(N\)-body models. All models use the Plummer (1911) profile, the Kroupa (2001) initial mass function with a mass range from 0.1 to 100 \(M_{\odot}\), a metallicity of \(Z=0.0006\), and a simulation duration of 12 Gyr.

However, after 10 Gyr, the Bin-BH-Alt model exhibits a similar \(r_{\rm h}\) to that of the wBH-1 model, but its \(r_{\rm h}\) shows significant variations, indicating an energy imbalance and the onset of a disruptive tidal phase. In contrast, both the Bin-BH and wBH-1 models remain stable until 12 Gyr. This differing behavior is attributed to stochastic BBH heating, as explained in Section 3.1.4.

The BH models with binaries (Bin-BH) and without binaries (noBin-BH) exhibit different timescales for the mass segregation of BHs, as indicated by the initial rapid contraction of \(r_{\rm h,BH}\). In the Bin-BH model, \(r_{\rm h,BH}\) contracts faster during the early stages of evolution than in the noBin-BH model. This disparity can be attributed to the difference in \(t_{\rm rh}\), as the timescale for mass segregation is proportional to \(t_{\rm rh}\).

When comparing the noBH model with binaries (Bin-noBH-F) and the model from G21 without binaries (noBH-1), significant differences in the evolution of \(r_{\rm h}\) emerge after 8 Gyr. The Bin-noBH-F model experiences tidal disruption at around 9 Gyr, whereas the noBH-1 model survives until 11.5 Gyr. G21 noted that the final properties of the noBH models are more sensitive to changes in the initial conditions, and in fact argued that this 'fine tuning' problem disfavours the noBH scenario. An offset of \(r_{\rm h,0}\) needs to be introduced in the Bin-noBH model to achieve a consistent \(r_{\rm h}\) at 11.5 Gyr. Two factors may explain the need for this offset. Firstly, in the absence of BHs, binary heating is primarily generated by low-mass binaries.
Consequently, the influence of primordial binaries is more pronounced than in models with BHs. Secondly, due to the larger \(r_{\rm h,0}\), the cluster is more sensitive to the Galactic tide. The presence of primordial binaries affects the relaxation time of the system, as the dynamical effect of a tight binary is equivalent to that of a single object, resulting in a shorter relaxation time. Consequently, the system dissolves faster, necessitating a denser initial cluster for the cluster to survive, as seen in the noBH-1 model. Additionally, the differences caused by the stochastic scatter of \(r_{\rm h}\) resulting from the random seeds used to generate the initial conditions may also be amplified by the Galactic tide, contributing to the divergent evolution.

#### 3.1.3 Mass loss

The upper panels of Figure 5 show the evolution of the total mass (\(M(t)\)) of our models. Data for the wBH-1 and noBH-1 models from G21 are also shown as references. Mass loss has two channels: stellar-wind mass loss driven by stellar evolution, and escapers produced by the stellar dynamics of the cluster. To have a consistent definition of \(M\), all models use the same criterion to select escapers: we first calculate the binding energy of single stars and of the centres-of-mass of binaries without the external potential, and then select escapers as objects with energy \(>0\). Here we compare the three cases. For models with BHs and no primordial binaries, \(M(t)\) of our noBin-BH model agrees with the wBH-1 model from G21; the final mass of the noBin-BH model at 11.5 Gyr is slightly larger than that of the wBH-1 model. For models with BHs and primordial binaries, compared to the wBH-1 model, the Bin-BH and Bin-BH-Alt models lose mass faster during the first few hundred Myr, but the mass loss of the Bin-BH model becomes slower near the end of the simulation. Finally, the Bin-BH and wBH-1 models agree with each other, while the Bin-BH-Alt model dissolves after about 11 Gyr. For models with no BHs, the Bin-noBH-F model with primordial binaries loses mass faster than the noBH-1 model with no binaries. The Bin-noBH model, with a smaller \(r_{\rm h,0}\), experiences a relatively slower mass loss, and its \(M(t)\) remains slightly above that of the noBH-1 model at 11.5 Gyr. In general, the evolution of \(M(t)\) and \(r_{\rm h}\) is similar for all three cases.

#### 3.1.4 Black holes

BHs significantly affect the long-term dynamical evolution. We investigate the mass fraction of BHs (\(f_{\rm BH}\)) and the bound mass of BHs (\(M_{\rm BH}\)) in Figure 5. The evolution of \(f_{\rm BH}\) in the noBin-BH and wBH-1 models agrees in the first 8 Gyr. Then, \(f_{\rm BH}\) increases more slowly in the noBin-BH model and is half that of the wBH-1 model at 11.5 Gyr. \(M_{\rm BH}\) of the noBin-BH model is slightly smaller than that of the wBH-1 model initially, and this difference persists throughout the long-term evolution. Finally, as a large fraction of stars escape, these initial differences lead to a large difference in \(f_{\rm BH}\) at the end.

For the Bin-BH and Bin-BH-Alt models, \(M_{\rm BH}\) is significantly smaller than that of the noBin-BH model during the early evolution. This difference is due to the stellar evolution of massive binaries. Based on the orbital parameters of binaries from Sana et al. (2012), the progenitors of BHs (massive stars) are all in binaries, and a fraction of the tight binaries suffers mass transfer and mergers.
The BHs formed from these binaries can have a different mass distribution. The maximum \(M_{\rm BH}\) of the Bin-BH model is about 250 \(M_{\odot}\) less than that of the noBin-BH model. Then, after the mass segregation of BHs (a few hundred Myr), binary heating by BBHs starts to eject BHs from the cluster, resulting in a larger difference in \(M_{\rm BH}\) during the long-term evolution. Although the Bin-BH (Bin-BH-Alt) and noBin-BH models show a large difference in \(M_{\rm BH}\), their evolution of \(M\) and \(r_{\rm h}\) is similar before 10 Gyr. This was also observed in Wang et al. (2022).

Figure 4: The evolution of the half-mass radius of all objects (\(r_{\rm h}\); dashed curves) and the half-mass radius of BHs (\(r_{\rm h,BH}\); solid curves). The wBH-1 and noBH-1 models from Gieles et al. (2021) are shown as references.

The evolution of the semi-major axes (\(a\)) of BBHs reflects both binary heating and mergers driven by gravitational wave (GW) radiation. Figure 6 provides a comparison of this evolution for the three BH models. Despite the absence of primordial binaries in the noBin-BH model, we can still observe the formation of BBHs and their orbital contraction. The frequency of BBH formation and the overall trend of \(a\) are similar for all three models, except that the two models with primordial binaries exhibit a higher number of BBHs formed from these binaries during the first 1000 Myr. Some of these BBHs with \(a<1\) AU undergo orbital shrinking due to GW radiation, ultimately merging to form more massive BHs. These newly formed BHs lead to the creation of massive BBHs with masses exceeding 100 \(M_{\odot}\). The presence of these massive BBHs can have a substantial impact on the evolution of the star cluster, influencing its dynamical and structural properties. In particular, for the Bin-BH-Alt model, the formation of a massive BBH around 8 Gyr coincides with a faster expansion of \(r_{\rm h}\) compared to the Bin-BH model, ultimately leading to an earlier disruption of the Bin-BH-Alt model. Hence, the divergent evolution of the Bin-BH and Bin-BH-Alt models after 8 Gyr is attributed to the stochastic formation of BBHs. It is important to note that our models do not account for the high-velocity kicks experienced by newly formed BHs due to asymmetric GW radiation following mergers. Therefore, the formation of such massive BBHs might not be as common as our models suggest, and the stochastic effect of massive BBH heating could be overestimated in our cases.

#### 3.1.5 Surface number density profiles

The determination of \(r_{\rm h}\) and \(M\) relies on the selection criteria for identifying cluster members. When comparing the \(N\)-body models with observational data from Pal 5, it is challenging to use exactly the same selection criterion for both. A more appropriate approach is to compare the surface number density (\(\Sigma(R)\)), where \(R\) represents the angular distance from the cluster center in the International Celestial Reference System (ICRS). Figure 7 illustrates the \(\Sigma(R)\) profiles for our \(N\)-body models and the observational data of Pal 5 obtained from Ibata et al. (2017). To ensure consistency with the observations, only main-sequence stars with masses ranging from \(0.625M_{\odot}\) to \(0.815M_{\odot}\) are considered in the \(N\)-body data (see G21 for details). No stars are removed during the simulation, allowing for the tracking of the tidal-tail evolution.
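A minimal sketch of how \(\Sigma(R)\) can be measured from a projected snapshot is given below, using the 0.625-0.815 \(M_{\odot}\) mass window quoted above. The toy Gaussian "cluster" and the bin edges are placeholders; converting between angular and physical radius would additionally use the distance of Pal 5.

```python
import numpy as np

def surface_number_density(x_pc, y_pc, mass, r_edges_pc, m_lo=0.625, m_hi=0.815):
    """Sigma(R): number of stars in the observed mass window per unit
    projected area, in annuli around the cluster centre."""
    sel = (mass >= m_lo) & (mass <= m_hi)
    R = np.hypot(x_pc[sel], y_pc[sel])        # projected radius
    counts, _ = np.histogram(R, bins=r_edges_pc)
    area = np.pi * np.diff(r_edges_pc**2)     # annulus areas
    R_mid = np.sqrt(r_edges_pc[:-1] * r_edges_pc[1:])
    return R_mid, counts / area               # [stars / pc^2]

# Toy usage with a hypothetical Gaussian cluster; replace x, y, m with the
# projected positions and masses of a real N-body snapshot.
rng = np.random.default_rng(3)
x, y = rng.normal(0.0, 10.0, (2, 50000))
m = rng.uniform(0.1, 0.9, 50000)
R_mid, sigma = surface_number_density(x, y, m, np.geomspace(0.5, 60.0, 15))
```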
The centre-of-mass position of the star clusters in the Galaxy at exactly 11.5 Gyr does not perfectly align with that of Pal 5. This is due to the long-term evolution of the star cluster, in which the cluster center drifts as a result of asymmetric mass loss due to stellar winds, supernovae, and the escape of stars. Therefore, we select snapshots from the simulations that have the closest centre-of-mass distance to that of Pal 5 whenever a comparison is required in the subsequent analysis. We then correct the positions and velocities of the stars by applying the offset between the centre-of-mass of the \(N\)-body models and the observational data. The results of this correction are presented in the upper panel of Figure 7. Due to the complete disruption of the Bin-noBH-F model, it is not possible to determine the centre-of-mass position for this particular model; it is therefore excluded from some analyses and comparisons.

Figure 5: The evolution of the bound mass (\(M\)), the BH mass fraction (\(f_{\rm BH}\)) and the bound mass of BHs (\(M_{\rm BH}\)) for all models. The data of the wBH-1 and noBH-1 models are shown for comparison.

The vertical lines in Figure 7, representing the half-number radii (\(R_{\rm hn}\)), indicate that all models except the Bin-BH-Alt model are more centrally concentrated than the observed Pal 5. In Figure 5, it is shown that these models retain more mass at 11.5 Gyr compared to the models presented in G21. The Bin-noBH and Bin-BH models exhibit similar \(\Sigma(R)\) profiles, but this similarity is coincidental, since they had different initial density profiles and evolved in opposite ways, as demonstrated in Figure 4. Given the time-consuming nature of the simulations, it is challenging to precisely reproduce the models of G21 and the observational data. To enhance the comparison with the observational data, we selected snapshots at different ages that match the observed \(\Sigma(R)\) profile. These results are displayed in the bottom panel of Figure 7. Although the tidal streams differ substantially, we can still compare the internal properties of binaries and the mass functions using these snapshots.

### Binaries

#### 3.2.1 Binding energy of binaries

While the BH and noBH models may exhibit a similar \(\Sigma(R)\) profile, as demonstrated in Figure 7, their relaxation processes differ. This discrepancy can lead to different properties of binaries at 11.5 Gyr. In star clusters, perturbations from incoming objects can significantly alter the orbits of binaries. According to the Heggie (1975)-Hills (1975) law, wide or soft binaries are prone to disruption after experiencing a few close encounters with intruding objects. Conversely, tight or hard binaries tend to become even tighter after these encounters. The hard-soft boundary of the binding energy (\(E_{\rm hs}\)) at a distance \(r\) from the cluster center is determined by the local velocity dispersion: \[E_{\rm hs}=\frac{\langle mv^{2}\rangle}{3}, \tag{5}\] where \(0.5\langle mv^{2}\rangle\) is the average kinetic energy of stars and binaries at \(r\), and \(v\) is the velocity. The hard-soft boundary of binaries evolves as the structure of the cluster changes over time. Initially, during the first 100 Myr of star cluster evolution, there is a rapid reduction in the hard-soft boundary. This is due to the expansion of \(r_{\rm h}\) caused by the strong stellar-wind mass loss from massive stars, as shown in Figure 4.
Figure 6: The evolution of the semi-major axes of BBHs within the core radius (\(r_{c}\)) of the three BH models. The colors of the lines indicate the masses of the BBHs. We can observe a reduction of the semi-major axes of individual BBHs, indicating their dynamical hardening over time (\(a>10\) AU) and inspiral by GW radiation (\(a<1\) AU).

Figure 7: The surface number density (\(\Sigma(R)\)) profiles for the \(N\)-body models along with observational data from Ibata et al. (2017). The upper panel displays snapshots of the \(N\)-body models at the present-day Galactic position, at approximately 11.5 Gyr. The lower panel shows \(N\)-body snapshots that match the observed \(\Sigma(R)\) profile; the ages of the corresponding snapshots (\(T_{\rm mat}\)) are indicated in the legend. Vertical lines indicate the effective radius (\(R_{\rm hn}\)), the radius containing half the number of stars in projection.

After 100 Myr, the evolution of \(r_{\rm h}\) slows down, and the hard-soft boundary, \(E_{\rm hs}\), evolves more gradually. The Bin-BH and Bin-noBH models have different initial \(E_{\rm hs}(r)\) curves, as shown in Figure 4, but their final \(E_{\rm hs}(r)\) curves at 11.5 Gyr converge to a similar shape. This indicates that the distribution of binary binding energy at 11.5 Gyr may reflect the different evolutionary histories of \(E_{\rm hs}\).

To further analyze the distribution of binary binding energy, Figure 8 presents a comparison of the contour plots of \(E_{\rm b}\) versus \(r\) at approximately 11.5 Gyr for the Bin-BH and Bin-noBH models. Across a wide range of \(r\) values, spanning from the center of the cluster to the distant tidal tail, two distinct peaks can be observed. The first peak, located around 10-30 pc, represents the population of binaries inside the cluster. The second peak, at \(r>3000\) pc, corresponds to binaries that have escaped from the cluster and are distributed along the tidal tail. We focus on the binaries within the cluster and examine the hard-soft boundaries, \(E_{\rm hs}(r)\), at three different ages: 0 Myr, 100 Myr, and 11.5 Gyr. These boundaries are plotted as reference curves. To calculate \(E_{\rm hs}(r)\), we divide the cluster into 10 radial bins, ensuring an equal number of objects per bin; binaries are treated as unresolved objects in this analysis. The maximum value of \(r\) is set to the 90% Lagrangian radius, providing a radial range that reflects the cluster's size at the three ages. The results show that \(E_{\rm hs}(r)\) does not exhibit strong variations along \(r\). The two models, Bin-BH and Bin-noBH, have similar \(E_{\rm hs}(r)\) curves, except for an offset in the radial range at 0 Myr and 100 Myr. The peak of \(E_{\rm b}\) falls between the \(E_{\rm hs}(r)\) curves at 100 Myr and 11.5 Gyr. This suggests that during the first 100 Myr, not all soft binaries with \(E_{\rm b}<E_{\rm hs}\) are immediately disrupted; many of them survive and become hard binaries by 11.5 Gyr. Therefore, the final distribution of \(E_{\rm b}\) does not clearly reflect the initial conditions of the two models, as anticipated by G21. However, the Bin-noBH model retains a relatively larger number of binaries than the Bin-BH model. This difference suggests that the overall rate of binary disruption depends on the evolutionary history of the cluster density.
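The binned estimate of \(E_{\rm hs}(r)\) described above can be sketched as follows, with binaries passed in as single unresolved objects. The equal-count binning and the input units (\(M_{\odot}\), km/s, pc) mirror the description in the text; the function name is illustrative.

```python
import numpy as np

def hard_soft_boundary(r_pc, mass, vx, vy, vz, n_bins=10):
    """E_hs(r) of Equation (5): one third of the mean kinetic-energy term
    <m v^2> in radial bins containing equal numbers of objects."""
    order = np.argsort(r_pc)
    r_sorted = r_pc[order]
    mv2 = mass[order] * (vx**2 + vy**2 + vz**2)[order]
    chunks = np.array_split(np.arange(r_pc.size), n_bins)  # equal-count bins
    r_mid = np.array([np.median(r_sorted[i]) for i in chunks])
    e_hs = np.array([mv2[i].mean() / 3.0 for i in chunks])
    return r_mid, e_hs   # e_hs in Msun (km/s)^2 for Msun and km/s inputs
```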
#### 3.2.2 Period distribution

To analyze the binary disruption rate in relation to cluster dynamics, we examine the period distributions normalized by the bound mass of the cluster (\(\overline{N}_{\rm bb}\)) for three models: Bin-BH, Bin-noBH, and Bin-BH-Alt, as depicted in Figure 9. The period distributions at the initial phase (0 Gyr) and at the median age (5 Gyr) are compared. The Bin-BH and Bin-noBH models have the same initial period distributions but different density profiles. At 5 Gyr, the Bin-noBH model retains more wide binaries compared to the Bin-BH model. The hard-soft boundaries of the periods, estimated for stars within \(r_{\rm h}\), do not exhibit significant differences between the two models. However, the peak of the period distribution in the Bin-BH model is closer to the hard-soft boundary at zero age, whereas in the Bin-noBH model, it aligns with the boundary at 5 Gyr. This disparity suggests that the disruption rate of binaries is not solely determined by the hard-soft boundary. During the long-term evolution, the Bin-BH model, which is denser and contains a BH subsystem, experiences a higher disruption rate of wide binaries, leaving the peak of its period distribution closer to the zero-age boundary. In contrast, the Bin-noBH model preserves more wide binaries, and the peak of its period distribution reflects the boundary of the cluster at 5 Gyr.

Comparing the Bin-BH and Bin-BH-Alt models, they share a similar density evolution but differ in the assumptions on their primordial binaries. The ratio of \(\overline{N}_{\rm bb}\) at 5 Gyr to the initial phase, \(\overline{N}_{\rm bb}(5~{\rm Gyr})/\overline{N}_{\rm bb}(0)\), exhibits an identical trend for both models. This finding implies that the binary disruption is not highly sensitive to the assumed initial period distribution. Consequently, it is possible to infer the initial binary properties through inverse derivation if the evolution history of the cluster density is known (see Kroupa, 1995; Marks et al., 2011; Marks and Kroupa, 2012). Moreover, by utilizing the derived ratio, we can extrapolate the evolution of the period distribution of binaries for any assumption about the primordial binary population. This provides a valuable tool for understanding the long-term dynamical evolution of binary systems within star clusters and can aid in studying the impact of different initial binary properties on the binary disruption rate and cluster dynamics.

Figure 8: The contours of \(r\)-\(E_{\rm b}\) at 11.5 Gyr for the Bin-BH model (upper panel) and the Bin-noBH model (lower panel). Binaries with one or two compact objects are excluded from the contours; instead, BH-MS and BH-WD binaries are marked as blue and light blue stars, respectively. Three curves show the hard-soft boundaries \(E_{\rm hs}(r)\) at zero age, 100 Myr and 11.5 Gyr, respectively. The white region outside the colored area contains no binaries.

#### 3.2.3 Radial distribution

Figure 10 compares the radial distribution of the binary fraction (\(f_{\rm bin}\)) for the Bin-BH and Bin-noBH models at 11.5 Gyr. In the upper panel, the true \(f_{\rm bin}\) is plotted as a function of the 3D radial distance from the cluster center. Both models exhibit a similar trend, with a systematic offset of \(f_{\rm bin}\) along \(r\). The central region of the cluster shows a higher \(f_{\rm bin}\) compared to the outer halo. At the distant tail of the cluster, \(f_{\rm bin}\) increases significantly.
This can be attributed to binaries that escaped from the cluster during the early stages of evolution: they suffer fewer dynamical perturbations and have a higher chance of survival. The lower panel of Figure 10 presents the predicted observed binary fraction as a function of projected distance. To identify binaries from the color-magnitude diagram, we assume that unresolved binaries with B-band magnitudes between 20.5 and 23 mag and a mass ratio above 0.6 can be detected. The B-band magnitudes of the stars are generated using GalevNB. Notably, \(f_{\rm bin}\) (obs) for both models is nearly identical within a projected distance of up to 30 arcmin, unlike the true \(f_{\rm bin}\) of all binaries. The observed binary fraction \(f_{\rm bin}\) (obs) falls in the range of 0.2 to 0.3.

#### 3.2.4 Half-year evolution of line-of-sight velocities

With high-resolution multi-epoch spectroscopic observations, it is possible to identify binaries by comparing the line-of-sight velocity changes (\(|\Delta v_{\rm LOS}|\)) over a span of approximately six months. The line-of-sight velocity \(v_{\rm LOS}\) of an unresolved binary is the flux-weighted combination of the \(v_{\rm LOS}\) of its two components and is dominated by the brighter component. Thus, the \(|\Delta v_{\rm LOS}|\) values exhibit considerable variation across the multiple epochs of observation. These variations are determined by the periods, eccentricities, inclinations, and orbital phases of the binaries. Notably, larger variations occur for short-period binaries, which could potentially aid in distinguishing these binaries from other effects that cause changes in velocity. A baseline of approximately half a year is sensitive to a maximum period of \(\sim 10^{4}\) days. We estimate the \(v_{\rm LOS}\) of binaries by taking the I-band flux-weighted average of the \(v_{\rm LOS}\) of the two components. In Figure 11, we present the \(|\Delta v_{\rm LOS}|\) versus period plot for observable unresolved binaries with \(|\Delta v_{\rm LOS}|>0.3\) km/s and \(R<10\) arcmin over multiple epochs. We specifically select binaries with at least one bright (post-main-sequence) star component, and some binaries include white dwarfs. These bright stars are brighter than 20 mag in the HST \(F555W\) filter. The three models (Bin-noBH, Bin-BH, and Bin-BH-Alt) exhibit observable binaries across a wide range of periods, spanning from 1 to \(10^{4}\) days. The snapshots at \(T_{\rm mat}\) (see the bottom panel of Figure 7) are chosen as the first epoch of observation. The time intervals between epochs are chosen to be roughly equally spaced within half a year; the exact values are set by the time-step algorithm of the petar code.

Figure 10: Upper panel: binary fractions of all objects along the 3D radial direction for the Bin-BH and Bin-noBH models. Lower panel: prediction for the observed binary fractions with an I-band magnitude between 20.5 and 23 mag (corresponding to main-sequence stars) and mass ratio \(>0.6\).

Figure 9: The period distribution of binaries within \(r_{\rm h}\) at two different stages: the initial phase (represented by steps) and at 5 Gyr (shown as filled histograms). The upper panel displays the number of binaries within \(r_{\rm h}\) normalized by the bound mass of the cluster (\(\overline{N}_{\rm bb}\)). The lower panel shows the ratio between \(\overline{N}_{\rm bb}\) at 5 Gyr and the initial \(\overline{N}_{\rm bb}\). The vertical dashed and solid lines represent the hard-soft boundary of the period within \(r_{\rm h}\) at 0 and 5 Gyr, respectively.
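A minimal sketch of the flux-weighted \(v_{\rm LOS}\) of an unresolved binary and of the multi-epoch \(|\Delta v_{\rm LOS}|>0.3\) km/s detection flag is given below. The toy epoch values are hypothetical; in the models the weights are the I-band fluxes of the two components.

```python
import numpy as np

def binary_v_los(v1, v2, flux1, flux2):
    """Flux-weighted line-of-sight velocity of an unresolved binary;
    fluxes are linear (not magnitudes), so the brighter component dominates."""
    return (flux1 * v1 + flux2 * v2) / (flux1 + flux2)

def detectable(v_los_epochs, threshold=0.3):
    """Flag binaries whose largest |Delta v_LOS| between any two epochs
    exceeds the 0.3 km/s threshold used in the text."""
    v = np.asarray(v_los_epochs)          # shape (n_epochs, n_binaries)
    dv = v.max(axis=0) - v.min(axis=0)    # maximum pairwise difference
    return dv > threshold

# Toy usage: three epochs for two binaries (km/s)
epochs = [[10.0, 4.00], [10.5, 4.05], [9.8, 4.10]]
print(detectable(epochs))   # -> [ True False]
```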
The number of detectable binaries is similar for all three models, with the Bin-noBH model exhibiting slightly more binaries with periods above 3000 days. This trend aligns with the period distributions shown in Figure 9, although some stochastic scatter may be present. To assess the completeness of the binaries detectable via multi-epoch observations of \(|\Delta v_{\rm LOS}|\), we compare the number counts of detectable binaries and of all bright binaries as a function of period, as shown in Figure 12. For all models, periods up to \(10^{4}\) days are detectable, and all binaries with periods below \(10^{3}\) days can be detected with multiple epochs. From Figure 11, one binary in the Bin-BH model with a period between \(10^{3}\) and \(10^{4}\) days has only one epoch showing \(|\Delta v_{\rm LOS}|>0.3\) km/s. A few binaries above \(10^{3}\) days in the Bin-noBH model have epochs with \(|\Delta v_{\rm LOS}|<0.3\) km/s, indicating that they might be missed if the observational epochs are limited to two.

The observed \(v_{\rm LOS}\) of an unresolved binary does not represent the \(v_{\rm LOS}\) of its center-of-mass, which complicates the determination of the physically useful line-of-sight velocity dispersion (\(\sigma_{\rm LOS}\)). A complete sample of detectable bright binaries with periods below \(10^{4}\) days can mitigate this effect and significantly improve the determination of \(\sigma_{\rm LOS}\): when binaries are detectable from multi-epoch observations, we can exclude them from the computation of \(\sigma_{\rm LOS}\). In our \(N\)-body models, we simulate the impact of excluding binaries with \(|\Delta v_{\rm LOS}|>0.3\) km/s on the determination of \(\sigma_{\rm LOS}\). Figure 13 displays the individual line-of-sight velocities (\(v_{\rm LOS}\)) of bright single stars, of undetectable bright binaries with \(|\Delta v_{\rm LOS}|\leq 0.3\) km/s, and of detectable binaries with \(|\Delta v_{\rm LOS}|>0.3\) km/s, as a function of projected distance. Most binaries with \(v_{\rm LOS}>1\) km/s are detectable, and thus we can remove them from the calculation of \(\sigma_{\rm LOS}\). Table 2 demonstrates how removing detectable binaries improves the determination of \(\sigma_{\rm LOS}\). To have a consistent comparison among the three models, we scale the value of \(\sigma_{\rm LOS}\) by the estimated one-dimensional velocity dispersion \(\sigma_{\rm 1D}\) within \(r_{\rm h}\), assuming a virial equilibrium state of the cluster: \[\sigma_{\rm 1D}\simeq\sqrt{\frac{GM}{6r_{\rm h}}}. \tag{6}\]

\begin{table}
\begin{tabular}{c c c c c c}
\hline
Model & \(\sigma_{\rm LOS,S,hn}\) & \(\sigma_{\rm LOS,S}\) & \(\sigma_{\rm LOS,SB}\) & \(\sigma_{\rm LOS,SCB}\) & \(\sigma_{\rm 1D}\) \\
 & [\(\sigma_{\rm 1D}\)] & [\(\sigma_{\rm 1D}\)] & [\(\sigma_{\rm 1D}\)] & [\(\sigma_{\rm 1D}\)] & [km/s] \\
\hline
Bin-BH & 1.04 & 1.13 & 12.9 & 1.83 & 0.645 \\
Bin-BH-Alt & 1.01 & 1.05 & 22.9 & 1.33 & 0.528 \\
Bin-noBH & 1.02 & 0.815 & 8.81 & 1.27 & 0.729 \\
\hline
\end{tabular}
\end{table} Table 2: The line-of-sight velocity dispersion (\(\sigma_{\rm LOS}\)) estimated from bright stars and binaries. The last column, \(\sigma_{\rm 1D}\), is the estimate from Equation 6, which serves as the unit of the other four columns. The column \(\sigma_{\rm LOS,S,hn}\) gives the \(\sigma_{\rm LOS}\) derived from single stars within \(R<3\) arcmin (17 pc, approximately \(R_{\rm hn}\)).
The remaining three columns give \(\sigma_{\rm LOS}\) within \(R<10\) arcmin (58 pc), where \(\sigma_{\rm LOS,S}\), \(\sigma_{\rm LOS,SB}\), and \(\sigma_{\rm LOS,SCB}\) represent the values from single stars only, from single stars and all binaries, and from single stars and undetectable binaries with \(|\Delta v_{\rm LOS}|\leq 0.3\) km/s, respectively.

Figure 11: The line-of-sight velocity difference of binaries (\(|\Delta v_{\rm LOS}|\)) as a function of period for multiple epochs of observation. The initial snapshots of the three models are chosen at \(T=T_{\rm mat}\). Each binary type, classified according to the sse (Single Stellar Evolution) code, is represented by a different color. The stellar types include: MS (Main Sequence), HG (Hertzsprung Gap), GB (First Giant Branch), CHeB (Core Helium Burning), AGB (Asymptotic Giant Branch), and WD (White Dwarf).

Figure 12: The number counts of bright binaries with a post-main-sequence component for the three models at \(T_{\rm mat}\). The legend "tot" includes all binaries and "obs" includes only detectable binaries with \(|\Delta v_{\rm LOS}|>0.3\) km/s.

This normalization allows us to account for differences in the overall dynamical state of the clusters and facilitates a more meaningful comparison of \(\sigma_{\rm LOS}\). The presence of BHs affects \(\sigma_{\rm LOS}\) in the cluster center. To illustrate the difference between models with and without BHs, we calculate the \(\sigma_{\rm LOS}\) of single stars within a projected distance of \(R<3\) arcmin (\(\sigma_{\rm LOS,S,hn}\)), which corresponds to \(R_{\rm hn}\) (17 pc). All three models exhibit similar values of \(\sigma_{\rm LOS,S,hn}\). Additionally, the \(\sigma_{\rm LOS}\) values of single stars within a projected distance of \(R<10\) arcmin (58 pc), which includes stars outside the effective radius of the cluster, are similar to \(\sigma_{\rm LOS,S,hn}\), except for the Bin-noBH model, which has a lower value. Since the normalization factor \(\sigma_{\rm 1D}\) differs among the three models, and observations cannot directly obtain \(M\) and \(r_{\rm h}\), the difference in the observed estimates of \(\sigma_{\rm LOS}\) for the three models may be larger than what we found in our simulations. This should be taken into consideration when interpreting the results and comparing them with observations.

The sample that includes all bright singles and binaries exhibits much larger dispersion values (\(\sigma_{\rm LOS,SB}\)) than the values (\(\sigma_{\rm LOS,S}\)) of the sample containing only singles. By excluding detectable binaries, the values (\(\sigma_{\rm LOS,SCB}\)) become significantly lower than \(\sigma_{\rm LOS,SB}\), roughly 1.5-2 times \(\sigma_{\rm LOS,S}\). This procedure therefore helps to obtain more accurate estimates of \(\sigma_{\rm LOS}\).

#### 3.2.5 Binaries with BHs

The Bin-BH model at 11.5 Gyr exhibits several binaries that contain one or two BHs (BwBHs), as depicted in Figure 17. It is important to investigate whether these BwBHs can be detected, serving as evidence for the existence of BHs. Table 3 provides a summary of the parameters of these binaries, which include three types: BBHs, BH with MS (BH-MS), and BH with WD (BH-WD). Other types of BH-star binaries are not detected.
The presence of BBHs has also been illustrated in Figure 6, and some of them may eventually be detected by GW detectors. Three BBHs are inside the cluster and the other three are distributed along the tidal stream. An interacting BwBH containing an accreting BH primary and a non-BH secondary star is particularly interesting as a potential X-ray or radio source; its detection would provide evidence for the presence of BHs in Pal 5. Unfortunately, there is no BwBH containing a bright post-main-sequence star at 11.5 Gyr; only a few BH-MS and BH-WD binaries exist. We calculate the Roche lobe radius using Equation 53 from Eggleton (1983) and Hurley et al. (2002), with the semi-major axis replaced by the peri-center distance \(p\): \[\frac{R_{\rm RL,2}}{p}=\frac{0.49q^{2/3}}{0.6q^{2/3}+\ln{(1+q^{1/3})}}, \tag{7}\] where \(q=m_{2}/m_{1}\). The original formula assumes a circular orbit, which would miss eccentric binaries where accretion may occur near the peri-center separation; to account for this, we use the peri-center distance \(p\) instead. When the stellar radius of the secondary star (\(R_{2}\)) is greater than or equal to the Roche lobe radius (\(R_{\rm RL,2}\)), the secondary star fills its Roche lobe, and the accretion process might result in observable radiation. The \(R_{2}/R_{\rm RL,2}\) values of the BH-MS binaries in our models are below \(10^{-3}\), indicating that no accretion occurs in these cases.

The BH-WD binaries have the potential to become ultraluminous X-ray sources (ULXs). Detailed studies of the dynamical formation scenarios for these ULXs in globular cluster environments have been conducted by Ivanova et al. (2010). One BH-WD binary in our simulations has a period of 2.5 days and a peri-center distance (\(p\)) of \(2R_{\odot}\), located \(\sim 4.5\) pc away from the cluster center. Its ratio \(R_{2}/R_{\rm RL,2}\) is \(\sim 0.04\), which does not yet reach the criterion for accretion.

In our investigation of the BH-MS binaries, we have discovered that their formation occurs through a similar dynamical channel. The MS star originates from a primordial binary of two MS stars (MS-MS). The BH originates from a primordial binary of two massive stars, which forms a BBH. The formation process of the BH-MS binaries in the Bin-BH model involves several steps:

1. The BBH undergoes several interactions with other BHs in the cluster.
2. After one of the BHs escapes from the cluster following a strong interaction with an intruder, it becomes a single BH.
3. This single BH eventually encounters the MS-MS binary and participates in a binary exchange event.
4. As a result of the binary exchange, the BH joins the MS-MS binary, forming the BH-MS binary.

The described process is visually illustrated in Figure 14. The dynamical formation of BH-MS binaries in star clusters has been discussed in several works (Kremer et al., 2018; Di Carlo et al., 2023; Rastello et al., 2023; Tanikawa et al., 2023).

Figure 14: Illustration of the BH-MS formation process. The black and grey circles represent BHs, and the blue circles represent MS stars.

Figure 13: The line-of-sight velocities of individual bright stars and binaries; detectable binaries with \(|\Delta v_{\rm LOS}|>0.3\) km/s are indicated as green dots.

Although no observable events from interacting BwBHs occur at \(11.5\,\mathrm{Gyr}\), we can estimate the frequency of such events by collecting the interacting BwBHs recorded over the whole evolution of the star clusters.
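A minimal sketch of Equation (7) with the peri-centre substitution is given below. The WD radius of 0.013 \(R_{\odot}\) is an assumed illustrative value, chosen so that the BH-WD case quoted above (masses from Table 3, \(p\simeq 2\,R_{\odot}\)) recovers the quoted \(R_{2}/R_{\rm RL,2}\sim 0.04\).

```python
import numpy as np

def roche_lobe_fraction(m1, m2, p, r2):
    """R2/R_RL,2 from the Eggleton formula of Equation (7), with the
    semi-major axis replaced by the peri-centre distance p (lengths in Rsun)."""
    q = m2 / m1
    r_rl2 = p * 0.49 * q ** (2 / 3) / (0.6 * q ** (2 / 3) + np.log(1 + q ** (1 / 3)))
    return r2 / r_rl2

# BH-WD example from the text: m1 = 8.2, m2 = 0.52 Msun, p ~ 2 Rsun, and an
# assumed WD radius of ~0.013 Rsun -> ratio ~ 0.04, well below the
# R2/R_RL,2 >= 1 criterion for Roche-lobe overflow used below.
print(roche_lobe_fraction(m1=8.2, m2=0.52, p=2.0, r2=0.013))
```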
The criterion to select interacting BwBHs is \(R_{2}/R_{\rm RL,2}\geq 1\). Events that occurred in the first 100 Myr are excluded, as they mostly involve primordial binaries that are not significantly affected by stellar dynamics. The results are summarized in Table 4. The Bin-BH and Bin-BH-Alt models have a dozen such interacting BwBHs, including both primordial and dynamically formed BwBHs; the dynamically formed ones contribute approximately half. The secondary stars involved in these BwBHs span several stellar types, including one BH-NS system, which can lead to a GW merger. The Bin-noBH model also includes 5 events, all of which involve primordial binaries. Among these events, four are BH-MS binaries, and one is a BH-NS binary. Despite the high supernova kick velocities in the Bin-noBH model, these binaries were strongly bound before the supernovae, and the random natal kick did not disrupt them; instead, the binaries escaped from the cluster after the kick. In general, the formation rate of interacting BwBHs is estimated to be about one per 2 Gyr. Therefore, the probability of detecting an interacting BwBH in the present-day Pal 5 is practically zero.

The noBin-BH and Bin-noBH-F models do not exhibit any interacting BwBH events, and thus they are not included in the table. One common feature of these two models is the absence of massive primordial binaries, in contrast to all other models, which have the OB binary properties from Sana et al. (2012). As a result, the possibility of dynamical formation of BwBHs is also low in these models. One important channel for the formation of interacting BwBHs is the dynamical exchange of binary components after a close encounter between a BH and a binary; the lack of primordial binaries in these models suppresses this formation channel.

Multi-epoch observations of \(|\Delta v_{\rm LOS}|\) can also be used to detect non-interacting BwBHs. For instance, utilizing multi-epoch MUSE spectroscopy, Giesers et al. (2018, 2019) discovered three BwBHs in NGC 3201. The stellar companions in these BwBHs have masses of \(0.6-0.8~M_{\odot}\). The four BH-MS binaries in the Bin-BH model at \(11.5\) Gyr have comparable companion masses. Therefore, it is possible to detect BHs in Pal 5 via multi-epoch observations of \(|\Delta v_{\rm LOS}|\). However, due to the long periods of these binaries, a long-term observation plan (several years) is needed to accurately constrain the masses of the BHs. Despite the fact that these binaries are not \(v_{\rm LOS}\)-variable over a short baseline of a few months, they may still be found: they should appear as member stars according to their position in the CMD, parallax and proper motion, but they have a large \(v_{\rm LOS}\) offset. A solar-type star orbiting a \(15\,\mathrm{M}_{\odot}\) BH with a \(10^{4}\,\mathrm{d}\) period has an orbital velocity of \(\sim 25\,\mathrm{km/s}\). This predicted signal is worth looking for.

### Color-magnitude diagram

By utilizing the GalevNB code, we can convert our simulation data into mock photometry. As an example, we present the color-magnitude diagram (CMD) of the Bin-BH model at \(11.5\) Gyr, using the HST \(F555W\) and \(F814W\) filters and the CSST \(g\), \(i\), \(u\) and \(y\) filters (Figure 15). In the CSST filters, we observe binary stars distributed between the MS and the WD sequence. These binaries consist of a WD and a low-mass main-sequence star (LMS).
Similar features in the CMD have been seen in the \(N\)-body simulations of Pang et al. (2022) (see figure 5 in Pang et al. 2022)1. In these binary systems, the luminosity is mainly dominated by the WD, as both components have very similar masses. They are considered candidates for cataclysmic variable (CV) stars.

Footnote 1: In Pang et al. (2022), the CMD contained some horizontal strips of WD-LMS binaries, which were caused by a bug in the petar code: in that version of the code, some WDs had not evolved to the age of the snapshot. In the CMD generated for this work, we have fixed this bug (in the commit of Jul 25, 2023 on the master branch of the petar code on GitHub), resulting in a more accurate representation of the stellar populations.

The CSST \(g\)-band magnitudes of the WDs and CVs are below 26 mag, while the corresponding HST F555W magnitudes are above 26 mag. Therefore, CSST has the advantage of potentially detecting many WD and CV candidates in Pal 5. We also highlight the BH-MS binaries listed in Table 3. Among them, three have HST F555W magnitudes below 21 mag and CSST \(g\)-band magnitudes below 16 mag. If multi-epoch spectroscopic observations can reach this magnitude limit, it is possible to detect these binaries via the observation of \(|\Delta v_{\rm LOS}|\).

### Mass functions

The present-day mass function of a star cluster is influenced by various factors, including the IMF, mass segregation, and tidal evaporation. To investigate the impact of primordial binaries and black holes (BHs) on the mass function, we compare the mass functions of our \(N\)-body models with the observed ones. In order to make a meaningful comparison with the observed data, we select snapshots from our models that closely match the observed surface number density profile (\(\Sigma(R)\)), as shown in the lower panel of Figure 7.

It is important to consider the resolution limitations when comparing with observations. The widest binary in our models has a semi-major axis of approximately \(1.8\times 10^{4}\) AU. Given the distance to Pal 5, a spatial resolution of better than \(1^{\prime\prime}\) is required to resolve this binary. The best resolution achievable by HST is around \(0.05^{\prime\prime}\), which means that only a small fraction of wide binaries, with periods above \(1.4\times 10^{7}\) days, can potentially be resolved. Therefore, we assume that most binaries remain unresolved in observations and calculate their magnitudes by summing the fluxes of their two components. Figure 15 shows the color-magnitude diagram (CMD) of unresolved binaries, which appear redder and brighter compared to single stars. To investigate this effect, we compare the (actual) total masses (\(m_{\rm tot}\)) of binaries with the masses converted from their F555W-band magnitudes (\(m_{\rm obs}\)). For main-sequence binaries, we calculate the absolute F555W-band flux and then determine the mass of a single star with the closest flux value, which serves as the converted mass \(m_{\rm obs}\). The comparison between \(m_{\rm tot}\) and \(m_{\rm obs}\) is depicted in Figure 16. The difference between \(m_{\rm tot}\) and \(m_{\rm obs}\) is highly sensitive to the mass ratio \(q=m_{2}/m_{1}\), defined as the minimum mass divided by the maximum mass of the two components, and to the luminosity ratio as well. A higher \(q\) leads to a larger difference between the \(m_{\rm tot}\) and \(m_{\rm obs}\) values.
Consequently, the \(m_{\rm obs}\) of equal-mass unresolved binaries can be significantly lower than their true \(m_{\rm tot}\). Furthermore, for binaries with the lowest \(q\) values, there is a systematic offset between \(m_{\rm tot}\) and \(m_{\rm obs}\). As a result, if unresolved main-sequence binaries cannot be distinguished from single stars, the total masses of all these binaries will be underestimated. The minimum offset between \(m_{\rm tot}\) and \(m_{\rm obs}\) is set by the minimum \(q\). There is a nonlinear relation between stellar luminosity (\(L\)) and mass (\(m\)): for MS stars in the mass range 0.3-0.8 \(M_{\odot}\), \(L\propto m^{4}\), and thus we can roughly estimate the relation between the total binary mass (\(m_{\rm tot}\)) and the binary mass used in the mass function estimation (\(m_{\rm obs}\)) as \[\frac{m_{\rm obs}}{m_{\rm tot}}\approx\frac{(1+q^{4})^{1/4}}{1+q}. \tag{8}\] In our model, the minimum \(q\) is about 0.12, which corresponds to a maximum \(m_{\rm obs}/m_{\rm tot}\approx 0.93\).

\begin{table}
\begin{tabular}{r l l r r r r r r}
\hline \hline
\multicolumn{9}{c}{Bin-BH} \\
\hline
Time [Myr] & Primordial & Type & \(m_{1}\) [\(M_{\odot}\)] & \(m_{2}\) [\(M_{\odot}\)] & Period [days] & \(p\) [\(R_{\odot}\)] & eccentricity & \(R_{2}/R_{\rm RL,2}\) \\
\hline
109 & True & BH-HeHG & 6.7 & 0.92 & 1.1e+02 & 1.9e+02 & 2.560109e-05 & 1.0 \\
188 & True & BH-MS & 7.5 & 3.3 & 0.84 & 8.3 & 1.692295e-05 & 1.0 \\
268 & True & BH-AGB & 20 & 2.3 & 2.7e+03 & 2.3e+03 & 3.729159e-09 & 1.0 \\
861 & True & BH-WD & 6.3 & 0.0083 & 0.061 & 1.2 & 0.04394221 & 1.0 \\
5997 & False & BH-MS & 32 & 0.42 & 7e+05 & 0.38 & 0.999964 & 9.0 \\
7141 & False & BH-MS & 18 & 0.2 & 9.9e+03 & 0.14 & 0.9999717 & 14.1 \\
7474 & True & BH-NS & 7.5 & 1.2 & 1.7e-08 & 5.8e-05 & 4.307228e-09 & 1.0 \\
\hline
\multicolumn{9}{c}{Bin-BH-Alt} \\
\hline
132 & True & BH-HeHG & 11 & 0.84 & 2e+02 & 3.3e+02 & 1.158092e-05 & 1.0 \\
134 & True & BH-HG & 6.8 & 4.1 & 3.6 & 14 & 0.3512435 & 1.5 \\
138 & True & BH-AGB & 20 & 1.6 & 8.6e+03 & 4.4e+03 & 0.09723035 & 1.1 \\
190 & True & BH-HG & 8.3 & 3.5 & 40 & 1.1e+02 & 0 & 1.0 \\
4125 & False & BH-MS & 17 & 0.34 & 1.1e+07 & 0.22 & 0.9999996 & 11.2 \\
\hline
\multicolumn{9}{c}{Bin-noBH} \\
\hline
116 & True & BH-MS & 9.2 & 2.9 & 1.1 & 10 & 6.648911e-05 & 1.0 \\
147 & True & BH-MS & 10 & 2.8 & 1.1 & 10 & 6.142399e-05 & 1.0 \\
159 & True & BH-MS & 8.2 & 2.7 & 1.1 & 9.7 & 0.0001996311 & 1.0 \\
209 & True & BH-NS & 7.5 & 1.5 & 1.7e-08 & 5.9e-05 & 2.710078e-08 & 1.0 \\
1232 & True & BH-MS & 2.5 & 0.99 & 0.48 & 3.9 & 3.125196e-05 & 1.1 \\
\hline \hline
\end{tabular}
\end{table} Table 4: The accretion events of BwBHs after 100 Myr. The "Primordial" column indicates whether the binary is primordial (formed during the initial star cluster formation) or dynamically formed (through interactions within the star cluster after its formation). The "Type" column indicates the combination of binary companions. The secondary stellar types involved in the accretion events include: MS, HG, GB, CHeB, AGB, HeHG (Hertzsprung-gap naked helium star), WD and NS (neutron star).
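Equation (8) can be checked numerically with the short sketch below; the printed values are simply illustrations of how strongly the flux-converted masses of unresolved binaries underestimate the true totals.

```python
import numpy as np

def mobs_over_mtot(q):
    """Equation (8): ratio of the flux-converted mass to the true total mass
    of an unresolved MS-MS binary, using L ~ m^4 so m_obs = m1 (1+q^4)^(1/4)."""
    q = np.asarray(q, dtype=float)
    return (1 + q**4) ** 0.25 / (1 + q)

for q in (0.12, 0.5, 1.0):
    print(f"q = {q:.2f}: m_obs/m_tot = {mobs_over_mtot(q):.2f}")
# equal-mass binaries (q = 1) are underestimated most, with a ratio of ~0.59
```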
\begin{table}
\begin{tabular}{l r r r r r r r}
\hline \hline
Type & \(m_{1}\) [\(M_{\odot}\)] & \(m_{2}\) [\(M_{\odot}\)] & Period [days] & \(p\) [\(R_{\odot}\)] & eccentricity & \(R_{2}/R_{\rm RL,2}\) & \(r\) [pc] \\
\hline
BBH & 39 & 27 & 5.9 & 41 & 0.26 & 8.1e-06 & 5.1e+03 \\
 & 37 & 30 & 5.9e+02 & 12 & 0.99 & 2.9e-05 & 9.1e+03 \\
 & 7.5 & 7.4 & 3.8 & 13 & 0.49 & 6.5e-06 & 9e+03 \\
 & 8.2 & 7.8 & 18 & 67 & 0.07 & 1.3e-06 & 5.4 \\
 & 7.6 & 7.6 & 24 & 61 & 0.29 & 1.4e-06 & 6.6 \\
 & 35 & 31 & 2.1e+04 & 6.1e+03 & 0.53 & 5.8e-08 & 8.7 \\
\hline
BH-MS & 21 & 0.66 & 1.8e+05 & 2.1e+04 & 0.45 & 0.00021 & 3.3 \\
 & 16 & 0.71 & 3.3e+04 & 1.1e+03 & 0.90 & 0.0041 & 8.1 \\
 & 13 & 0.68 & 4.8e+04 & 9e+03 & 0.33 & 0.00044 & 3.4 \\
 & 15 & 0.21 & 1.1e+07 & 7.5e+04 & 0.86 & 2.7e-05 & 5.5 \\
\hline
BH-WD & 8.4 & 1.1 & 1.4e+02 & 2.4e+02 & 0.01 & 0.00014 & 9.2 \\
 & 7.5 & 1 & 2e+02 & 2.7e+02 & 0.06 & 0.00012 & 13 \\
 & 16 & 0.74 & 1.8e+05 & 7.7e+03 & 0.78 & 8.7e-06 & 4.7 \\
 & 15 & 1 & 5.4e+06 & 9.6e+04 & 0.71 & 4.4e-07 & 6.4 \\
 & 8.2 & 0.52 & 2.5 & 2 & 0.87 & 0.039 & 4.5 \\
 & 15 & 0.69 & 3e+06 & 1e+05 & 0.52 & 6.7e-07 & 7.7 \\
\hline \hline
\end{tabular}
\end{table} Table 3: The parameters of the BwBHs in the Bin-BH model at 11.5 Gyr. \(m_{1}\) and \(m_{2}\) denote the masses of the primary and secondary components, respectively; \(p\) represents the peri-center distance; \(R_{2}/R_{\rm RL,2}\) indicates the secondary stellar radius relative to the Roche-lobe overflow radius; and \(r\) represents the distance of the binary from the cluster center.

To compute the mass functions, we collect stars within the same observational fields used by the HST observations: the Smith field (Grillmair and Smith, 2001) and the Kuepper field (unpublished; reported in Baumgardt et al., 2023), as shown in Figure 17. The center position of the star cluster model is defined as the centre-of-mass of the stars located within the core of the cluster; we adjust it to match the observed position of Pal 5. The Smith field encompasses both the core and halo regions of Pal 5, while the Kuepper field covers the outer region. To investigate the radial dependence of the mass function in different regions of Pal 5, we divided the Smith and Kuepper fields into three radial bins. These bins correspond to different distances from the cluster center, allowing us to obtain mass functions as a function of radial distance.

Figure 16: The total masses (\(m_{\rm tot}\)) vs. the F555W-band flux-converted masses (\(m_{\rm obs}\)) for main-sequence binaries of the Bin-BH model at 12 Gyr. The grey line shows the case of \(m_{\rm tot}=m_{\rm obs}\). Colors represent the mass ratio (\(q\)).

Figure 17: The 2-dimensional density map of the noBin-BH model at 11.8 Gyr. The color contours with solid lines represent the Smith and Kuepper fields, which have available HST data. The boundaries of the three ring-shaped radial bins are indicated by dashed grey circles. Two approaches are employed for selecting samples to measure the mass functions: 1) using the intersection between the Smith/Kuepper fields and the ring regions (referred to as "Field" regions); and 2) using only the ring regions themselves (referred to as "Ring" regions) to enhance statistical accuracy.

Figure 15: The color-magnitude diagram of the Bin-BH model at 11.5 Gyr. Red points are single stars. Other points are unresolved binaries, where colors represent the mass ratio (\(q\)). The black crosses are the BH-MS binaries listed in Table 3. The left panel corresponds to the HST F555W-F814W and F555W filters, while the right two panels correspond to the CSST g-i and g, and u-y and u filters, respectively.
The intersection between the two observational fields and the three radial bins (referred to as "Field" regions) is used for selecting samples of stars. It is important to note that, due to the limited observational coverage and stochastic scatter, the comparison between the observed and modeled mass functions may be affected. To improve the statistical robustness, we also select stars for measuring the mass functions using only the three radial bins of the \(N\)-body models (referred to as "Ring" regions). By comparing the mass functions obtained from the \(N\)-body models and from the observed data, we can investigate the effects of primordial binaries and black holes on the mass function of Pal 5.

We conducted an analysis to assess the impact of unresolved binaries on the determination of the mass function in the Kuepper field, using the Bin-BH model. The results are depicted in Figure 18. We considered two scenarios for the treatment of binaries in the mass function:

* RB (Resolved Binaries): all binaries are resolved, meaning that the individual masses of the binary components are counted in the mass function.
* URB (Unresolved Binaries): \(m_{\rm obs}\) is utilized for the mass estimation. This scenario represents a real observation in which binaries are unresolved.

The mass functions obtained from the RB and URB scenarios display steeper slopes compared to the observational mass function. In Figure 19, we present a comparison between the mass functions obtained from the \(N\)-body models using the URB method and the observational data. The upper panel of Figure 19 shows the number counts \(n(m)\). The \(N\)-body models exhibit a comparable number of stars within the three Field regions when compared to the observed data. The Bin-noBH model shows a slightly higher number of stars, indicating that a longer evolution time of more than 12 Gyr might be necessary for a better match; however, this slight discrepancy does not impact our comparison with the observed normalized counts. The middle and lower panels of Figure 19 display the normalized cumulative distributions, \(\overline{N_{\rm f}}(m)\) for the Field regions and \(\overline{N_{\rm a}}(m)\) for the Ring regions. In the inner radial bin, no significant difference is observed when comparing \(\overline{N_{\rm f}}(m)\) and \(\overline{N_{\rm a}}(m)\). However, for the middle and outer radial bins, a noticeable stochastic scatter is present in \(\overline{N_{\rm f}}(m)\). This scatter is particularly evident in the \(\overline{N_{\rm f}}(m)\) of the Bin-BH model in the outer radial bin. These findings suggest that the observational data may exhibit similar scatter, and it is important to consider this when comparing the \(N\)-body models with the observational data.

The standard way to characterize a mass function is with a power-law form, \[n(m)=Cm^{-\alpha}, \tag{9}\] where \(C\) is a normalisation constant and \(\alpha\) is the power-law index used for fitting. We employ the fitting method outlined in Khalaj & Baumgardt (2013) to determine the statistical error accurately. The formula for fitting \(\alpha\) is \[\alpha=1+n\left[\sum_{i=1}^{n}\ln\frac{m_{i}}{m_{\rm min}}-n\frac{\ln X}{1-X^{\alpha-1}}\right]^{-1}, \tag{10}\] where \(n\) represents the total number of stars, \(m_{i}\) is the mass of an individual star, \(m_{\rm min}\) is the minimum stellar mass, and \(X\) is the ratio of the maximum to the minimum stellar mass. Iterative calculations are necessary to solve this fitting equation. The corresponding error is \[\sigma(\alpha)=\frac{1}{\sqrt{n}}\left((\alpha-1)^{-2}-\ln^{2}X\frac{X^{\alpha-1}}{(1-X^{\alpha-1})^{2}}\right)^{-1/2}. \tag{11}\]
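Since Equation (10) is the stationarity condition of the truncated power-law likelihood, a numerically robust sketch is to maximize that likelihood directly and then evaluate the error of Equation (11). The scipy-based minimizer and the synthetic test sample below are illustrative choices, not the published implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_powerlaw_alpha(m):
    """Maximum-likelihood fit of n(m) = C m^-alpha (Equation 9); solving
    Equation (10) is equivalent to maximizing this likelihood. Returns
    (alpha, sigma_alpha), with the error taken from Equation (11)."""
    m = np.asarray(m, dtype=float)
    n, m_min, m_max = m.size, m.min(), m.max()
    log_sum = np.sum(np.log(m))

    def neg_loglike(a):
        # normalisation integral of m^-a over [m_min, m_max]
        norm = (m_max ** (1 - a) - m_min ** (1 - a)) / (1 - a)
        return a * log_sum + n * np.log(norm)

    alpha = minimize_scalar(neg_loglike, bounds=(0.01, 3.0), method="bounded").x
    X = m_max / m_min
    t = X ** (alpha - 1)
    sigma = ((alpha - 1) ** -2 - np.log(X) ** 2 * t / (1 - t) ** 2) ** -0.5 / np.sqrt(n)
    return alpha, sigma

# Synthetic check: draw from n(m) ~ m^-0.5 on [0.3, 0.8] Msun and recover alpha
rng = np.random.default_rng(4)
u = rng.uniform(0.0, 1.0, 5000)
m = (0.3**0.5 + u * (0.8**0.5 - 0.3**0.5)) ** 2
print(fit_powerlaw_alpha(m))   # alpha should come out close to 0.5
```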
The formula for fitting \(\alpha\) is: \[\alpha=1+n\left[\sum_{i=1}^{n}\ln\frac{m_{i}}{m_{\rm min}}-n\frac{\ln X}{1-X^{\alpha-1}}\right]^{-1}, \tag{10}\] where \(n\) represents the total number of stars, \(m_{i}\) is the mass of an individual star, \(m_{\rm min}\) is the minimum mass of stars, and \(X\) is the ratio of the maximum to the minimum masses of stars. Iterative calculations are necessary to solve this fitting equation. The corresponding error can be described as: \[\sigma(\alpha)=\frac{1}{\sqrt{n}}\left((\alpha-1)^{-2}-\ln^{2}X\frac{X^{\alpha-1}}{(1-X^{\alpha-1})^{2}}\right)^{-1/2} \tag{11}\]

The power-law indices of the mass functions (\(\alpha\)) obtained from fitting are summarized in Table 5. In the inner radial bin, the \(\alpha\) values for the three Bin models are in rough agreement with the observational data, while the noBin-BH model shows a significantly higher \(\alpha\). This result remains consistent when comparing the mass functions within the Field and the Ring regions. In the middle and outer radial bins, all of the \(N\)-body models exhibit higher \(\alpha\) values compared to the observational data. This discrepancy is more pronounced when considering the normalized cumulative distribution in the Ring regions (\(\overline{N_{\rm a}}(m)\)). These differences suggest that the \(N\)-body models exhibit more pronounced mass segregation than what is indicated by the observational data, although we need to take into account the potential stochastic scatter inherent in the observational data. The presence of BHs does not appear to have a clear impact on the mass functions. The models incorporating primordial binaries exhibit better agreement with the observed data, particularly in the inner radial bin.

Figure 18: The mass functions of the Bin-BH model at 12 Gyr are presented in the Kuepper field, with the radial region indicated in the title. The observational data is shown as a reference. We compare different treatments of binaries in the mass function. "URB" indicates the use of \(m_{\rm obs}\) for mass estimation, and "RB" denotes the counting of masses for individual binary components. The upper panel displays the normalized cumulative counts, while the lower panel shows the normalized histograms.
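For reference, the following is a minimal numpy sketch of the iterative fit of Eqs. (10)-(11); the fixed-point iteration scheme, the initial guess, and the function name are our own illustrative choices rather than prescriptions from Khalaj & Baumgardt (2013).

```python
import numpy as np

def fit_power_law_index(masses, alpha0=2.0, tol=1e-8, max_iter=200):
    """Solve Eq. (10) for the power-law index alpha by fixed-point
    iteration and return the statistical error of Eq. (11)."""
    m = np.asarray(masses, dtype=float)
    n = m.size
    m_min = m.min()
    X = m.max() / m_min                 # ratio of maximum to minimum mass
    S = np.sum(np.log(m / m_min))       # sum_i ln(m_i / m_min)

    alpha = alpha0
    for _ in range(max_iter):
        alpha_new = 1.0 + n / (S - n * np.log(X) / (1.0 - X ** (alpha - 1.0)))
        if abs(alpha_new - alpha) < tol:
            alpha = alpha_new
            break
        alpha = alpha_new

    # Eq. (11): statistical uncertainty of the fitted index
    bracket = (alpha - 1.0) ** -2 \
        - np.log(X) ** 2 * X ** (alpha - 1.0) / (1.0 - X ** (alpha - 1.0)) ** 2
    sigma = bracket ** -0.5 / np.sqrt(n)
    return alpha, sigma
```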
## 4 Limitations and Future Directions

### Uncertainty of initial condition

Due to the computational expense, we are unable to explore the entire parameter space of the initial condition of Pal 5, resulting in several aspects not being addressed in this study. These include assumptions regarding the properties of primordial binaries, the evolution of the Galaxy, the uncertainty associated with stellar evolution, the gravitational wave kicks following mergers of binary black holes (BBHs), and the realistic formation environment of the cluster. In our study, we have adopted two extreme assumptions for the primordial binaries (Kroupa and FlatLog) with a 100% initial binary fraction. However, these assumptions may not accurately reflect the true properties of primordial binaries in Pal 5. Nonetheless, Fig. 9 suggests that the initial period distribution has no significant impact on the survival fraction of binaries as a function of period, as long as the cluster possesses a similar initial density profile and orbit in the Galaxy. Furthermore, the evolution of the binary fraction (\(\overline{N}_{\rm bb}\)) can be utilized to derive the period evolution for different assumptions regarding the initial binary populations. By using a 100% initial binary fraction, we also explore the maximum potential dynamical impact of primordial binaries. The wide range of periods considered allows us to investigate the behavior of hard and soft binaries with and without black holes (BHs).

Figure 19: The mass functions of four \(N\)-body models in three radial bins, with the observational data shown as a reference. The upper panel displays the number counts \(n(m)\), the middle panel shows the normalized cumulative distribution \(\overline{N}_{\rm f}(m)\) for the Field regions, and the lower panel shows the normalized cumulative distribution \(\overline{N}_{\rm a}(m)\) for the Ring regions.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline R[arcmin] & region & Observation & noBin-BH & Bin-BH & Bin-noBH & Bin-BH-Alt \\ \hline 0.000 - 1.250 & Field & 0.390\(\pm\)0.131 & 0.835\(\pm\)0.119 & 0.490\(\pm\)0.129 & 0.198\(\pm\)0.107 & 0.600\(\pm\)0.129 \\ & Ring & & 0.882\(\pm\)0.104 & 0.545\(\pm\)0.114 & 0.300\(\pm\)0.093 & 0.607\(\pm\)0.117 \\ 1.250 - 3.667 & Field & 0.188\(\pm\)0.138 & 0.987\(\pm\)0.135 & 0.602\(\pm\)0.141 & 0.637\(\pm\)0.115 & 0.759\(\pm\)0.141 \\ & Ring & & 0.997\(\pm\)0.052 & 0.526\(\pm\)0.053 & 0.508\(\pm\)0.044 & 0.678\(\pm\)0.056 \\ 2.833 - 8.333 & Field & 0.280\(\pm\)0.221 & 1.174\(\pm\)0.226 & 0.525\(\pm\)0.187 & 1.140\(\pm\)0.154 & 1.222\(\pm\)0.198 \\ & Ring & & 1.127\(\pm\)0.061 & 0.819\(\pm\)0.052 & 0.840\(\pm\)0.046 & 0.920\(\pm\)0.062 \\ \hline \hline \end{tabular} \end{table} Table 5: Fitting result of the power-law indices (\(\alpha\)) of the mass functions in different radial bins. The column labeled "region" distinguishes between the Smith and Kuepper fields (referred to as "Field") and the ring regions (referred to as "Ring").

Our model assumes a static Galactic environment, which is consistent with the setup employed in G21 to facilitate proper comparison. Incorporating a realistic time-dependent Galactic potential, which may be important to understand the density profile of the stream (Pearson, Price-Whelan & Johnston, 2017), is challenging due to the limited observational constraints on Galactic evolution. It is plausible that Pal 5 was formed in a significantly different Galactic environment, potentially leading to variations in mass loss and density evolution compared to our models. However, we believe that the overall trend driven by the presence of BHs should be similar. Thus, our results offer a general perspective on how the existence of BHs impacts the binary populations. The retention of BHs in clusters after supernovae remains an open question based on stellar evolution models. Our models do not consider gravitational wave kicks following BBH mergers, which could lead to an overprediction of massive BBHs with masses exceeding 100 \(M_{\odot}\). Although such BBHs can influence the timescale of cluster disruption, as shown in Figures 4 and 6, their impact on the period distribution of binaries is limited since the hard-soft boundary is not determined by a single specific BBH. The initial conditions of the clusters assume spherically symmetric Plummer models, similar to previous N-body simulations of GCs. However, the initial complexity of GC formation, including irregular cluster structures prior to achieving virial equilibrium and the presence of gas, may affect the binary populations during the gas-embedded phase.
### Observation of binaries

In Section 3.2.4, we conducted an analysis to assess the feasibility of detecting binaries by measuring the radial velocity difference (\(|\Delta v_{\mathrm{LOS}}|\)) through multiple epochs of observation, with a maximum time interval between epochs of half a year. The results indicate that approximately 40 binaries could be identified, covering a period distribution ranging from a few to \(10^{4}\) days. The model without BHs tends to exhibit a higher fraction of long-period binaries. While this observation cannot directly constrain the existence of BHs, it can provide insights into the presence of wide (long-period) binaries. Such information may be valuable in constraining the initial period distribution by utilizing the \(\overline{N}_{\mathrm{hb}}\) values depicted in Figure 9.

To obtain a stronger constraint on the existence of BHs, it is crucial to obtain additional observations of binaries in the period range around \(10^{5}\) days, which has proven to be challenging thus far. Furthermore, it is necessary to observe binaries in different regions of Pal 5, including the inner region and the distant tail. Given the uncertainties associated with the properties of primordial binaries, assuming an initial period distribution becomes essential for constraining the density evolution based on the observed period distribution of present-day binaries. Notably, wide binaries disrupted within the dense cluster can survive along the low-density tidal tail. Therefore, the difference in the fraction of wide binaries inside the cluster and in the distant tail can help constrain both the initial period distribution of binaries and the density evolution of clusters, ultimately shedding light on the existence of BHs.

Another approach to constraining the BH population is detecting BH-star binaries. We find four BH-MS binaries with relatively high MS masses, as shown in Tables 3 and 4 and also illustrated in Figure 8. Figure 15 suggests that the CSST has the potential to detect CVs, thereby providing additional constraints on binaries with WDs. Multi-epoch spectroscopic observations of \(|\Delta v_{\mathrm{LOS}}|\) offer another possibility to detect non-interacting BH-star binaries. By utilizing these data, we can obtain better constraints on \(\sigma_{\mathrm{LOS}}\), providing an indirect constraint on the dynamical impact from BHs in the cluster center.

## 5 Conclusions

In this study, we performed \(N\)-body simulations of the Galactic halo globular cluster Pal 5 with and without the inclusion of BHs, while considering a significant fraction of primordial binaries. Our main objectives were to investigate the influence of binaries and BHs on the cluster's dynamical evolution and to understand how the presence of BHs affects the binary populations within Pal 5. Additionally, we aimed to determine whether observations of binary populations could provide indirect evidence for the existence of BHs in Pal 5.

Our findings indicate that the presence of primordial binaries has a noticeable but not drastic effect on the cluster's dynamical evolution, consistent with the previous work of Wang et al. (2022). In models with BHs, the existence of primordial binaries alters the half-mass relaxation time (\(t_{\mathrm{rh}}\)) and reduces the number of BBHs that contribute to binary heating. However, the influence on mass loss and radial evolution is more complex.
Models with primordial binaries (Bin-BH and Bin-BH-Alt) exhibit shorter initial \(t_{\mathrm{rh}}\) compared to models without primordial binaries (the noBin-BH model). After 1 Gyr, the situation reverses due to the larger half-mass radius (\(r_{\mathrm{h}}\)) and lower total BH mass (\(M_{\mathrm{BH}}\)) in the Bin models. This trend changes again after 8 Gyr, when a massive BBH forms in Bin-BH-Alt, accelerating the cluster's dissolution (see Figure 6). Thus, the tidal dissolution time does not exhibit a simple dependence on the presence of primordial binaries. In models without BHs and with a low initial density (Bin-noBH and Bin-noBH-F), the evolution is more sensitive to the presence of primordial binaries compared to the BH models. Achieving a similar cluster at 11.5 Gyr requires a higher initial density in these cases.

Conversely, the assumption of BH existence significantly affects the population of wide binaries. Over long-term evolution, hard binaries are less affected by dynamical disruption. The fraction of hard binaries remains independent of the initial period distribution (Figure 9). The remaining fraction of wide binaries depends on the evolution of the hard-soft boundary. The period distribution of models with BHs peaks at a shorter period compared to models without BHs, consistent with the hard-soft boundary. However, we find that not all wide binaries outside the hard-soft boundary are immediately disrupted. Many wide binaries outside this boundary can persist in the cluster for a long time. This suggests that the observation of wide binaries may not readily constrain the actual hard-soft boundary, nor can it be straightforwardly used to determine the cluster's density evolution history.

We have found that multi-epoch spectroscopic observations can detect most binaries with bright stars and periods below \(10^{4}\) days. By excluding these binaries, the measurement of \(\sigma_{\mathrm{LOS}}\) of bright stars can be significantly improved, providing better indirect constraints on the BH population through dynamical analysis. Additionally, we have identified 4 BH-MS binaries in the Bin-BH model at 11.5 Gyr, which could potentially be detected using the same method, offering an additional possibility to provide evidence for the existence of BHs.

We also investigated how binaries and BHs influence the present-day mass function of Pal 5. Our results suggest that models with primordial binaries have mass functions more consistent with the observational data, while the impact of BHs on the mass function is weak. All \(N\)-body models exhibit mass segregation features that are not observed in the outer region of Pal 5. However, it is important to consider the potential impact of stochastic scatter, which may influence the conclusions drawn from the comparison. This indicates the need for alternative initial mass functions or additional observations of mass functions, with improved statistical precision, to better understand the underlying reasons for this discrepancy.

## Acknowledgements

L.W. thanks the support from the one-hundred-talent project of Sun Yat-sen University, the Fundamental Research Funds for the Central Universities, Sun Yat-sen University (22hydt09). L.W. and C.L. thank the support from the National Natural Science Foundation of China (NSFC) through grant 12073090. L.W., C.L., X.P. and B.T. thank the support from NSFC through grant 12233013. M.G.
acknowledges financial support from the grants PID2021-125485NB-C22, EUR2020-112157, CEX2019-000918-M funded by MCIN/AEI/10.13039/501100011033 (State Agency for Research of the Spanish Ministry of Science and Innovation) and the SGR-2021-01069 grant (AGAUR).

## Data Availability

The simulations underlying this article were performed on the personal computing server of the first author. The data were generated by the software petar, which is available on GitHub at [https://github.com/wang-astro/PeTar](https://github.com/wang-astro/PeTar). The stellar evolution code sse is included in petar. The galpy code for the Galactic potential is available on GitHub at [https://github.com/jobowy/galpy](https://github.com/jobowy/galpy). The initial conditions of the star cluster models are generated by the software mcluster, which is available on GitHub at [https://github.com/lwang-astro/mcluster](https://github.com/lwang-astro/mcluster). The galvenb code for mock photometry is available on GitHub at [https://github.com/xiaoyingpang/GalveNB](https://github.com/xiaoyingpang/GalveNB). The simulation data will be shared via private communication upon reasonable request.
2309.13160
How to train your VAE
Variational Autoencoders (VAEs) have become a cornerstone in generative modeling and representation learning within machine learning. This paper explores a nuanced aspect of VAEs, focusing on interpreting the Kullback-Leibler (KL) Divergence, a critical component within the Evidence Lower Bound (ELBO) that governs the trade-off between reconstruction accuracy and regularization. The KL Divergence enforces alignment between the latent variable distribution and a prior, imposing structure on the overall latent space, while leaving the distributions of the individual latent variables unconstrained. The proposed method redefines the ELBO with a mixture of Gaussians for the posterior probability, introduces a regularization term to prevent variance collapse, and employs a PatchGAN discriminator to enhance texture realism. Implementation details involve ResNetV2 architectures for both the Encoder and Decoder. The experiments demonstrate the ability to generate realistic faces, offering a promising solution for enhancing VAE-based generative models.
Mariano Rivera
2023-09-22T19:52:28Z
http://arxiv.org/abs/2309.13160v3
# Gamix-Vae: A VAE with Gaussian Mixture Based Posterior

###### Abstract

Variational Autoencoders (VAEs) have become a cornerstone in generative modeling and representation learning within machine learning. This paper explores a nuanced aspect of VAEs, focusing on interpreting the Kullback-Leibler (KL) Divergence, a critical component within the Evidence Lower Bound (ELBO) that governs the trade-off between reconstruction accuracy and regularization [1, 2]. The KL Divergence enforces alignment between the latent variable distribution and a prior, imposing structure on the overall latent space, while leaving the distributions of the individual latent variables unconstrained. The proposed method redefines the ELBO with a mixture of Gaussians for the posterior probability, introduces a regularization term to prevent variance collapse, and employs a PatchGAN discriminator to enhance texture realism. Implementation details involve ResNetV2 architectures for both the Encoder and Decoder. The experiments demonstrate the ability to generate realistic faces, offering a promising solution for enhancing VAE-based generative models.

Mariano Rivera+, Centro de Investigacion en Matematicas A.C., Guanajuato, Gto. 36120, Mexico

Keywords: VAE, ELBO, Posterior Collapse, Face Generation, Gaussian Mixture.

Footnote †: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.

## 1 Introduction

Variational Autoencoders (VAEs) have emerged as a powerful framework for generative modeling and representation learning in machine learning [1, 3]. Although more complex variations of VAEs have been proposed (e.g., Hierarchical VAEs [4], Wasserstein-AE [5], Beta-VAE [6] and VQ-VAEs [7]), the original VAEs continue to be of great interest due to their elegant theoretical formulation. In the pursuit of understanding and enhancing the capabilities of VAEs, this paper presents a variant of a fundamental component: the Evidence Lower Bound (ELBO). The ELBO is a critical objective function for training VAEs, encapsulating the trade-off between reconstruction accuracy and regularization. Our focus in this work centers on a detailed analysis of the interpretation of the regularization term within the ELBO, known as the Kullback-Leibler (KL) Divergence [1, 3, 8]. The KL Divergence has a pivotal role in VAEs, as it enforces alignment between the distribution of latent variables and a prior distribution, often assumed to be a Normal distribution with zero mean and unit covariance. This divergence term promotes the emergence of structured latent representations, a cornerstone of effective generative models. However, our investigation delves deeper to unveil an intriguing insight: while encouraging the desired structure in the overall latent space, the KL Divergence does not impose any constraints on the individual distributions of each latent variable. This nuanced revelation has significant implications for the diversity of latent vector behaviors in VAEs. In light of these findings, our research contributes a refined perspective on the role of the KL Divergence and its implications for latent space modeling. Our proposal addresses a well-known challenge in VAEs: the posterior collapse phenomenon [9], which can lead to a loss of distinctiveness in generated data.
Specifically, we demonstrate the successful application of our approach in generating realistic facial images while retaining crucial variations in features like hairstyles and specific facial attributes [10, 7]. This paper's subsequent sections elaborate on the standard ELBO's derivation. Then we present a mixture of Gaussians for parametrizing the posterior probability \(q(\mathbf{z}|\mathbf{x})\) in the VAE, which leads us to redefine the ELBO, and we present practical implementation steps. Finally, we present experimental results that support our claims. By advancing our understanding of the regularization mechanism in VAEs and its practical implications, we contribute to the ongoing pursuit of enhancing generative modeling techniques for complex data distributions. Through this work, we hope to shed light on the nuanced interplay between regularization and diversity in latent space representations, contributing to the broader advancement of generative models in machine learning and artificial intelligence.

## 2 Background

The Variational Autoencoder (VAE) is an extension of the traditional Autoencoder (AE) concept [1, 3]. The pivotal breakthrough of the VAE lies in its novel approach to shaping the distribution of latent variables, urging them to adhere to a simplified parametric distribution. This streamlining eases the generation of latent variables. Hence, it also facilitates the creation of intricate data instances, as demonstrated by the case of faces, the example we utilize to illustrate our proposition. The goal is to learn the generative model of a set of \(N\) data points (faces) \(\mathbf{X}=\{\mathbf{x}_{i}\}_{i=1}^{N}\); assuming a set of latent variables, one for each data point: \(\mathbf{Z}=\{\mathbf{z}_{i}\}_{i=1}^{N}\); with \(\mathbf{x}_{i}\in\mathbb{R}^{m}\), \(\mathbf{z}_{i}\in\mathbb{R}^{d}\), and \(d<m\). In this way, we have \((\mathbf{x}_{i},\mathbf{z}_{i})\sim p(\mathbf{x},\mathbf{z})=p(\mathbf{x}|\mathbf{z})p(\mathbf{z})\), which suggests a two-step generative model:

1. Generation of a latent variable \(\mathbf{z}_{i}\sim p(\mathbf{z})\);
2. Generation of the associated face \(\mathbf{x}_{i}\sim p(\mathbf{x}|\mathbf{z}=\mathbf{z}_{i})\).

The distribution of the latent variables \(p(\mathbf{z})\) (prior) can be assumed to be, for example, Gaussian with a mean of zero and diagonal covariance. Then one must estimate the conditional distribution \(p(\mathbf{x}|\mathbf{z})\) (likelihood). The problem arises because only the faces, \(\mathbf{X}\), are observed, while the latent variables, \(\mathbf{Z}\), remain hidden. That is, one does not know the pairs \((\mathbf{x}_{i},\mathbf{z}_{i})\). To determine the \(\mathbf{z}_{i}\) that corresponds to each \(\mathbf{x}_{i}\), it is necessary to estimate the conditional distribution \(p(\mathbf{z}|\mathbf{x})\). Using Bayes' rule, one can write the posterior as \(p(\mathbf{z}|\mathbf{x})=p(\mathbf{x}|\mathbf{z})p(\mathbf{z})/p(\mathbf{x})\). However, calculating the evidence \(p(\mathbf{x})\) is an intractable problem [1, 3]. Therefore, instead of calculating the true posterior \(p(\mathbf{z}|\mathbf{x})\), an approximation \(q(\mathbf{z}|\mathbf{x})\) is calculated using a neural network.
A VAE is an autoencoder where \(q_{\phi}(\mathbf{z}|\mathbf{x})\) is a neural network that encodes the data \(\mathbf{x}_{i}\) into latent variables \(\mathbf{z}_{i}\), ensuring \(\mathbf{z}\sim p_{\lambda}(\mathbf{z})\), and \(p_{\theta}(\mathbf{x}|\mathbf{z})\) decodes the latent variable \(\mathbf{z}_{i}\) into \(\mathbf{x}_{i}\); here, \(\phi\) and \(\theta\) are the parameters of the encoder and decoder, respectively, and \(\lambda\) are the parameters of the prior. The most common approach is to assume the prior distribution \(p_{\lambda}(\mathbf{z})\) to be a multivariate Gaussian with a mean of zero and an identity covariance matrix: \[p_{\lambda}(\mathbf{z})=\mathcal{N}(0,I). \tag{1}\] The training of the VAE is performed by minimizing a cost function called the ELBO (Evidence Lower Bound): \[\mathrm{ELBO}(\phi,\theta)=\mathrm{KL}\big{(}q_{\phi}(\mathbf{z}|\mathbf{x})||p_{\lambda}(\mathbf{z})\big{)}-\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}\left[\log p_{\theta}(\mathbf{x}|\mathbf{z})\right] \tag{2}\] where \(\mathrm{KL}\) denotes the Kullback-Leibler divergence between two probability distributions. To calculate the KL divergence, it is necessary to define \(q_{\phi}(\mathbf{z}|\mathbf{x})\). First, we note that the stochastic generation process can be represented as: \[\mathbf{x}_{i}\overset{q_{\phi}}{\sim}\mathbf{z}_{i}\overset{p_{\theta}}{\to}\hat{\mathbf{x}}_{i}. \tag{3}\] The encoder is stochastic, meaning that if we encode the input \(\mathbf{x}_{i}\) \(K\) times, we obtain \(K\) distinct values of the latent variable \(Z_{i}=\{\mathbf{z}_{i}^{(k)}\}_{k=1}^{K}\). Since neural networks are deterministic, this stochastic behavior is implemented using the reparametrization trick: \[\mathbf{z}_{i}=\mu_{i}+\epsilon\sigma_{i}; \tag{4}\] where \(\epsilon\sim\mathcal{N}(0,I)\) and \(\lambda_{i}=(\mu_{i},\sigma_{i})\) are the (deterministic) vectors that the network encodes the data \(\mathbf{x}_{i}\) into.

## 3 Method

### The posterior as a mixture of Gaussians

To introduce our proposal, we first note that the latent variable \(\mathbf{z}_{i}\) is also a stochastic variable with its particular distribution: \(\mathbf{z}_{i}^{(k)}\sim q_{\phi_{i}}(\mathbf{z}|\mathbf{x}_{i})\). Hence, the distribution of the latent variables across all data points is not the same as the distribution of the latent variable for each data point: \(q_{\phi}(\mathbf{z}|\mathbf{x})\neq q_{\phi_{i}}(\mathbf{z}|\mathbf{x}_{i})\). To relate these distributions, we propose the mixture model: \[q_{\phi}(\mathbf{z}|\mathbf{x})=\sum_{i}\alpha_{i}q_{\phi_{i}}(\mathbf{z}|\mathbf{x}_{i}), \tag{5}\] with \(\alpha_{i}=1/N\), which implies that all data points are equally relevant. Now, the mean of the mixture is given by: \[\mathbb{E}\left[q_{\phi}(\mathbf{z}|\mathbf{x})\right]=\sum_{i}\alpha_{i}\mathbb{E}\left[q_{\phi_{i}}(\mathbf{z}|\mathbf{x}_{i})\right]=\mathbb{E}\left[\mu_{i}\right]. \tag{6}\] Note that the means of each latent variable data point serve as samples, and their mean is the mean of the posterior \(q_{\phi}(\mathbf{z}|\mathbf{x})\). Then, given that we have imposed Gaussianity, we can estimate the variance of the posterior as the variance of the means of the latent variables: \[\mathrm{var}\left[q_{\phi}(\mathbf{z}|\mathbf{x})\right]=\mathrm{var}\left[\mu_{i}\right]. \tag{7}\] This way of estimating the posterior's mean and variance distinguishes our proposal from the standard formulation.
It is a simple modification that will significantly affect data generation, making the generated data more realistic. In the classic formulation, the \(\mathrm{KL}\) term encourages the individual means \(\mu_{i}\) to be zero, contributing to the so-called posterior collapse effect. In our case, we encourage the mean of these \(\mu_{i}\)'s to approach zero. Furthermore, the standard formulation promotes unitary individual variances. In our case, there is no such enforced restriction, although, as we will see later, we will apply regularization to the individual variances. Fig. 1 illustrates the proposed Gaussian mixture model for the posterior. The dashed line represents the posterior \(q_{\phi}\), i.e., the sum of the individual Gaussians \(q_{\phi_{i}}\) associated with each data point (in color). Note that while \(q_{\phi}\) is enforced to be similar to the prior \(p_{\lambda}\) (a Gaussian with mean equal to zero and unitary variance), the individual posteriors \(q_{\phi_{i}}\) have means concentrated around zero, and their variances are unrestricted; at least for the moment.

Figure 1: Illustration of the global posterior \(q_{\phi}\) as a mixture of individual posteriors \(q_{\phi_{i}}\).

Assuming batches of data with a size of \(b\) and a latent space dimension equal to \(d\), as a result of the mixture model, the \(\mathrm{KL}\) divergence we propose to use takes the following form: \[\mathrm{KL}\big{(}q_{\phi}(\mathbf{z}|\mathbf{x})\,\|\,p_{\lambda}(\mathbf{z})\big{)}=\mathrm{KL}\big{(}\mathcal{N}(\bar{\mu}(\mathbf{x}),\bar{\sigma}(\mathbf{x})^{2})\,\|\,\mathcal{N}(0,I)\big{)}=\frac{1}{2}\sum_{j=1}^{d}\big{[}\bar{\sigma}_{j}^{2}(\mathbf{x})+\bar{\mu}_{j}^{2}(\mathbf{x})-1-\log\bar{\sigma}_{j}^{2}(\mathbf{x})\big{]}\,; \tag{8}\] where the mean and variance of the global posterior are estimated from the means of the individual posteriors: \[\bar{\mu}_{j}(\mathbf{x})=\frac{1}{b}\sum_{i=1}^{b}\mu_{ij}(\mathbf{x}), \tag{9}\] \[\bar{\sigma}_{j}^{2}(\mathbf{x})=\frac{1}{b}\sum_{i=1}^{b}\left(\mu_{ij}(\mathbf{x})-\bar{\mu}_{j}(\mathbf{x})\right)^{2}. \tag{10}\]

One behavior we aim to prevent is the collapse of the variances of the individual posteriors \(\sigma_{i}^{2}\) to zero. Such a collapse eliminates the stochastic behavior of the encoder, causing the VAE to behave like an AE. In such a scenario, the VAE would lose its ability to generalize: to generate faces that were not part of its training data [11]. To address the collapsing individual posteriors, we propose incorporating a regularization term into the Evidence Lower Bound (ELBO) that penalizes individual variances that become close to zero. Our objective is to encourage these variances to resemble the limiting variance of the global posterior. This regularization term is: \[\sum_{i=1}^{b}\mathrm{KL}\big{(}q_{\phi_{i}}(\mathbf{z}_{i}|\mu(\mathbf{x}_{i}))\,\|\,\mathcal{N}(\mu(\mathbf{x}_{i}),I)\big{)}=\frac{1}{2}\sum_{i=1}^{b}\sum_{j=1}^{d}\left[\sigma_{ij}^{2}(\mathbf{x})-1-\log\sigma_{ij}^{2}(\mathbf{x})\right]. \tag{11}\]
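To make the batch-level computation concrete, here is a minimal PyTorch-style sketch of the reparametrized sampling of Eq. (4) together with the two KL terms of Eqs. (8)-(11); the function and variable names are our own illustration and are not taken from any released implementation.

```python
import torch

def gamix_kl_terms(mu, log_var):
    """Sample z (Eq. 4) and evaluate the KL terms of Eqs. (8)-(11)
    for one batch; mu and log_var are (b, d) encoder outputs holding
    the per-sample means mu_i and log-variances log sigma_i^2."""
    # Reparametrization trick, Eq. (4): z_i = mu_i + eps * sigma_i
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)

    # Global posterior moments, Eqs. (9)-(10): statistics of the means mu_i
    mu_bar = mu.mean(dim=0)                      # (d,)
    var_bar = ((mu - mu_bar) ** 2).mean(dim=0)   # (d,) biased estimator, as in Eq. (10)

    # Eq. (8): KL( N(mu_bar, var_bar) || N(0, I) ) from batch statistics
    kl_global = 0.5 * torch.sum(var_bar + mu_bar ** 2 - 1.0 - torch.log(var_bar))

    # Eq. (11): per-sample variance regularizer, keeps sigma_i^2 away from zero
    kl_individual = 0.5 * torch.sum(torch.exp(log_var) - 1.0 - log_var)

    return z, kl_global, kl_individual
```

The first KL term is evaluated from batch statistics of the means rather than per sample, which is the key difference from the standard VAE loss.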
Another important component in the ELBO is the log-likelihood term, which compares, pixel by pixel, the data \(\mathbf{x}_{i}\) with the prediction \(\hat{\mathbf{x}}_{i}\). To make this term robust to errors in the exact reconstruction of structures' positions in high-frequency texture areas, we use the L1 norm. This is equivalent to assuming that \(p(\mathbf{x}|\mathbf{z})\) follows a Laplacian distribution, so that, up to an additive constant, \[\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}\left[\log p_{\theta}(\mathbf{x}|\mathbf{z})\right]=-\sum_{j=1}^{m}|\mathbf{x}_{ij}-\hat{\mathbf{x}}_{ij}|. \tag{12}\] Note that as the batch size \(b\) becomes larger, the estimation of the ELBO terms [Eqs. (8), (11) and (12)] becomes more precise. Therefore, avoiding the use of tiny batches is important to ensure stable convergence of the stochastic gradient. Finally, by using the losses (8), (11) and (12), the proposed ELBO takes the form \[\mathrm{ELBO}(\phi,\theta)=\beta_{1}\,\mathrm{KL}\big{(}q_{\phi}(\mathbf{z}|\mathbf{x})\,\|\,p_{\lambda}(\mathbf{z})\big{)}+\beta_{2}\,\sum_{i=1}^{b}\mathrm{KL}\big{(}q_{\phi_{i}}(\mathbf{z}_{i}|\mu(\mathbf{x}_{i}))\,\|\,\mathcal{N}(\mu(\mathbf{x}_{i}),I)\big{)}-\beta_{3}\,\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}\left[\log p_{\theta}(\mathbf{x}|\mathbf{z})\right]; \tag{13}\] where \(\beta=[\beta_{1},\beta_{2},\beta_{3}]\) are hyper-parameters that weigh the relative contribution of the losses.

### Implementation Details

As we mentioned, we are willing to tolerate significant reconstruction errors as long as the reconstruction looks realistic. In the example we use, this means that we prefer to generate images where the hairstyle seems realistic, even if the positions of the curls in the reconstruction do not match the input image, rather than over-smoothing regions of the hair (as is common in a typical VAE). To achieve this, we include a new loss that evaluates how realistic each reconstruction region appears. We introduce this loss through a GAN-like training approach with a specific PatchGAN discriminator [12, 13, 14]. Regarding the architectures of the VAEs we propose, we implemented both the encoder and the decoder using a ResNetV2 architecture [15]. This choice of architecture enables a richer encoding of the latent space.

Figure 2: VAE generated variants.

## 4 Experiments

To demonstrate our proposal's performance, we generate faces of 256x256 pixels in RGB using the CelebA-HD database [16]. This database consists of 30,000 images, which we split, after alphanumeric ordering of the files, into the first 24,000 for training and the rest for testing. We generate variants of a random face \(\mathbf{x}^{\prime}\) in the test set by sampling the posterior approximation: \(z^{\prime}\sim q_{\phi}(\mathbf{z}|\mathbf{x}^{\prime})\). Then, we select two entries, \(z^{\prime}_{i}\) and \(z^{\prime}_{j}\), from this vector: \(i,j\in[1,d]\). Next, with all entries of \(z^{\prime}\) fixed except for \(z^{\prime}_{i}\) and \(z^{\prime}_{j}\), we generate images by sampling the likelihood (\(\hat{\mathbf{x}}\sim p_{\theta}(\mathbf{x}|\mathbf{z})\)) and varying these entries over the combinations of \(z^{\prime}_{i}\in\{-6,0,6\}\) and \(z^{\prime}_{j}\in\{-6,0,6\}\). Fig. 2 depicts the generated variants. As we can observe, the generated faces retain much of the central face. For this particular face, the variable corresponding to the horizontal axis makes the face vary from more oval to less oval. The other variable (vertical axis) is associated with the smile.
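As an illustration of this traversal, the following is a minimal sketch; it assumes a trained `encoder` returning \((\mu,\log\sigma^{2})\) and a `decoder` mapping latents to images, interfaces that are our own hypothetical choices.

```python
import torch

@torch.no_grad()
def latent_traversal(encoder, decoder, x, i, j, values=(-6.0, 0.0, 6.0)):
    """Build a grid of variants of face x by sweeping latent entries i and j."""
    mu, log_var = encoder(x.unsqueeze(0))                      # encode one image
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)   # z' ~ q_phi(z|x')
    rows = []
    for vi in values:
        row = []
        for vj in values:
            z_mod = z.clone()
            z_mod[0, i], z_mod[0, j] = vi, vj                  # vary only entries i and j
            row.append(decoder(z_mod))                         # decode the modified latent
        rows.append(torch.cat(row, dim=-1))                    # concatenate along width
    return torch.cat(rows, dim=-2)                             # stack rows along height
```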
Another interesting feature of VAEs is the ability to generate transitions between two faces, where the convex combination of two latent variables corresponds to a face with characteristics smoothly transitioning from one face to another. Fig. 3 depicts generated transitions. Each row corresponds to the transition between two latent variables \(\mathbf{z}_{1}\) and \(\mathbf{z}_{2}\), corresponding to randomly selected images in the test dataset. The columns correspond to \[\mathbf{z}^{\prime}_{k}=(1-\alpha_{k})\,\mathbf{z}_{1}+\alpha_{k}\,\mathbf{z}_{2}, \tag{14}\] with \(\alpha_{k}=\frac{k}{n-1}\), for \(k=0,1,\ldots,5\) (here \(n=6\)).

## 5 Conclusions

Although more complex variations of VAEs have been proposed (e.g., Hierarchical VAEs [4] and VQ-VAEs [7]), the original VAEs continue to be of great interest due to their elegant theoretical formulation and the promise of generating complex data from a simple distribution. Their main drawback is the posterior collapse, resulting in smoothed data. In this work, we propose a solution to reduce these effects. Through experiments, we have demonstrated that reinterpreting the posterior as a mixture of Gaussians leads to a variant of the ELBO with a marginal computational cost and that, combined with a training strategy based on a PatchGAN discriminator, it substantially improves the results, allowing for the generation of data with realistic texture using a standard VAE while preserving the essential generative characteristics of the VAE. We implemented both the Encoder and the Decoder using a ResNetV2 architecture for a richer encoding of the latent space. However, this architecture can only generate complex textures with our introduced modifications.

**Acknowledgements.** Work supported in part by Conahcyt, Mexico (Grant CB-A1-43858).

Figure 3: VAE generated transitions.
2309.16018
Angular Correlation Function from sample covariance with BOSS and eBOSS LRG
The Baryon Acoustic Oscillations (BAO) are one of the most used probes to understand the accelerated expansion of the Universe. Traditional methods rely on fiducial model information within their statistical analysis, which may be a problem when constraining different families of models. This work aims to provide a method that constrains $\theta_{BAO}$ through a model-independent approach using the covariance matrix of the galaxy sample built from thin redshift bins, later validated with a mock sample covariance matrix. We used bin widths of $\delta z = 0.002$ for all samples as the basis for a sample covariance matrix weighted by the statistical importance of the redshift bin. Each sample belongs to the Sloan Digital Sky Survey: BOSS1, BOSS2, and eBOSS, with effective redshifts $z_{eff} = 0.35, 0.51, 0.71$ and 50, 100, and 200 bins, respectively. To get $\theta_{BAO}$, we correct the angular separation from the polynomial fit ($\theta_{fit}$) by comparing each bin's correlation function with the correlation function of the whole set, through a parameter named $\tilde{\alpha}$. We also tested such a correction by choosing the bin at $z_{eff}$ and found that for eBOSS $\theta_{BAO}$ is in $1 \sigma$ agreement with the Planck 18 model. Finally, we found that the sample covariances are noisy compared to the mocks for the lower-$z$ samples, something expected due to nonlinear effects. The impact of such noise can be seen in the parameter constraints, but it does not affect the eBOSS covariance sample. The mock results tend toward the $\theta_{BAO}$ of their chosen fiducial cosmology. BOSS1 and BOSS2 showed agreement with Planck 18, and agreement with Pantheon + S$H_0$ES when $\tilde{\alpha}$ is based on the bin at $z=z_{eff}$.
Paula S. Ferreira, Ribamar R. R. Reis
2023-09-27T20:52:05Z
http://arxiv.org/abs/2309.16018v2
# Angular Correlation Function from sample covariance with BOSS and eBOSS LRG ###### Abstract The Baryon Acoustic Oscillations (BAO) are one of the most used probes to understand the accelerated expansion of the Universe. Traditional methods rely on fiducial model information within their statistical analysis, which may be a problem when constraining different families of models. The aim of this work is to provide a method that constrains \(\theta_{BAO}\) through a model-independent approach, and to compare the parameter estimation of the angular correlation function polynomial fit using the covariance matrix built from the galaxy sample in thin redshift bins with that obtained from the usual mock sample covariance matrix. We propose a different approach to finding the BAO angular feature, revisiting previous work in the literature: we take the bias between the correlation function of the individual bins and that of the whole sample. We used widths of \(\delta z=0.002\) separation for all samples as the basis for a sample covariance matrix weighted by the statistical importance of the redshift bin. We propose a different weighting scheme based only on random pair counting. We also propose an alternate shift parameter based only on the data. Each sample belongs to the Sloan Digital Sky Survey Luminous Red Galaxies (LRG): BOSS1, BOSS2, and eBOSS, with effective redshifts \(z_{eff}=0.35\), 0.51, and 0.71, and with 50, 100, and 200 bins, respectively. In addition, we correct the angular separation from the polynomial fit (\(\theta_{fit}\)) that encodes the BAO feature with a bias function obtained by comparing each bin's correlation function with the correlation function of the whole set. We also tested the same correction choosing the bin at \(z_{eff}\) and found that for eBOSS \(\theta_{BAO}\) is in \(1\sigma\) agreement with the Planck 18 model. The BOSS1 and BOSS2 \(\theta_{BAO}\) agreed within \(1\sigma\) with the Pantheon+ & S\(H_{0}\)ES Flat \(\Lambda\)CDM model, in tension with Planck 18.

Keywords: cosmology: observations; surveys; large-scale structure of Universe; BAO

## 1 Introduction

The Baryon Acoustic Oscillations (BAO) are one of the most used probes to understand the accelerated expansion of the Universe. Cosmological information can be extracted through the two-point correlation function and power spectra estimated with the sky distribution and redshift of standard tracers (Peebles, 2001). Among the tracers, the most used are the luminous red galaxies (LRG), first used by Eisenstein et al. (2005) and Percival et al. (2001). Now, a multi-tracer analysis is possible with emission line galaxies (Wang et al., 2020; De Mattia et al., 2021), quasars (Hou et al., 2021), and Lyman-\(\alpha\) forests (Des Bourboux et al., 2020). Current and future surveys will observe far larger numbers of objects, such as the Dark Energy Spectroscopic Instrument (DESI; Flaugher & Bebek, 2014), the Dark Energy Survey (DES; Rosell et al., 2022), the Large Synoptic
2309.09138
An exhaustive review of studies on bio-inspired convergent-divergent riblets
Inspired by the unique textures of shark skin and bird flight feathers and tails, the convergent-divergent surface pattern holds promise in modulating boundary layer structures. This surface pattern exhibits protrusions precisely aligned obliquely (angled in the streamwise direction), often referred to as riblets. These riblets are renowned for their ability to influence the large-scale and very-large-scale structures that dominate the boundary layer. This study seeks to elucidate the influence of convergent-divergent riblets on the boundary layer, with a particular focus on the spanwise direction. We offer a review of research concerning vortex generation physics, emphasizing helicoidal and rotational motions within and adjacent to the riblet valleys. In addition, we examine research, both experimental and numerical, addressing key physical parameters of convergent-divergent riblets, including yaw angle, wavelength, viscous-scaled riblet height, fetch length, and the transition from riblets to a smooth surface. The potential for drag reduction using these bio-inspired riblets is assessed. We also delve into the different manufacturing techniques for convergent-divergent riblets. Finally, we discuss the possible commercial applications of the convergent-divergent design.
Arash Mohammadikarachi, Mustafa Z. Yousif, Bagus Nugroho, Hee-Chang Lim
2023-09-17T02:51:29Z
http://arxiv.org/abs/2309.09138v1
# An exhaustive review of studies on bio-inspired convergent-divergent riblets ###### Abstract Inspired by the unique textures of shark skin and bird flight feathers and tails, the convergent-divergent surface pattern holds promise in modulating boundary layer structures. This surface pattern exhibits protrusions precisely aligned obliquely (angled in the streamwise direction), often referred to as riblets. These riblets are renowned for their ability to influence the large-scale and very-large-scale structures that dominate the boundary layer. This study seeks to elucidate the influence of convergent-divergent riblets on the boundary layer, with a particular focus on the spanwise direction. We offer a review of research concerning vortex generation physics, emphasizing helicoidal and rotational motions within and adjacent to the riblet valleys. In addition, we examine research, both experimental and numerical, addressing key physical parameters of convergent-divergent riblets, including yaw angle, wavelength, viscous-scaled riblet height, fetch length, and the transition from riblets to a smooth surface. The potential for drag reduction using these bio-inspired riblets is assessed. We also delve into the different manufacturing techniques for convergent-divergent riblets. Finally, we discuss the possible commercial applications of the convergent-divergent design.

+ Footnote †: preprint: PRF-submitted/manuscript

## I Introduction

As fuel costs continue to rise and global warming becomes an increasingly pressing issue, researchers are driven to explore effective means of reducing energy consumption. Discussions about drag reduction in vehicles and airplanes, which directly impacts energy consumption, have engaged many researchers, engineers, and companies for a long time. Approximately 50% of the overall drag experienced by civil or commercial transport aircraft can be attributed to skin friction drag [1]. The reduction of skin friction drag directly correlates with decreased energy consumption, resulting in lower fuel consumption and reduced emission of greenhouse gases. To overcome these challenges, novel flow control techniques in the new generation of vehicles and aircraft can play a key role in controlling laminar and turbulent boundary layer flow and in reducing drag, which significantly decreases energy use. Drag reduction and flow control techniques can generally be classified into two groups: passive techniques that require no energy (vortex generators, splitter plates, riblets, etc.) and active techniques that need energy and a control system (blowing and suction mechanisms, jets, plasma, etc.) [2]. These techniques aim to modify the flow around the body, avoid early separation, diminish the wake, and encourage mass transfer within the boundary layer. Due to the complexity and the high cost of design and maintenance of the added system, active flow control techniques are less favorable than passive flow control methods [3]. In the last few decades, one nature-inspired passive flow control method in the form of small, groove-like straight structures called riblets has attracted substantial attention [4; 5]. This unique pattern is inspired by the skin of fast-swimming sharks [6; 7]. It is a promising flow control technique because it can modify wall-bounded turbulent flow and reduce skin friction drag without generating significant form drag [8; 9]. Figure 1 shows the groove-like structures on the skin.
These tiny structures can dampen and modify the cross-flow or the near-wall cycle of streaks and quasi-streamwise vortices and reduce the near-wall velocity gradient [10]. Riblets traditionally have a triangular cross-section. However, different cross-sections have emerged recently (i.e., symmetric triangular, asymmetric triangular, blade, trapezoidal, etc.), influencing the turbulent flow structure and drag reduction strength differently [11; 12]. Despite the potential of regular straight riblets to act as a flow control and drag reduction mechanism, their effectiveness only at certain Reynolds numbers constitutes their Achilles heel [5; 13]. Beyond this threshold, their drag reduction capability diminishes. Such a situation raises further questions regarding the efficacy of riblets as a flow control and drag reduction technique, because most applied engineering systems that can benefit from riblets, such as aircraft and ships, operate at high Reynolds numbers [15]. Furthermore, in the last two decades, various reports have shown that in high or very high Reynolds number canonical wall-bounded flows (smooth-wall boundary layers, pipe flow, and channel flow), the large- and very-large-scale motions that reside in the logarithmic region dominate the near-wall cycle of small-scale streaks and streamwise vortices [16; 17]. These large-scale and very-large-scale motions also modulate and strongly influence the near-wall small-scale structure usually controlled by a regular straight riblet. A recent report by Zhang _et al._ shows that similar large-scale and small-scale interactions are also observed in the flow over a riblet surface [18]. Hence, riblets will be a more effective flow control mechanism if they can target and affect the large and very-large-scale structures. Koeltzsch _et al._ designed a novel class of directional riblets in the form of herringbone or convergent-divergent riblets (referred to as C-D riblets) for the first time [19]. These riblets are inclined at an angle to the streamwise direction, called the yaw angle. Figure 2 illustrates that the combination of the spanwise arrangement of the left-tilted and the right-tilted riblets creates C-D riblets. These riblets are inspired by shark skin and bird flight feathers and tails. As seen in Fig. 3 (a and b), the upstream sides of shark hearing sensors and shark sensory receptors display diverging and converging patterns, respectively [19], and Fig. 3 (c and d) illustrates the bird's wing and tail feathers as a diverging pattern [20; 21].

Figure 1: (a) Scale view of the skin of fast-swimming sharks (adapted from Reif and Bechert _et al._ [6; 7]), (b) cross-section of traditional riblets [14].

Recently, the capability of C-D riblets to control wall-bounded flow has motivated notable research activities. Using C-D microgrooves is a contemporary and innovative bio-inspired passive flow control method that offers advantages over earlier generations of bio-inspired surfaces. Unlike longitudinal riblets, C-D microgrooves can distinctly impact boundary layer flows, vortical structures, near-wall motions and flow separation [9]. Furthermore, they can target the large and very-large-scale features that dominate the boundary layer at high Reynolds numbers [22]. Due to the C-D riblet's potential as a more robust flow control mechanism than the standard riblets, there has recently been a significant increase in the investigation of this novel pattern.

Figure 2: Schematic of a surface covered by convergent-divergent riblets.
Hence, there is a need for a systematic study that summarises the research findings and progress related to C-D riblets to provide a clear understanding for both researchers and practicing engineers. To address this, the present review delves into the various characteristics of C-D riblets, providing a comprehensive understanding of their influence on multiple aspects within fluid dynamics. Primarily, this study examines their impact on the boundary layer and sheds light on how these micro-structures modify the flow behavior near solid surfaces. In addition, it investigates the interaction of C-D riblets with the large-scale and very-large-scale structures that reside in the logarithmic region of the turbulent boundary layer, uncovering their potential to influence the overall flow patterns and turbulence in a given system. This study also encompasses an in-depth analysis of the different effective physical parameters that govern the performance of C-D riblets, offering valuable insights into their design and optimisation. Furthermore, the applications where C-D riblets can be deployed for flow separation control are discussed. Moreover, the manufacturing methods of C-D riblets are also thoroughly explained, covering both traditional techniques and emerging advanced processes, enabling researchers and engineers to implement these microstructures effectively in practical applications. Finally, the possible commercial applications of this surface pattern are outlined. Overall, the present study is a pivotal source of knowledge, paving the way for further advancements and applications of C-D riblets in diverse engineering and commercial industries. Figure 4 presents an overview of the topics related to C-D riblets discussed in this study.

Figure 3: (a) Upstream of the sensory receptor of sharks (converging surface), (b) upstream of the hearing sensors of the sharks (diverging surface) (adapted from Koeltzsch _et al._ [19]). (c) Characteristic structure and (d) microscopic schematic of bird flight feathers (adapted from Chen _et al._ [20; 21]).

Figure 4: An overview of the topics discussed in this study.

## II The general effect of the C-D pattern on the spanwise direction of the boundary layer (including upwelling-downwelling motions and counter-rotating roll mode)

Convergent-divergent riblets can induce alterations in the turbulent boundary layer in the spanwise direction. Koeltzsch _et al._ applied C-D pattern riblets to the inner surface of a turbulent pipe flow (see Figure 5) [19]. The results indicate that this unique pattern can generate large-scale azimuthal variations in the mean velocity and turbulence intensity. To the best of our knowledge, this is the first time that conventional riblets have been modified to have a unique direction. Koeltzsch _et al._ discovered that the time-averaged streamwise velocity and the root mean square velocity above the convergent and divergent riblet textures are considerably different near the wall. Clearly, convergent riblet patterns decrease the time-averaged velocity and increase velocity fluctuations, whereas divergent riblet patterns exhibit the opposite trend. Nugroho _et al._ extended the studies done by Koeltzsch _et al._ They designed a plate coated with C-D roughness to check the large-scale spanwise periodicity in a turbulent boundary layer via hot-wire and cross-wire anemometry techniques [22; 23; 24; 25]. Different experimental parameters were employed based on various effective physical parameters such as yaw angle and riblet height; a schematic parametrization of such a surface is sketched below.
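To make these geometric parameters concrete, the following is a minimal numpy sketch of an idealized C-D riblet height map of the kind shown in Figure 2; it is our own illustration, and the functional form and parameter names are assumptions rather than specifications taken from the cited studies.

```python
import numpy as np

def cd_riblet_height(x, y, h=1.0, s=0.5, yaw_deg=30.0, wavelength=10.0):
    """Height map of an idealized convergent-divergent riblet surface.

    x, y       : streamwise / spanwise coordinates (same-shape arrays)
    h, s       : riblet height and crest-to-crest spacing
    yaw_deg    : yaw angle between the riblets and the streamwise direction
    wavelength : spanwise wavelength of one converging-diverging pair
    """
    gamma = np.radians(yaw_deg)
    # +1 on left-tilted half-wavelengths, -1 on right-tilted ones
    tilt = np.sign(np.sin(2.0 * np.pi * y / wavelength))
    # coordinate normal to the yawed riblet crests
    n = y * np.cos(gamma) - tilt * x * np.sin(gamma)
    # triangular cross-section: sawtooth of period s and peak height h
    return h * np.abs(2.0 * (n / s - np.floor(n / s + 0.5)))

# example: evaluate the surface on a small grid
X, Y = np.meshgrid(np.linspace(0.0, 50.0, 256), np.linspace(-10.0, 10.0, 256))
Z = cd_riblet_height(X, Y)
```

In this idealization, converging and diverging lines appear where the sign of the riblet tilt flips, i.e., every half-wavelength in the spanwise direction.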
Figure 6 shows the general effect of C-D riblets on turbulent flow, illustrating the spanwise variation of mean velocity and turbulence intensity due to C-D riblets. These riblets profoundly modify the mean velocity and turbulence intensity in the spanwise direction. Over the diverging region, the mean velocity is higher than that of the converging region, whereas the turbulence intensity is lower. This behavior can be explained based on the velocity vectors shown in Fig. 6. Over the diverging region, there is a downward flow that directs the high-speed, low-turbulence flow from the upper region of the boundary layer towards the wall. Conversely, the reverse is the case over the converging region, with an upward flow pushing the low-speed, high-turbulence flow from the wall toward the edge of the boundary layer. The combination of this upwelling and downwelling flow creates large-scale counter-rotating vortices. Fig. 6 also indicates that C-D riblets can significantly change the boundary layer in the spanwise direction. Such variation is remarkable, considering that the riblet height is around 100 times smaller than the boundary layer thickness. Kevin _et al._ [26] and Xu _et al._ [27] used Particle Image Velocimetry (PIV) measurements to study the vortical structures in the turbulent boundary layer over a flat plate coated with C-D riblets and to provide a clearer visualization of the secondary flow. Their statistical results show a similar behaviour to that reported by Nugroho et al. [22], in which the fluid flow over the DL and CL has downwelling and upwelling motions, respectively. These in turn cause the local streamwise velocity to increase over the DL, whilst it reduces over the CL.

## III Vortex Generation Physics (Including Helicoidal and Rotational Motions)

Different complex flow structures can be produced due to the particular arrangement of the nature-inspired C-D riblets. The profound effect of C-D riblets on vortices, especially near-wall motions, has become an attractive topic for scholars. Because of the distinct fabrication of the riblets, which have an angle with the streamwise flow, the formation of the secondary flow inside the gap between the riblets, as well as over the riblets, differs from that of the regular straight riblet types.

Figure 5: Convergent and divergent riblet patterns inside the pipe flow [19].

Xu _et al._ investigated the vortical structures over a plate covered by triangular riblets in laminar flow [28]. The micro-scale vortices along and inside the valleys of the riblets and the large-scale vortices across the boundary layer were examined to determine the flow topology. The research demonstrates that the fluid flow has a helicoidal motion inside the valleys. Figure 7 shows that the fluid flow rotates around an axis and travels along the valleys simultaneously. As the fluid moves inside the valleys, it can be treated as a channel flow under the effect of the cross-flow caused by the riblets' yaw angle. Meanwhile, the axial flow in the valley of the riblets interacts with the cross-flow. As a result of this interaction, the secondary flow and, subsequently, the helicoidal movement are created. Guo _et al._ used a plane perpendicular to the valleys of the riblets to illustrate the clockwise helicoidal motion of the streamlines inside the passage of the riblets [29]. This flow motion is generated because of the specific arrangement of the converging and diverging riblets.
Notably, another cross-flow with high momentum over the riblets was observed to include two velocity components. One of these velocity components is aligned with the axial component of the helicoidal streamlines inside the valleys. The other component is perpendicular to the riblets and contributes to the rotational component of the helicoidal motion (see Fig. 8).

Figure 7: The topology of the helical motion along the valleys of the riblets [28].

Figure 8: Schematic of the streamlines inside the passage of the riblets and in a perpendicular plane [29]. CL and DL are abbreviations of the converging and diverging lines, respectively.

Xu _et al._ investigated the zonal characteristics of uniform streamwise momentum within the boundary layer [27]. They compared the probability distribution function (p.d.f.) of the number of uniform momentum zones (UMZs) on both the convergent-divergent riblets and the smooth surface, and calculated the average number of uniform momentum zones (\(\bar{N}_{\rm UMZ}\)). A higher value of \(\bar{N}_{\rm UMZ}\) over the converging line was observed compared to the smooth surface. They also found a positive correlation between riblet height and \(\bar{N}_{\rm UMZ}\): for different riblet heights (\(h^{+}=\) 8, 14, 12), \(\bar{N}_{\rm UMZ}=2.567\), 2.608, and 2.658 were found, respectively. Beyond the boundary layer, narrower uniform momentum zones were observed on the converging line. An increase in the riblet height could boost this trend.

## IV Large scale motions

Further analysis by Nugroho _et al._ [22] indicates that the large-scale counter-rotating vortices generated by the convergent-divergent riblet pattern can significantly influence the large and very-large-scale structures that reside in the logarithmic region of the turbulent boundary layer. The alterations to the turbulent structure due to the presence of a convergent-divergent surface are clearly apparent in the pre-multiplied energy spectra plots (see Fig. 9). These plots are made from individual one-dimensional pre-multiplied energy spectra \(K_{x}\Phi_{uu}\), where \(K_{x}\) is the streamwise wavenumber and \(\Phi_{uu}\) is the energy spectrum of the streamwise velocity fluctuations. Here the horizontal axis is a function of the outer-scaled wall-normal position, while the vertical axis is the energetic streamwise length-scale \(\lambda_{x}/\delta_{s}\), in which \(\lambda_{x}=2\pi/K_{x}\) and \(\delta_{s}\) is the boundary layer thickness of the smooth wall (see Hutchins and Marusic [17] for further details about the pre-multiplied energy spectra construction). The smooth wall reference (Figure 9a) shows a highly energetic signature near the wall due to the near-wall cycle of streaks and quasi-streamwise vortices, marked by the + symbol. This near-wall peak is commonly termed the inner peak. Further from the wall (around the log region) and at a sufficiently high Reynolds number, it has been observed that there is a much larger scale peak with a length of more than 6\(\delta\).
The smooth-wall reference (Figure 9a) shows a highly energetic signature near the wall due to the near-wall cycle of streaks and quasi-streamwise vortices, marked by the + symbol; this near-wall peak is commonly termed the inner peak. Further from the wall (around the log region) and at sufficiently high Reynolds number, a much larger-scale peak with a length of more than 6\(\delta\) has been observed. In Figure 9a its predicted location is marked by the \(\times\) symbol, and such a peak is commonly termed the outer peak. This outer peak is believed to be an energetic imprint of the large (or very large) scale structures. Over the diverging region (see Figure 9b), the highly energetic inner peak lies beneath the first measurement point, and it is therefore difficult to observe. Further from the wall, no noticeable outer peak is detected in the diverging spectra, and the overall magnitude of the energy spectra is lower than in the smooth-wall case. For the converging region (Figure 9c), an inner peak is clearly observed, although it is slightly displaced from the surface compared to the smooth-wall scenario. This displacement is likely attributable to the hypothesized secondary flows, which tend to move upwards over the converging region. Kevin _et al._ used stereoscopic PIV measurements to study the turbulent structures, especially the large-scale motions formed over convergent-divergent microgrooves, in more detail [26]. For this purpose, they manufactured C-D riblets featuring a trapezoidal cross-section and examined the spanwise periodicity across the riblets. They found that this periodicity is accompanied by mean counter-rotating roll modes. They also showed that significant spanwise variations in Reynolds stress and turbulent kinetic energy, with increased magnitudes, are consistently observed above the converging regions. Compared with the converging and smooth-wall cases, all second-order statistics exhibited lower values over the diverging region. They also studied the instantaneous streamwise velocity field and noted the presence of large-scale low-momentum structures above the converging line; these structures frequently meander in the spanwise direction and occur alongside other large-scale asymmetric rotational motions.

Figure 9: Contours of streamwise energy spectra for (a) the smooth wall, (b) the diverging region, and (c) the converging region, at different friction Reynolds numbers and viscous-scaled riblet heights [22].

These low-momentum structures on the converging line were observed to be dominant in the flow field. The common flow-down tendency of the secondary flows is often associated with the observed large-scale free-stream engulfing behavior. Examining the turbulent boundary layer over converging-diverging riblets, they showed that, as a result of the large-scale motions, free-stream pockets form above the diverging line relative to the smooth-wall flow. Bai _et al._ used proper orthogonal decomposition (POD) to examine the energetic, large-scale motions of the turbulent boundary layer over the convergent-divergent roughness [30]. They used the dataset of Kevin _et al._ [26], corresponding to instantaneous velocity fields in a cross-stream plane of the turbulent boundary layer at Re=13,000. The mode energy fraction was defined as the contribution of each POD mode to the total turbulent kinetic energy of the turbulent boundary layer. It was observed that when the spanwise domain of the POD calculation is set to one wavelength of the converging riblet pattern, the first two modes account for approximately 14.3% of the total kinetic energy.
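To make the mode energy fraction concrete, the snippet below sketches snapshot POD of fluctuating cross-plane velocity fields via a singular value decomposition; the array layout and the plain-SVD formulation are illustrative assumptions on our part, not the exact procedure of Bai _et al._ [30].

```python
import numpy as np

def pod_energy_fractions(snapshots):
    """Snapshot POD of fluctuating velocity fields.

    snapshots : array of shape (n_snapshots, n_points), each row holding the
                fluctuating velocity components of one PIV field, flattened.
    Returns the fraction of total turbulent kinetic energy carried by each mode.
    """
    X = snapshots - snapshots.mean(axis=0)   # remove the temporal mean field
    # Singular values rank the modes by energy; rows of Vt would be the
    # spatial POD modes if they were needed explicitly.
    _, s, _ = np.linalg.svd(X, full_matrices=False)
    energy = s**2                            # mode energy ~ squared singular value
    return energy / energy.sum()

# Example: TKE fraction captured by the first two modes
# fractions = pod_energy_fractions(fields)
# print(fractions[:2].sum())  # e.g. ~0.143 for one spanwise wavelength
```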
Alternatively, when the spanwise domain is extended to two wavelengths of the converging riblet pattern, the first four modes demonstrate distinctive characteristics connected to the prominent large-scale low- and high-speed structures observed across the converging riblet section. Furthermore, they used the large-magnitude coefficients of the first two POD modes (in the case of one wavelength of the converging line) to select the large-scale low-momentum vortices from the data, and clarified that the primary responsibility for generating the mean secondary flow lies with the large-scale low-momentum structures. In summary, the dominant energetic structures in the flow field are associated with the large coefficients of the lower POD modes, which exhibit the key characteristics of the instantaneous flows. Moreover, the combination of the dominant large-scale structures in the turbulent boundary layer with the spanwise-heterogeneous roughness yields the mean secondary flows. Kevin _et al._ employed particle image velocimetry on different spanwise and wall-normal planes to study the characteristics of large-scale coherent structures in a turbulent boundary layer over C-D riblets [31]. They analysed the instantaneous streamwise velocity fields above C-D riblets at multiple time instants. In the logarithmic region, they observed long low-momentum structures over the converging line. Although these structures are formed and sustained by the C-D riblets, their characteristics, such as meandering, breaking, and branching, are similar to those of structures over smooth surfaces. A noteworthy observation in the far outer region is that these low-momentum regions exhibit lateral instability, resulting in pronounced meandering over the yawed riblets; detached or floating coherence was identified between the upwelling and downwelling regions, originating from the momentum transfer to the outer layer.

## V The influence of various riblet physical parameters on the flow structure

Studies of converging and diverging riblets over the last decade show that several critical parameters affect the strength of the spanwise variation, namely: viscous-scaled riblet height, riblet yaw angle, wavelength, fetch length, and reversion from C-D riblets to a smooth surface. This section discusses these physical parameters and illustrates some of their influences.

### Influence of yaw angle

The first important parameter is the riblet yaw angle, which defines the angle between the riblet and the incoming flow (see Fig. 10). Guo _et al._ analysed the effect of yaw angles in the range of 20 to 70 degrees on the boundary layer and the secondary flow motion [29]. They examined the streamwise velocity and the velocity vectors in the cross-stream plane. Their investigation showed that the strength of the roll motion and the spanwise variation of the streamwise velocity increase as the yaw angle increases from 20 to 45 degrees, whereas a higher yaw angle, in the range of 45 to 60 degrees, reduces the strength of the roll mode and the spanwise variation of the streamwise velocity. Nugroho _et al._ investigated the influence of yaw angle on the strength of the secondary flows [22], examining yaw angles of 10 and 30 degrees. As shown in Fig. 10, their findings indicated that decreasing the yaw angle reduces the intensity of the spanwise variations caused by the surface.
When the yaw angle is set to 10 degrees, the magnitude of the spanwise variation is notably smaller than when it is set to 30 degrees. These results indicate that there is a maximum threshold in the riblet yaw angle beyond which the induced spanwise variation no longer strengthens. Guo _et al._ extended this parameter study, using yaw angles in the range of 0 to 90 degrees in laminar channel flow to examine the flow structure and the separation over a ramp downstream [32]. At a yaw angle of 0\({}^{\circ}\), C-D riblets reduce to longitudinal riblets, and no in-plane velocity is observed above or within the riblet valleys, in line with the findings of Djenidi _et al._ [33]. At a 90\({}^{\circ}\) yaw angle, C-D riblets become transverse riblets, oriented perpendicular to the free-stream flow. The research revealed that, for riblets of given height and spacing, both the strength of the secondary flow and the net reduction of the separation zone vary parabolically with yaw angle, with the maximum of both quantities occurring at a 45-degree yaw angle.

Figure 10: The influence of yaw angles of 30 (a, b) and 10 (c, d) degrees on the spanwise variation of the mean velocity (left-side images) and the turbulence root-mean-squared fluctuations (right-side images) about the spanwise-averaged value for the convergent-divergent riblets [22].

### Influence of viscous-scaled riblet height (\(h^{+}\))

Another important parameter of C-D riblets is the viscous-scaled riblet height (\(h^{+}\)). There are two ways to obtain different \(h^{+}\): either by varying the free-stream velocity or downstream location while holding the physical riblet height constant, or by keeping the free-stream velocity or downstream location constant and changing the physical riblet height. Nugroho _et al._ followed the first method, changing the free-stream velocity to obtain various viscous-scaled riblet heights [22]. Their findings demonstrate how the viscous-scaled riblet height influences the strength of the three-dimensionality caused by the surface roughness. As \(h^{+}\) decreases to its smallest value, the conditions approach the hydrodynamically smooth case, and the riblets become incapable of inducing significant large-scale three-dimensionality. As can be seen in Fig. 11, with increasing \(h^{+}\) the surface imposes a stronger three-dimensional effect on the mean velocity and turbulence intensity, leading to a more noticeable difference in boundary layer thickness between the diverging and converging regions.

Figure 11: Spanwise variation of the mean velocity (left-side images) and the turbulence root-mean-squared fluctuations (right-side images) about the spanwise-averaged value for the convergent-divergent riblets for (a and b) \(h^{+}\)=24, (c and d) \(h^{+}\)=19, (e and f) \(h^{+}\)=13, (g and h) \(h^{+}\)=7 [22].

In the work of Xu _et al._, the free-stream velocity was fixed, as for the baseline turbulent boundary layer, and the physical height \(h\) was changed to vary \(h^{+}\) [27]. They conducted an experimental study of the turbulent boundary layer over C-D riblets using PIV measurements, with tests carried out for three heights (\(h^{+}\)=8, 14, and 20) at Re=723. Their results are consistent with those of Nugroho _et al._ [22], in that an increase in \(h^{+}\) results in stronger spanwise variation.
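For reference, the viscous-scaled riblet height used throughout these studies is the physical height normalized by the viscous length scale, \(h^{+}=h\,u_{\tau}/\nu\) with friction velocity \(u_{\tau}=\sqrt{\tau_{w}/\rho}\). The sketch below shows this conversion; the numerical values in the example are placeholders, not data from the cited experiments.

```python
import math

def viscous_scaled_height(h, tau_w, rho, nu):
    """Viscous-scaled riblet height h+ = h * u_tau / nu.

    h     : physical riblet height [m]
    tau_w : wall shear stress [Pa]
    rho   : fluid density [kg/m^3]
    nu    : kinematic viscosity [m^2/s]
    """
    u_tau = math.sqrt(tau_w / rho)   # friction velocity
    return h * u_tau / nu

# Hypothetical air-flow example: a 0.5 mm riblet under tau_w = 1.2 Pa
print(viscous_scaled_height(0.5e-3, 1.2, 1.2, 1.5e-5))  # h+ ~ 33
```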
Beyond the influence of \(h^{+}\) itself, the investigation of Xu _et al._ [28] focused on the population of spanwise prograde and retrograde vortices. Their results indicate that at a riblet height of \(h^{+}=8\), the population densities of prograde and retrograde spanwise vortices increase by over 50% in the converging region compared to the smooth-wall scenario, suggesting a substantial rise in turbulence production activities. Increasing the height of the C-D riblets significantly raises the population of both prograde and retrograde spanwise vortices. Throughout the diverging region, apart from minor modifications in the near-wall area, the population density of both prograde and retrograde spanwise vortices remains relatively unchanged across a significant portion of the boundary layer, and it is considerably less sensitive to an increase in riblet height.

### Influence of wavelength

The riblet wavelength is another important parameter in the physical characterization of C-D riblets. Guo _et al._ examined the impact of varying riblet wavelength on the laminar boundary layer, for \(\Lambda/\delta_{s}\) values of 0.44, 1.32, 3.97, and 6.61, where \(\Lambda\) is the wavelength and \(\delta_{s}\) the local boundary layer thickness (with \(\delta\) denoting the boundary layer thickness in terms of the wall-normal coordinate \(y\)). Their report indicates that when the wavelength is small, both the spanwise velocity and the intensity of the secondary flow exhibit relatively low strength (see Fig. 12). For a short wavelength, the restricted spanwise distance available for the development of the spanwise velocity between the two planes means that the requirement of zero spanwise velocity can influence the entire wavelength; as a result, both the spanwise velocity and the intensity of the secondary flow remain relatively weak. Xu _et al._ employed PIV measurements to examine how riblet wavelength influences the flow field characteristics [28]. The study revealed a notable increase in the amplitude of the induced streamwise velocity across the converging and diverging regions as the riblet wavelength increases. Regardless of the riblet wavelength, the region of upwash centred at the converging line is considerably broader than the region of downwash straddling the diverging line. The influence of riblet wavelength on the magnitude of vorticity over the converging region is negligible; conversely, over the diverging line a larger wavelength induces an increased magnitude of vorticity. Furthermore, variations in riblet wavelength have minimal impact on the spanwise width of the vorticity peaks along both the converging and diverging lines. Beyond regular subsonic flows, the wavelength of C-D riblets is also effective in supersonic flow environments. Guo _et al._ used direct numerical simulation to investigate the impact of convergent-divergent riblets on the secondary flow induced in supersonic turbulent boundary layers over a 24-degree compression ramp [34; 35]. They studied the effect of C-D riblets on the secondary rolling motion, momentum transfer, turbulent fluctuations, and flow separation for two riblet cases with wavelengths \(\Lambda\) of \(1.1\delta\) and \(1.65\delta\) (where \(\delta\) is the boundary layer thickness). As the boundary layer advances over the riblet section in the streamwise direction, the magnitude and strength of the secondary rolling motion increase rapidly at first and then rise gradually.
The case with \(\Lambda/\delta=1.1\) exhibited a single roll mode with a size of half a wavelength, whereas \(\Lambda/\delta=1.65\) produced a pair of co-rotating vortical structures. Both patterns produce an evident spanwise change in the flow field. According to their findings, the secondary flow increases the average momentum flux and the turbulent fluctuations. They found that both riblet cases mitigate flow separation, albeit with differing vortical structures: compared to the case without C-D riblets, the area of the separation zone is reduced by 56% and 38% for the riblet cases with \(\Lambda/\delta=1.1\) and \(\Lambda/\delta=1.65\), respectively.

Figure 12: The streamwise velocity contours and the vectors of in-plane velocity within the cross-stream plane [29].

### Fetch length and reversion from rough to smooth surface

The final parameters that we discuss are the streamwise fetch length (\(F_{x}\)) and the streamwise reversion length (\(F_{xs}\)). The streamwise fetch length (\(F_{x}\)) is defined as the streamwise extent of rough surface over which the boundary layer has developed (see Fig. 13). Nugroho _et al._ [22] evaluated the influence of fetch on the mean velocity and turbulence intensity by measuring at a fixed streamwise location above C-D riblets and then systematically replacing the upstream rough tiles with a smooth surface. They found that a decrease in the streamwise fetch reduces the extent of the regions exhibiting spanwise variation in the mean velocity and turbulence intensity (see Figure 14). For shorter fetch distances, the induced spanwise variations appear to be confined closer to the wall. Consequently, the shorter-fetch cases show a significantly reduced impact on the outer portion of the layer and exhibit notably less spanwise variation in boundary layer thickness. Nevertheless, even at the lowest fetch, the roughness still imposes a discernible three-dimensionality on the flow. The final component of the discussed parametric study is the effect of a reversion from C-D riblets to a smooth surface. Here the streamwise reversion length (\(F_{xs}\)) is defined as the distance downstream of a step change in surface from C-D riblets to smooth; a general schematic of the reversion from a rough surface to a smooth one is illustrated in Figure 13. The findings of Nugroho _et al._ indicated that the strength of the perturbation decreases as the flow transitions from the rough to the smooth surface (see Figure 15). Nevertheless, the large-scale roll modes persist, and a significant spanwise variation is still induced in the boundary layer thickness. Additionally, the size of the roll modes, which is related to the size of the spanwise variation region, remains unchanged.

## VI The performance of C-D riblets in drag reduction

The ability of C-D riblets to modify the mean velocity and turbulence intensity within the turbulent boundary layer has raised questions about their ability to reduce skin friction drag. To address this, Chen _et al._ used C-D riblets in a pipe to evaluate the feasibility of flow control and drag reduction [20; 21]. Four distinct riblet arrangements, namely herringbone-smooth-herringbone (H-sm-H), reverse herringbone-smooth-herringbone (Reverse H-sm-H), herringbone-herringbone (H-H) and U/V riblets (R-sm-R), were tested (see Fig. 16a and 16b). H-sm-H denotes a smooth surface gap between two diverging patterns, while Reverse H-sm-H places a smooth surface gap between two converging patterns.
H-H denotes a continuous C-D surface pattern. Finally, R-sm-R refers to the traditional microgrooves in which the riblets and valleys form U or V shapes. The fluid velocity for all tested cases ranged from 3 to 8 m/s, and the corresponding drag reduction rates are depicted in Fig. 16c. The outcomes revealed that the H-sm-H roughness configuration yielded a drag reduction rate of nearly 16%, significantly higher than the 6% of the traditional U/V riblets (R-sm-R). The analysis of these findings demonstrates that the most effective drag reduction by herringbone microgrooves is achieved by the H-sm-H pattern with a diverging arrangement. Benschop _et al._ extended the experiment of Chen _et al._ and conducted a direct numerical simulation of a similar C-D riblet pattern inside a turbulent channel flow [37].

Figure 13: Schematic of fetch length and surface reversion from C-D riblets to smooth.

They found that the herringbone pattern can either increase or decrease drag, depending on the wavelength of the spanwise texture. Specifically, the drag was observed to increase by up to 73% for narrow feathers with smaller wavelengths. This augmentation is attributed to the C-D riblets, which generate a fluctuating secondary flow comprising, on average, two counter-rotating vortices positioned above the convergent/divergent regions of the riblets. This robust secondary flow increases the mean velocity in the spanwise-averaged region, because it enhances the mean and turbulent advective transport, eventually boosting the drag considerably. Wide feathers were observed to exhibit a 2% drag reduction: because of their larger width, the C-D riblets generate a secondary flow that affects only a fraction of the overall texture, so their role in augmenting drag is minimal. Most of the directional riblet texture then behaves like a conventional parallel-riblet texture in yaw, and the reduction in turbulent advective transport is primarily responsible for the marginal drag reduction achieved. The discrepancies between the turbulent-flow drag reduction results of Chen _et al._ [20; 21] and Benschop _et al._ [37] have raised further questions about the drag reduction capability of C-D riblets.

Figure 15: Spanwise variation of the mean velocity (left-side images) and the turbulence root-mean-squared fluctuations (right-side images) about the spanwise-averaged value for the cases with C-D riblets, for case B5 \(F_{xs}\)=0.5 m (a and b), case B6 \(F_{xs}\)=1 m (c and d), case B7 \(F_{xs}\)=1.5 m (e and f), and case B8 \(F_{xs}\)=2 m (g and h) [36].

Guo _et al._ also used direct numerical simulation to investigate the drag in a turbulent channel flow over C-D riblets [38]. Their results indicate that all riblet cases experienced an augmentation in drag. A closer look at the spanwise drag variation shows that a minor reduction in drag occurs exclusively over a narrow region near the converging line; the rise in total drag results from a significant increase in drag over the diverging section, primarily caused by the downward flow, along with a minor increase in drag between the diverging and converging lines. They decomposed the drag into Reynolds stress and dispersive stress contributions, with the major contribution related to the Reynolds stress.
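For completeness, the drag-reduction rates quoted in this section (such as those in Fig. 16c) are conventionally obtained from paired friction measurements on riblet and reference surfaces. The sketch below illustrates this for a pipe-flow pressure-drop measurement; treating the pressure drop as the drag surrogate at matched bulk velocity is our assumption about the setup, not a detail reported by Chen _et al._ [20; 21].

```python
def drag_reduction_rate(dp_riblet, dp_smooth):
    """Drag reduction rate DR = (dp_smooth - dp_riblet) / dp_smooth.

    In fully developed pipe flow at matched bulk velocity, the wall shear
    stress is proportional to the streamwise pressure drop dp over a fixed
    length, so dp serves as a direct surrogate for drag.
    Returns a positive fraction when the riblet surface reduces drag.
    """
    return (dp_smooth - dp_riblet) / dp_smooth

# Hypothetical readings corresponding to a 16% reduction (H-sm-H-like case)
print(100 * drag_reduction_rate(dp_riblet=84.0, dp_smooth=100.0))  # -> 16.0
```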
Subsequently, Guo _et al._ extended their work by studying laminar channel flow with C-D riblets [39], examining how factors such as the Reynolds number, the riblet wavelength, and the riblet cross-sectional shape influence the drag characteristics. The behavior of the drag was studied at different Reynolds numbers (12.5-800). Their studies showed that the drag increment, normalized by the drag coefficient of the baseline cases, remains relatively stable up to Re=100; afterwards, a sudden rise occurs, caused by the dispersive velocity arising from the secondary flow. This trend differs significantly from that observed in a laminar channel flow developing over homogeneous roughness, where the normalized drag increment remains consistent up to a much higher Reynolds number owing to the lack of secondary flow.

Figure 16: Schematic of biomimetic herringbone riblets [20]: (a) herringbone-smooth-herringbone and (b) herringbone-herringbone; (c) shows the drag reduction percentage for various riblet arrangements at different viscous-scaled spacings.

However, despite the studies conducted on the effect of C-D riblets on drag reduction, the available information in this field is limited, and the capability of C-D riblets in terms of drag reduction is not yet fully clear; it is therefore not easy to draw conclusions about the performance of directional riblet textures in drag reduction. More studies on the effect of C-D riblets on drag in the various canonical cases (turbulent boundary layer, turbulent channel flow, and turbulent pipe flow) at matched Reynolds numbers would thus be worthwhile to give researchers a complete understanding.

## VII Manufacturing techniques for riblets: applications in fundamental research and commercial industries

In the last few decades there have been many efforts to replicate nature-inspired riblets, including C-D riblets. The design and manufacture of these riblets have always been considered a challenge because of their small size, varying cross-sectional area, and arrangements. The main objective of this section is to elaborate on the different techniques employed in making riblets. Although most of the discussed techniques were developed for standard riblets, they can be applied to directional riblets as well. Note that this section covers both the manufacturing and replication processes for research-and-development environments and for real-world applications.

### Manufacturing by machining techniques

The first and most basic manufacturing method is small-scale machining, such as milling or turning (manual or CNC-automated), applied directly to the fluid flow apparatus (wind tunnel, water tunnel, channel flow, etc.) or used to produce a master model that is replicated via other methods. This technique was used by the pioneers of this field at NASA in the early 1980s, Walsh _et al._ [40; 41; 4; 14], who investigated different riblet cross-sections cut with different end-mill heads (triangular, scalloped, etc.). Their machining technique was followed by others such as Bechert _et al._ [5], Chen _et al._ [20], and Nugroho _et al._ [22; 23]. Apart from the traditional machining process, a more sophisticated machining concept in the form of laser machining is also gaining momentum [42; 43; 44]. Such a process allows the manufacturing of a surface with accuracy down to the low micro- to high nanoscale, with little or no post-processing [44]. However, the challenge with laser machining is the introduction of random and unwanted structures.

### Manufacturing by Rolling

Rolling methods have potential benefits in various fields, including aerospace and fluid transport. Surfaces characterized by micro-scale riblets can be produced by rolling [45; 46; 47], as illustrated in Figure 17.
In general, the rolling process involves two rolls extending over a set distance, with the material fed between them. As the rolls rotate, frictional and compressive forces draw the rolling stock into the roll gap, reducing its thickness. Within the roll gap there is an opportunity to texture the surface of the rolling stock if the rolls are equipped with a negative imprint; during rolling, the imprint can be effectively transferred onto the rolling stock, thereby achieving the intended profile.

Figure 17: Sketch of riblet production by the rolling process [45].

### Manufacturing by 3D printing

Over the last ten years, 3D printing has gained widespread acceptance as a viable method for manufacturing micro-scale devices in research laboratories [48]. Various 3D-printing-related techniques have been developed to print enclosed microchannels or pipes in many different fields, such as medical devices or sensors. Such technology can be utilized to generate riblets, which generally have small dimensions. Recent riblet studies by The University of Manchester have shown that such manufacturing techniques are indeed feasible [27; 28] and provide an alternative to the more traditional cutting and rolling techniques. Their advanced 3D printer is capable of generating accurate riblets owing to its high precision (in the vicinity of 25 microns). The main challenge with this method, however, is the relatively long time needed to generate a plate of riblets, because the 3D printer must print not only the riblets but also the thicker base surface. Figure 18 shows a model of convergent-divergent riblets manufactured using the 3D printing method.

### Manufacturing by film

Some of the earliest mass-produced applied riblets were manufactured by the 3M company in the 1980s in the form of a thin plastic film with an adhesive backing. Since then, riblet film technology has been used in pipe flows [49], on ship models [50], on operating aircraft wings and surfaces [51; 52], on full-sized ship hulls [53], and on wind turbines [54]. Such riblet films can be manufactured by applying polydimethylsiloxane (PDMS) onto a master cut, yielding a flexible, thin riblet film that can be applied to the surface [55; 56].

### Manufacturing by UV paint

Another novel method to apply riblets over a wide surface area is the paint application technique, first introduced by the Fraunhofer Institute for Manufacturing Technology and Applied Materials Research (IFAM) in Bremen, Germany [57].

Figure 18: A model of C-D riblets manufactured by 3D printing techniques [27].

Here, a paint application apparatus with a flexible microstructured endless belt is guided over three rollers (see Figure 19). As the apparatus moves over a surface, a special paint consisting of a two-component polyurethane material is applied and simultaneously cured by the built-in UV lamp inside the apparatus. This technique has been applied in laboratory fluid flow experiments and can produce good-quality riblets [58; 59].

## VIII Can riblets be applied commercially?

As per the report by Spalart and McLean [60], the primary challenges associated with the utilization of riblets for flow control or drag reduction in practical engineering contexts like aircraft and ships primarily revolve around cost factors related to production and labor.
Furthermore, riblets are prone to erosion due to substantial fluctuations in dirt, pressure, and temperature. The present challenges are therefore not primarily due to fluid mechanics, but instead pertain to various aspects within the broader domain of engineering, encompassing painting methods, materials, and manufacturing processes. Despite this, recent progress and field studies on real-world applications by Airbus, Microtau, BASF, and others may provide solutions for the issues mentioned above.

## IX Conclusion

In conclusion, the primary aim of this review paper is to offer an in-depth understanding of the various aspects of innovative bio-inspired convergent-divergent riblets.

Figure 19: Paint application apparatus [57].

Unlike longitudinal riblets, convergent-divergent (C-D) patterns uniquely demonstrate the ability to influence boundary layer flows, vortical structures, near-wall motions, and the onset of flow separation. Moreover, they can specifically target the dominant large and very-large-scale structures that dictate the boundary layer at high Reynolds numbers. This study delved into the flow patterns generated by C-D riblets. It is crucial to highlight that the vortex generation principles exhibited by convergent-divergent riblets differ from those seen in traditional riblet designs. The configuration of the converging and diverging orientations of the riblets relative to the streamwise flow results in varied flow patterns, including helical and rotational motions within the riblet valleys, upward and downward movements, and counter-rotating roll modes. These unique vortex generation mechanisms of C-D riblets significantly influence the boundary layer. A central focus of this review was on the key physical parameters associated with C-D riblets: the yaw angle, wavelength, viscous-scaled riblet height, fetch length, and the transition from C-D riblets to a smooth surface. These factors can profoundly affect the intensity of the spanwise variation and the formation of flow structures. From the research conducted by various scholars, it is clear that C-D riblets hold potential for effective drag reduction. However, despite the existing studies on the effects of C-D riblets on drag reduction, our understanding in this area remains limited, which makes it difficult to draw definitive conclusions about the efficacy of C-D riblets in reducing drag. More research on the drag-reducing effects of C-D riblets would therefore be beneficial, providing a fuller understanding for researchers in this field. Given the riblets' small size and concerns about their durability and the potential for dust accumulation in the valleys, the production methods for riblets have always posed challenges for both researchers and the commercial sector, and convergent-divergent riblets are no exception. In this paper, we discussed various manufacturing techniques, including machining, rolling, 3D printing, film, and UV paint methods, aiming to inform researchers and industries about the latest advancements in C-D riblet production. Additionally, we explored the potential commercial applications of convergent-divergent riblets, with a particular focus on the aviation industry. In summary, despite challenges related to their production and practical application in the commercial realm, C-D riblets present a novel passive flow control strategy.
Their unique ability to positively influence fluid flow, owing to their distinct riblet design, positions them as a promising avenue for future flow control methods.

## Acknowledgments

This work was supported by the 'Human Resources Program in Energy Technology' of the Korea Institute of Energy Technology Evaluation and Planning (KETEP), with financial resources granted by the Ministry of Trade, Industry & Energy, Republic of Korea (no. 20214000000140). In addition, this work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (no. 2019R1I1A3A01058576).

## Data Availability

The data that support the findings of this study are available within this article.
2309.10469
RUEL: Retrieval-Augmented User Representation with Edge Browser Logs for Sequential Recommendation
Online recommender systems (RS) aim to match user needs with the vast amount of resources available on various platforms. A key challenge is to model user preferences accurately under the condition of data sparsity. To address this challenge, some methods have leveraged external user behavior data from multiple platforms to enrich user representation. However, all of these methods require a consistent user ID across platforms and ignore the information from similar users. In this study, we propose RUEL, a novel retrieval-based sequential recommender that can effectively incorporate external anonymous user behavior data from Edge browser logs to enhance recommendation. We first collect and preprocess a large volume of Edge browser logs over a one-year period and link them to target entities that correspond to candidate items in recommendation datasets. We then design a contrastive learning framework with a momentum encoder and a memory bank to retrieve the most relevant and diverse browsing sequences from the full browsing log based on the semantic similarity between user representations. After retrieval, we apply an item-level attentive selector to filter out noisy items and generate refined sequence embeddings for the final predictor. RUEL is the first method that connects user browsing data with typical recommendation datasets and can be generalized to various recommendation scenarios and datasets. We conduct extensive experiments on four real datasets for sequential recommendation tasks and demonstrate that RUEL significantly outperforms state-of-the-art baselines. We also conduct ablation studies and qualitative analysis to validate the effectiveness of each component of RUEL and provide additional insights into our method.
Ning Wu, Ming Gong, Linjun Shou, Jian Pei, Daxin Jiang
2023-09-19T09:37:56Z
http://arxiv.org/abs/2309.10469v1
# RUEL: Retrieval-Augmented User Representation with Edge Browser Logs for Sequential Recommendation

###### Abstract

Online recommender systems (RS) aim to match user needs with the vast amount of resources available on various platforms. A key challenge is to model user preferences accurately under the condition of data sparsity. To address this challenge, some methods have leveraged external user behavior data from multiple platforms to enrich user representation. However, all of these methods require a consistent user ID across platforms and ignore the information from similar users. In this study, we propose RUEL, a novel retrieval-based sequential recommender that can effectively incorporate external anonymous user behavior data from Edge browser logs to enhance recommendation. We first collect and preprocess a large volume of Edge browser logs over a one-year period and link them to target entities that correspond to candidate items in recommendation datasets. We then design a contrastive learning framework with a momentum encoder and a memory bank to retrieve the most relevant and diverse browsing sequences from the full browsing log based on the semantic similarity between user representations. After retrieval, we apply an item-level attentive selector to filter out noisy items and generate refined sequence embeddings for the final predictor. RUEL is the first method that connects user browsing data with typical recommendation datasets and can be generalized to various recommendation scenarios and datasets. We conduct extensive experiments on four real datasets for sequential recommendation tasks and demonstrate that RUEL significantly outperforms state-of-the-art baselines. We also conduct ablation studies and qualitative analysis to validate the effectiveness of each component of RUEL and provide additional insights into our method.
User Browsing Log, Sequential Recommendation, Dense Retrieval, Contrastive Learning

Information systems — Personalization

...to the candidate items. The key challenge is how to select and represent useful browsing data in the presence of noise. To address this challenge, we propose a novel retrieval-augmented sequential recommender based on Edge browser logs, named RUEL, which stands for Retrieval-Augmented User Representation with Edge Browser Logs for Sequential Recommendation. First, we apply an augmented contrastive learning framework to the encoder that takes the current user behavior sequence as input. We generate two augmented views from each user behavior sequence and feed them into a transformer encoder.
The objective of contrastive learning is to maximize the agreement between the two views of the same sequence. This has two advantages: (1) it improves the embedding space and reduces anisotropy (Zhou et al., 2017), and (2) it enhances the model's robustness to noise through data augmentation (Zhou et al., 2017; Wang et al., 2018). Moreover, we construct a momentum encoder and a memory bank for browsing sequences. The momentum encoder encodes browsing sequences and stores their embeddings in the memory bank, and all embeddings in the memory bank are used as negative samples in contrastive learning. The encoder is trained to distinguish the augmented original user behavior sequence from a large number of browsing sequences. Finally, we convert all browsing sequences into a retrieval index using the retriever for fast retrieval. At inference time, we retrieve the top-k browsing sequences from the index for each user and assign them different weights via an item-level attentive selector. The predictor aggregates the multiple weighted sequence embeddings with an attention mechanism and predicts the target item. In summary, our main contributions are as follows.

* We propose a novel approach to mine useful patterns from anonymous browsing data to improve recommender systems, bridging the gap between anonymous webpage browsing data and various recommendation tasks.
* We use a momentum contrastive learning framework on user behavior sequences and anonymous browsing sequences to train a powerful retriever, and design an attentive selector to generate fine-grained weights for each retrieved sequence.
* We conduct extensive experiments on four real-world datasets to demonstrate the effectiveness and robustness of our proposed approach.

## 2. Related Work

Our work is related to the following research directions.

**Sequential Recommendation.** By modeling high-order dependencies among items, sequential recommenders aim to recommend appropriate items to users. Previous efforts (Han et al., 2016; Wang et al., 2018; Liu et al., 2019) utilize Markov chains (MCs) to identify first-order transition relationships. Subsequently, with the rapid growth of deep learning techniques, numerous works (Liu et al., 2019; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019) have applied deep models to sequential recommendation (SR). GRU4Rec (Liu et al., 2019) first introduces RNNs to the SR task, taking into account practical features of the task and a number of modifications to standard RNNs, such as a ranking loss function. Caser (Caser, 2017) uses convolutional filters to learn sequential patterns as local features, in analogy to images. In addition, influenced by the success of the attention mechanism and Transformers in other domains (Han et al., 2016; Wang et al., 2018; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019), SASRec (Liu et al., 2019) achieved significant performance improvements by first utilizing self-attention to represent the interplay of past interactions. BERT4Rec (Liu et al., 2019) then models user behavior sequences using deep bidirectional self-attention, adapting the Cloze objective to SR. ReDA (Liu et al., 2019) generates relevant and diverse augmentations using related information from similar users.
**Contrastive Learning.** Self-supervised learning (SSL) has attracted widespread attention in the past several years (Han et al., 2016; Wang et al., 2018; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019). MoCo (Liu et al., 2019) designs a momentum mechanism to enhance the memory bank mechanism, and SimCLR (Han et al., 2016) proposes a simple contrastive learning framework without a memory bank or any specialized architectures. SimSiam (Chen et al., 2016) is a conclusive work on contrastive learning with convolutional neural networks. S3-Rec (Liu et al., 2019) is built on a self-attention architecture and proposes to use attribute information to produce self-supervision signals and augment data representations. SGL (Wang et al., 2018) generates multiple views of a subgraph and maximizes the agreement between different views of the same node in two subgraphs. CL4SRec (Wang et al., 2018), DuoRec (Zhou et al., 2017), MMInfoRec (Liu et al., 2019), CoSeRec (Liu et al., 2019) and ContraRec (Wang et al., 2018) propose to utilize contrastive learning to empower sequential recommendation.

## 3. Preprocessing

We preprocess the browsing data in three stages: entity linking, session segmentation, and item alignment. In the first stage, we use Microsoft Satori to link webpages to entities based on their title and main text. We build a billion-level webpage-entity dictionary by retrieving and ranking candidate entities for each webpage using BM25 and a RoBERTa-based ranking model; the best entity with a ranking score above 0.9 is selected. With the webpage-entity dictionary, we filter the raw browsing data by keeping only webpages that are linked to Movie/Book/Artist entities. In the second stage, we split the browsing log of each user into sequences with lengths greater than 4 using a 4-hour time interval. Then we use side information in the dataset to match these entities to candidate items in three datasets (Wang et al., 2019): for the MovieLens datasets, we compare the release year and movie name; for the Amazon-Book dataset, we use the author name and book name; and for the Last FM dataset, we use the artist's name. We choose the top-1 candidate after verification for each item.

Table 1. Interaction statistics for the browsing datasets: total numbers of browsed webpages remaining after each of the three preprocessing stages.

| **Dataset** | **Raw** | **Stage 1** | **Stage 2** | **Stage 3** |
| --- | --- | --- | --- | --- |
| **ML-1m** | 32b | 871m | 537m | 22m |
| **ML-20m** | 32b | 871m | 537m | 95m |
| **Amazon-Book** | 32b | 108m | 75m | 27m |
| **Last FM** | 32b | 978m | 538m | 14m |

Figure 1. Illustration of the overall procedure for movie recommendation. Webpages browsed by users are first linked to entities by entity linking technology; these entities are then mapped to items based on item name, publication year, etc. Finally, a well-trained retriever searches for useful browsing sequences and feeds them into the recommender.

Table 1 shows the statistics of the browsing data after each stage.
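To make the session segmentation stage concrete, the following is a minimal sketch of splitting one user's timestamped browsing log into sessions using the 4-hour inactivity gap, keeping only sessions longer than four events; the data layout and function names are illustrative assumptions, not the released implementation.

```python
from datetime import timedelta

def split_sessions(events, gap=timedelta(hours=4), min_len=5):
    """Split one user's browsing log into sessions.

    events : list of (timestamp, entity_id) pairs, sorted by timestamp.
    A new session starts whenever the time since the previous event
    exceeds `gap`; sessions with fewer than `min_len` events are dropped
    (the paper keeps sequences with length greater than 4).
    """
    sessions, current = [], []
    prev_t = None
    for t, entity in events:
        if prev_t is not None and t - prev_t > gap:
            if len(current) >= min_len:
                sessions.append(current)
            current = []
        current.append(entity)
        prev_t = t
    if len(current) >= min_len:
        sessions.append(current)
    return sessions
```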
## 4. Task Formulation

We consider a sequential recommendation task with a set of users \(\mathcal{U}=\{u_{1},\cdots,u_{|\mathcal{U}|}\}\) and a set of items \(\mathcal{V}=\{v_{1},\cdots,v_{|\mathcal{V}|}\}\). The user-item interaction matrix \(Y=\{y_{uv}\,|\,u\in\mathcal{U},v\in\mathcal{V}\}\) captures the implicit feedback of users, where \(y_{uv}=1\) indicates that user \(u\) has interacted with item \(v\), and \(y_{uv}=0\) otherwise. The interaction can be any type of behavior, such as clicking, watching, browsing, etc. For each user \(u\), we can also obtain the interaction sequence \(s_{u}=(v_{1},...,v_{j},...,v_{l_{u}})\), where \(s_{u}\in\mathcal{C}\), \(v_{j}\) is the item that \(u\) has interacted with at time step \(j\), and \(l_{u}\) is the length of the interaction history of user \(u\). Moreover, we assume that we have access to a large amount of webpage browsing data \(\mathcal{D}\) from the Edge browser, which consists of numerous webpage sequences \(s_{i}^{r}\), where \(i\) denotes the \(i\)-th browsing session of anonymous users in \(\mathcal{D}\). Based on these definitions, we formulate the retrieval-augmented recommendation problem as follows: given the interaction sequence \(s_{u}=(v_{1},...,v_{j},...,v_{l_{u}})\) of user \(u\) and the webpage browsing data \(\mathcal{D}\) from the Edge browser, our goal is to predict the next item \(v_{t}\) that user \(u\) will interact with.

## 5. The RUEL Model

In this section, we present the proposed _RUEL: Retrieval-Augmented User Representation with Edge Browser Logs for Sequential Recommendation_. Figure 2 illustrates our model framework, which consists of three main components: 1) contrastive learning with multiple data augmentation strategies; 2) a momentum encoder and a memory bank, which help the encoder learn to discriminate positive samples from a large number of negative browsing samples; and 3) a prediction module, which consists of an item-level attentive selector and a predictor that combines the retrieved information with the current item sequence to predict the next item.

### Transformer-based Recommender

Following previous work (Kang et al., 2019; Wang et al., 2019; Wang et al., 2019), we choose the transformer encoder (Wang et al., 2019), which has been widely applied to numerous CV and NLP tasks, as the model \(f(\cdot)\) of the input sequence \(s_{u}\). For an input sequence \(s_{u}\), we use \(\mathbf{h}_{u}\) to denote its representation generated by the transformer encoder. Furthermore, we use \(\mathbf{h}_{u,j}\) to denote the embedding of the \(j\)-th item of the \(u\)-th user sequence in \(\mathcal{C}\), and \(\mathbf{h}^{r}_{i,j}\) to denote the embedding of the \(j\)-th item of the \(i\)-th browsing session in \(\mathcal{D}\).

### Contrastive Learning for Sequential Modeling

We first introduce the data augmentation methods used in this paper, and then explain the technical aspects of momentum contrastive learning on browsing sequences.

#### 5.2.1. Sequential Augmentation

Given the current item sequence \(s_{u}\), we adopt some typical sequence-based augmentations (Wang et al., 2019), randomly selecting one operator from _Mask_, _Crop_, and _Reorder_ in each augmentation; a minimal sketch of these operators is given after the list.

* **Mask** (M). The item masking approach (Wang et al., 2019; Wang et al., 2019; Wang et al., 2019) randomly replaces each item with the special token [mask] with probability \(\gamma\in(0,1)\). This augmentation strategy can be formulated as \(s_{u^{\prime}}^{M}=[v_{1},v_{\text{[mask]}},\cdots,v_{l_{u}}]\).
* **Reorder** (R). Different item orders may reflect the same user interests (Wang et al., 2019; Wang et al., 2019). We can shuffle a continuous sub-sequence \([v_{r+1},\cdots,v_{r+l_{r}}]\) into \([v_{r+1}^{\prime},\cdots,v_{r+l_{r}}^{\prime}]\), where \(l_{r}=\mu\cdot l_{u}\) is the sub-sequence length and \(\mu\in(0,1)\).
* **Crop** (C). Given the current behavior sequence \(s_{u}\), we randomly select a sub-sequence \(s_{u^{\prime}}^{C}=[v_{c+1},\cdots,v_{c+l_{c}}]\) of length \(l_{c}=\eta\cdot l_{u}\), where \(\eta\in(0,1)\) is a length hyper-parameter.
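As a concrete illustration of the three operators above, the following is a minimal sketch; the hyper-parameter defaults and the [mask] token id are placeholders rather than the values used in RUEL.

```python
import random

MASK = 0  # placeholder id for the special [mask] token

def mask(seq, gamma=0.2):
    """Item masking: replace each item with [mask] with probability gamma."""
    return [MASK if random.random() < gamma else v for v in seq]

def reorder(seq, mu=0.2):
    """Shuffle a random continuous sub-sequence of length mu * len(seq)."""
    l_r = max(1, int(mu * len(seq)))
    r = random.randint(0, len(seq) - l_r)
    sub = seq[r:r + l_r]
    random.shuffle(sub)
    return seq[:r] + sub + seq[r + l_r:]

def crop(seq, eta=0.6):
    """Keep a random continuous sub-sequence of length eta * len(seq)."""
    l_c = max(1, int(eta * len(seq)))
    c = random.randint(0, len(seq) - l_c)
    return seq[c:c + l_c]

def augment(seq):
    """Randomly apply one of the three operators to produce an augmented view."""
    return random.choice([mask, reorder, crop])(list(seq))
```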
We can shuffle a continuous sub-sequence \([v_{r+1},\cdots,v_{r+l_{r}}]\) into \([v_{r+1}^{\prime},\cdots,v_{r+l_{r}}^{\prime}]\), where \(l_{r}=\mu\,l_{u}\) is the sub-sequence length and \(\mu\in(0,1)\). * **Crop** (C). Given the current behavior sequence \(s_{u}\), we randomly select a sub-sequence \(s_{u^{\prime}}^{C}=[v_{c+1},\cdots,v_{c+l_{c}}]\) with length \(l_{c}=\eta\,l_{u}\), where \(\eta\in(0,1)\) is a length hyper-parameter. #### 5.2.2. Momentum Contrastive Learning Moreover, we adopt momentum contrastive learning, which uses a large memory bank with many negative examples and maintains a consistent encoder for generating negative embeddings with a momentum-based update mechanism. The memory bank \(\mathbf{M}\) is updated in a _FIFO_ (First-In-First-Out) fashion. At each iteration, the representation \(\mathbf{h}^{\prime}_{i}\) is generated by a key encoder \(f_{k}(\cdot)\) that has the same structure as \(f(\cdot)\). Then \(\mathbf{h}^{\prime}_{i}\) is added to \(\mathbf{M}\), and the oldest representation in the memory bank is removed. Notably, our memory bank \(\mathbf{M}\) stores the embeddings of browsing sequences as negative examples to enhance the discriminative power of the query encoder. Using a queue-based memory bank allows us to increase the number of negative samples in each loss computation, but it prevents us from updating the key encoder \(f_{k}(\cdot)\) via back-propagation (the gradient is disconnected for all samples in the queue). Figure 2. The overall architecture of the RUEL model. \(f(\cdot)\) is trained to maximize agreement between positive pairs and to discriminate positive pairs from numerous negative embeddings generated by the momentum encoder \(f_{k}(\cdot)\). The blue part depicts how browsing sequences are fed into \(f_{k}(\cdot)\) and converted into negative embeddings in the memory bank. After the first training stage, all browsing sequences are encoded into embeddings to construct the full retrieval index. Following (Kang et al., 2017), the parameters of the key encoder \(\theta_{k}\) are updated by following those of the query encoder \(\theta_{q}\) with a momentum coefficient \(m\), resulting in stable representations for similar sequences. Formally, we have: \[\theta_{k}\gets m\theta_{k}+(1-m)\theta_{q}, \tag{1}\] where \(m\in[0,1)\) is a hyper-parameter. Back-propagation only modifies \(\theta_{q}\). The momentum update in Equation 1 makes \(\theta_{k}\) evolve more smoothly than \(\theta_{q}\). We simply set \(m\) to 0.999 following (Kang et al., 2017). Based on the memory bank, we optimize the sequence representation with a momentum contrastive loss function: \[\mathcal{L}_{s_{u},s_{u^{\prime}}}=-\log\frac{\exp(\mathbf{h}_{u}\cdot\mathbf{h}_{u^{\prime}}/\tau)}{\sum_{i=1}^{2N}\mathds{1}_{[u\neq i]}\exp(\mathbf{h}_{u}\cdot\mathbf{h}_{i}/\tau)+\sum_{K^{\prime}=1}^{K}\exp(\mathbf{h}_{u}\cdot\mathbf{h}_{K^{\prime}}^{r}/\tau)}, \tag{2}\] where \(\tau\) is a temperature hyper-parameter and \(K\) is the size of the memory bank. This loss is equivalent to minimizing the negative log-likelihood of a \((2N+K)\)-way softmax classifier that tries to distinguish the augmented view of \(s_{u}\) from the other sequences in the batch and the many browsing sequences in the memory bank. \[\mathcal{L}_{CTS}=\sum_{s_{u}\in\mathcal{C}}\mathcal{L}_{s_{u},s_{u^{\prime}}} \tag{3}\] In the above equation, \(s_{u}\) denotes each behavior sequence in dataset \(\mathcal{C}\), and \(s_{u^{\prime}}\) denotes its augmented view. 
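To make the momentum mechanism concrete, the following is a minimal, self-contained sketch of the update in Equation 1 together with the FIFO memory bank. It is an illustration under simplifying assumptions, not the released RUEL implementation: the encoder modules, the feature dimension, the temperature value, and the omission of the in-batch negative term of Equation 2 are all our own choices.

```python
import torch
import torch.nn.functional as F

class MomentumContrast(torch.nn.Module):
    """Sketch of momentum contrast with a queue-based memory bank.

    `encoder_q` / `encoder_k` stand in for the transformer encoders
    f(.) and f_k(.); they must share the same architecture.
    """
    def __init__(self, encoder_q, encoder_k, dim=128, K=8096, m=0.999, tau=0.07):
        super().__init__()
        self.f_q, self.f_k = encoder_q, encoder_k
        self.m, self.tau = m, tau  # tau is an assumed value; the paper leaves it unspecified
        # The key encoder starts as a copy of the query encoder and is
        # never updated by back-propagation.
        self.f_k.load_state_dict(self.f_q.state_dict())
        for p in self.f_k.parameters():
            p.requires_grad = False
        # FIFO memory bank of K negative embeddings (browsing sequences).
        self.register_buffer("queue", F.normalize(torch.randn(K, dim), dim=1))
        self.register_buffer("ptr", torch.zeros(1, dtype=torch.long))

    @torch.no_grad()
    def momentum_update(self):
        # theta_k <- m * theta_k + (1 - m) * theta_q   (Eq. 1)
        for pq, pk in zip(self.f_q.parameters(), self.f_k.parameters()):
            pk.data.mul_(self.m).add_(pq.data, alpha=1.0 - self.m)

    @torch.no_grad()
    def enqueue(self, keys):
        # FIFO update: newest browsing-sequence embeddings replace the oldest.
        K, n, i = self.queue.shape[0], keys.shape[0], int(self.ptr)
        idx = torch.arange(i, i + n) % K
        self.queue[idx] = keys
        self.ptr[0] = (i + n) % K

    def loss(self, seq, seq_aug, browsing_seq):
        # h_u and h_{u'}: two views of the same behavior sequence.
        q = F.normalize(self.f_q(seq), dim=1)        # (N, dim)
        pos = F.normalize(self.f_q(seq_aug), dim=1)  # (N, dim)
        with torch.no_grad():
            self.momentum_update()
            neg_new = F.normalize(self.f_k(browsing_seq), dim=1)
        # Positive logit plus K memory-bank negatives; Eq. 2 additionally
        # uses the 2N-1 in-batch negatives, omitted here for brevity.
        l_pos = (q * pos).sum(dim=1, keepdim=True)   # (N, 1)
        l_neg = q @ self.queue.t()                   # (N, K)
        logits = torch.cat([l_pos, l_neg], dim=1) / self.tau
        labels = torch.zeros(q.shape[0], dtype=torch.long, device=q.device)
        self.enqueue(neg_new)
        return F.cross_entropy(logits, labels)
```

Keeping the queue update and the momentum step inside `torch.no_grad()` reflects the point made above: back-propagation only modifies \(\theta_{q}\), while \(\theta_{k}\) evolves smoothly via Equation 1.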
### Retrieval-augmented Sequential Modeling In this section, we present our retrieval-augmented sequential recommendation model, which uses the observed user behavior sequence \(s_{u}\) to retrieve the top-k browsing sequences \(\{s_{1}^{r},s_{2}^{r},...,s_{k}^{r}\}\) and uses them as additional context for sequential recommendation. As shown on the right of Figure 2, after the top-k sessions are retrieved from the full index, we perform fine-grained attentive selection at the item level. For each retrieved behavior sequence, its final representation is computed as a weighted sum of its item representations. Then, a predictor takes both the weighted retrieved sequence embedding and the current user sequence embedding as input, and predicts the target item. #### 5.3.1. Session-level Retrieval We use the transformer encoder \(f_{k}(\cdot)\) to build the index of the retriever. For each user in the browsing data, their records are split into different sessions by time interval, as stated above. The encoder \(f_{k}(\cdot)\) takes these sessions as input and produces a dense embedding \(\mathbf{h}_{i}^{r}\) for each browsing sequence \(s_{i}^{r}\) in \(\mathcal{D}\). Then a relevance score \(f(s_{u},s_{i}^{r})=\mathbf{h}_{u}^{T}\mathbf{h}_{i}^{r}\) is calculated between the dense session vector \(\mathbf{h}_{u}\) of user \(u\) and the browsing sequence vector \(\mathbf{h}_{i}^{r}\) by inner product. Thus, we can employ Maximum Inner Product Search (MIPS) algorithms to find the approximate top-k sessions \(\{s_{1}^{r},s_{2}^{r},...,s_{k}^{r}\}\), using running time and storage space that scale sub-linearly with the number of browsing sequences. We use faiss1 to implement the above retrieval procedure. Footnote 1: [https://github.com/facebookresearch/faiss](https://github.com/facebookresearch/faiss) #### 5.3.2. Attentive Selection After the top \(k\) sessions are retrieved from the numerous browsing data, we aim to reduce the impact of noisy items on prediction. Therefore, we apply an item-level attention mechanism to compute an attention weight for each item in the retrieved sessions. \[\alpha_{u}^{j}=\frac{\exp(W_{1}\mathbf{h}_{u}+W_{2}\mathbf{h}_{i,j}^{r})}{\sum_{j^{\prime}=1}^{l_{i}}\exp(W_{1}\mathbf{h}_{u}+W_{2}\mathbf{h}_{i,j^{\prime}}^{r})} \tag{4}\] \[\mathbf{o}_{i}=\sum_{j\in\{1...l_{i}\}}\alpha_{u}^{j}\mathbf{h}_{i,j}^{r} \tag{5}\] where \(l_{i}\) denotes the length of \(s_{i}^{r}\), \(\mathbf{h}_{i,j}^{r}\) denotes the output hidden state of the \(j\)-th item in the \(i\)-th retrieved browsing sequence \(s_{i}^{r}\), and \(\mathbf{o}_{i}\) denotes the weighted embedding of sequence \(s_{i}^{r}\). \(\mathbf{W}_{1}\) and \(\mathbf{W}_{2}\) are learnable parameters, and \(\alpha_{u}^{j}\) represents the attention weight between \(s_{u}\) and the \(j\)-th item in the \(i\)-th retrieved browsing sequence \(s_{i}^{r}\). After obtaining \(\mathbf{o}_{i}\) for each retrieved sequence, we directly use the retrieval score \(f(s_{u},s_{i}^{r})\) as the weight of \(\mathbf{o}_{i}\) to compute the final context vector \(\mathbf{o}\): \[\mathbf{o}=\sum_{i\in\{1...k\}}f(s_{u},s_{i}^{r})\,\mathbf{o}_{i} \tag{6}\] where \(f(s_{u},s_{i}^{r})\) is the dot-product relevance score generated by the retriever. In summary, we select the top-k retrieved browsing sequences at both the sequence level and the item level, and obtain a comprehensive context vector \(\mathbf{o}\) to enhance the predictor. 
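As a concrete reference for the session-level retrieval step, a minimal faiss sketch is given below. The array names, corpus size, and the use of an exact inner-product index are our simplifying assumptions, not the production pipeline.

```python
import faiss
import numpy as np

dim = 128
# h^r_i for every browsing sequence s^r_i in D, produced by f_k(.);
# random data here stands in for the real embeddings.
browse_emb = np.random.randn(100_000, dim).astype("float32")

index = faiss.IndexFlatIP(dim)   # exact Maximum Inner Product Search
index.add(browse_emb)            # build the fixed retrieval index

def retrieve(h_u: np.ndarray, k: int = 10):
    """Return relevance scores f(s_u, s^r_i) and ids of the top-k sessions."""
    scores, ids = index.search(h_u.reshape(1, -1).astype("float32"), k)
    return scores[0], ids[0]
```

At the scale of billions of browsing sessions, an approximate faiss index would replace the flat one to obtain the sub-linear running time and storage mentioned above.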
#### 5.3.3. Prediction and Loss Function Given the aggregated context vector \(\mathbf{o}\), we use a two-layer multilayer perceptron (MLP) to capture the high-level interaction between \(\mathbf{o}\) and \(\mathbf{h}_{u}\): \[p(v_{l}|s_{u},\mathcal{D})=\frac{\exp(\mathbf{w}_{l}^{T}\text{MLP}(\mathbf{h}_{u}\parallel\mathbf{o}))}{\sum_{j=1}^{N}\exp(\mathbf{w}_{j}^{T}\text{MLP}(\mathbf{h}_{u}\parallel\mathbf{o}))} \tag{7}\] where \(\mathbf{w}_{j}\) is a learnable embedding for item \(j\), and \(\mathcal{D}\) denotes the full set of browsing data. The final loss function is the cross entropy, which can be written as: \[\mathcal{L}_{CF}=\sum_{s_{u}\in\mathcal{C}}-\log p(v_{l}|s_{u},\mathcal{D}), \tag{8}\] where \(\mathcal{C}\) denotes the set of all user behavior sequences and \(v_{l}\) denotes the target item of behavior sequence \(s_{u}\). #### 5.3.4. Optimization and Training Strategy We adopt a two-stage training strategy. In the first stage, we train the transformer encoder \(f(\cdot)\) by minimizing the contrastive loss in Equation 3, and update the momentum encoder \(f_{k}(\cdot)\) by Equation 1. Then we construct a fixed browsing sequence index using \(f_{k}(\cdot)\). In the second stage, we jointly optimize the contrastive loss and the cross-entropy loss: \[\mathcal{L}=\mathcal{L}_{CTS}+\mathcal{L}_{CF}. \tag{9}\] Unlike some previous work (Kang et al., 2017), which relies on a reward to guide the gradient of the retriever and constantly refreshes the index with the updated retriever, we use a fixed browsing data index, which allows us to directly optimize the transformer encoder \(f(\cdot)\) in the item prediction task by gradient back-propagation. By sharing the encoder between momentum contrastive learning and the item prediction task, the transformer encoder \(f(\cdot)\) also benefits from contrastive learning, because a good embedding space is conducive to item prediction (Wang et al., 2019). ## 6. Experiments In this section, we first set up the experiments, and then present the performance comparison and analysis. ### Experimental Setup **Edge Browser Log Mining.** We present the details of entity linking and entity-item mapping in Section 3. With these, we can transform the user browsing log into a sequence of entities or items. Due to resource limitations, we only use the user browsing logs of the Edge browser from 2021. We collect more than 32 billion records and convert them into browsing sequences for the four datasets. Table 2 shows detailed information. **Evaluation Metrics.** In this paper, we use Normalized Discounted Cumulative Gain (NDCG) and Hit Ratio (HR) as metrics (Dong et al., 2017; Wang et al., 2018; Wang et al., 2019). We set k=5 and k=10 in the experiments. **Task Setting.** For data preprocessing, we follow the common practice in previous work. To ensure the quality of the dataset, we only keep users with at least five interactions, following previous practice. To evaluate the sequential recommendation models, we adopt the leave-one-out evaluation (i.e., next-item recommendation) task. For each user, we hold out the last item of the behavior sequence as the test data and use the item just before the last as the validation set. The remaining items are used for training. We implemented our baselines based on the RecBole (Wang et al., 2019) framework. All baseline settings and training strategies follow the original authors' implementations, and we further tune the parameters based on them. 
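For reference, the leave-one-out HR@k and NDCG@k described above can be computed as in the following sketch; the helper names and example values are hypothetical, not the exact evaluation code.

```python
import numpy as np

# `rank` is the 0-based position of the held-out item in the model's
# ranked candidate list for one user.

def hr_at_k(rank: int, k: int) -> float:
    return 1.0 if rank < k else 0.0

def ndcg_at_k(rank: int, k: int) -> float:
    # With a single relevant item, IDCG = 1 and DCG = 1 / log2(rank + 2).
    return 1.0 / np.log2(rank + 2) if rank < k else 0.0

# Dataset-level metrics average these over all users, e.g.:
ranks = [0, 3, 12, 7]  # hypothetical per-user ranks of the held-out items
print(np.mean([hr_at_k(r, 10) for r in ranks]),
      np.mean([ndcg_at_k(r, 10) for r in ranks]))
```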
For fairness, the embedding size, number of layers, and number of heads of all transformer-based models are fixed to the widely used 128, 2 and 2, respectively. The size of the memory bank \(K\) is set to 8096. We use the Adam optimizer (Kingma and Ba, 2014) to optimize all models. Moreover, all models are trained for 500 epochs with an early stopping strategy, _i.e._, training stops prematurely if NDCG@10 on the validation set does not increase for 20 successive epochs. **Methods to Compare.** We consider the following methods for comparison: GRU4Rec (Krizhevsky et al., 2015), Caser (Caser, 2015), BERT4Rec (Cai et al., 2017), CL4Rec (Cai et al., 2017), and CoSeRec (Cai et al., 2017). Shallow models like BPR-MF (Cai et al., 2017), NCF (Cai et al., 2017) and FPMC (Cai et al., 2017) are omitted due to the length limitation. Among these baselines, GRU4Rec, SASRec, Caser and BERT4Rec belong to typical deep-learning-based sequential recommenders, while CL4Rec and CoSeRec are state-of-the-art contrastive sequential recommenders. The parameters of all models have been optimized using the validation set. ### Performance Comparison Table 3 shows the results of the performance comparison between our RUEL method and the baseline methods on four datasets. We can observe that RUEL consistently achieves the best performance on all datasets. Among the baselines, CL4Rec and CoSeRec are the most competitive ones, especially CoSeRec, which leverages multiple data augmentation strategies to enhance its contrastive learning. We can also see that the transformer-based recommenders have an advantage over other deep learning architectures. Moreover, we notice that our method has a larger gain on ML-1M than on ML-20M, indicating that our method is more effective on relatively small datasets. ### Ablation Analysis We performed ablation experiments on sequential recommendation and retrieval tasks to examine the contribution of each component of RUEL. The results are presented in Table 4. For the retrieval task, we utilized our encoder to generate embeddings for a user behavior sequence and its augmented counterpart. We then evaluated the encoder's ability to retrieve the augmented sequence from numerous browsing sequences using HR@10 and HR@20 metrics. \begin{table} \begin{tabular}{l|c c|c c|c c|c c} \hline \hline \multirow{3}{*}{Arch.} & \multicolumn{4}{c|}{Amazon-Book} & \multicolumn{4}{c}{ML-1M} \\ \cline{2-9} & \multicolumn{2}{c|}{Reco} & \multicolumn{2}{c|}{Retrieval} & \multicolumn{2}{c|}{Reco} & \multicolumn{2}{c}{Retrieval} \\ \cline{2-9} & H@10 & N@10 & H@10 & H@20 & H@10 & N@10 & H@10 & H@20 \\ \hline RUEL\({}_{-RA}\) & 0.2164 & 0.1026 & 0.2348 & 0.3104 & 0.2044 & 0.0985 & 0.1852 & 0.2494 \\ RUEL\({}_{-MC}\) & 0.2811 & 0.1095 & 0.6591 & 0.8519 & 0.2203 & 0.1015 & 0.6354 & 0.8354 \\ RUEL\({}_{-DA}\) & 0.2194 & 0.1101 & 0.7214 & 0.8984 & 0.2206 & 0.1032 & 0.6774 & 0.8694 \\ RUEL\({}_{-AS}\) & 0.2189 & 0.1084 & – & – & 0.2204 & 0.1018 & – & – \\ RUEL & 0.2220 & 0.1106 & 0.7249 & 0.9025 & 0.2231 & 0.1046 & 0.6841 & 0.8751 \\ \hline \hline \end{tabular} \end{table} Table 4. Ablation analysis on the Amazon-Book and ML-1M datasets. 
\begin{table} \begin{tabular}{l|c c c c} \hline \hline **Dataset** & **\#User** & **\#Item** & **\#Inter** & **\#Inter\({}_{\text{\bf Browsing}}\)** \\ \hline **ML-1m** & 6,040 & 3,416 & 1,000,000 & 22,704,450 \\ **ML-20m** & 138,493 & 26,744 & 20,000,000 & 95,148,210 \\ **Amazon-Book** & 281,428 & 13,044 & 3,500,000 & 27,439,633 \\ **Last FM** & 1,090 & 3,646 & 52,538 & 14,247,218 \\ \hline \hline \end{tabular} \end{table} Table 2. Interaction information statistics for the datasets. We use _Inter_ to denote interactions. \begin{table} \begin{tabular}{l|l|c c c c c c|c|c} \hline \hline Datasets & Metrics & GRU4Rec & SASRec & Caser & BERT4Rec & CL4Rec & CoSeRec & RUEL & \% Imp. \\ \hline \multirow{4}{*}{LF} & H@5 & 0.0301 & 0.0416 & 0.0385 & 0.0401 & 0.0447 & 0.0484 & **0.0526** & 8.68\% \\ & H@10 & 0.0509 & 0.0615 & 0.0582 & 0.0598 & 0.0751 & 0.0778 & **0.0831** & 6.81\% \\ & N@5 & 0.0218 & 0.0256 & 0.2049 & 0.022 & 0.0362 & 0.0332 & **0.0366** & 10.24\% \\ & N@10 & 0.0258 & 0.0319 & 0.0285 & 0.0294 & 0.0442 & 0.0450 & **0.0492** & 9.33\% \\ \hline \multirow{4}{*}{AB} & H@5 & 0.0641 & 0.0975 & 0.0795 & 0.0741 & 0.1086 & 0.1305 & **0.1422** & 8.96\% \\ & H@10 & 0.1452 & 0.1806 & 0.1540 & 0.1331 & 0.1951 & 0.2115 & **0.2229** & 4.96\% \\ & N@5 & 0.0341 & 0.0614 & 0.0351 & 0.0408 & 0.0693 & 0.0780 & **0.0862** & 10.51\% \\ & N@10 & 0.066 & 0.0851 & 0.0634 & 0.0681 & 0.0995 & 0.1013 & **0.1106** & 9.18\% \\ \hline \multirow{4}{*}{ML-1M} & H@5 & 0.0763 & 0.1087 & 0.0816 & 0.0733 & 0.1147 & 0.1275 & **0.1403** & 10.04\% \\ & H@10 & 0.1683 & 0.1904 & 0.1593 & 0.1329 & 0.1975 & 0.2043 & **0.2231** & 9.20\% \\ & N@5 & 0.0385 & 0.0638 & 0.0372 & 0.0482 & 0.0632 & 0.0580 & **0.0880** & 0.2444 \\ & N@10 & 0.0671 & 0.0910 & 0.0624 & 0.0619 & 0.0928 & 0.0978 & **0.1046** & 6.95\% \\ \hline \multirow{4}{*}{ML-20M} & H@5 & 0.0825 & 0.1135 & 0.0915 & 0.0801 & 0.1285 & 0.1396 & **0.1487** & 6.51\% \\ & H@10 & 0.1865 & 0.205 & 0.1641 & 0.1401 & 0.2041 & 0.2132 & **0.2391** & 7.93\% \\ & N@5 & 0.0425 & 0.0648 & 0.0415 & 0.0501 & 0.0715 & 0.0785 & **0.0866** & 10.31\% \\ & N@10 & 0.0731 & 0.0992 & 0.0698 & 0.0645 & 0.1051 & 0.1141 & **0.1215** & 6.48\% \\ \hline \hline \end{tabular} \end{table} Table 3. Performance comparison using four metrics on four datasets. Larger values indicate better results. With a paired \(t\)-test, the improvement of RUEL over the best baseline is significant at the 0.05 level. In this table, RUEL denotes our full model with all components. RUEL\({}_{-RA}\) eliminates the retriever and the attentive selector components, and directly infers the target item by employing the output embedding of the transformer encoder \(f(\cdot)\). RUEL\({}_{-DA}\) discards all data augmentation strategies. RUEL\({}_{-MC}\) removes the memory bank and the momentum mechanism, and only retains in-batch negatives in contrastive learning. RUEL\({}_{-AS}\) removes the attentive selector, and simply adopts average pooling to obtain the sequence embedding. The results in Table 4 indicate that RUEL\({}_{-RA}\) slightly outperforms CoSeRec, which implies that our momentum contrast and memory bank are also advantageous for direct recommendation. RUEL\({}_{-DA}\) performs very similarly to RUEL, implying that our method does not depend on manually designed sequence augmentation strategies. Discarding the momentum mechanism and the memory bank also deteriorates the model performance, since the model needs to be trained to discriminate the original sequence from numerous browsing sequences. 
The attentive selector plays a vital role in enhancing the model's robustness and effectiveness by applying token-level attention; eliminating this component substantially impairs the model performance. ### Online A/B Test We have deployed our model on Bing desktop search by replacing the existing transformer-based sequential ranker with RUEL. To obtain a stable conclusion, we observed the online experiment for two weeks. Three common metrics in desktop search systems are used to measure the online performance: WHR (Weighted Hover Rate), CTR (Click-Through Rate), and short-term DAU. As shown in Table 5, RUEL achieves overall improvements across multiple regions in our online A/B test. ### Parameter Analysis We experiment with RUEL using different \(k\) values in [3, 5, 10, 20, 30] and report the results in Figure 3(a). The metrics peak at \(k\)=10 on the Amazon-Book dataset and decline afterwards. The attentive selector provides little benefit when \(k\) is small, since there is less noise among the top-ranked items. The performance gap between RUEL and RUEL\({}_{-AS}\) grows as \(k\) increases because more irrelevant items appear in the retrieved sequences. We also vary the embedding size in [32, 64, 128] and show the results in Figure 3(b). RUEL consistently outperforms RUEL\({}_{-RA}\) on the Amazon-Book dataset, especially when the embedding size is 128; RUEL benefits more from a larger embedding size than RUEL\({}_{-RA}\). ### Qualitative Analysis We show how RUEL works in Figure 4. The red frames mark retrieved browsing sequences containing popular movies in the science fiction and romance genres. The first sequence contains _The Shawshank Redemption_, _Star Wars: Episode 1_, and _Blade Runner_. The second sequence contains _Roman Holiday_, _Titanic_ and _Saving Private Ryan_. The retriever finds the 10 most similar browsing sequences using the current sequence embedding. The attentive selector assigns a weight to each item in the retrieved sequences. Items that are irrelevant to the user's interests, such as _The Shawshank Redemption_ in sequence 1 and _Saving Private Ryan_ in sequence 2, receive lower weights. The item-level and sequence-level weights are combined to produce a context vector. The enhanced recommender then uses the context embedding and the current sequence embedding to predict the next item. _Blade Runner_ and _Roman Holiday_ are the top two candidates because they better match the user's interests. ## 7. Conclusions This paper proposes RUEL, a retrieval-augmented sequential recommendation model that fully utilizes Edge browsing information. An item sequence in the recommendation dataset is encoded into a vector, which is used to retrieve similar user behavior sequences from cross-domain behavior data. The retrieved sequences are then filtered by an item-level attentive selector, and the refined sequences are used to enhance next-item prediction. In the future, we will further explore more heterogeneous cross-domain user behavior data to enhance user modeling. Figure 4. Prediction procedure for a sample user in the Bing Movie dataset. Red frames denote useful browsing sequences. For item- and sequence-level attention weights, darker colors indicate larger weights. For prediction results, size represents the predicted probability. Figure 3. Effect of retrieval number and embedding size. 
\begin{table} \begin{tabular}{l|c c c} \hline Online Metric & Region \#1 & Region \#2 & Region \#3 \\ \hline WHR & +2.9\% & +2.7\% & +3.4\% \\ CTR & +1.2\% & +1.3\% & +1.8\% \\ Short-Term DAU & +0.07\% & +0.11\% & +0.14\% \\ \hline \end{tabular} \end{table} Table 5. A/B Test on Bing Movie.
2301.13742
Ultrafast Umklapp-assisted electron-phonon cooling in magic-angle twisted bilayer graphene
Carrier relaxation measurements in moir\'e materials offer a unique probe of the microscopic interactions, in particular the ones that are not easily measured by transport. Umklapp scattering between phonons is a ubiquitous momentum-nonconserving process that governs the thermal conductivity of semiconductors and insulators. In contrast, Umklapp scattering between electrons and phonons has not been demonstrated experimentally. Here, we study the cooling of hot electrons in moir\'e graphene using time- and frequency-resolved photovoltage measurements as a direct probe of its complex energy pathways including electron-phonon coupling. We report on a dramatic speedup in hot carrier cooling of twisted bilayer graphene near the magic angle: the cooling time is a few picoseconds from room temperature down to 5 K, whereas in pristine graphene coupling to acoustic phonons takes nanoseconds. Our analysis indicates that this ultrafast cooling is a combined effect of the formation of a superlattice with low-energy moir\'e phonons, spatially compressed electronic Wannier orbitals, and a reduced superlattice Brillouin zone, enabling Umklapp scattering that overcomes electron-phonon momentum mismatch. These results demonstrate a way to engineer electron-phonon coupling in twistronic systems, an approach that could contribute to the fundamental understanding of their transport properties and enable applications in thermal management and ultrafast photodetection.
Jake Dudley Mehew, Rafael Luque Merino, Hiroaki Ishizuka, Alexander Block, Jaime Díez Mérida, Andrés Díez Carlón, Kenji Watanabe, Takashi Taniguchi, Leonid S. Levitov, Dmitri K. Efetov, Klaas-Jan Tielrooij
2023-01-31T16:21:42Z
http://arxiv.org/abs/2301.13742v1
# Ultrafast Umklapp-assisted electron-phonon cooling in magic-angle twisted bilayer graphene ###### Abstract Carrier relaxation measurements in moire materials offer a unique probe of the microscopic interactions, in particular the ones that are not easily measured by transport. Umklapp scattering between phonons is a ubiquitous momentum-nonconserving process that governs the thermal conductivity of semiconductors and insulators. In contrast, Umklapp scattering between electrons and phonons has not been demonstrated experimentally. Here, we study the cooling of hot electrons in moire graphene using time- and frequency-resolved photovoltage measurements as a direct probe of its complex energy pathways including electron-phonon coupling. We report on a dramatic speedup in hot carrier cooling of twisted bilayer graphene near the magic angle: the cooling time is a few picoseconds from room temperature down to 5 K, whereas in pristine graphene coupling to acoustic phonons takes nanoseconds. Our analysis indicates that this ultrafast cooling is a combined effect of the formation of a superlattice with low-energy moire phonons, spatially compressed electronic Wannier orbitals, and a reduced superlattice Brillouin zone, enabling Umklapp scattering that overcomes electron-phonon momentum mismatch. These results demonstrate a way to engineer electron-phonon coupling in twistronic systems, an approach that could contribute to the fundamental understanding of their transport properties and enable applications in thermal management and ultrafast photodetection. ## I Introduction Moire superlattices provide a novel material platform in which twist angle controls the effective lattice constant. As the twist angle decreases, the larger moire unit cell corresponds to a smaller electron momentum. This tunes the relative strength of the kinetic energy of electrons and the interaction energy between them. In magic-angle twisted bilayer graphene (MATBG), these interactions result in a rich phase diagram that includes superconductors, [1, 2, 3, 4] correlated insulators [5, 6] and orbital magnets. [2, 7] In transition metal dichalcogenides (TMDs), correlated insulating [8, 9] and ferromagnetic states [10] are observed over a broad range of angles, with moire excitons [11, 12] providing a test-bed for exploring Hubbard model physics. [9] In addition, the moire potential modifies the phonon spectra for small twist angles. [13] This results in phonon renormalization in MoS\({}_{2}\) homobilayers [14] and the emergence of phonon minibands in twisted bilayer graphene. [15] Theoretical studies predict that the moire potential strongly affects electron-phonon coupling, [16, 17, 18] which has important implications for electrical transport, excited-state relaxation dynamics, and beyond. Excited-state relaxation measurements are particularly well-suited probes to quantitatively assess electron-phonon coupling. The relaxation dynamics in graphene after excitation involve thermalization of high-energy carriers through carrier-carrier scattering within tens of femtoseconds, [19] creating a hot carrier distribution that subsequently cools via phonons. Inelastic electron-phonon scattering allows electrons to gain (lose) energy by the absorption (emission) of a phonon. In graphene, cooling typically occurs via the emission of optical and acoustic graphene phonons, and near-field coupling to substrate phonons. 
[20, 21, 22, 23, 24, 25, 26, 27] Importantly, in all cases, cooling becomes increasingly slow for lower lattice temperatures, with predicted electron-phonon cooling times in the nanosecond regime for the case of pristine graphene at cryogenic temperatures. [20] Experimental studies of the relaxation dynamics of twisted bilayer graphene have so far been limited to large twist angles (\(\theta>5^{\circ}\)). In these systems, a dark exciton state emerges between van Hove singularities, leading to slower dynamics. [28; 29] At such relatively large angles, the moire potential has limited influence on electron-phonon coupling. [15; 18] Recent Raman spectroscopy measurements suggest an enhanced electron-phonon coupling strength for small twist angles around the magic angle (\(\theta\approx 1.1^{\circ}\)). [30] However, direct experimental measurements of moire-enhanced electron-phonon coupling and its implications for cooling dynamics, are lacking, nor is there any clear understanding of the origin of the enhanced coupling. In this paper, we report the observation of ultrafast cooling in magic-angle twisted bilayer graphene (MATBG) through Umklapp-assisted electron-phonon scattering. We directly probe the electron-phonon interaction by measuring carrier cooling dynamics using two well-established optoelectronic techniques - time-resolved photovoltage microscopy [19; 31; 32] and continuous-wave photomixing [33; 34]. We make a direct comparison between a non-twisted Bernal bilayer graphene sample (BLG, \(\theta=0^{\circ}\), see Fig. 1a) and a near-magic-angle twisted bilayer graphene sample (MATBG, \(\theta=1.24^{\circ}\), see Fig. 1b). At low temperature, the cooling dynamics are much faster in MATBG than in non-twisted bilayer graphene, see Fig. 1c. This unexpected result highlights the crucial role of the moire pattern and suggests the emergence of an enhanced electron-phonon interaction in small twist angle systems. We explain the observed relaxation dynamics using a theoretical model based on Umklapp-assisted electron-phonon scattering, which can occur in both the dispersive and flat bands of MATBG, see Fig. 1d. The Umklapp processes are enabled by the presence of compressed electronic Wannier orbitals (see Fig.1e), and the superlattice with reduced Brillouin zone (see Fig. 1f). ## Relaxation dynamics We study relaxation dynamics in hBN-encapsulated MATBG and BLG Hall bar devices as shown in Fig. 1a-b (see Methods for details on the device fabrication and characterization). These devices enable both electrical and optoelectronic measurements, as they are equipped with a split gate that we employ to create a photoactive pn-junction region. The resistance map as a function of the gate voltage applied to each of the two sides of the split gate, shown in Extended Data Figure 1, displays clear peaks at the usual Dirac points with vanishing carrier density. The MATBG device exhibits additional peaks at integer fillings of the superlattice unit cell. By illuminating the pn-junction with light, a photovoltage is generated via the photothermoelectric effect. This effect has a characteristic six-fold symmetry in dual-gate photovoltage maps, as shown in Extended Data Fig. 2 for both devices. This indicates that the measured photovoltage is a direct probe of the electron temperature. [35] We study hot electron relaxation using ultrafast time-resolved photovoltage microscopy (TrPV) as implemented in Refs. [19; 24] and continuous-wave heterodyne photomixing (CW-PM) as implemented in Ref. 
[34] In the former, ultrashort laser pulses are incident upon the pn-junction whereas for the latter two continuous wave lasers are used. In both cases we probe the generated photovoltage. These two techniques allow us to obtain directly the carrier cooling dynamics - in the time domain by varying the time delay between two ultrashort laser pulses, and in the frequency domain by varying the spectral detuning of two spectrally narrow laser beams. Both techniques independently show that charge carriers cool much faster in MATBG than in BLG at low temperature, see Fig. 2a (and Extended Data Figs. 3-6). In BLG the cooling time increases from 3 ps to 25 ps as the temperature decreases from 300 K to 5 K, which is expected as it takes longer for hot carriers to couple to phonons at lower temperature due to the reduced phonon occupation. [20; 21] Surprisingly, for MATBG the cooling time remains short, around 3 ps, across a broad temperature range (5-300 K). This suggests the involvement of low-energy phonons that still have occupation at such low temperature, which are likely phonons related to the superlattice. Indeed, the moire potential breaks the original linear phonon dispersion into minibands with enhanced density of states. [15] The energy of the lowest band is below 1 meV corresponding to temperatures below 10 K. In order to understand the origin of the observed cooling dynamics, we first consider the case of relaxation through energy transfer to phonons in non-twisted BLG. Coupling to optical phonons is highly inefficient at low temperature due to the large optical phonon energy, which is \(>160\) meV, corresponding to \(T>2000\) K. [26] Coupling between electrons and acoustic phonons is normally also inefficient due to the reduced phase space available for scattering, and would give cooling times well above a nanosecond below 25 K. [20] The presence of defects can help overcome the electron-phonon momentum mismatch through disorder-assisted cooling, which speeds up this acoustic phonon cooling process. [21; 22; 23] However, even with this mechanism, we expect cooling times between \(10^{-10}\) s and \(10^{-8}\) s for the lowest temperatures, depending on the electron mean free path (see Methods). We therefore consider diffusive cooling, where electronic heat diffuses out of the initially excited hot spot, thus leading to a lower average electron temperature. [25] In this diffusive cooling mechanism, the cooling time will thus depend on laser spot size. Indeed, for non-twisted BLG, we observe an increase in cooling time for larger spot sizes, which is largest for the lowest temperatures (25 K, 50 K), see Fig. 2b-c. At 100 K, the cooling length is shorter and therefore diffusive cooling has a smaller contribution. We thus understand the cooling dynamics for non-twisted BLG from a combination of disorder-assisted and diffusive cooling. Indeed, our calculations of the cooling time based on these two mechanisms are close to the experimentally observed ones (see Methods for details on the calculations). Importantly, for MATBG we observe no dependence of the cooling time on spot size (see Fig. 2b-c), which suggests that diffusive cooling does not play a role for this system. We next study the effect of changing the laser power and therefore initial electron temperature, see Fig. 3a. This corresponds to increasing the population of the dispersive band (Fig. 3b). 
The peak power density is roughly five orders of magnitude larger for our pulsed laser experiment (TrPV) than for our continuous-wave experiment (CW-PM). For the non-twisted BLG device, we observe somewhat slower cooling at higher initial electron temperature, which has also been observed for high-quality monolayer graphene samples and was ascribed to a bottleneck involving optical and acoustic phonons. [26] Interestingly, the role of electron temperature is minor for the relaxation dynamics of MATBG, suggesting that there are no electron-phonon or phonon-phonon bottlenecks. Figure 1: **Excited carrier relaxation in MATBG.****a-b**, Illustration of the hBN-encapsulated BLG device with \(0^{\circ}\) twist angle **(a)** and the hBN-encapsulated MATBG device with twist angle \(1.24^{\circ}\)**(b)**, each equipped with split gates. By applying voltages of opposite sign (\(\pm V\)) to the split gates, we create a pn-junction (the interface between yellow and orange regions). Illuminating the junction generates a photovoltage via the photothermoelectric effect, which is proportional to the electron temperature (\(T_{e}\)). We obtain the temperature dynamics either by using two ultrashort laser pulses separated in time by a variable temporal delay, [19; 31; 32] or by using two spectrally narrow laser beams with variable frequency detuning. [33; 34]**c**, Photovoltage as a function of time delay for a lattice temperature of 25 K. The decay, which represents the cooling dynamics, is much faster in MATBG (blue pluses) than BLG (red circles). **d**, Schematic of the MATBG band structure. Umklapp scattering processes (solid arrow) allow for efficient electron (black circle) relaxation via coupling to moire phonons (wiggly lines). These Umklapp processes can occur in both the flat and the dispersive bands. The dashed arrows represent the equivalent final state in the first Brillouin zone. **e**, Schematic of the compressed Wannier orbitals of radius \(\xi\). Electrons are localised to AA sites in the reconstructed superlattice. **f**, Umklapp scattering processes (blue arrows) couple electrons in the first Brillouin zone (white hexagon) to large-momentum phonons in higher-order Brillouin zones (blue hexagons). This result indicates that direct optical phonon emission does not play a role in MATBG. The much faster cooling for MATBG compared to BLG at low temperatures thus suggests that a completely different mechanism is responsible for cooling, outcompeting all currently known cooling mechanisms in non-twisted graphene. We therefore explore the effect of the superlattice on electron cooling by examining the cooling time in MATBG as a function of filling factor (\(\nu\)), which represents the electronic occupation of the superlattice unit cell. For most filling factors (\(|\nu|<4\)), we observe a nearly constant cooling time of 3 ps across a wide temperature range (5-300 K). However, at \(\nu=\pm 4\) the cooling time increases dramatically. Low-temperature transport measurements on the same device reveal an increase in resistance at the same voltages, see Fig. 3d, which confirms the full filling of the superlattice unit cell. In Extended Data Figure 7, we show that the cooling time increases strongly upon increasing laser power at full filling for a second MATBG device (\(\theta=1.08^{\circ}\)). 
The strong dependence of cooling upon the flat band filling - with the cooling rates high at partial filling and lower at full filling - indicates that the moire pattern and its low-energy phonons are crucial for explaining the ultrafast cooling dynamics observed in MATBG. ## Origin of enhanced cooling To gain insight into the different mechanisms that govern electron-lattice cooling pathways, we consider in detail the microscopic electron-phonon scattering processes in the MATBG system. To this end, we consider a four-band model consisting of two nearly-flat and two dispersive bands (Fig. 1d). There are two main types of electron-phonon scattering in this model, interband and intraband. The intraband processes for the intra-dispersive-band and intra-flat-band transitions are different and must be evaluated separately. At temperatures higher than the bandgap, which corresponds to the highest temperatures in our measurement, the electrons are thermally excited to the dispersive bands, allowing both dispersive and flat bands to contribute to cooling. On the contrary, when the electron temperature is low, all carriers reside in the flat band. Therefore, we consider two regimes: i) the high temperature regime (\(T\sim 150-300\) K), where the dispersive bands contribute to the cooling process, and ii) the low temperature regime (\(T\sim 10\) K), wherein cooling is dominated by intra-flat-band processes. In both cases, we consider both the Umklapp and normal scattering contributions, finding that at the temperatures of interest (\(T>10\) K) Umklapp scattering consistently wins over normal scattering. Figure 2: **Relaxation mechanisms in MATBG and BLG.****a**, Cooling time as a function of lattice temperature. In MATBG (\(1.24^{\circ}\), blue pluses), the cooling time is constant between 5 K and 300 K (3 ps, blue line). For BLG (\(0^{\circ}\), red circles), it is greater at lower temperatures. **b**, Laser spot size dependence of the cooling time. The strong dependence in BLG at 25 K and 50 K is a signature of diffusive cooling. This effect is weaker at 100 K, where disorder-assisted cooling becomes significant. The effect is absent in MATBG for these spot sizes. The filled (open) shapes are measured using the TrPV (CW-PM) technique. Error bars represent the statistical spread across different gate voltages. The thick blue line in **a** and **b** represents the cooling time obtained from the low temperature model of Umklapp-assisted cooling (see Main text). **c**, Schematics of diffusive cooling for BLG (upper) and its absence for MATBG (lower). For the first regime (high temperatures) we consider a four-band model consisting of two flat bands of bandwidth \(W\) and two dispersive bands with the eigenstate energies \(\varepsilon>\Delta\) and \(\varepsilon<-\Delta\) (\(\Delta>W\)), see Fig. 4b. The dispersive bands are separated from the flat bands by a gap \(\Delta-W\) (see Methods for details). A direct analysis based on Boltzmann theory yields cooling rates dominated by the intra-band processes in the dispersive bands, whereas the interband processes have a minor contribution. 
Accounting for the Umklapp processes, we estimate the cooling rates as \(\tau^{-1}=\frac{6\rho_{1}}{\pi VT_{el}}\sum_{m}(\|g_{m}^{1,1}\|^{2}+\|g_{m}^{-1,-1}\|^{2})\omega_{m}^{2}\), where \(\rho_{1}\) is the density of states of the dispersive particle and hole bands labeled by \(n=\pm 1\), \(T_{el}\) is the electron temperature, \(g_{m}^{n,n}\) is the electron-phonon coupling constant in the \(n^{\rm th}\) band and \(\omega_{m}\) is the phonon energy in the \(m^{\rm th}\) phonon band. Direct calculation gives cooling rates that are independent of the lattice temperature \(T_{ph}\), in agreement with the observed dynamics, see Fig. 2a. Figure 3: **Origin of enhanced cooling in MATBG.****a**, Dependence of cooling time on peak power density for BLG (red circles) and MATBG (blue pluses). The filled (open) shapes are measured using the TrPV (CW-PM) technique. The error bars signify the one-sigma confidence interval from the fitting algorithm. **b,e**, Schematics of cooling power in MATBG for part filling (**b**) and full filling (**e**) of the flat bands. For part filling, the interband transition is not rate-limiting, as evidenced by the absence of a power dependence in **a**. At full filling, cooling times are longer due to the interband bottleneck effect illustrated in panel (**e**). **c-d**, Gate dependence of cooling time, **c**, and four-terminal resistance acquired at T = 35 mK (\(R_{xx}\)), **d**. The orange shaded region highlights full filling of the moiré unit cell, where \(R_{xx}\) and the cooling time increase. The thick blue line in **a** and **c** represents the cooling time obtained from the low temperature model of Umklapp-assisted cooling (see Main text). For the regime of low temperatures, we describe the system using a model of a flat band with electron and hole subbands (see Methods for a detailed description of the model). For a quantitative comparison with the experimental results shown in Fig. 4a, we calculate the cooling power \(J\) accounting for the Umklapp processes, assuming the Wannier function radius \(\xi=a/6\) where \(a\) is the lattice parameter of the moire structure [36], see Fig. 1e. The cooling rate \(\tau^{-1}\) is estimated from the calculated cooling power and specific heat using \(\tau^{-1}=J/C(T_{el}-T_{ph})\); here we calculate the specific heat \(C\) using the fluctuation formula, Eq. 2 in the Methods section. Note that the temperature values are not constrained by the flat-band width and can be as large as the bandgap. The filling dependence of the cooling rate is shown in Fig. 4a. The calculated Umklapp-assisted cooling times as a function of the filling factor are seen to be in
These effects produce strong coupling of the electrons to moire phonons even in the absence of disorder. [36] ## Outlook Importantly, the cooling measurement is predominantly sensitive to the electron-phonon interactions, and is less sensitive to the electron-electron interactions. This presents a unique window of opportunity for probing underlying physics, and an advantage compared to other measurements types that do not easily separate these two interactions. The finding that electron-phonon Umklapp scattering dominates ultrafast electron-phonon cooling is likely to have important implications for MATBG physics. Electron-phonon scattering plays an important role in charge transport, limiting the carrier mobility at high temperatures. This interaction also mediates the pairing interaction in Bardeen-Cooper-Schrieffer superconductors. Understanding the electron-phonon coupling could give important insights into the origin of superconductivity in MATBG. [16; 40] For metals, electron-electron Umklapp scattering gives rise to finite electrical resistance at low temperatures. In graphene/hBN superlattices and MATBG, this effect dominates transport at temperatures up to 10 K or higher, leading to excess resistivity and degradation of charge carrier mobility. [41; 42; 43] In MATBG, electron-phonon Umklapp scattering could explain some of the open questions from electrical transport measurements, such as the strange metal phase or the role of phonons in superconductivity. [16; 40] Finally, the ultrafast Umklapp-assisted electron-phonon cooling, enhanced density of states, and rich phase diagram are appealing for single-photon detection in the highly sought after mid-IR wavelength range. [44; 45] ## Methods **Device fabrication** The MATBG devices were fabricated using a cut and stack technique. All flakes were first exfoliated on a Si/SiO\({}_{2}\) (285 nm) substrate and later picked up using a polycarbonate (PC)/polydimethylsiloxane (PDMS) stamp. All the layers were picked up at a temperature of \(\sim 100^{\circ}\)C. We used an AFM tip to cut the graphene in order to avoid strain during the pick-up process. The PC/PDMS stamp picks up first the top graphite layer, the Figure 4: **Quantitative comparison with Umklapp-assisted cooling.****a**, Comparison between calculated (solid line) and experimental (symbols) cooling times for MATBG at 5 K and 10 K (upper and lower panels). The grey shaded region allows for uncertainty in the value of the deformation potential (\(D=16\pm 4\) eV). **b**, Schematic of the model used for the calculations with two dispersive and two flat bands separated by an energy gap (\(\Delta-W\)). \(\gamma_{1}\) and \(\gamma_{0}\) represent intra-dispersive-band and intra-flat-band scattering processes, respectively. The low temperature calculations shown in (**a**) consider only \(\gamma_{0}\). top hBN and the first graphene layer. Before picking up the second graphene layer, we rotate the stage by an angle of \(1.1-1.2^{\circ}\). Finally, the stamp picks up the bottom hBN and bottom graphite gates. We drop the finalized stack on a Si/SiO\({}_{2}\) substrate by melting the PC at \(180^{\circ}\)C, see Supplementary Figure S1a. The resulting stack is etched into a Hall bar using a CHF\({}_{3}\)/O\({}_{2}\) plasma and a 1D contact is formed by evaporating Cr (5 nm)/Au (50 nm), see Supplementary Figure S1b. We etch a narrow channel of \(\sim 150\) nm in the top gate using an O\({}_{2}\) plasma. 
Before etching the top gate, the device was characterized at \(T=35\) mK to identify the pair of contacts closest to the magic angle (\(\theta\sim 1.1^{\circ}\)). The junction was made in between this pair of contacts. **Twist angle extraction** The twist angle \(\theta\) is extracted from the superlattice carrier density of the full band \(n_{s}\) by applying the relation \(n_{s}=8\theta^{2}/\sqrt{3}a^{2}\), where \(a=0.246\) nm is the graphene lattice constant. First, we calibrate the gate induced carrier density using the Hall effect data at \(\pm 1\) T. In the carrier density region close to charge neutrality, the Hall carrier density \(n_{H}=-B/eR_{xy}\) should closely follow the gate induced carrier density \(n_{H}=n\), see Supplementary Figure S2. By plotting \(n_{H}\) vs \(V_{g}\) and fitting this slope around charge neutrality we can obtain the capacitance of the device and therefore extract the real carrier density n. Then we extract the carrier density corresponding to a fully filled superlattice unit cell, in this case we find it to be \(n_{s}=(3.58\pm 0.10)\times 10^{12}\) cm\({}^{-2}\). Finally using the above relation we extract a twist angle \(\theta=1.24^{\circ}\pm 0.02^{\circ}\). In Supplementary Note 1, we verify that there is minimal twist angle disorder in the junction region. **Transport Measurements** Low-temperature transport measurements were carried out in a dilution refrigerator (Bluefors SD250) with a base temperature of 20 mK. Standard low-frequency lock-in techniques (Stanford Research SR860 amplifiers) were used to measure \(R_{xx}\) with an excitation current of 10 nA at a frequency of 13.11 Hz. **Optoelectronic measurements** In time-resolved photovoltage (TrPV) experiments, we vary the delay time (\(dt\)) between the arrival of two ultrafast pulses. [31][32][19] Due to the non-linear relationship between carrier temperature and optical heating, we observe a dip in the photovoltage when the two pulses arrive at the same time (\(dt=0\)), see Fig. 1c and Extended Data Figs. 3 and 4. At longer delay times, the signal recovers to its maximal value. We obtain the cooling time by describing the observed dynamics with an exponential function. For heterodyne photomixing (CW-PM) experiments, the wavelength detuning between the two continuous wave lasers creates an optical beating. [33; 34] The photovoltage oscillates at the beating frequency. Due to the competition between beat frequency (\(\Omega\)) and the characteristic cooling time (\(\tau_{e}\)), we observe a peak for \(\Omega=0\) whereas the oscillations are damped when \(\Omega^{-1}\ll\tau_{e}\), see Extended Data Figs. 5 and 6. The frequency response takes the form of a Lorentzian function of width \(\Gamma\), from which we extract the cooling time as: \(\Gamma=1/\pi\tau_{e}\). [33] **Estimating cooling times in untwisted graphene** The hot electron cooling time for energy transfer to acoustic phonons in monolayer graphene is given by \(\tau_{AP}\approx 848/(D^{2}T_{L}^{2})\) [\(\mu\)s], [20] where \(D\) is the deformation potential in eV. This expression is valid in the neutral limit (\(T_{F}<T_{e}\)) and close to equilibrium (\(T_{e}\gtrsim T_{L}\)). \(T_{e/L/F}\) is the electron/lattice/Fermi temperature. [20] Taking \(D=20\) eV, we calculate a cooling time of \(\tau_{AP}=3.4\) ns for \(T_{L}=25\) K. 
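Explicitly, inserting \(D=20\) eV and \(T_{L}=25\) K into the expression above gives \[\tau_{AP}\approx\frac{848}{(20)^{2}\,(25)^{2}}\;\mu\text{s}=\frac{848}{2.5\times 10^{5}}\;\mu\text{s}\approx 3.4\;\text{ns},\] consistent with the quoted value.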
In disorder-assisted or supercollision cooling, [21; 22; 23] the dependence on lattice temperature is given by: \[\tau_{SC}=\frac{\alpha}{3AT_{L}},\,\text{with}\] \[\alpha=\frac{2\pi E_{F}k_{B}^{2}}{3\hbar^{2}v_{F}^{2}}\,\,\text{ and}\,\,A=9.62\frac{g^{2}\nu^{2}(E_{F})k_{B}^{3}}{\hbar k_{F}\ell}.\] Here, \(g\) is the electron-phonon coupling, \(\nu(E_{F})\) is the density of states at the Fermi level per valley/spin flavour, \(k_{F}\) is the Fermi wavevector and \(\ell\) is the mean free path. In high-quality samples and at cryogenic temperatures, the device size typically limits the latter. For low doping levels (\(10^{12}\) cm\({}^{-2}\)), \(0.1<\ell<2\)\(\mu\)m and \(T=25\) K, \(\tau_{SC}=0.5-11\) ns. **Cooling due to lateral diffusion** The lateral diffusion of photoexcited carriers reduces the hot electron temperature when the cooling length is greater than the laser spot size. This effect is particularly relevant in high-mobility samples, as the Wiedemann-Franz law relates electrical to thermal conductivity. [25] At low lattice temperatures efficient heat conduction manifests in our experiments as a shorter cooling time. By considering the spatial evolution of a Gaussian heat spot induced by the laser pulse, [26] we describe the temperature dynamics by: \[T_{e}(t)=2\pi A_{pu}A_{pr}\frac{\sigma_{pu}^{2}\sigma_{pr}^{2}}{\sigma_{pu}^{2 }+\sigma_{pr}^{2}+2Dt},\] where \(A\) and \(\sigma\) are the peak intensity of the pump (\(pu\)) and probe (\(pr\)). Clearly, this effect is greater for smaller spot sizes and larger electronic heat diffusivities (\(D\)). Using a diffusivity of \(D=750\) cm\({}^{2}\)s\({}^{-1}\), and pump-probe spot sizes of \(\sigma\approx 0.9\)\(\mu\)m we find a cooling time of \(\tau_{diff}\approx 18\) ps. For \(\sigma\approx 1.4\)\(\mu\)m, \(\tau_{diff}\approx 45\) ps, see Fig. 2b. **Cooling rate at low temperatures** The cooling rate in Fig. 3f is estimated by \(\frac{J(T_{el},T_{ph})}{C(T_{el})(T_{el}-T_{ph})}\), where \(J\) is the cooling power, \(C(T_{el})\) is the electron specific heat, and \(T_{el}\) (\(T_{ph}\)) is the electron (phonon) temperature. To evaluate \(J\) and \(C\), we consider an effective two-band model similar to pristine graphene used in Ref. [36]. Following the previous study, we use the electron-phonon interaction for the Wannier orbital radius \(\xi=a/6\) where \(a\) is the lattice parameter. 
In the Boltzmann theory, the cooling power \(J\) by electron-phonon scattering reads [36] \[J=\sum_{n,n^{\prime}}J_{n,n^{\prime}},\] \[J_{n,n^{\prime}}=\frac{2\pi}{V^{2}}\sum_{m,\vec{k},\vec{k}^{\prime}}\|g^{nn^{\prime}}_{\vec{k}-\vec{k}^{\prime},m}\|^{2}\omega^{2}_{\vec{k}-\vec{k}^{\prime},m}N_{\vec{k}-\vec{k}^{\prime},m}\times\left\{f_{\vec{k}^{\prime}n^{\prime}}[1-f_{\vec{k}n}]e^{\beta_{ph}\omega_{\vec{k}-\vec{k}^{\prime},m}}-f_{\vec{k}n}[1-f_{\vec{k}^{\prime}n^{\prime}}]\right\}\times\delta(\varepsilon_{\vec{k}^{\prime}n^{\prime}}-\varepsilon_{\vec{k}n}-\omega_{\vec{k}-\vec{k}^{\prime},m}), \tag{1}\] where \(J_{n,n^{\prime}}\) is the contribution from the scattering between the \(n\)th and \(n^{\prime}\)th bands, \(V\) is the volume of the system, \(g^{nn^{\prime}}_{\vec{k}-\vec{k}^{\prime},m}\) is the coupling constant, \(\varepsilon_{\vec{k}n}\) is the one-particle eigenenergy of the eigenstate in the \(n\)th band with momentum \(\vec{k}\), \(\omega_{\vec{q}m}\) is the phonon eigenenergy in the \(m\)th band with momentum \(\vec{q}\), and \(\beta_{el}=1/k_{B}T_{el}\) (\(\beta_{ph}=1/k_{B}T_{ph}\)) is the inverse temperature of the electrons (phonons), with \(k_{B}\) being the Boltzmann constant; \(f_{\vec{k}n}=\frac{1}{e^{\beta_{el}(\varepsilon_{\vec{k}n}-\mu)}+1}\) and \(N_{\vec{q}m}=\frac{1}{e^{\beta_{ph}\omega_{\vec{q}m}}-1}\) are respectively the Fermi and Bose distribution functions. The estimation of the specific heat uses the fluctuation formula \[C(T)=k_{B}\left[\langle\varepsilon_{n\vec{k}}^{2}\rangle-\frac{\langle\varepsilon_{n\vec{k}}\rangle^{2}}{\langle 1\rangle}\right], \tag{2}\] \[\langle O_{n\vec{k}}\rangle=\sum_{n}\int\frac{dk^{d}}{(2\pi)^{d}}\frac{\beta^{2}O_{n\vec{k}}}{4\cosh^{2}\left[\frac{\beta(\varepsilon_{n\vec{k}}-\mu)}{2}\right]}. \tag{3}\] Note that the common formula for Fermi-degenerate electron systems does not apply here, as the temperature exceeds the Fermi energy at \(T\gtrsim 100\) K. This model gives a good approximation when the temperature is much lower than the energy gap separating the flat band from the high-energy dispersive bands. **Cooling rate at high temperatures** At high temperatures, we cannot neglect the high-energy bands because the electron temperature exceeds the band gap. In such a case, the Umklapp scattering involving high-energy phonons contributes to electron cooling due to the large number of high-energy phonons. Hence, we also expect that Umklapp scattering plays a key role in the high temperature regime. To study the electron-lattice cooling involving the interband processes, we assume the electrons only couple to phonons with energies below a cutoff \(\Lambda_{ph}\). This assumption is justifiable in a system where the electron-phonon coupling between the electrons and the acoustic phonons reduces exponentially as the momentum increases. In a system with compact Wannier orbitals, \(\Lambda_{ph}\) becomes a few times higher than the energy of the folded acoustic bands. Hence, a large \(\Lambda_{ph}\), considerably larger than the phonon bandwidth of the folded acoustic phonons, represents the enhanced coupling by compact Wannier orbitals. Below, we label the folded acoustic bands by an integer \(m\) and define the high-temperature limit as \(T_{el}>T_{ph}\gg\Lambda_{ph}\). At high temperatures, the cooling power in Eq. 
(1) reads \[J_{nn^{\prime}}=\frac{\pi}{V}\sum_{m}\|g^{nn^{\prime}}_{m}\|^{2}\omega_{m}^{2}\rho_{n}\rho_{n^{\prime}}\left[T_{el}-T_{ph}\right]\times\left[\tanh(\frac{\beta(b^{m}_{nn^{\prime}}-\mu)}{2})-\tanh(\frac{\beta(a^{m}_{nn^{\prime}}-\mu)}{2})\right],\] where \(\rho_{n}\) is the density of states (DOS) for the \(n\)th band (we assume a constant DOS with bandwidth \(W_{n}\)), and \(a^{m}_{nn^{\prime}}=\max(\varepsilon_{n}^{-},\varepsilon_{n^{\prime}}^{-}-\omega_{m})\) [\(b^{m}_{nn^{\prime}}=\min(\varepsilon_{n}^{+},\varepsilon_{n^{\prime}}^{+}-\omega_{m})\)], with \(\varepsilon_{n}^{\pm}\) being the energies of the top and bottom edges of the electron band. Here, we approximated the phonon energy as \(\omega_{m\vec{k}}\sim\omega_{m}\) considering the small Brillouin zone, and the coupling constant as \(g^{nn^{\prime}}_{m}(\vec{k})\sim g^{nn^{\prime}}_{m}\), which is valid in the small orbital radius limit. We apply the above formula to a four-band model consisting of two flat and two dispersive bands. The two flat bands are at energies \(0\leq\varepsilon\leq W\) and \(-W\leq\varepsilon\leq 0\) with DOS \(\rho_{0}\), and the two dispersive bands are at \(W<\Delta\leq\varepsilon\leq\Lambda\) and \(-\Lambda\leq\varepsilon\leq-\Delta<-W\) with DOS \(\rho_{1}\) (Fig. 4b). To the leading order in \(T_{el}\), the cooling power reads \[J=\frac{\pi}{V}\sum_{m}(\|g_{m}^{1,1}\|^{2}+\|g_{m}^{-1,-1}\|^{2})\omega_{m}^{2}[T_{el}-T_{ph}]\rho_{1}^{2}.\] Hence, the cooling rate becomes \(\tau^{-1}=\frac{6\rho_{1}}{\pi VT_{el}}\sum_{m}(\|g_{m}^{1,1}\|^{2}+\|g_{m}^{-1,-1}\|^{2})\omega_{m}^{2}\), independent of the phonon temperature \(T_{ph}\). ## Supplementary Information This article has an accompanying supplementary file. ## Acknowledgements We would like to thank Nick Feldman for his contribution to preliminary experiments. ICN2 was supported by the Severo Ochoa program from Spanish MINECO Grant No. SEV-2017-0706. R.L.M. acknowledges that this project has received funding from the "Secretaria d'Universitats i Recerca de la Generalitat de Catalunya", as well as the European Social Fund (L'FSE inverteix en el teu futur)--FEDER. H.I. acknowledges support from JSPS KAKENHI (Grant Number JP19K14649). J.D.M. acknowledges support from the INphINIT 'la Caixa' Foundation (ID 100010434) fellowship programme (LCF/BQ/DI19/11730021). K.W. and T.T. acknowledge support from the JSPS KAKENHI (Grant Numbers 19H05790, 20H00354 and 21H05233). K.J.T. acknowledges funding from the European Union's Horizon 2020 research and innovation program under Grant Agreement No. 804349 (ERC StG CUHL), RYC fellowship No. RYC-2017-22330 and IAE project PID2019-111673GB-I00.
2309.08232
Astrocyte-Integrated Dynamic Function Exchange in Spiking Neural Networks
This paper presents an innovative methodology for improving the robustness and computational efficiency of Spiking Neural Networks (SNNs), a critical component in neuromorphic computing. The proposed approach integrates astrocytes, a type of glial cell prevalent in the human brain, into SNNs, creating astrocyte-augmented networks. To achieve this, we designed and implemented an astrocyte model in two distinct platforms: CPU/GPU and FPGA. Our FPGA implementation notably utilizes Dynamic Function Exchange (DFX) technology, enabling real-time hardware reconfiguration and adaptive model creation based on current operating conditions. The novel approach of leveraging astrocytes significantly improves the fault tolerance of SNNs, thereby enhancing their robustness. Notably, our astrocyte-augmented SNN displays near-zero latency and theoretically infinite throughput, implying exceptional computational efficiency. Through comprehensive comparative analysis with prior works, it is established that our model surpasses others in terms of neuron and synapse count while maintaining an efficient power consumption profile. These results underscore the potential of our methodology in shaping the future of neuromorphic computing, by providing robust and energy-efficient systems.
Murat Isik, Kayode Inadagbo
2023-09-15T08:02:29Z
http://arxiv.org/abs/2309.08232v1
# Astrocyte-Integrated Dynamic Function Exchange in Spiking Neural Networks ###### Abstract This paper presents an innovative methodology for improving the robustness and computational efficiency of Spiking Neural Networks (SNNs), a critical component in neuromorphic computing. The proposed approach integrates astrocytes, a type of glial cell prevalent in the human brain, into SNNs, creating astrocyte-augmented networks. To achieve this, we designed and implemented an astrocyte model in two distinct platforms: CPU/GPU and FPGA. Our FPGA implementation notably utilizes Dynamic Function Exchange (DFX) technology, enabling real-time hardware reconfiguration and adaptive model creation based on current operating conditions. The novel approach of leveraging astrocytes significantly improves the fault tolerance of SNNs, thereby enhancing their robustness. Notably, our astrocyte-augmented SNN displays near-zero latency and theoretically infinite throughput, implying exceptional computational efficiency. Through comprehensive comparative analysis with prior works, it is established that our model surpasses others in terms of neuron and synapse count while maintaining an efficient power consumption profile. These results underscore the potential of our methodology in shaping the future of neuromorphic computing, by providing robust and energy-efficient systems. Keywords: Astrocytes, Spiking Neural Networks, FPGA Implementation, Dynamic Function Exchange, Fault Tolerance ## 1 Introduction Fault tolerance has become a critical feature of today's increasingly sophisticated computational systems, which require not just high performance, but also continuous and reliable operation. This is especially true for neural networks that mimic the structure of the brain, pushing the limits of existing computing paradigms. Spiking Neural Networks (SNNs), a type of artificial neural network patterned after the brain's neuronal dynamics, are energy efficient, use time-dependent data processing, and have bio-plausible algorithms for learning. In spite of this, SNNs are susceptible to faults and failures, which could disrupt their functionality and reduce their efficiency. Therefore, fault-tolerant mechanisms within SNNs need to be explored. Recent research has demonstrated that astrocytes play a crucial role in regulating neuronal activity and synaptic transmission in the brain. It has long been believed that neurons alone contributed significantly to the resilience and adaptability of biological neural networks, but astrocytes have now been found to play a far more important role than previously thought, as shown in Fig. 1. By dynamically modulating neuronal activity based on its state, they effectively support fault tolerance at the molecular level. The hypothesis of integrating astrocytic mechanisms into SNNs is an exciting prospect, potentially leading to dynamic adjustment for fault tolerance in these systems [6][3][7]. Field Programmable Gate Arrays (FPGAs) are reprogrammable silicon chips that can be customized to perform complex computations in parallel, making them ideally suited for implementing SNNs. FPGAs have been increasingly used for emulating SNNs due to their high degree of parallelism, energy efficiency, and low latency. Further, their inherent re-programmability makes them a prime candidate for implementing adaptive mechanisms, such as those inspired by astrocytes, to handle faults dynamically.
This could potentially enable SNNs implemented on FPGAs to autonomously adapt in the face of faults, mimicking the resilience observed in biological neural networks [5][4]. In this paper, we explore how FPGA-implemented SNNs could benefit from astrocyte-powered dynamic adjustments to enhance fault tolerance. The purpose of this study is to investigate whether introducing astrocyte-inspired mechanisms could enhance network performance and reliability by reducing faults and failures. The rest of the paper is organized as follows: **Section II** discusses astrocytes' significance in SNNs and reviews related works. **Section III** describes the SNN architecture and the integration of astrocytes. **Section IV** details our astrocyte-augmented SNN model, emphasizing hardware implementations. **Section V** evaluates the model's fault tolerance and efficiency, comparing it with other models and introducing the Dynamic Function eXchange technology. **Section VI** concludes with our key findings and suggests future research avenues. ## 2 Background The principles of biological brains are reflected in SNNs, which are artificial neural networks. A key difference from conventional networks is the emulation of time-dependent spikes or 'action potentials', which are the primary means of communication between neurons in the brain. The SNN is a powerful computational model capable of handling complex tasks such as pattern recognition, sensory processing, and motor control in a highly energy-efficient, low-latency manner. Recent advances in neuromorphic engineering have propelled research in this field, which aims to create hardware and software solutions that mimic neuronal spike dynamics [12]. 

Figure 1: Inserting an astrocyte in a neural network. 

Fault-tolerance techniques are essential for ensuring the robustness and reliability of complex systems like SNNs, particularly when uninterrupted functionality is critical. Several methods have been proposed and implemented, ranging from redundancy and error correction codes to adaptive mechanisms that enable dynamic fault recovery [18]. The disadvantages of these traditional techniques are often increased resource consumption and decreased performance. Therefore, innovative solutions are needed that minimize these trade-offs while ensuring robust fault tolerance. Astrocytes, once considered mere supporting cells in the brain, are now recognized as key players in regulating neuronal activity. Their ability to detect and modulate neural activity contributes to the adaptability and resilience of biological neural networks [13]. The idea of integrating these astrocytic mechanisms into artificial neural networks to enhance their resilience and adaptability is a novel and promising area of research. Previous works have explored the implementation of SNNs on FPGAs for their advantages in parallelism, energy efficiency, and re-programmability [15]. However, the integration of astrocyte-inspired fault-tolerance mechanisms in such systems has not been adequately explored. This research seeks to fill this gap, extending our understanding of fault tolerance in SNNs and paving the way for more robust and adaptive neural network architectures. By examining how astrocyte-powered dynamic adjustments could enhance fault-tolerance in FPGA-implemented SNNs, this study could provide a valuable contribution to the fields of computational neuroscience and neuromorphic engineering.
## 3 Astrocyte and Spiking Neural Networks Astrocytes constitute about 20-40% of the total glial population in the human brain. Studies have revealed that these cells play an active role in neuronal signaling and information processing. The astrocyte extends its processes near neurons, where it senses and modulates neuronal activity through gliotransmission [16]. This remarkable capability motivates the integration of astrocyte mechanisms into SNNs, providing an intriguing avenue to enhance their fault tolerance and adaptability. An SNN is an artificial neural network that mimics the time-dependent and event-driven communication between biological neurons through spikes or 'action potentials'. High temporal resolution, high power efficiency, and bio-plausible mechanisms have made SNNs a subject of keen interest [14]. It is possible to mimic the fault tolerance and dynamic adjustment of biological neural networks by incorporating astrocyte mechanisms into SNNs. A bidirectional communication system connects astrocytes to neurons. Astrocytes can detect and respond to the neurotransmitters released by neurons, and the gliotransmitters they release can in turn modulate neuronal activity. Among the main mechanisms of astrocyte-neuron interaction is the tripartite synapse model, in which astrocytes actively contribute to neuronal synaptic transmission [2]. Among the diverse effects of this interaction are the modification of synaptic strength, the regulation of local blood flow, and metabolic support for neurons, thus enhancing network resilience and adaptability. SNNs can incorporate these aspects of astrocyte functionality to enhance their resilience. Synaptic weights can be modulated by astrocytes to balance neuron firing rates across a network, thereby preventing neurons in the network model from 'dying out' or 'overfiring'. Moreover, astrocytes are able to sense and respond to changes in neuronal activity, enabling the design of fault-tolerance mechanisms that dynamically adjust to faults in the network [9][10]. The incorporation of astrocyte-neuron interactions into SNNs, especially those based on FPGAs, remains largely unexplored in computational neuroscience studies. ## 4 Method ### Dataset Our project is based on the DAVIS 240C Dataset, a unique collection of event-based data ideal for pose estimation, visual odometry, and SLAM. This dataset, generated using DAVIS 240C cameras by iniLabs, offers event-based images, IMU measurements, and motion-captured ground truth. Some datasets that utilized a motorized linear slider lack motion-capture or IMU data; however, their ground truth derives from the slider's position. The "calibration" dataset provides alternative camera models, with all gray datasets sharing identical intrinsic calibration. This dataset proves invaluable for image data analysis, particularly in SNNs and related domains [11]. For this project, we employ a subset of the DAVIS 240C dataset. Figure 2 showcases the DAVIS 240C event camera which was utilized to produce this dataset. ### Training Details In our implementation, the SNN is architected to emulate astrocyte functions using a subset of the DAVIS 240C Dataset that records astrocyte activity in response to neuronal behavior. The architecture is composed of: * **Input Layer:** Simulates neuron-astrocyte interactions, customizable for specific neurological scenarios. * **Astrocyte Layer:** Represents spiking astrocytes, processing inputs and relaying spike trains to the subsequent layer.
* **Output Layer:** Decodes the spike trains, producing responses analogous to biological outcomes from astrocyte activities. During compilation, the aim is to synchronize the Output Layer's reactions with the anticipated responses in the training set. We employ the 'Adam' optimizer, recognized for efficiently addressing complex problems. Performance evaluation utilizes the 'accuracy' metric, with the 'EarlyStopping' callback integrated during training to mitigate overfitting. Following training, outcomes are juxtaposed with validation data, assessing accuracy, precision, and recall. 

Figure 2: DAVIS 240 DVS Event Camera 

This implementation paves the way for deeper explorations into astrocytic roles in SNNs. Subsequent iterations may further refine the model and incorporate additional cellular dynamics, with a recommendation to consider advanced SNN metrics such as spike timing and spiking rate accuracy. ### Hardware Implementation Hardware implementation is vital for real-world applications, particularly in computationally-intensive tasks. This section presents our methodology for physically implementing the astrocyte model using two different approaches: CPU/GPU and FPGA. #### 4.3.1 CPU/GPU Implementations We utilized Python to execute the implementations on the CPU and GPU. The study leveraged the computational prowess of NVIDIA's GeForce RTX 3060 GPU and Intel's Core i9 12900H CPU, both of which are optimized for different tasks, ensuring an efficient execution of our implementations. #### 4.3.2 FPGA Implementation Our FPGA implementation was executed on the XCVC1902 FPGA chip, equipped with 400 AI Engines, using Vivado version 2021.1. Our central module, "Astrocyte", processes a 42-bit input and produces a 42-bit output. The internal operations of the PiP (Place-in-Place) module, which is a crucial component of this design, are depicted in Fig. 3. 

Figure 3: Block Diagram of Implementation 

The efficiency of our astrocyte-augmented SNN, as presented through metrics, was evident in its low latency and theoretically infinite throughput, emphasizing its computational prowess. The presented metrics stem from an experiment involving an astrocyte-augmented SNN. Our aim was to evaluate how the astrocyte implementation impacts the network's robustness and computational efficiency. Initially, our SNN displayed a fault tolerance of 72.08% without astrocytes, signifying that a single artificially silenced neuron caused the network's output to diverge by this proportion from the original, fault-free state. Such a measure provided an estimate of the SNN's resilience to localized neuronal failures. When astrocytes were incorporated into the SNN, a remarkable reduction in latency was observed; the time required for an entire round of astrocytic updates was essentially zero as per the system clock. This extremely low latency indicated an impressive efficiency in the computational implementation. Moreover, this near-zero latency facilitated theoretically infinite throughput, implying instantaneous processing of all neurons in the network, which further emphasized the exceptional computational efficiency of our astrocyte-augmented SNN. The observed new fault tolerance was quantified as 8.96%, highlighting the degree of enhancement in the SNN's fault tolerance as a direct result of astrocyte integration. In other words, post astrocyte integration, the SNN demonstrated an improvement in fault tolerance of 63.11 percentage points.
The fault tolerance \(FT\) of a SNN is conceptually defined as the proportionate deviation of the SNN's output from the original, fault-free state when subject to a fault condition. 1. \(FT_{\text{initial}}\): Initial fault tolerance without astrocytes. 2. \(FT_{\text{astro}}\): Fault tolerance after integrating astrocytes. 3. \(\Delta FT\): Improvement in fault tolerance due to astrocyte integration, given by \(\Delta FT=FT_{\text{initial}}-FT_{\text{astro}}\). The fault tolerance of the SNN, considering the given description, is represented as: \[FT=\frac{O_{\text{fault}}-O_{\text{original}}}{O_{\text{original}}}\times 100\% \tag{1}\] Where: * \(O_{\text{original}}\) is the output in the original, fault-free state. * \(O_{\text{fault}}\) is the output when a fault (like a silenced neuron) is induced. From our results: \[FT_{\text{initial}} =72.08\%\] \[FT_{\text{astro}} =8.96\%\] \[\Delta FT =63.11\%\] This confirms the mathematical relationship: \[\Delta FT=FT_{\text{initial}}-FT_{\text{astro}} \tag{2}\] The reduced \(FT_{\text{astro}}\) implies that the network's output deviates less from the fault-free state when a fault condition is induced, indicating enhanced resilience of the SNN upon integrating astrocytes. ### Adaptive Model Creation with Dynamic Function eXchange Technology Our study deploys the Dynamic Function Exchange (DFX) technology to construct an adaptive and flexible model. The cornerstone of this innovative approach is on-the-fly hardware reconfiguration, enabling precise mapping of computational functions onto the hardware based on evolving demands. The process begins with "Training & Predicting", during which the model learns and generates predictions based on historical data. This learning phase allows the model to grasp underlying patterns and adapt over time. This step is succeeded by "Adjusting Hyperparameters", where the model parameters are fine-tuned for optimized performance, ensuring effective learning and accuracy in predictions. The final phase, "Execute DFX", leverages the DFX technology by reprogramming the hardware in real time, thereby facilitating the model to adjust its functionality as per the network's changing state. This dynamic adjustment leads to an optimal allocation of computational resources, enhancing adaptability to the intrinsic variability of SNNs. Moreover, the DFX technology offers an energy-efficient solution by minimizing unnecessary power consumption, which directly translates to improved performance in high-demand machine learning tasks. To summarize, by integrating DFX technology into our model and following a systematic sequence of "Training & Predicting", "Adjusting Hyperparameters", and "Execute DFX" shown in Fig. 4. We provide a model that ensures high-performance computing and flexible real-time adaptation, promising robust adaptability in the realm of astrocyte-based neuronal network implementation. ### Quantitative Analysis of the Hardware Accelerator For computational tasks, especially in real-time scenarios, metrics like throughput and latency are vital. Throughput gauges the system's capability to handle data processing, whereas latency measures the delay before a transfer of data begins. These metrics play a pivotal role in understanding and optimizing the performance of our system. \[\text{Throughput}=\frac{\text{No. of MACs}}{\text{Operational Latency}} \tag{3}\] The above equation delineates the throughput as a function of the number of Multiply-Accumulate (MAC) operations over the operational latency. 
The count of MAC operations is derived from specialized neural network libraries [1]. On the other hand, the operational latency, which is synonymous with simulation time in this context, predominantly emerges from the inherent characteristics and constraints of the underlying hardware architecture. This is mathematically captured by: \[\text{Operational Latency}=\frac{\text{Time for Inference}}{\text{Dataset Loader Iteration}} \tag{4}\] This equation emphasizes the interdependence between the time taken for model inference and the iterations dictated by the dataset loader. 

Figure 4: DFX Diagram 

Our FPGA implementation's efficiency can be further understood through the resource utilization summary provided in Table 1. The low percentages in the utilization column indicate efficient use of resources. However, there remains an opportunity to further leverage these resources for complex tasks or to enhance performance. 

\begin{table} \begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{VC1902 Versal} \\ \hline **Resource** & **Utilization** & **Available** & **\% Utilization** \\ \hline LUT & 900 & 899,840 & 0.10\% \\ FF & 100 & 75,000 & 0.13\% \\ BRAM & 0 & 1,000 & 0\% \\ IO & 86 & 770 & 11.17\% \\ AI Engine & 0 & 400 & 0\% \\ DSP & 0 & 1,968 & 0\% \\ \hline \end{tabular} \end{table} Table 1: Resource utilization summary 

## 5 Results Table 2 covers a number of key metrics, including the manufacturing technology, operating frequency, power consumption, and, in the case of the FPGA, additional parameters such as latency, throughput, and energy efficiency. For instance, GPUs and FPGAs are often more parallel in their execution compared to CPUs, and therefore can perform certain tasks more efficiently despite a lower operating frequency. This is evident when examining the latency metric for the Xilinx FPGA, which stands at a mere 4.6 ms. In terms of power consumption, the FPGA demonstrates remarkable energy efficiency with a power requirement of only 2 Watts, significantly less than both the CPU and GPU. The table further highlights the performance-per-Watt of the FPGA with a throughput of 58.5 GOP/s and an energy efficiency of 29.2 GOP/s/W, underlining the suitability of FPGA devices for tasks where energy efficiency is critical. This comparison reveals the distinctive characteristics and advantages of each technology, and their appropriateness would largely depend on the specifics of the application at hand. 

\begin{table} \begin{tabular}{l|c|c|c} \hline & **i9 12900H** & **RTX 3060** & **VCK190** \\ \hline **Vendor** & Intel & NVIDIA & AMD-Xilinx \\ **Tech (nm)** & 10 & 8 & 7 \\ **Freq (MHz)** & 5200 & 1320 & 100 \\ **MACs (G)** & 0.269 & 0.269 & 0.269 \\ **Latency (ms)** & 84 & 11.6 & 4.6 \\ **Power (W)** & 27 & 68 & 2 \\ **Throughput (GOP/s)** & 3.2 & 24.5 & 58.5 \\ **Efficiency (GOP/s/W)** & 0.11 & 0.36 & 29.2 \\ \hline \end{tabular} \end{table} Table 2: Comparison between CPU, GPU, and FPGA 

Table 3 provides a comprehensive comparison of our proposed implementation with several prior works on astrocyte modeling, each represented by different computational platforms. Our implementation, similar to [8], [7], and [6], is built upon FPGA technology, but we utilize a more advanced Xilinx VCK-190 chip, which aligns with the latest advancements in FPGA technology. Regarding clock speed, our solution maintains a speed of 100 MHz, which is standard for FPGA-based models, achieving a balance between speed and power consumption.
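The metrics above are simple enough to reproduce directly. The following minimal sketch evaluates the fault-tolerance relation of Eqs. (1)-(2) (generalized with an L1 ratio for vector outputs, our own choice) and the throughput/latency relations of Eqs. (3)-(4), using the MAC count and latency reported for the VCK190 in Table 2; the helper names are ours, not an existing API.

```python
import numpy as np

def fault_tolerance(o_original, o_fault):
    """Eq. (1): percentage deviation of the faulty output from the fault-free one
    (an L1 ratio is used here so that vector outputs are handled)."""
    o_original, o_fault = np.asarray(o_original), np.asarray(o_fault)
    return 100.0 * np.abs(o_fault - o_original).sum() / np.abs(o_original).sum()

def operational_latency(inference_time_s, loader_iterations):
    """Eq. (4): operational latency = time for inference / dataset loader iterations."""
    return inference_time_s / loader_iterations

def throughput_gops(macs_g, latency_s):
    """Eq. (3): throughput (GOP/s) = number of MACs (G) / operational latency (s)."""
    return macs_g / latency_s

ft_initial, ft_astro = 72.08, 8.96                        # values reported above
print(f"Delta FT ~ {ft_initial - ft_astro:.2f} points")   # Eq. (2), ~63.1
print(f"VCK190: {throughput_gops(0.269, 4.6e-3):.1f} GOP/s")  # ~58.5, as in Table 2
```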
An integral part of the comparison lies in the count of neurons and synapses, two critical measures of the complexity and capabilities of neural networks. Our approach handles a significantly higher count of both neurons (680) and synapses (69,888), which surpasses all other implementations. This marks a noteworthy improvement in network size and complexity, enhancing the capacity and functionality of our astrocyte model. The fault tolerance rate is another essential aspect of this comparison. Our work achieves the lowest fault tolerance rate, 8.96%, improving on the 39% reported in [6], which highlights the robustness of our model in handling neuronal failures. The resilience improvement rate, as reported, reveals the performance enhancement our model brings to the table, achieving a significant rate of 63.11%. This improvement underscores the efficiency of our solution in the field of astrocyte modeling, suggesting superior computational outcomes. Power consumption is a key metric for any hardware implementation. Our implementation requires a power of 2 W. While this is more than some of the other FPGA implementations, it is important to note that our work handles a significantly larger neuron and synapse count, resulting in a higher energy demand. Therefore, considering the increased complexity and capacity of our model, this power requirement represents an impressive energy efficiency. ## 6 Conclusions This work has presented a novel astrocyte-augmented spiking neural network model implemented on CPU/GPU and FPGA platforms. The inclusion of astrocytes has shown significant improvements in the network's fault tolerance, demonstrating the potential benefits of astrocyte integration in artificial neural networks. Additionally, the use of FPGA hardware for this model leverages the advantages of parallel computation and on-the-fly hardware reconfiguration offered by DFX technology. The comparison with different computational architectures and previous works highlighted the strengths of our approach in terms of computational efficiency and network robustness. Future research in this direction could yield more sophisticated and efficient neuromorphic systems, thus paving the way for advanced applications in diverse areas such as robotics, bioinformatics, and cognitive computing. 

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline & [17] & [8] & [7] & [6] & **Our** \\ \hline Platform & CPU & FPGA Virtex-5 & FPGA Artix-7 & FPGA VCU-128 & FPGA VCK-190 \\ \hline Clock & 3.1 GHz & 100 MHz & 100 MHz & 100 MHz & 100 MHz \\ \hline Neurons & 2 & 14 & - & 336 & 680 \\ \hline Synapses & 1 & 100 & - & 17,408 & 69,888 \\ \hline Fault Tolerance Rate & 30\% & 30\% & - & 39\% & 8.96\% \\ \hline Resilience Improvement & 12.5\% & 70\% & 80\% & 51.6\% & 63.11\% \\ \hline Power & - & 1.37 W & 0.33 W & 0.538 W & 2 W \\ \hline \end{tabular} \end{table} Table 3: Comparisons with previous implementations.
2306.17597
Razor SNN: Efficient Spiking Neural Network with Temporal Embeddings
The event streams generated by dynamic vision sensors (DVS) are sparse and non-uniform in the spatial domain, while still dense and redundant in the temporal domain. Although the spiking neural network (SNN), the event-driven neuromorphic model, has the potential to extract spatio-temporal features from the event streams, it is neither effective nor efficient. Based on the above, we propose an event sparsification spiking framework dubbed Razor SNN, which prunes pointless event frames progressively. Concretely, we extend the dynamic mechanism based on global temporal embeddings, reconstruct the features, and emphasize the effect of events adaptively at the training stage. During the inference stage, we eliminate fruitless frames hierarchically according to a binary mask generated by the trained temporal embeddings. Comprehensive experiments demonstrate that our Razor SNN achieves competitive performance consistently on four event-based benchmarks: DVS 128 Gesture, N-Caltech 101, CIFAR10-DVS and SHD.
Yuan Zhang, Jian Cao, Ling Zhang, Jue Chen, Wenyu Sun, Yuan Wang
2023-06-30T12:17:30Z
http://arxiv.org/abs/2306.17597v1
# Razor SNN: Efficient Spiking Neural Network with Temporal Embeddings ###### Abstract The event streams generated by dynamic vision sensors (DVS) are sparse and non-uniform in the spatial domain, while still dense and redundant in the temporal domain. Although the spiking neural network (SNN), the event-driven neuromorphic model, has the potential to extract spatio-temporal features from the event streams, it is neither effective nor efficient. Based on the above, we propose an event sparsification spiking framework dubbed Razor SNN, which prunes pointless event frames progressively. Concretely, we extend the dynamic mechanism based on global temporal embeddings, reconstruct the features, and emphasize the effect of events adaptively at the training stage. During the inference stage, we eliminate fruitless frames hierarchically according to a binary mask generated by the trained temporal embeddings. Comprehensive experiments demonstrate that our Razor SNN achieves competitive performance consistently on four event-based benchmarks: DVS 128 Gesture, N-Caltech 101, CIFAR10-DVS and SHD. Keywords: Efficient SNNs, DVS, Temporal embeddings, Pruning. ## 1 Introduction Event-based neuromorphic computation utilizes sparse and asynchronous events captured by DVS to represent signals more efficiently. Unlike RGB cameras, DVS encodes the time, location, and polarity of the brightness changes for each pixel at high event rates [10]. Although the events are sparse in the spatial domain, the streams they compose are dense in the temporal domain. This characteristic makes event streams hard to process directly with deep neural networks (DNNs), which are based on dense computation. Fortunately, spiking neural networks (SNNs) have an event-triggered computation characteristic that matches well with processing events. However, it is desirable to accelerate SNN models to make them more suitable and efficient for real-time event tasks and to further improve accuracy. The dynamic mechanism provides attention recipes, which selectively focus on the most informative components of the input and can be interpreted as the sensitivity of the output to variations of the input. For SNNs, inspired by [17] and [24], we propose **temporal embeddings** combined with the dynamic mechanism, to explore an unstructured and data-dependent pruning strategy. Many prior works [10, 24, 2] are dedicated to spatial-wise attention. Different from the above works, the temporal embeddings emphasize the dense ticks of event streams. As shown in Fig. 1, we present an event pruning mechanism for the temporal domain based on embeddings, to adaptively filter out immaterial events and slim down the SNNs. It reconstructs features and predicts attention vectors to compute the probabilities of dropping the events, while retaining the event-triggered characteristic. We call the resulting pruning architecture Razor SNN. Our method reveals the possibility of exploiting the dynamic mechanism with temporal embeddings for the acceleration of SNNs and RNN-like models. The contributions are listed as follows: * We rethink the characteristics of DVS events in both spatial and temporal aspects, and propose a novel pruning method named Razor SNN. It can perform inference tasks with less data but higher performance. To the best of our knowledge, this is the first work to design a dynamic mechanism with temporal embeddings for SNNs. * Our Razor SNN can achieve competitive performance on event recognition tasks, even compared with full-event inputs.
Besides, it improves accuracy for gesture recognition with an inference time of only 65 ms. 

Figure 1: Event pruning for a spike layer in Razor SNN. 

## 2 Related Works ### Object Recognition Using DVS For event recognition with a dynamic vision camera, processing the events in groups is the most common method, as it yields sufficient signal-to-noise ratios (SNR) [1]. In this paper, we adopt the frame-based representation, which accumulates the events occurring in a time window and maps them into a frame [24]. It is convenient to generate frame-based representations, and they are naturally compatible with the traditional computer vision framework. Besides, SNN algorithms based on frames benefit from faster convergence in training [14]. The timestep and window size are crucial in determining the quality of the frame-based representation: the larger the window size, the higher the SNR. Prior works have applied various techniques to improve the classification performance based on large window sizes: [29, 21] attempt to improve performance through training methods, [23, 4] by changing the connection paths of the SNNs, and [6, 8] through hybrid fusion.
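To make the frame-based representation described above concrete, here is a minimal sketch that bins synthetic \((t, x, y, p)\) events into two-channel frames. The sensor size and the random event array are our own illustrative assumptions; only the accumulate-per-window idea comes from the representation just described (the 1 ms window matches the \(dt\) used later in Table 1).

```python
import numpy as np

def events_to_frames(events, dt_us, T, H=128, W=128):
    """events: rows of (t_us, x, y, p); returns (T, 2, H, W) accumulated frames."""
    frames = np.zeros((T, 2, H, W), dtype=np.float32)
    t0 = events[:, 0].min()
    for t_us, x, y, p in events:
        idx = int((t_us - t0) // dt_us)          # which time window this event hits
        if idx < T:
            frames[idx, int(p), int(y), int(x)] += 1.0  # count per pixel/polarity
    return frames

rng = np.random.default_rng(0)
n = 5000
ev = np.stack([rng.integers(0, 10_000, n),   # timestamps in microseconds
               rng.integers(0, 128, n),      # x coordinate
               rng.integers(0, 128, n),      # y coordinate
               rng.integers(0, 2, n)],       # polarity (0 or 1)
              axis=1)
frames = events_to_frames(ev, dt_us=1000, T=10)   # dt = 1 ms windows
print(frames.shape, int(frames.sum()))
```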
### Spiking Neural Networks Being biologically plausible, the spiking neuron computes by transforming dynamic inputs into a series of spikes. Spike-based SNNs assume that neurons which have not received any input spikes skip computation, the so-called event-triggered characteristic [15]. There is a large body of research on applying spiking neural networks to recognition tasks [18]. Diehl et al. proposed a mechanism utilizing Spike Time Dependent Plasticity (STDP) [3], lateral inhibition and homeostasis for recognition. Lee et al. and Delbruck et al. proposed supervised learning mechanisms mimicking the back-propagation of conventional ANNs that can learn efficiently [12]. The first uses feed-forward layers of spiking neurons and a variant of back-propagation defined by an error function between the desired and actual spiking activity. Wu et al. proposed Spatio-Temporal Backpropagation (STBP), a learning rule for back-propagating error in the temporal and spatial domains [20]. This addresses the problem of approximating the derivative of the spike function, which inherently brings in the question of biological plausibility. In this work, we adopt LIAF models as the elements of spike-based SNNs and STBP to evaluate the network architecture. ### Model Acceleration of SNNs There are various solutions that aim to compress or accelerate spiking neural networks. Structural network pruning methods explicitly prune out filters [9]. Knowledge distillation methods [27, 11, 28] can guide the training of a student model with learnt knowledge, such as predictions and features, from a higher-capacity teacher (ANNs or SNNs). Some works design synchronous hardware inference mechanisms with parallelization strategies [13]. In our work, however, we aim to accelerate an SNN model based on feature map reconstruction while constraining the width and depth of the model. ## 3 Razor SNN ### Iterative LIAF Model We first introduce the Leaky Integrate-and-Fire (LIF) model, a balance between complex dynamic characteristics and a simpler mathematical form, and translate it to an iterative expression with the Euler method [21]. Mathematically, it is updated as: \[\mathbf{u}(t+1)=\tau\mathbf{u}(t)+\mathbf{I}(t), \tag{1}\] where \(\mathbf{u}\left(t\right)\) denotes the neuronal membrane potential at time \(t\), \(\tau\) is a time constant for the leaky behavior and \(\mathbf{I}\left(t\right)\) is the pre-synaptic input. The LIF model is as follows: \[\mathbf{a}_{i}^{t+1,\ l}=\sum_{j=1}^{l-1}\mathbf{W}_{i,j}^{n}\mathbf{O}_{j}^{t+1,l-1}. \tag{2}\] EQ. 2 comes from the spatial domain, where \(\mathbf{a}_{i}\) is the axon input of the \(i\)th neuron, \(\mathbf{W}_{i,j}^{n}\) is the synaptic weight from the \(j\)th neuron in the previous layer to the \(i\)th neuron, and \(\mathbf{O}_{j}\) is the output of the \(j\)th neuron, whose value can only be 1 or 0. Besides, \(t\) denotes the timestamp and \(l\) the \(l\)th layer: \[\mathbf{u}_{i}^{t+1,\ l}=\mathbf{u}_{i}^{t,l}\mathbf{h}\left(\mathbf{O}_{i}^{t,l}\right)+\mathbf{a}_{i}^{t+1,l}. \tag{3}\] EQ. 3 comes from the temporal domain (TD). \(\mathbf{u}_{i}\) is the membrane potential of the \(i\)th neuron. \(\mathbf{h}\left(x\right)\) expresses the rate of TD memory loss, as follows: \[\mathbf{h}\left(x\right)=\tau e^{-\frac{x}{\tau}}. \tag{4}\] EQ. 5 gives the output of the \(i\)th neuron, responsible for checking whether the membrane potential exceeds the threshold \(\mathbf{V}_{th}\) and firing a spike or not: \[\mathbf{O}_{i}^{t+1,\ l}=\left\{\begin{array}{ll}1&\quad u_{i}^{t+1,l}\geq V_{th},\\ 0&\quad u_{i}^{t+1,l}<V_{th}.\end{array}\right. \tag{5}\] 

Figure 2: The complete forward flow of Razor SNN with Event Pruning Mechanism. The feature maps colored green are processed at timestamp \(\mathbf{T}_{i}\). The dashed box in the bottom right corner represents the Event Pruning Mechanism we proposed. Zoom up to view better. 

However, introducing the LIF model into the last layer would lose information on the membrane potential and disturb the performance. Instead, we adopt the leaky integrate-and-analog-fire (LIAF) model. LIAF changes the Heaviside step function to the ReLU function, so that both the spatial and temporal domains take analog values.
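A minimal sketch of the iterative model of Eqs. (1)-(5) follows. It uses the fire threshold \(V_{th}=0.3\) and leakage factor \(0.3\) listed later in Table 1; reading \(\mathbf{h}(\cdot)\) as a leak combined with a reset-to-zero after a spike is a common simplification and an assumption on our part, as are all shapes and weights.

```python
import numpy as np

def lif_layer(spikes_in, W, V_th=0.3, leak=0.3):
    """spikes_in: (T, n_in) binary spike trains; returns (T, n_out) output spikes."""
    T_steps = spikes_in.shape[0]
    n_out = W.shape[0]
    u = np.zeros(n_out)                 # membrane potentials
    o = np.zeros(n_out)                 # previous output spikes
    out = np.zeros((T_steps, n_out))
    for t in range(T_steps):
        a = W @ spikes_in[t]            # Eq. (2): weighted pre-synaptic input
        u = u * leak * (1.0 - o) + a    # Eq. (3), with h(.) read as leak plus
                                        # reset-to-zero after a spike (assumed)
        o = (u >= V_th).astype(float)   # Eq. (5): fire if u exceeds V_th
        out[t] = o
    return out

rng = np.random.default_rng(0)
x = (rng.random((20, 8)) < 0.2).astype(float)    # toy input spike trains
spikes = lif_layer(x, rng.normal(0.0, 0.5, (4, 8)))
print(spikes.sum(axis=0))                        # spike counts per neuron
```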
### Event Pruning Mechanism The precise observation of significant events is the keystone of the Dynamic Event Pruner. Unlike vanilla attention mechanisms, Razor SNN takes the information from neighboring frames into consideration with Global Temporal Embeddings. Besides, it prunes the refined events to purify the inputs. For simplicity, following [24], the spatial input of the \(l\)th layer at the \(t\)th timestamp, \(\mathbf{O}^{t,\ l-1}\), equals \(\mathbf{X}^{t,\ l-1}\in\mathbb{R}^{C\times H\times W}\), where \(\mathbf{X}\) is the feature map tensor and \(C\) is the channel size. #### 3.2.1 Global Temporal Embeddings We introduce a set of learnable global temporal embeddings \(\mathbf{B}\in\mathbb{R}^{E\times T}\) to extract \(E\) sequence principles on temporal features. Notably, not all embeddings play the same role. For example, we would assign more attention to moments when events are dense, and less to those when events are sparse. In this paper, we propose an embedding weighting module to determine the embedding importance independently. Concretely, we use a convolution-based module with a softmax function (see Figure 2) to predict the importance vector \(\mathbf{w}\), and weight it onto the temporal embeddings to generate the weighted embeddings \(\hat{\mathbf{B}}\): \[\hat{\mathbf{B}}^{t}=\sum_{i=1}^{T}\mathbf{w}_{i}\odot\mathbf{B}^{t}. \tag{6}\] #### 3.2.2 Reconstruct Events Feature We accumulate the feature maps within \(\mathbf{T}\) and flatten \(\mathbf{X}\) into the shape of \((T,C\times H\times W)\) in this paper. Then the Events of Interests (EoI) masks \(\mathbf{M}\) can be obtained by calculating the similarities between the weighted embeddings and the temporal frames in the feature maps: \[\mathbf{M}=\sigma(\hat{\mathbf{B}}\mathbf{X}), \tag{7}\] where \(\sigma\) denotes the sigmoid activation function. Eventually, multiplying with the masks, we reconstruct the events feature and get the refined feature \(\hat{\mathbf{X}}\), which is a better substitute for the vanilla features: \[\hat{\mathbf{X}}=\sum_{i=1}^{E}\mathbf{M}\odot\mathbf{X}. \tag{8}\] #### 3.2.3 Pruning Strategy Since the refined feature contains the discriminative temporal information provided by the Global Temporal Embeddings, it is what the pruning mechanism is based on. We only need to send the refined feature through adaptive 3D max pooling to extract the importance scores of the temporal features, \(\mathbf{S}\in\mathbb{R}^{T}\), as follows: \[\mathbf{S}=max_{i=1}^{H}max_{j=1}^{W}max_{k=1}^{C}\hat{\mathbf{X}}, \tag{9}\] During inference, we generate a binary mask \(\mathbf{M}\) according to the scores, eliminating pointless frames whose scores are lower than the filtering threshold \(\mathbf{S}_{th}\) we set, and setting the attention scores of the other frames to 1. \[\mathbf{M}=\mathbf{H}(\mathbf{S}-\mathbf{S}_{th}). \tag{10}\] \(\mathbf{H(\cdot)}\) is a Heaviside step function, the same as in EQ. 5. Eventually, we combine the mask \(\mathbf{M}\) with the input tensor, and the formed input at the \(t\)th timestamp is: \[\widetilde{\mathbf{X}}^{t,l-1}=\mathbf{M}^{t,l-1}\odot\hat{\mathbf{X}}^{t,l-1}. \tag{11}\] ### Architecture Design of RazorSNNs We implement the RazorSNNs with the Event Pruning Mechanism embedded into each spiking layer except the encoder layer (the first layer). The reason is that we assume the SNN cannot extract more spatio-temporal information in this condition, and pruning the whole network leads to unstable results. We follow recent state-of-the-art methods [19, 24] in using an x-stage pyramid structure, and the Razor Pruning Architecture is shown in Figure 2.
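Since the reduction indices in Eqs. (6) and (8) are not fully specified in the text, the following sketch is one possible resolution of the Event Pruning Mechanism, with \(E\) embedding rows scoring \(T\) flattened frames and the Razor ratio \(S_{th}=0.4\) of Table 1; all array shapes are our own assumptions rather than the paper's exact implementation.

```python
import numpy as np

def event_pruning(X, B, w, S_th=0.4):
    """X: (T, D) flattened features (D = C*H*W); B: (E, T) embeddings; w: (E,)."""
    B_hat = w[:, None] * B                      # weighted embeddings, cf. Eq. (6)
    M = 1.0 / (1.0 + np.exp(-(B_hat @ X)))      # EoI masks via sigmoid, Eq. (7)
    X_hat = M.sum(axis=0, keepdims=True) * X    # reconstructed features, cf. Eq. (8)
    S = X_hat.max(axis=1)                       # per-frame importance scores, Eq. (9)
    keep = (S >= S_th).astype(float)            # binary mask H(S - S_th), Eq. (10)
    return keep[:, None] * X_hat                # pruned input, Eq. (11)

rng = np.random.default_rng(0)
T, E, D = 10, 4, 64
out = event_pruning(rng.random((T, D)), rng.normal(size=(E, T)), np.full(E, 0.25))
print(int((out.sum(axis=1) > 0).sum()), "of", T, "frames kept")
```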
## 4 Experimental Results In this section, to show the superiority and effectiveness of our method, we conduct experiments on three popular event-based benchmarks: DVS Gesture, N-Caltech and CIFAR10-DVS. ### Implementation Details In this paper, we follow a notation similar to Yao et al. [24] to define our network architectures separately for DVS128 Gesture, SHD and CIFAR10-DVS, while N-Caltech's is the same as DVS128 Gesture's. Besides, we take rate coding as the loss function, and utilize the Adam optimizer to accelerate the training process. The corresponding hyperparameter details are shown in Tab 1. ### Performance Comparison We compare Razor SNN with various prior works for event-based data, like CNN methods, spike-based SNNs, and analog-based SNNs, on the above-mentioned benchmarks. The experiment results are shown in Tab 3. From the results, our Razor SNN outperforms the strong baseline with a margin on SHD, the CIFAR series and N-Caltech 101, and achieves competitive performance on DVS128 Gesture. **SHD** The SHD dataset [5] is a large spike-based audio classification task that contains 10420 audio samples of spoken digits ranging from zero to nine in English and German. Unlike the four-dimensional event stream generated by the DVS camera, the audio spike stream has only two dimensions, i.e., time and position. Our method surpasses the previous state-of-the-art by 1.75%, verifying the effectiveness of the temporal embedding module. **DVS128 Gesture** Almost all spike-based SNN methods evaluate their model on it. Razor SNN surpasses TA-SNN by 0.22% and outperforms all CNN-based methods. Due to its unique temporal feature extraction and reconstruction, Razor SNN has a superior ability to distinguish the clockwise and counterclockwise gestures, which are easily confused. **N-Caltech 101** N-Caltech is a spiking version of the frame-based Caltech101 dataset, which is a large-scale events dataset. Few methods have been tested on N-Caltech because of its complicated background and computational complexity. Notably, Razor SNN still gets a nearly 2.1% increase over TA-SNN, and outperforms all the SOTA methods. Besides, the temporal embeddings function as an attention module when SNNs meet static images, which is beneficial to classification. **CIFAR10-DVS** Compared to the best result so far, TA-SNN, our method gets a 1.01% improvement, attributed to its global temporal module catching critical event information and filtering temporal noise, which would otherwise damage SNN accuracy. 

\begin{table} \begin{tabular}{|l|l|l|} \hline Parameter & Description & Value \\ \hline \(\mathbf{dt}\) & Window size & 1ms \\ \(\mathbf{V}_{th}\) & Fire threshold & 0.3 \\ \(\mathbf{e}^{-\frac{dt}{\tau}}\) & Leakage factor & 0.3 \\ \(\mathbf{S}_{th}\) & Razor ratio & 0.4 \\ \hline \end{tabular} \end{table} Table 1: Comprehensive parameters for experiments. 

\begin{table} \begin{tabular}{c|c|c} \hline Methods & Architecture & SHD \\ \hline Cramer [5] & LIF RSNN & 71.40 \\ Yin [26] & RELU SRNN & 88.93 \\ Zenke [25] & SG-based SNN & 84.00 \\ Yao [24] & TA-SNN & 91.08 \\ Ours & Razor SNN & **92.83** \\ \hline \end{tabular} \end{table} Table 2: Accuracy of models for the SHD Dataset (%). 

Our Razor SNN shows better generalization on large-scale datasets, where more noisy and pointless events exist. Moreover, the number of parameters in Razor SNN has only a weak increase compared with the vanilla SNN. So the event pruning mechanism allows SNNs to achieve higher performance at less cost in practical applications. ### Ablation Studies #### 4.3.1 The Number of Temporal Embeddings We perform experiments on DVS Gesture to explore the effects of different numbers of temporal embeddings in Razor SNN. As shown in Tab 4, with only 2 embeddings, Event Pruning improves the SNN by 0.84% accuracy, while more tokens could achieve further improvements. In this paper, we choose a number of **4** for better performance. #### 4.3.2 The Position of Where to Prune It is vital to figure out into which layers we should insert the temporal embeddings, and we design four sets of experiments to explore this:
E1, the pure SNN baseline; E2, introduce temporal embeddings into the encoder layer; E3, introduce temporal embeddings into the backbone layers; E4, introduce temporal embeddings into all the layers. As shown in Fig 3, we observe that E2 and E3 independently afford improvement in most cases, where E3 achieves the best accuracy when T=120. But E4, which combines E2 and E3 simultaneously, leads to unstable results, and we assume SNNs cannot extract more spatio-temporal information in this condition. 

\begin{table} \begin{tabular}{c c c c c} \hline \hline 0 (vanilla) & 2 & 4 & 6 & 8 \\ \hline 97.99 & 98.56 & **98.83** & 98.79 & 98.34 \\ \hline \hline \end{tabular} \end{table} Table 4: **Ablation on the number of temporal embeddings.** 

\begin{table} \begin{tabular}{c c|c c c} \hline \hline Methods & Architecture & Gesture & N-Caltech & CIFAR10 \\ \hline Wu [21] & NeuNorm & - & - & 60.50 \\ Ramesh [16] & DART & - & 66.8 & 65.78 \\ Kugele [8] & DenseNet & 95.56 & - & 66.75 \\ Wu [22] & LIAF-Net & 97.56 & - & 70.40 \\ Zheng [29] & ResNet19 & 96.87 & - & 67.80 \\ Kim [7] & SALT & 67.10 & 55.0 & - \\ Wu [19] & ASF-BP & 93.40 & 60.23 & 62.50 \\ Yao [24] & TA-SNN & 98.61 & 68.42 & 72.00 \\ \hline Ours & Razor SNN & **98.83** & **70.50** & **73.01** \\ \hline \hline \end{tabular} \end{table} Table 3: **Comparison of different methods on DVS-Gesture, N-Caltech 101 and CIFAR10-DVS (%).** 

#### 4.3.3 Effects of Components in the Event Pruning Mechanism We set up experiments to show the contribution of each proposed component of the Event Pruning Mechanism in Tab 5. + **Embeddings.** Global temporal embeddings benefit Razor SNNs the most (0.31%), due to their consideration of neighboring frames and extraction of temporal features. **+ Embeddings weighting module.** The embedding weighting module decides the embedding importance independently, providing discriminative information and a gain of 0.24%. **+ Reconstruct Events Feature.** The refined feature contains discriminative temporal information, and the experimental statistics (0.10%) prove that it is indeed a better substitute for the original features. **+ Pruning.** Pruning eliminates worthless events containing much noise, which would otherwise disturb the SNN model. ### Visualization Analysis To validate the effectiveness of Razor SNN, we visualize a case where the vanilla SNN fails in recognition while the Razor SNN succeeds. As shown in Fig. 4, each feature map indicates the average response of a spiking layer. We make the following two observations about the effect of the temporal embeddings and reconstruction of Razor SNN. 

Figure 3: E1, pure SNN baseline; E2, introduce embeddings into the encoder layer; E3, introduce embeddings into the backbone layers; E4, introduce embeddings into all the layers. 

The spiking activity is more concentrated in Razor SNN, i.e., the deep blue area of Razor SNN is smaller and more focused. This suggests that global temporal
extraction is beneficial for handling the important regions of the intermediate channels. We also observe that the pruning lightens the color of the yellow area (background); the lighter the pixel, the weaker the spiking activity rate. ## 5 Conclusion In this paper, we innovatively introduce a dynamic attention mechanism based on temporal embeddings into SNNs, and propose the Razor SNNs. Compared with vanilla Spiking Neural Networks, Razor SNNs process signals more efficiently, especially for event streams, pruning pointless event frames progressively. Our method enjoys finer temporal-level features and prunes worthless event frames. Extensive experiments show that our Razor SNNs apply to various benchmarks and can achieve state-of-the-art performance consistently. 

Figure 4: Visualization of the heat maps generated by vanilla SNNs and Razor SNNs separately. The temporal embeddings and reconstructed features urge SNNs to focus on the gesture instead of being distracted elsewhere, as vanilla models are. Best viewed in color. 

\begin{table} \begin{tabular}{c c c c c} \hline \hline Embeddings & Weighting & Reconstruct & Pruning & Acc(\%) \\ \hline - & - & - & - & 97.99 (baseline) \\ ✓ & - & - & - & 98.30 (**+0.31**) \\ ✓ & ✓ & - & - & 98.54 (+0.24) \\ ✓ & ✓ & ✓ & - & 98.64 (+0.10) \\ ✓ & ✓ & ✓ & ✓ & 98.83 (+0.19) \\ \hline \hline \end{tabular} \end{table} Table 5: **Ablation experiments on the effects of components in the Event Pruning Mechanism.**
2309.17312
Elastic bounds of the coupling tensor of anisotropic composite laminates: a polar approach
In the equivalent single layer theories of anisotropic laminates, the coupling tensor B describes the relation between the in-plane and out-of-plane behavior of the plate. This tensor has some peculiar characteristics; in particular, it is not positive or negative definite, so it is normally considered as an unbounded tensor. We show in this paper a way to determine some relations between the polar invariants of B and those of the tensors describing the in-plane and out-of-plane behaviors of the laminate, A and D respectively. These relations constitute a set of bounds for the invariants of all the tensors of the laminate, A, B and D. It is hence shown that the components of B must also satisfy some conditions. Some peculiar cases, interesting for applications, are also considered.
P. Vannucci
2023-09-29T15:11:52Z
http://arxiv.org/abs/2309.17312v3
# On the bounds of the coupling tensor of anisotropic laminates ###### Abstract The problem of determining the elastic bounds of the coupling tensor for anisotropic laminates is addressed in this paper. It is shown how the invariant polar moduli of the coupling tensor interact with those of the extension and bending tensors to define all the conditions to be satisfied by the set of tensors describing the elastic behavior of an anisotropic laminate composed of identical layers. Some peculiar cases, interesting for applications, are also considered. **Key words:** anisotropy; elastic moduli bounds; polar formalism; tensor invariants; laminates PACS 46.25; 46.35; 62.20.de MSC 74B05; 74E10; 74E30; 74K20 ## 1 Introduction In the theory of elastic laminates, the behavior of the plate is described by a law of the type \[\left\{\begin{array}{c}{\bf N}\\ {\bf M}\end{array}\right\}=\left[\begin{array}{cc}h{\mathbb{A}}&\frac{h^{2}}{2}{\mathbb{B}}\\ \frac{h^{2}}{2}{\mathbb{B}}&\frac{h^{3}}{12}{\mathbb{D}}\end{array}\right]\left\{\begin{array}{c}{\mathbf{\varepsilon}}\\ {\mathbf{\kappa}}\end{array}\right\}, \tag{1}\] where, [1, 2], \(h\) is the plate's thickness, \({\bf N},{\bf M}\) are respectively the tensors of membrane forces and bending moments, \({\mathbf{\varepsilon}},{\mathbf{\kappa}}\) the extension and curvature tensors, \({\mathbb{A}},{\mathbb{D}}\) the stiffness tensors of the extension and bending behaviors, and \({\mathbb{B}}\) the coupling tensor. \(\mathbb{A},\mathbb{B},\mathbb{D}\) are tensors of the elastic type, i.e. fourth-rank tensors with the major and minor symmetries; however, unlike \(\mathbb{A}\) and \(\mathbb{D}\), which are positive definite, \(\mathbb{B}\) is not sign-definite. At least, this is what is commonly admitted, and not only for \(\mathbb{B}\), but also for other coupling tensors describing some sort of coupling between two physical phenomena, e.g. in the theory of quasi-crystals, [3]. Actually, the positive definiteness of \(\mathbb{A}\) and \(\mathbb{D}\), like that of any other elastic tensor, is a consequence of the work, necessarily positive, done by the external forces. In the case of \(\mathbb{A}\) and \(\mathbb{D}\), this is computed considering the laminate as uncoupled. The case of coupled laminates, i.e. having \(\mathbb{B}\neq\mathbb{O}\), is almost unconsidered in the literature, and \(\mathbb{B}\) is commonly seen as an _undefined_ tensor, i.e. in general neither positive nor negative definite. The positive definiteness of an elastic tensor results in some bounds on the elastic moduli of the tensor in a given mathematical representation, e.g. the well-known bounds on the Lamé constants or on the Young's modulus and the Poisson's ratio for isotropic materials. More delicate is the case of anisotropic materials, [1, 4], for which a clear and definite set of bounds for the elastic moduli has not yet been defined in three-dimensional elasticity for every possible elastic syngony, namely for the most general one of triclinic materials. However, the same problem has been solved definitively in two-dimensional elasticity, using the polar formalism, [2, 5, 6]. By this mathematical technique, introduced as early as 1979 by G. Verchery, [7], any elastic tensor is represented by invariant moduli and angles. In this way, the bounds on the elastic tensor are given on its invariants, so intrinsically representing the elastic limits of the material. The same procedure has been used to determine the so-called _geometrical bounds_, [8], i.e.
the bounds determining the elastic domain for a laminate when the stacking sequence is considered. The question of the bounds on \(\mathbb{B}\) remains open. This paper addresses exactly this subject, and in particular it is a first attempt to give an answer to the following questions: 1. is it possible to establish some bounds for the moduli of \(\mathbb{B}\)? 2. how are the known bounds on \(\mathbb{A}\) and \(\mathbb{D}\) modified when \(\mathbb{B}\neq\mathbb{O}\)? 3. is it possible to give an explicit form to the bounds on the moduli of \(\mathbb{B}\)? 4. can such bounds be given, in all the cases, in an invariant form? 5. how does the existence of some peculiar circumstances, e.g. a material symmetry, affect the bounds on \(\mathbb{B}\)? This research has two motivations: the first is a purely scientific question, interesting _per se_, namely to fill a gap still existing in the scientific literature: is it possible to give some bounds for tensor \(\mathbb{B}\)? Another motivation can be found in optimization problems: any design problem for a coupled laminate is correctly formulated only if the design space is well determined, so as to properly define the feasibility domain. The paper is organized as follows: in the next Section, the polar formalism is briefly recalled, especially for representing tensors \(\mathbb{A},\mathbb{B},\mathbb{D}\). Then, the procedure used to determine the bounds on \(\mathbb{B}\) is detailed, and subsequently the attempt to establish a general solution is explained. Some special cases are then considered, and finally some conclusions are drawn in the end. ## 2 Recall of the polar formalism for a laminate For a given plane elastic tensor \(\mathbb{T}\), the polar formalism allows one to express the Cartesian components in a direction \(\theta\) as \[\begin{split} T_{1111}(\theta)&=T_{0}+2T_{1}+R_{0}\cos 4\left(\varPhi_{0}-\theta\right)+4R_{1}\cos 2\left(\varPhi_{1}-\theta\right),\\ T_{1112}(\theta)&=R_{0}\sin 4\left(\varPhi_{0}-\theta\right)+2R_{1}\sin 2\left(\varPhi_{1}-\theta\right),\\ T_{1122}(\theta)&=-T_{0}+2T_{1}-R_{0}\cos 4\left(\varPhi_{0}-\theta\right),\\ T_{1212}(\theta)&=T_{0}-R_{0}\cos 4\left(\varPhi_{0}-\theta\right),\\ T_{1222}(\theta)&=-R_{0}\sin 4\left(\varPhi_{0}-\theta\right)+2R_{1}\sin 2\left(\varPhi_{1}-\theta\right),\\ T_{2222}(\theta)&=T_{0}+2T_{1}+R_{0}\cos 4\left(\varPhi_{0}-\theta\right)-4R_{1}\cos 2\left(\varPhi_{1}-\theta\right).\end{split} \tag{2}\] What is important to remark is the fact that the moduli \(T_{0},T_{1},R_{0},R_{1}\), as well as the difference of the angles \(\varPhi_{0}-\varPhi_{1}\), are tensor invariants; the value of one of the two polar angles, usually \(\varPhi_{1}\), fixes the frame. It is worth remarking that the polar method allows for a decomposition of anisotropic 2D elasticity into different _elastic phases_: an _isotropic phase_, characterized by the two invariants \(T_{0}\) and \(T_{1}\), and two _anisotropic phases_, whose amplitudes are determined by the invariants \(R_{0}\) and \(R_{1}\); these two anisotropic phases are shifted by the angle \(\varPhi_{0}-\varPhi_{1}\), the fifth tensor invariant. Rotations are handled simply in the polar method: to rotate by an angle \(\theta\), it is sufficient to subtract \(\theta\) from each one of the two polar angles (the complete demonstration of these fundamental results can be found in [2]). A general sketch of the decomposition of elasticity into elastic phases is given in Fig. 1, where the angles \(\varPhi_{0},\varPhi_{1}\) and \(\varPhi_{0}-\varPhi_{1}\) are also indicated.
Thanks to the polar formalism, the elastic symmetries are determined by the following values of the invariants: 1. ordinary orthotropy: \(\varPhi_{0}-\varPhi_{1}=k\dfrac{\pi}{4},\ k\in\{0,1\}\); 2. \(R_{0}\)-orthotropy: \(R_{0}=0\), [10]; 3. square symmetry: \(R_{1}=0\); 4. isotropy: \(R_{0}=R_{1}=0\). 

Figure 1: The decomposition of anisotropic plane elasticity in the polar method for a glass-epoxy layer with \(T_{0}=92.38\) MPa, \(T_{1}=86.97\) MPa, \(R_{0}=44.86\) MPa, \(R_{1}=43.82\) MPa, source [9]; left: a completely anisotropic material, as would be obtained if \(\varPhi_{0}=\pi/3,\varPhi_{1}=\pi/20\), with, in blue (indicated also by \(T_{0}+2T_{1}\)), the isotropic phase, in gray (indicated by \(R_{0}\)) the \(R_{0}\) phase, in green (indicated by \(4R_{1}\)) the \(R_{1}\) phase and in thick red the overall result, i.e. the component \(T_{1111}(\theta)\). Right: the true, orthotropic material, corresponding to \(\varPhi_{0}=\varPhi_{1}=0\). 

These elastic symmetries are the only possible ones in 2D elasticity; in particular, we see that two cases of ordinary orthotropy can exist, sharing the same values of the invariants \(T_{0},T_{1},R_{0},R_{1}\). Moreover, square symmetry is the 2D counterpart of the 3D cubic syngony: it is the case of a layer having two couples of mutually orthogonal symmetry axes, rotated by \(\pi/4\), and with the same values of the elastic moduli along the two orthogonal axes. It is, namely, the case of layers reinforced by balanced fabrics, i.e. by fabrics having the same amount of fibers in warp and weft (which are often erroneously considered as isotropic), [11]. Finally, the case of \(R_{0}\)-orthotropy was discovered for the first time in 2D elasticity thanks to the polar method, [10], and later also in 3D elasticity, [12]. In Fig. 2, some examples of the above possible cases are given. 

Figure 2: Polar diagrams of \(T_{1111}(\theta)\) for some examples of elastic symmetries in 2D elasticity; a) a \(k=0\) orthotropic ply (T300/5208 carbon-epoxy ply, [13]); b) a \(k=1\) orthotropic ply (braided carbon-epoxy BR45a ply, [14]); with a dashed line: a material with the same moduli but with \(k=0\); c) a square symmetric ply, \(R_{1}=0\) (carbon-epoxy balanced fabric, [15]); with a dashed line: the same material but with \(\varPhi_{0}=\pi/4\); d) a \(R_{0}\)-orthotropic material, obtained superposing two T300/5208 carbon-epoxy plies rotated by \(\pi/4\). 

The above polar transformations apply also to tensors \(\mathbb{A},\mathbb{B}\) and \(\mathbb{D}\); in particular, when a laminate is composed of identical layers, the case considered in this research, [16, 17], then (we indicate by a superscript \(A,B\) or \(D\) a polar quantity of \(\mathbb{A},\mathbb{B}\) or \(\mathbb{D}\) respectively, while a polar quantity of the basic layer has no superscript) \[T_{0}^{A}=T_{0}^{D}=T_{0},\ T_{1}^{A}=T_{1}^{D}=T_{1},\ T_{0}^{B}=T_{1}^{B}=0. \tag{3}\] The polar formalism can be applied also to other situations, e.g. to the piezoelectric tensor, [18], or to thermo-elastic problems, [19, 20]. Some special cases of non-classical elastic materials have also been studied through the polar formalism, [21, 22, 23]; however, in this research we will consider only classical elastic tensors.
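A minimal numerical sketch of Eq. (2) may help fix ideas: it reconstructs \(T_{1111}(\theta)\) from the polar invariants, using the glass-epoxy moduli quoted in the caption of Fig. 1, and shows the effect of imposing square symmetry (\(R_{1}=0\)). The script is illustrative only.

```python
import numpy as np

def T1111(theta, T0, T1, R0, R1, Phi0, Phi1):
    """First component of Eq. (2), as a function of the direction theta."""
    return (T0 + 2.0 * T1 + R0 * np.cos(4.0 * (Phi0 - theta))
            + 4.0 * R1 * np.cos(2.0 * (Phi1 - theta)))

T0, T1, R0, R1 = 92.38, 86.97, 44.86, 43.82      # glass-epoxy moduli (Fig. 1, [9])
theta = np.linspace(0.0, np.pi, 361)
ortho = T1111(theta, T0, T1, R0, R1, 0.0, 0.0)   # k = 0 ordinary orthotropy
square = T1111(theta, T0, T1, R0, 0.0, 0.0, 0.0) # square symmetry: R1 = 0
print(f"orthotropic:  max {ortho.max():.1f}, min {ortho.min():.1f}")
print(f"square symm.: max {square.max():.1f}, min {square.min():.1f}")
```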
For a second-rank symmetric tensor \(\mathbf{L}\), the polar formalism gives \[\begin{split} L_{11}(\theta)&=T+R\cos 2(\Phi-\theta),\\ L_{12}(\theta)&=R\sin 2(\Phi-\theta),\\ L_{22}(\theta)&=T-R\cos 2(\Phi-\theta),\end{split} \tag{4}\] with \(T,R\) two invariants and \(\Phi\) an angle fixed by the choice of the frame. The comments made about eq. (2) apply to these relations as well; in addition, they are the analytical expression of the well-known graphical construction of Mohr's circle, [24]. In the following, we will indicate by \(t_{\varepsilon},r_{\varepsilon},\varphi_{\varepsilon}\) the polar components of \(\boldsymbol{\varepsilon}\) and by \(t_{\kappa},r_{\kappa},\varphi_{\kappa}\) those of \(\boldsymbol{\kappa}\). ## 3 Statement of the problem For a coupled laminate, i.e. with \(\mathbb{B}\neq\mathbb{O}\), the density of the elastic energy per unit of area of the plate is \[U=\frac{1}{2}\left\{\begin{array}{c}\mathbf{N}\\ \mathbf{M}\end{array}\right\}\cdot\left\{\begin{array}{c}\boldsymbol{\varepsilon}\\ \boldsymbol{\kappa}\end{array}\right\}=\frac{1}{2}\left\{\begin{array}{c}\boldsymbol{\varepsilon}\\ \boldsymbol{\kappa}\end{array}\right\}\cdot\left[\begin{array}{cc}h\mathbb{A}&\frac{h^{2}}{2}\mathbb{B}\\ \frac{h^{2}}{2}\mathbb{B}&\frac{h^{3}}{12}\mathbb{D}\end{array}\right]\left\{\begin{array}{c}\boldsymbol{\varepsilon}\\ \boldsymbol{\kappa}\end{array}\right\}, \tag{5}\] i.e., since \(\mathbb{B}=\mathbb{B}^{\top}\) by virtue of the major symmetries, [25], \[U=\frac{h}{24}(12\ \boldsymbol{\varepsilon}\cdot\mathbb{A}\boldsymbol{\varepsilon}+12h\ \boldsymbol{\varepsilon}\cdot\mathbb{B}\boldsymbol{\kappa}+h^{2}\boldsymbol{\kappa}\cdot\mathbb{D}\boldsymbol{\kappa}). \tag{6}\] The energy density \(U\) must be positive for every possible strain state, i.e. \(\forall\boldsymbol{\varepsilon},\boldsymbol{\kappa}\); this is the condition from which the bounds on the elastic moduli are derived. Following an approach first introduced by Verchery and detailed in [2, 26], we express all the tensors in the previous equation by their polar components, after fixing \(\theta=0\). Some standard passages lead to \[\begin{split} U&=2h\{2T_{1}t_{\varepsilon}^{2}+[T_{0}+R_{0}^{A}\cos 4(\Phi_{0}^{A}-\varphi_{\varepsilon})]r_{\varepsilon}^{2}+4R_{1}^{A}t_{\varepsilon}r_{\varepsilon}\cos 2(\Phi_{1}^{A}-\varphi_{\varepsilon})\}\\ &+2h^{2}\{2R_{1}^{B}t_{\varepsilon}r_{\kappa}\cos 2(\Phi_{1}^{B}-\varphi_{\kappa})+2R_{1}^{B}t_{\kappa}r_{\varepsilon}\cos 2(\Phi_{1}^{B}-\varphi_{\varepsilon})\\ &+R_{0}^{B}r_{\varepsilon}r_{\kappa}\cos 2(2\Phi_{0}^{B}-\varphi_{\varepsilon}-\varphi_{\kappa})\}\\ &+\frac{h^{3}}{6}\{2T_{1}t_{\kappa}^{2}+[T_{0}+R_{0}^{D}\cos 4(\Phi_{0}^{D}-\varphi_{\kappa})]r_{\kappa}^{2}+4R_{1}^{D}t_{\kappa}r_{\kappa}\cos 2(\Phi_{1}^{D}-\varphi_{\kappa})\}.\end{split} \tag{7}\] In this expression the first term in curly braces is due to extension, the second one to coupling and the third one to bending.
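As a quick numerical illustration of the positivity requirement on eq. (7), the sketch below evaluates \(U\) for a given set of polar moduli and randomly sampled strain and curvature polar components; the moduli values and the sampling strategy are illustrative assumptions only, not part of the derivation:

```python
import numpy as np

def energy_density(h, p, t_e, r_e, phi_e, t_k, r_k, phi_k):
    """Elastic energy density U of eq. (7); p is a dict of polar moduli."""
    ext = 2*h * (2*p["T1"]*t_e**2
                 + (p["T0"] + p["R0A"]*np.cos(4*(p["P0A"] - phi_e)))*r_e**2
                 + 4*p["R1A"]*t_e*r_e*np.cos(2*(p["P1A"] - phi_e)))
    cpl = 2*h**2 * (2*p["R1B"]*t_e*r_k*np.cos(2*(p["P1B"] - phi_k))
                    + 2*p["R1B"]*t_k*r_e*np.cos(2*(p["P1B"] - phi_e))
                    + p["R0B"]*r_e*r_k*np.cos(2*(2*p["P0B"] - phi_e - phi_k)))
    bnd = h**3/6 * (2*p["T1"]*t_k**2
                    + (p["T0"] + p["R0D"]*np.cos(4*(p["P0D"] - phi_k)))*r_k**2
                    + 4*p["R1D"]*t_k*r_k*np.cos(2*(p["P1D"] - phi_k)))
    return ext + cpl + bnd

# illustrative moduli (MPa) of a weakly coupled laminate, all polar angles zero
p = dict(T0=92.38, T1=86.97, R0A=40.0, R1A=40.0, P0A=0.0, P1A=0.0,
         R0B=10.0, R1B=10.0, P0B=0.0, P1B=0.0,
         R0D=40.0, R1D=40.0, P0D=0.0, P1D=0.0)
rng, h, vals = np.random.default_rng(0), 1.0, []
for _ in range(10_000):
    t_e, t_k = rng.uniform(-1, 1, 2)
    r_e, r_k = rng.uniform(0, 1, 2)      # the r's are moduli, hence non-negative
    phi_e, phi_k = rng.uniform(0, np.pi, 2)
    vals.append(energy_density(h, p, t_e, r_e, phi_e, t_k, r_k, phi_k))
print("min U over samples:", min(vals))  # positive for a feasible laminate
```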
Let us order the polar moduli of \(\boldsymbol{\varepsilon}\) and \(\boldsymbol{\kappa}\) as a column vector \(\{v\}\): \[\{v\}=\left\{\begin{array}{l}t_{\varepsilon}\\ r_{\varepsilon}\\ t_{\kappa}\\ r_{\kappa}\end{array}\right\}; \tag{8}\] then, we can rewrite \(U\) as the quadratic form \[U=\frac{h}{24}\{v\}^{\top}[M]\{v\}, \tag{9}\] where \([M]\) is the \(4\times 4\) symmetric matrix \[[M]=\left[\begin{array}{cccc}96T_{1}&96R_{1}^{A}\cos 2(\Phi_{1}^{A}-\varphi_{\varepsilon})&0&48hR_{1}^{B}\cos 2(\Phi_{1}^{B}-\varphi_{\kappa})\\ 96R_{1}^{A}\cos 2(\Phi_{1}^{A}-\varphi_{\varepsilon})&48[T_{0}+R_{0}^{A}\cos 4(\Phi_{0}^{A}-\varphi_{\varepsilon})]&48hR_{1}^{B}\cos 2(\Phi_{1}^{B}-\varphi_{\varepsilon})&24hR_{0}^{B}\cos 2(2\Phi_{0}^{B}-\varphi_{\varepsilon}-\varphi_{\kappa})\\ 0&48hR_{1}^{B}\cos 2(\Phi_{1}^{B}-\varphi_{\varepsilon})&8h^{2}T_{1}&8h^{2}R_{1}^{D}\cos 2(\Phi_{1}^{D}-\varphi_{\kappa})\\ 48hR_{1}^{B}\cos 2(\Phi_{1}^{B}-\varphi_{\kappa})&24hR_{0}^{B}\cos 2(2\Phi_{0}^{B}-\varphi_{\varepsilon}-\varphi_{\kappa})&8h^{2}R_{1}^{D}\cos 2(\Phi_{1}^{D}-\varphi_{\kappa})&4h^{2}[T_{0}+R_{0}^{D}\cos 4(\Phi_{0}^{D}-\varphi_{\kappa})]\end{array}\right]. \tag{10}\] So, \(U>0\)\(\forall\boldsymbol{\varepsilon},\boldsymbol{\kappa}\iff[M]\) is positive definite \(\forall\varphi_{\varepsilon},\varphi_{\kappa}\). This happens if and only if, see [27] p. 340, \[M1=96T_{1}>0, \tag{11}\] \[M2=\det\left[\begin{array}{cc}96T_{1}&96R_{1}^{A}\cos 2(\Phi_{1}^{A}-\varphi_{\varepsilon})\\ 96R_{1}^{A}\cos 2(\Phi_{1}^{A}-\varphi_{\varepsilon})&48[T_{0}+R_{0}^{A}\cos 4(\Phi_{0}^{A}-\varphi_{\varepsilon})]\end{array}\right]>0\ \forall\varphi_{\varepsilon}, \tag{12}\] \[M3=\det\left[\begin{array}{ccc}96T_{1}&96R_{1}^{A}\cos 2(\Phi_{1}^{A}-\varphi_{\varepsilon})&0\\ 96R_{1}^{A}\cos 2(\Phi_{1}^{A}-\varphi_{\varepsilon})&48[T_{0}+R_{0}^{A}\cos 4(\Phi_{0}^{A}-\varphi_{\varepsilon})]&48hR_{1}^{B}\cos 2(\Phi_{1}^{B}-\varphi_{\varepsilon})\\ 0&48hR_{1}^{B}\cos 2(\Phi_{1}^{B}-\varphi_{\varepsilon})&8h^{2}T_{1}\end{array}\right]>0\ \forall\varphi_{\varepsilon}, \tag{13}\] \[M4=\det[M]>0\ \forall\varphi_{\varepsilon},\varphi_{\kappa}. \tag{14}\] The first condition, eq. (11), is redundant: \(T_{1}>0\) is a general condition for this polar modulus, see [2] p. 154. Because \(T_{1}\) is a modulus of the basic layer, i.e. of a real material, this condition is automatically satisfied. The other conditions on \(M2,M3\) and \(M4\) are discussed in the next Section. Before doing that, we notice that only the condition on \(M4\) involves the curvature field, while those on \(M2\) and \(M3\) depend on the extension field alone. ## 4 General solution We consider first the different conditions on \(M2,M3\) and \(M4\), then we examine them together in the next Section. ### Condition on M2 Equation (12) gives the following condition for the positive definiteness of \([M]\): \[T_{1}[T_{0}+R_{0}^{A}\cos 4(\Phi_{0}^{A}-\varphi_{\varepsilon})]-2{R_{1}^{A}}^{2}\cos^{2}2(\Phi_{1}^{A}-\varphi_{\varepsilon})>0\ \forall\varphi_{\varepsilon}. \tag{15}\] This condition is exactly the same already found in [26] for proving the bounds on the polar moduli of an anisotropic layer; here, the condition refers to tensor \(\mathbb{A}\). Following the same steps outlined in [2] or in [26], it can be proved that eq.
(15) is equivalent to the three conditions \[\begin{array}{l}T_{0}-R_{0}^{A}>0,\\ T_{0}T_{1}-{R_{1}^{A}}^{2}>0,\\ T_{1}(T_{0}^{2}-{R_{0}^{A}}^{2})-2{R_{1}^{A}}^{2}(T_{0}-R_{0}^{A}\cos 4\Phi_{A})>0,\end{array} \tag{16}\] where \(\Phi_{A}=\Phi_{0}^{A}-\Phi_{1}^{A}\), an invariant of \(\mathbb{A}\). Actually, as shown in [2, 26], the second condition above is less restrictive than the third one, so it can be discarded. Finally, the condition on \(M2\) becomes \[\begin{array}{l}T_{0}-R_{0}^{A}>0,\\ T_{1}(T_{0}^{2}-{R_{0}^{A}}^{2})-2{R_{1}^{A}}^{2}(T_{0}-R_{0}^{A}\cos 4\Phi_{A})>0.\end{array} \tag{17}\] We remark that conditions (17) are written in terms of tensor invariants of \(\mathbb{A}\), so they are frame independent. Of course, the order in which the components of \(\varepsilon\) and \(\kappa\) appear in the vector \(\{v\}\) of eq. (8) is completely arbitrary, any other order being allowed. In particular, if one chooses the order putting first the components of \(\kappa\), then those of \(\varepsilon\), i.e. if we had set \[\left\{v\right\}=\left\{\begin{array}{l}t_{\kappa}\\ r_{\kappa}\\ t_{\varepsilon}\\ r_{\varepsilon}\end{array}\right\}, \tag{18}\] then a similar expression would have been found for \(M2\), but with the index \(D\) replacing the index \(A\) everywhere, i.e. the conditions would concern in this case the moduli of \(\mathbb{D}\) and not those of \(\mathbb{A}\). Because of the arbitrariness of the order of the components of \(\{v\}\), we can conclude that the above conditions (15) and (17) must necessarily hold also for the components of \(\mathbb{D}\): \[T_{1}[T_{0}+R_{0}^{D}\cos 4(\Phi_{0}^{D}-\varphi_{\kappa})]-2{R_{1}^{D}}^{2}\cos^{2}2(\Phi_{1}^{D}-\varphi_{\kappa})>0\ \forall\varphi_{\kappa} \tag{19}\] and \[\begin{array}{l}T_{0}-R_{0}^{D}>0,\\ T_{1}(T_{0}^{2}-{R_{0}^{D}}^{2})-2{R_{1}^{D}}^{2}(T_{0}-R_{0}^{D}\cos 4\Phi_{D})>0,\end{array} \tag{20}\] with of course \(\Phi_{D}=\Phi_{0}^{D}-\Phi_{1}^{D}\). ### Condition on M3 Developing the determinant in eq. (13), we get the condition \[\begin{array}{l}h^{2}T_{1}[T_{0}T_{1}+T_{1}R_{0}^{A}\cos 4(\Phi_{0}^{A}-\varphi_{\varepsilon})-2{R_{1}^{A}}^{2}\cos^{2}2(\Phi_{1}^{A}-\varphi_{\varepsilon})\\ -6{R_{1}^{B}}^{2}\cos^{2}2(\Phi_{1}^{B}-\varphi_{\varepsilon})]>0\ \forall\varphi_{\varepsilon}.\end{array} \tag{21}\] To solve this condition, we consider just the term in square brackets, as \(T_{1}>0\), as already said; moreover, we introduce the angles \[\alpha=\Phi_{1}^{A}-\varphi_{\varepsilon}\ \Rightarrow\ \Phi_{0}^{A}-\varphi_{\varepsilon}=\Phi_{A}+\alpha, \tag{22}\] and \[\delta_{A}=\Phi_{1}^{B}-\Phi_{1}^{A}\ \Rightarrow\ \Phi_{1}^{B}-\varphi_{\varepsilon}=\delta_{A}+\alpha. \tag{23}\] The angle \(\delta_{A}\) is the _shift angle_ of tensor \(\mathbb{A}\) with respect to tensor \(\mathbb{B}\), whose geometrical meaning is shown in Fig. 3. With this, eq. (21) is equivalent to \[T_{1}[T_{0}+R_{0}^{A}\cos 4(\Phi_{A}+\alpha)]>2{R_{1}^{A}}^{2}\cos^{2}2\alpha+6{R_{1}^{B}}^{2}\cos^{2}2(\delta_{A}+\alpha)\quad\forall\alpha.
\tag{24}\] Using standard trigonometry, this can be transformed first to \[\begin{split}& T_{0}T_{1}-{R_{1}^{A}}^{2}-3{R_{1}^{B}}^{2}>(T_{1}R_{0}^{A}\sin 4\Phi_{A}+3{R_{1}^{B}}^{2}\sin 4\delta_{A})\sin 4\alpha-\\ &-(T_{1}R_{0}^{A}\cos 4\Phi_{A}-3{R_{1}^{B}}^{2}\cos 4\delta_{A}-{R_{1}^{A}}^{2})\cos 4\alpha\quad\forall\alpha,\end{split} \tag{25}\] then to \[\begin{split}& T_{0}T_{1}-{R_{1}^{A}}^{2}-3{R_{1}^{B}}^{2}>[(T_{1}R_{0}^{A}\sin 4\Phi_{A}+3{R_{1}^{B}}^{2}\sin 4\delta_{A})^{2}+\\ &+(T_{1}R_{0}^{A}\cos 4\Phi_{A}-3{R_{1}^{B}}^{2}\cos 4\delta_{A}-{R_{1}^{A}}^{2})^{2}]^{\frac{1}{2}}\cos 4(\alpha+\omega)\quad\forall\alpha,\end{split} \tag{26}\] with \[\omega=\frac{1}{4}\arctan\frac{T_{1}R_{0}^{A}\sin 4\Phi_{A}+3{R_{1}^{B}}^{2}\sin 4\delta_{A}}{T_{1}R_{0}^{A}\cos 4\Phi_{A}-3{R_{1}^{B}}^{2}\cos 4\delta_{A}-{R_{1}^{A}}^{2}}. \tag{27}\] To be satisfied for every possible value of \(\alpha\), eq. (26) gives the condition \[\begin{split}& T_{0}T_{1}-{R_{1}^{A}}^{2}-3{R_{1}^{B}}^{2}>[(T_{1}R_{0}^{A}\sin 4\Phi_{A}+3{R_{1}^{B}}^{2}\sin 4\delta_{A})^{2}+\\ &+(T_{1}R_{0}^{A}\cos 4\Phi_{A}-3{R_{1}^{B}}^{2}\cos 4\delta_{A}-{R_{1}^{A}}^{2})^{2}]^{\frac{1}{2}},\end{split} \tag{28}\] which splits into the two conditions \[T_{0}T_{1}-{R_{1}^{A}}^{2}-3{R_{1}^{B}}^{2}>0 \tag{29}\] and \[\begin{split}&(T_{0}T_{1}-{R_{1}^{A}}^{2}-3{R_{1}^{B}}^{2})^{2}>(T_{1}R_{0}^{A}\sin 4\Phi_{A}+3{R_{1}^{B}}^{2}\sin 4\delta_{A})^{2}+\\ &+(T_{1}R_{0}^{A}\cos 4\Phi_{A}-3{R_{1}^{B}}^{2}\cos 4\delta_{A}-{R_{1}^{A}}^{2})^{2}.\end{split} \tag{30}\] This last can easily be transformed to \[\begin{split}& T_{1}^{2}\left(T_{0}^{2}-{R_{0}^{A}}^{2}\right)+6T_{1}R_{0}^{A}{R_{1}^{B}}^{2}\cos 4\left(\Phi_{A}+\delta_{A}\right)-\\ &-2{R_{1}^{A}}^{2}\left[T_{1}\left(T_{0}-R_{0}^{A}\cos 4\Phi_{A}\right)-3{R_{1}^{B}}^{2}\right]-\\ &-6{R_{1}^{B}}^{2}\left(T_{0}T_{1}+{R_{1}^{A}}^{2}\cos 4\delta_{A}\right)>0.\end{split} \tag{31}\]

Figure 3: Polar diagrams of \(A_{1111}(\theta)\), thick line, \(B_{1111}(\theta)\), dashed line, and \(D_{1111}(\theta)\), thin line, with indication of the polar angles \(\Phi_{1}^{A},\Phi_{1}^{B},\Phi_{1}^{D}\) and of the shift angles \(\delta_{A}\) and \(\delta_{D}\), for the general case of a completely anisotropic coupled laminate.

Equations (29) and (31) are the two bounds to which the condition on \(M3\), eq. (13), reduces. Both concern moduli of \(\mathbb{A}\) and \(\mathbb{B}\), so these bounds describe the interactions existing between the moduli of the two tensors. Unlike eq. (29), which depends exclusively upon tensor invariants, the condition in eq. (31) depends also on \(\delta_{A}\), which is not a tensor invariant. However, \(\delta_{A}\) is a shift angle and, as such, frame independent. Hence, these bounds too, corresponding to the condition on \(M3\), are frame independent like those deriving from the condition on \(M2\). Using once more the remark made in Sect. 4.1 about the order to give to the components of vector \(\{v\}\) and operating as before, two more necessary bounds, concerning the moduli of \(\mathbb{B}\) and \(\mathbb{D}\), can be written as well, completely similar to eqs.
(29) and (31): \[\begin{split}& T_{0}T_{1}-{R_{1}^{D}}^{2}-3{R_{1}^{B}}^{2}>0,\\ & T_{1}^{2}\left(T_{0}^{2}-{R_{0}^{D}}^{2}\right)+6T_{1}R_{0}^{D}{R_{1}^{B}}^{2}\cos 4\left(\Phi_{D}+\delta_{D}\right)-\\ &-2{R_{1}^{D}}^{2}\left[T_{1}\left(T_{0}-R_{0}^{D}\cos 4\Phi_{D}\right)-3{R_{1}^{B}}^{2}\right]-\\ &-6{R_{1}^{B}}^{2}\left(T_{0}T_{1}+{R_{1}^{D}}^{2}\cos 4\delta_{D}\right)>0,\end{split} \tag{32}\] where \(\delta_{D}\) is the _shift angle_ between tensors \(\mathbb{B}\) and \(\mathbb{D}\), see again Fig. 3: \[\delta_{D}=\Phi_{1}^{B}-\Phi_{1}^{D}. \tag{33}\] ### Condition on M4 The development of the determinant in eq. (14) leads to a condition much more cumbersome to solve than the previous ones, for two reasons: on the one hand, the expression of this determinant is very complicated and, on the other hand, it must be positive for each possible value of \(\varphi_{\varepsilon}\) and \(\varphi_{\kappa}\), i.e. two independent fields, strains and curvatures, must be taken into account simultaneously. Developing the determinant in eq. (14) leads to the following condition \[\begin{split}&\{[T_{0}T_{1}+T_{1}R_{0}^{A}\cos 4(\Phi_{A}-\delta_{A}-\varphi_{\varepsilon})-2{R_{1}^{A}}^{2}\cos^{2}2(\delta_{A}+\varphi_{\varepsilon})]\times\\ &\times[T_{0}T_{1}+T_{1}R_{0}^{D}\cos 4(\Phi_{D}-\delta_{D}-\varphi_{\kappa})-2{R_{1}^{D}}^{2}\cos^{2}2(\delta_{D}+\varphi_{\kappa})]\}+\\ &+36{R_{1}^{B}}^{4}\cos^{2}2\varphi_{\varepsilon}\cos^{2}2\varphi_{\kappa}-6T_{0}T_{1}{R_{1}^{B}}^{2}(\cos^{2}2\varphi_{\varepsilon}+\cos^{2}2\varphi_{\kappa})-\\ &-3T_{1}^{2}{R_{0}^{B}}^{2}\cos^{2}2(2\Phi_{B}-\varphi_{\varepsilon}-\varphi_{\kappa})-\\ &-24R_{1}^{A}{R_{1}^{B}}^{2}R_{1}^{D}\cos 2(\delta_{A}+\varphi_{\varepsilon})\cos 2\varphi_{\varepsilon}\cos 2(\delta_{D}+\varphi_{\kappa})\cos 2\varphi_{\kappa}-\\ &-6T_{1}{R_{1}^{B}}^{2}\left[R_{0}^{A}\cos 4(\Phi_{A}-\delta_{A}-\varphi_{\varepsilon})\cos^{2}2\varphi_{\kappa}+\right.\\ &\left.+R_{0}^{D}\cos 4(\Phi_{D}-\delta_{D}-\varphi_{\kappa})\cos^{2}2\varphi_{\varepsilon}\right]+12T_{1}R_{0}^{B}R_{1}^{B}\cos 2(2\Phi_{B}-\varphi_{\varepsilon}-\varphi_{\kappa})\times\\ &\times\bigl{[}R_{1}^{A}\cos 2(\delta_{A}+\varphi_{\varepsilon})\cos 2\varphi_{\kappa}+R_{1}^{D}\cos 2(\delta_{D}+\varphi_{\kappa})\cos 2\varphi_{\varepsilon}\bigr{]}>0\ \forall\varphi_{\varepsilon},\varphi_{\kappa}.\end{split} \tag{35}\] This form is particularly interesting because the term in curly braces is actually the product of the condition \(M2\) for \(\mathbb{A}\) times the same condition for \(\mathbb{D}\), cf. eqs. (15) and (19). The rest of the above expression depends on the invariants of \(\mathbb{B}\), i.e. on \(R_{0}^{B},R_{1}^{B}\) and \(\Phi_{B}\); hence, this part represents the influence of coupling on the elastic bounds of \(\mathbb{A}\) and \(\mathbb{D}\). Whenever \(\mathbb{B}=\mathbb{O}\), all this part vanishes and condition \(M4\) becomes redundant, as is obvious, because it reduces simply to conditions (17) and (20), to be satisfied simultaneously. We remark also that, like for the case of \(M3\), eq. (35) depends only on tensor invariants and on the two shift angles \(\delta_{A}\) and \(\delta_{D}\), so it is frame independent too. Unfortunately, it is not possible to derive from the condition (35) any explicit bound for the elastic moduli.
In fact, on the one hand, a procedure like the one described in the previous Section for \(M3\) is no longer possible in this case; on the other hand, the problem could be tackled by looking for the minimum of the expression in eq. (35), i.e. by computing its gradient with respect to \(\varphi_{\varepsilon}\) and \(\varphi_{\kappa}\), equating it to zero, searching for the minimum among the solutions and imposing that such a minimum be positive. Trying to do this leads to trigonometric equations that cannot be solved analytically, though this remains possible numerically for any application case. ## 5 Special cases Besides the conditions found above, the anisotropic moduli of \(\mathbb{A},\mathbb{B}\) and \(\mathbb{D}\), i.e. \(R_{0}^{A},R_{0}^{B},R_{0}^{D},R_{1}^{A},R_{1}^{B},R_{1}^{D}\), must be non-negative, as each of them is the modulus of a complex quantity, [2], p. 144. Actually, \(T_{0}\) and \(T_{1}\) are also positive quantities, but no conditions need to be written for them because they are two elastic invariant moduli of the basic layer composing the laminate; being moduli of a real material, they automatically satisfy the elastic bounds. All the previous conditions reduce to the set of inequalities \[R_{0}^{A} \geq 0, \tag{36}\] \[R_{1}^{A} \geq 0,\] \[R_{0}^{B} \geq 0,\] \[R_{1}^{B} \geq 0,\] \[R_{0}^{D} \geq 0,\] \[R_{1}^{D} \geq 0,\] \[T_{0}-R_{0}^{A}>0,\] \[T_{1}(T_{0}^{2}-{R_{0}^{A}}^{2})-2{R_{1}^{A}}^{2}(T_{0}-R_{0}^{A}\cos 4\Phi_{A})>0,\] \[T_{0}T_{1}-{R_{1}^{A}}^{2}-3{R_{1}^{B}}^{2}>0,\] \[T_{1}^{2}\left(T_{0}^{2}-{R_{0}^{A}}^{2}\right)+6T_{1}R_{0}^{A}{R_{1}^{B}}^{2}\cos 4\left(\Phi_{A}+\delta_{A}\right)-\] \[-2{R_{1}^{A}}^{2}\left[T_{1}\left(T_{0}-R_{0}^{A}\cos 4\Phi_{A}\right)-3{R_{1}^{B}}^{2}\right]-\] \[-6{R_{1}^{B}}^{2}\left(T_{0}T_{1}+{R_{1}^{A}}^{2}\cos 4\delta_{A}\right)>0,\] \[\min[M4]>0.\] In fact, eqs. (20) and (32) are redundant for establishing the positive definiteness of matrix \([M]\). Though the last of the previous conditions cannot be given in an explicit analytical form, something more can be said in some particular cases, detailed below. We also notice that the condition on \(M1\) concerns just the material and is automatically satisfied; that on \(M2\) concerns exclusively \(\mathbb{A}\) (or alternatively \(\mathbb{D}\)); the one on \(M3\) regards \(\mathbb{A}\) (or \(\mathbb{D}\)) and \(\mathbb{B}\) together; finally, the condition on \(M4\) concerns all three tensors, \(\mathbb{A},\mathbb{B}\) and \(\mathbb{D}\), together. Finally, it is immediate to check that whenever \(\mathbb{B}=\mathbb{O}\) the conditions reduce simply to those already known, for \(\mathbb{A}\) and for \(\mathbb{D}\) separately: \[R_{0}^{A} \geq 0, \tag{37}\] \[R_{1}^{A} \geq 0,\] \[T_{0}-R_{0}^{A}>0,\] \[T_{1}(T_{0}^{2}-{R_{0}^{A}}^{2})-2{R_{1}^{A}}^{2}(T_{0}-R_{0}^{A}\cos 4\Phi_{A})>0,\] \[R_{0}^{D} \geq 0,\] \[R_{1}^{D} \geq 0,\] \[T_{0}-R_{0}^{D}>0,\] \[T_{1}(T_{0}^{2}-{R_{0}^{D}}^{2})-2{R_{1}^{D}}^{2}(T_{0}-R_{0}^{D}\cos 4\Phi_{D})>0.\] This actually means that not only the matrix in eq. (1), or, equivalently, matrix \([M]\) of eq. (10), must be positive definite, but also \(\mathbb{A}\) and \(\mathbb{D}\) separately, as is well known for uncoupled laminates; the same is not true for \(\mathbb{B}\). The bounds involving the moduli of \(\mathbb{B}\) are not found by imposing its positive definiteness: \(\mathbb{B}\) is actually not definite. ### Aligned orthotropic tensors Let us consider the case of _aligned orthotropic tensors_, i.e.
\[\delta_{A}=\lambda_{A}\frac{\pi}{2},\ \delta_{D}=\lambda_{D}\frac{\pi}{2},\ \Phi_{A}=k_{A}\frac{\pi}{4},\ \Phi_{B}=k_{B}\frac{\pi}{4},\ \Phi_{D}=k_{D}\frac{\pi}{4}, \tag{38}\] \[\lambda_{A},\lambda_{D},k_{A},k_{B},k_{D}\in\{0,1\}.\] In such a case, very common in practice, it is easy to check that \[\nabla M4=0\ \ \Longleftrightarrow\ \ \varphi_{\varepsilon}=h_{\varepsilon}\frac{\pi}{4},\ \varphi_{\kappa}=h_{\kappa}\frac{\pi}{4},\ h_{\varepsilon},h_{\kappa}\in\{0,1\}, \tag{39}\] i.e. the minimum of \(M4\) can only occur at one of the four points \[P_{1}=(0,0),\ P_{2}=\left(\frac{\pi}{4},0\right),\ P_{3}=\left(0,\frac{\pi}{4}\right),\ P_{4}=\left(\frac{\pi}{4},\frac{\pi}{4}\right). \tag{40}\] Denoting for conciseness by \(M4_{1},M4_{2},M4_{3}\) and \(M4_{4}\) the values taken by (35) at \(P_{1},P_{2},P_{3}\) and \(P_{4}\) respectively, the condition on \(M4\) can be put in the form \[\min\{M4_{1},M4_{2},M4_{3},M4_{4}\}>0, \tag{41}\] where \[M4_{1} =\left[T_{0}T_{1}+(-1)^{k_{A}}T_{1}R_{0}^{A}-2{R_{1}^{A}}^{2}\right]\left[T_{0}T_{1}+(-1)^{k_{D}}T_{1}R_{0}^{D}-2{R_{1}^{D}}^{2}\right]+ \tag{42}\] \[+36{R_{1}^{B}}^{4}-3T_{1}^{2}{R_{0}^{B}}^{2}-6T_{1}{R_{1}^{B}}^{2}\left[2T_{0}+(-1)^{k_{A}}R_{0}^{A}+(-1)^{k_{D}}R_{0}^{D}\right]+\] \[+12(-1)^{k_{B}}T_{1}R_{0}^{B}R_{1}^{B}\left[(-1)^{\lambda_{A}}R_{1}^{A}+(-1)^{\lambda_{D}}R_{1}^{D}\right]-\] \[-24(-1)^{\lambda_{A}}(-1)^{\lambda_{D}}R_{1}^{A}{R_{1}^{B}}^{2}R_{1}^{D},\] \[M4_{2} =T_{1}\left[T_{0}-(-1)^{k_{A}}R_{0}^{A}\right]\left[T_{1}\left(T_{0}+(-1)^{k_{D}}R_{0}^{D}\right)-2{R_{1}^{D}}^{2}-6{R_{1}^{B}}^{2}\right],\] \[M4_{3} =T_{1}\left[T_{0}-(-1)^{k_{D}}R_{0}^{D}\right]\left[T_{1}\left(T_{0}+(-1)^{k_{A}}R_{0}^{A}\right)-2{R_{1}^{A}}^{2}-6{R_{1}^{B}}^{2}\right],\] \[M4_{4} =T_{1}^{2}\left\{\left[T_{0}-(-1)^{k_{A}}R_{0}^{A}\right]\left[T_{0}-(-1)^{k_{D}}R_{0}^{D}\right]-3{R_{0}^{B}}^{2}\right\}.\] In such a situation, the conditions (17) on \(M2\) become \[\begin{split}& T_{0}-R_{0}^{A}>0,\\ &\left[T_{0}-(-1)^{k_{A}}R_{0}^{A}\right]\left[T_{1}\left(T_{0}+(-1)^{k_{A}}R_{0}^{A}\right)-2{R_{1}^{A}}^{2}\right]>0,\end{split} \tag{43}\] while those on \(M3\), eqs. (29) and (31), become \[\begin{split}& T_{0}T_{1}-{R_{1}^{A}}^{2}-3{R_{1}^{B}}^{2}>0,\\ & T_{1}\left[T_{0}-(-1)^{k_{A}}R_{0}^{A}\right]\left[T_{1}\left(T_{0}+(-1)^{k_{A}}R_{0}^{A}\right)-2{R_{1}^{A}}^{2}-6{R_{1}^{B}}^{2}\right]>0.\end{split} \tag{44}\] Because \(T_{1}>0\) and by eqs. (43)\({}_{1}\) and (44)\({}_{2}\), condition (43)\({}_{2}\) is redundant and the above conditions can be rewritten as \[\begin{split}& T_{0}-R_{0}^{A}>0,\\ & T_{0}T_{1}-{R_{1}^{A}}^{2}-3{R_{1}^{B}}^{2}>0,\\ & T_{1}\left[T_{0}+(-1)^{k_{A}}R_{0}^{A}\right]-2{R_{1}^{A}}^{2}-6{R_{1}^{B}}^{2}>0.\end{split} \tag{45}\] A scrutiny of \(M4_{2}\) and \(M4_{3}\) shows that, if conditions (45)\({}_{1,3}\) are taken into account in eqs. (42)\({}_{2,3}\), the result is two conditions exactly analogous to those that could be obtained from eqs. (20) and (32), i.e. from writing the conditions on \(M2\) and \(M3\) also for \(\mathbb{D}\).
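Before collecting the whole set of conditions, the following minimal Python sketch (an illustrative helper of ours, not from the paper) gathers the bounds of eqs. (45) and (41)-(42) into a single feasibility check for given polar moduli of an aligned orthotropic laminate; the numerical values used in the example are arbitrary assumptions:

```python
import numpy as np

def aligned_feasible(T0, T1, R0A, R1A, R0B, R1B, R0D, R1D,
                     kA=0, kB=0, kD=0, lA=0, lD=0):
    """Feasibility check for aligned orthotropic A, B, D, eqs. (41)-(45).

    Assumes the anisotropic moduli passed in are already non-negative,
    as required by the first six inequalities of the full set.
    """
    sA, sB, sD = (-1)**kA, (-1)**kB, (-1)**kD
    tA, tD = (-1)**lA, (-1)**lD
    conds = [
        T0 - R0A > 0,                                # eq. (45)_1
        T0*T1 - R1A**2 - 3*R1B**2 > 0,               # eq. (45)_2
        T1*(T0 + sA*R0A) - 2*R1A**2 - 6*R1B**2 > 0,  # eq. (45)_3
    ]
    M41 = ((T0*T1 + sA*T1*R0A - 2*R1A**2) * (T0*T1 + sD*T1*R0D - 2*R1D**2)
           + 36*R1B**4 - 3*T1**2*R0B**2
           - 6*T1*R1B**2*(2*T0 + sA*R0A + sD*R0D)
           + 12*sB*T1*R0B*R1B*(tA*R1A + tD*R1D)
           - 24*tA*tD*R1A*R1B**2*R1D)
    M42 = T1*(T0 - sA*R0A)*(T1*(T0 + sD*R0D) - 2*R1D**2 - 6*R1B**2)
    M43 = T1*(T0 - sD*R0D)*(T1*(T0 + sA*R0A) - 2*R1A**2 - 6*R1B**2)
    M44 = T1**2*((T0 - sA*R0A)*(T0 - sD*R0D) - 3*R0B**2)
    conds.append(min(M41, M42, M43, M44) > 0)        # eq. (41)
    return all(conds)

# illustrative values only (MPa): a weakly coupled laminate
print(aligned_feasible(T0=92.38, T1=86.97, R0A=40.0, R1A=40.0,
                       R0B=8.0, R1B=8.0, R0D=40.0, R1D=40.0))  # True
```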
Finally, the whole set of conditions for the case at hand is \[\begin{split}& R_{0}^{A}\geq 0,\\ & R_{1}^{A}\geq 0,\\ & R_{0}^{B}\geq 0,\\ & R_{1}^{B}\geq 0,\\ & R_{0}^{D}\geq 0,\\ & R_{1}^{D}\geq 0,\\ & T_{0}-R_{0}^{A}>0,\\ & T_{0}T_{1}-{R_{1}^{A}}^{2}-3{R_{1}^{B}}^{2}>0,\\ & T_{1}\left[T_{0}+(-1)^{k_{A}}R_{0}^{A}\right]-2{R_{1}^{A}}^{2}-6{R_{1}^{B}}^{2}>0,\\ &\min\{M4_{1},M4_{2},M4_{3},M4_{4}\}>0.\end{split} \tag{46}\] It is likely that the above conditions constitute general bounds, i.e. that they are valid also for other, more general, situations or, in other words, that the aligned orthotropic case is the worst one. However, this is just a conjecture that cannot be proved. ### Square symmetric \(\mathbb{B}\) Let us now consider the case of a square-symmetric coupling \(\mathbb{B}\), i.e. a laminate designed to have \(R_{1}^{B}=0\). In such a case, eq. (36)\({}_{9}\) reduces to \[T_{0}T_{1}-{R_{1}^{A}}^{2}>0, \tag{47}\] which is redundant, as previously discussed in Sect. 4.1. Moreover, eq. (36)\({}_{10}\) becomes exactly eq. (36)\({}_{8}\), so it can be discarded too. Actually, in this special case the whole set of conditions on \(M3\) is redundant. This is quite natural, because when \(R_{1}^{B}=0\) the only parameter accounting for coupling disappears from \(M3\), which consequently reduces to conditions already accounted for by \(M2\). Moreover, eq. (35) reduces to \[\begin{split}&\left[T_{0}T_{1}+T_{1}R_{0}^{A}\cos 4(\varPhi_{A}-\delta_{A}-\varphi_{\varepsilon})-2{R_{1}^{A}}^{2}\cos^{2}2(\delta_{A}+\varphi_{\varepsilon})\right]\times\\ &\times\left[T_{0}T_{1}+T_{1}R_{0}^{D}\cos 4(\varPhi_{D}-\delta_{D}-\varphi_{\kappa})-2{R_{1}^{D}}^{2}\cos^{2}2(\delta_{D}+\varphi_{\kappa})\right]-\\ &-3T_{1}^{2}{R_{0}^{B}}^{2}\cos^{2}2(2\varPhi_{B}-\varphi_{\varepsilon}-\varphi_{\kappa})>0\ \forall\varphi_{\varepsilon},\varphi_{\kappa}.\end{split} \tag{48}\] Also in this case it is interesting to consider the case of orthotropic aligned tensors, eq. (38), putting conventionally \(\varPhi_{1}^{B}=\varPhi_{0}^{B}\) (if \(R_{1}^{B}=0\), \(\varPhi_{1}^{B}\) is not defined). Repeating _verbatim_ the steps of Sect. 5.1 eventually gives the conditions \[\begin{split} R_{0}^{A}&\geq 0,\\ R_{1}^{A}&\geq 0,\\ R_{0}^{B}&\geq 0,\\ R_{0}^{D}&\geq 0,\\ R_{1}^{D}&\geq 0,\\ T_{0}-R_{0}^{A}&>0,\\ T_{1}\left[T_{0}+(-1)^{k_{A}}R_{0}^{A}\right]-2{R_{1}^{A}}^{2}&>0,\\ \min\{M4_{1},M4_{2},M4_{3},M4_{4}\}&>0,\end{split} \tag{49}\] with now \[\begin{split} M4_{1}&=\left[T_{1}(T_{0}+(-1)^{k_{A}}R_{0}^{A})-2{R_{1}^{A}}^{2}\right]\times\\ &\times\left[T_{1}(T_{0}+(-1)^{k_{D}}R_{0}^{D})-2{R_{1}^{D}}^{2}\right]-3T_{1}^{2}{R_{0}^{B}}^{2},\\ M4_{2}&=T_{1}\left[T_{0}-(-1)^{k_{A}}R_{0}^{A}\right]\left[T_{1}(T_{0}+(-1)^{k_{D}}R_{0}^{D})-2{R_{1}^{D}}^{2}\right],\\ M4_{3}&=T_{1}\left[T_{0}-(-1)^{k_{D}}R_{0}^{D}\right]\left[T_{1}(T_{0}+(-1)^{k_{A}}R_{0}^{A})-2{R_{1}^{A}}^{2}\right],\\ M4_{4}&=T_{1}^{2}\left\{\left[T_{0}-(-1)^{k_{A}}R_{0}^{A}\right]\left[T_{0}-(-1)^{k_{D}}R_{0}^{D}\right]-3{R_{0}^{B}}^{2}\right\}.\end{split} \tag{50}\] ### Fully square symmetric laminates A more particular case, and even more interesting for applications, is that of _fully square symmetric laminates_, i.e. of \[R_{1}^{A}=R_{1}^{B}=R_{1}^{D}=0, \tag{51}\] which is, e.g., automatically obtained if \(R_{1}=0\), [11], i.e. if the basic layer is itself square symmetric, as actually happens in the very common case of layers reinforced by balanced fabrics.
In such a case, the previous conditions (49) reduce to only \[R_{0}^{A} \geq 0, \tag{52}\] \[R_{0}^{B} \geq 0,\] \[R_{0}^{D} \geq 0,\] \[T_{0}-R_{0}^{A}>0,\] \[\min\{M4_{1},M4_{2},M4_{3},M4_{4}\}>0,\] with \[M4_{1} =\left[T_{0}+(-1)^{k_{A}}R_{0}^{A}\right]\left[T_{0}+(-1)^{k_{D}}R_{0}^{D}\right]-3{R_{0}^{B}}^{2}, \tag{53}\] \[M4_{2} =\left[T_{0}-(-1)^{k_{A}}R_{0}^{A}\right]\left[T_{0}+(-1)^{k_{D}}R_{0}^{D}\right],\] \[M4_{3} =\left[T_{0}-(-1)^{k_{D}}R_{0}^{D}\right]\left[T_{0}+(-1)^{k_{A}}R_{0}^{A}\right],\] \[M4_{4} =\left[T_{0}-(-1)^{k_{A}}R_{0}^{A}\right]\left[T_{0}-(-1)^{k_{D}}R_{0}^{D}\right]-3{R_{0}^{B}}^{2}.\] This special case of coupled laminates is interesting for applications because condition (51) is sufficient to obtain coupled thermally stable laminates, [20], i.e. coupled laminates that preserve their shape also under a temperature change, like the one occurring during the curing phase of pre-preg layers. ### \(R_{0}\)-orthotropic laminates Let us now consider the case of a laminate designed to have \[R_{0}^{A}=R_{0}^{B}=R_{0}^{D}=0, \tag{54}\] which is a special case of orthotropy, named \(R_{0}\)-_orthotropy_, [10, 22]. This kind of laminate can be obtained by simply stacking layers with \(R_{0}=0\). For this special case, eqs. (36) reduce to (we recall that, like \(T_{1}\), also \(T_{0}>0\) automatically, because it is the modulus of a real material) \[R_{1}^{A} \geq 0, \tag{55}\] \[R_{1}^{B} \geq 0,\] \[R_{1}^{D} \geq 0,\] \[T_{0}T_{1}-2{R_{1}^{A}}^{2}>0,\] \[T_{0}T_{1}-{R_{1}^{A}}^{2}-3{R_{1}^{B}}^{2}>0,\] \[T_{0}^{2}T_{1}^{2}-2{R_{1}^{A}}^{2}(T_{0}T_{1}-3{R_{1}^{B}}^{2})-6{R_{1}^{B}}^{2}(T_{0}T_{1}+{R_{1}^{A}}^{2}\cos 4\delta_{A})>0,\] \[\min_{\varphi_{\varepsilon},\varphi_{\kappa}}\left\{\left[T_{0}T_{1}-2{R_{1}^{A}}^{2}\cos^{2}2(\delta_{A}+\varphi_{\varepsilon})\right]\left[T_{0}T_{1}-2{R_{1}^{D}}^{2}\cos^{2}2(\delta_{D}+\varphi_{\kappa})\right]+\right.\] \[+36{R_{1}^{B}}^{4}\cos^{2}2\varphi_{\varepsilon}\cos^{2}2\varphi_{\kappa}-6T_{0}T_{1}{R_{1}^{B}}^{2}(\cos^{2}2\varphi_{\varepsilon}+\cos^{2}2\varphi_{\kappa})-\] \[\left.-24R_{1}^{A}{R_{1}^{B}}^{2}R_{1}^{D}\cos 2(\delta_{A}+\varphi_{\varepsilon})\cos 2\varphi_{\varepsilon}\cos 2(\delta_{D}+\varphi_{\kappa})\cos 2\varphi_{\kappa}\right\}>0.\] If once again we consider the case of aligned tensors, eq. (38), then, proceeding as in Sect. 5.1, we get that condition (55)\({}_{4}\) is redundant and finally \[\begin{split}& R_{1}^{A}\geq 0,\\ & R_{1}^{B}\geq 0,\\ & R_{1}^{D}\geq 0,\\ & T_{0}T_{1}-{R_{1}^{A}}^{2}-3{R_{1}^{B}}^{2}>0,\\ & T_{0}T_{1}-2{R_{1}^{A}}^{2}-6{R_{1}^{B}}^{2}>0,\\ &\min\{M4_{1},M4_{2},M4_{3}\}>0,\end{split} \tag{56}\] with now \[\begin{split}& M4_{1}=\left(T_{0}T_{1}-2{R_{1}^{A}}^{2}\right)\left(T_{0}T_{1}-2{R_{1}^{D}}^{2}\right)-\\ &-12{R_{1}^{B}}^{2}\left[T_{0}T_{1}-3{R_{1}^{B}}^{2}+2(-1)^{\lambda_{A}}(-1)^{\lambda_{D}}R_{1}^{A}R_{1}^{D}\right],\\ & M4_{2}=T_{0}T_{1}\left[T_{0}T_{1}-2{R_{1}^{D}}^{2}-6{R_{1}^{B}}^{2}\right],\\ & M4_{3}=T_{0}T_{1}\left[T_{0}T_{1}-2{R_{1}^{A}}^{2}-6{R_{1}^{B}}^{2}\right].\end{split} \tag{57}\] Notice that in this case \(M4_{4}=T_{0}^{2}T_{1}^{2}>0\) always. ### Coupled isotropic laminates A last particular and rather interesting case is that of coupled laminates having an isotropic behavior in extension and in bending. Such laminates can be obtained in different ways, e.g.
applying the Werren and Norris conditions, [28], to laminates of the quasi-trivial type with \(\mathbb{A}=\mathbb{D}\), [16, 29]; an example of this kind of plate is the 18-layer laminate whose stacking sequence is \[[0^{\circ},60^{\circ},-60^{\circ}_{2},60^{\circ}_{2},-60^{\circ},0^{\circ},60^{\circ}_{2},0^{\circ},-60^{\circ},0^{\circ},-60^{\circ},0^{\circ}_{2},-60^{\circ},60^{\circ}].\] In such a situation, \(R_{0}^{A}=R_{1}^{A}=R_{0}^{D}=R_{1}^{D}=0\), while the angles \(\varPhi_{A},\varPhi_{D},\delta_{A},\delta_{D}\) are no longer defined. Notice that necessarily \(\mathbb{A}=\mathbb{D}\), because both reduce to the isotropic part alone, i.e. to \(T_{0}\) and \(T_{1}\), which are those of the basic layer, while \(\mathbb{B}\) is necessarily anisotropic, because \(T_{0}^{B}=T_{1}^{B}=0\) and \(R_{0}^{B}+R_{1}^{B}\neq 0\). In this situation, eq. (35) becomes \[\begin{split}& T_{0}^{2}T_{1}^{2}+36{R_{1}^{B}}^{4}\cos^{2}2\varphi_{\varepsilon}\cos^{2}2\varphi_{\kappa}-6T_{0}T_{1}{R_{1}^{B}}^{2}(\cos^{2}2\varphi_{\varepsilon}+\cos^{2}2\varphi_{\kappa})-\\ &-3T_{1}^{2}{R_{0}^{B}}^{2}\cos^{2}2(2\varPhi_{B}-\varphi_{\varepsilon}-\varphi_{\kappa})>0\ \forall\varphi_{\varepsilon},\varphi_{\kappa},\end{split} \tag{58}\] eq. (36)\({}_{9}\) is redundant with respect to eq. (36)\({}_{10}\) and finally conditions (36) reduce to only \[\begin{split}& R_{0}^{B}\geq 0,\\ & R_{1}^{B}\geq 0,\\ & T_{0}T_{1}-6{R_{1}^{B}}^{2}>0,\\ &\min[M4]>0.\end{split} \tag{59}\] Once more, if \(\mathbb{B}\) is orthotropic, i.e. \(\Phi_{B}=k_{B}\frac{\pi}{4},k_{B}\in\{0,1\}\), then \[M4_{1} =T_{1}^{2}(T_{0}^{2}-3{R_{0}^{B}}^{2})-12{R_{1}^{B}}^{2}(T_{0}T_{1}-3{R_{1}^{B}}^{2}), \tag{60}\] \[M4_{2} =M4_{3}=T_{0}T_{1}(T_{0}T_{1}-6{R_{1}^{B}}^{2}),\] \[M4_{4} =T_{1}^{2}(T_{0}^{2}-3{R_{0}^{B}}^{2}),\] and conditions (59) become simply \[R_{0}^{B}\geq 0, \tag{61}\] \[R_{1}^{B}\geq 0,\] \[T_{0}T_{1}-6{R_{1}^{B}}^{2}>0,\] \[\min\{M4_{1},M4_{2},M4_{4}\}>0.\] ## 6 Conclusion Coming back to the questions posed in Sect. 1, the results found above show that: 1. it is possible to establish some bounds involving _also_ the moduli of \(\mathbb{B}\), but not _exclusively_ them; 2. the bounds on the moduli of \(\mathbb{A}\) and \(\mathbb{D}\) known for the case of uncoupled laminates are still valid, but some other bounds, relating the moduli of \(\mathbb{A},\mathbb{B}\) and \(\mathbb{D}\), are added in the case of coupled laminates; 3. not all the bounds relating the moduli of \(\mathbb{A},\mathbb{B}\) and \(\mathbb{D}\) can be given in an explicit form; 4. all the bounds depend exclusively on tensor invariants and shift angles, so all of them are frame independent; 5. the existence of some kinds of symmetry affects the bounds and normally allows them to be expressed in a simpler form. The previous results also show that \(\mathbb{B}\) remains an undefined tensor. The bounds found in this paper hence reveal the existence of some supplementary conditions to be satisfied by the moduli of \(\mathbb{A}\) and \(\mathbb{D}\), conditions imposing some sort of interaction with the moduli of \(\mathbb{B}\). This result, unknown until now, is interesting _per se_, because it shows that coupling is not unbounded, but also in practical applications, namely in optimization problems, where the feasibility domain must be correctly defined in the search for optimal coupled laminates.
There are, as often happens, other ways to approach the same complicated problem, and some of them have been explored by the author, but all of them necessarily lead to a greater number of conditions.
2309.03367
Self-Supervised Masked Digital Elevation Models Encoding for Low-Resource Downstream Tasks
The lack of quality labeled data is one of the main bottlenecks for training Deep Learning models. As the task increases in complexity, there is a higher penalty for overfitting and unstable learning. The typical paradigm employed today is Self-Supervised learning, where the model attempts to learn from a large corpus of unstructured and unlabeled data and then transfer that knowledge to the required task. Some notable examples of self-supervision in other modalities are BERT for Large Language Models, Wav2Vec for Speech Recognition, and the Masked AutoEncoder for Vision, which all utilize Transformers to solve a masked prediction task. GeoAI is uniquely poised to take advantage of the self-supervised methodology due to the decades of data collected, little of which is precisely and dependably annotated. Our goal is to extract building and road segmentations from Digital Elevation Models (DEM) that provide a detailed topography of the earth's surface. The proposed architecture is the Masked Autoencoder pre-trained on ImageNet (with the limitation that there is a large domain discrepancy between ImageNet and DEM) with an UperNet Head for decoding segmentations. We tested this model with 450 and 50 training images only, utilizing roughly 5% and 0.5% of the original data respectively. On the building segmentation task, this model obtains an 82.1% Intersection over Union (IoU) with 450 Images and 69.1% IoU with only 50 images. On the more challenging road detection task the model obtains an 82.7% IoU with 450 images and 73.2% IoU with only 50 images. Any hand-labeled dataset made today about the earth's surface will be immediately obsolete due to the constantly changing nature of the landscape. This motivates the clear necessity for data-efficient learners that can be used for a wide variety of downstream tasks.
Priyam Mazumdar, Aiman Soliman, Volodymyr Kindratenko, Luigi Marini, Kenton McHenry
2023-09-06T21:20:10Z
http://arxiv.org/abs/2309.03367v1
# Self-Supervised Masked Digital Elevation Models Encoding for Low-Resource Downstream Tasks ###### Abstract The lack of quality labeled data is one of the main bottlenecks for training Deep Learning models. As the task increases in complexity, there is a higher penalty for overfitting and unstable learning. The typical paradigm employed today is Self-Supervised learning, where the model attempts to learn from a large corpus of unstructured and unlabeled data and then transfer that knowledge to the required task. Some notable examples of self-supervision in other modalities are BERT for Large Language Models, Wav2Vec for Speech Recognition, and the Masked AutoEncoder for Vision, which all utilize Transformers to solve a masked prediction task. GeoAI is uniquely poised to take advantage of the self-supervised methodology due to the decades of data collected, little of which is precisely and dependably annotated. Our goal is to extract building and road segmentations from Digital Elevation Models (DEM) that provide a detailed topography of the earth's surface. The proposed architecture is the Masked Autoencoder pre-trained on ImageNet (with the limitation that there is a large domain discrepancy between ImageNet and DEM) with an UperNet Head for decoding segmentations. We tested this model with 450 and 50 training images only, utilizing roughly 5% and 0.5% of the original data respectively. On the building segmentation task, this model obtains an 82.1% Intersection over Union (IoU) with 450 Images and 69.1% IoU with only 50 images. On the more challenging road detection task the model obtains an 82.7% IoU with 450 images and 73.2% IoU with only 50 images. Any hand-labeled dataset made today about the earth's surface will be immediately obsolete due to the constantly changing nature of the landscape. This motivates the clear necessity for data-efficient learners that can be used for a wide variety of downstream tasks. Self-Supervised Learning, Deep Learning, Masked Image Modeling, GeoAI, DEM, Segmentation, Small-Data ## 1 Introduction Deep Learning has provided powerful and highly predictive models for a wide variety of tasks, but it often comes at the cost of the immense amount of data needed to curtail overfitting and promote generalization Schmitt et al. (2020). The onset of the Transfer Learning methodology has opened these tools to solve more niche problems with smaller datasets Pires de Lima and Marfurt (2019). A common strategy is to utilize a model that has been trained to perform some objective and exploit its feature extraction capabilities to fine-tune it toward another problem. Unfortunately, many of these pre-trained models are trained in a supervised fashion, using labeled data, which may not be available for some areas of interest Schmitt et al. (2020). Ideally, features should be learned through the dataset itself without any labels. This technique of Self-Supervised pre-training is currently the method of choice for building a model that generalizes features of the data which can be used for any downstream task Wang et al. (2021). There are two main features that enable these models to be so efficient: **(i)** utilizing an encoder network that ingests randomly masked data and a lightweight decoder network that attempts to reconstruct the masked portions; in this process, the model is forced to learn global relationships that can be utilized in downstream tasks. **(ii)** employing a Transformer architecture Vaswani et al.
(2017) which encodes relationships between tokens via the Attention mechanism. In NLP and Speech, the attention gives the model a temporal awareness by learning the relationships between words. For Vision, the model encodes which patches of the image are most related to each other, offering powerful global spatial awareness. The specific variant we are using in this paper is the Masked AutoEncoder He et al. (2021). The contribution of this paper is to evaluate the use of the Masked Autoencoder pre-trained on ImageNet for different downstream segmentation tasks of digital elevation models (i.e., segmenting building footprints and roads), particularly given the differences in learning domain between traditional optical images and digital elevation data (e.g., a single channel with a large dynamic range). ## 2 Related Works Several researchers utilized weakly supervised learning to work around the lack of training datasets. For example, Wei and Ji (2021) applied weakly supervised training successfully for the task of road segmentation. Similarly, Soliman et al. (2022) evaluated an existing dataset to train models to extract building footprints from DEM and Wang et al. (2020) used a single labeled pixel per image to segment optical satellite images. However, advances in utilizing self-supervised learning strategies in modeling language Devlin et al. (2018) and in computer vision Dosovitskiy et al. (2020) have motivated researchers to evaluate the self-supervised approach to train deep learning models on feature representations from geospatial and remote sensing data without human annotation. For example, Gao et al. (2022) demonstrated that a masked autoencoder is capable of outperforming supervised methods in different downstream tasks by learning how to reconstruct masked patches in optical satellite images. Zhu et al. (2023) proposed SpectralMAE, which is a spectral-masked autoencoder model designed to reconstruct hyperspectral images. Similarly, Scheibenreif et al. (2023) applied a masked autoencoder for Hyperspectral Image Classification. Wang et al. (2021) developed Mask DeepLab as a model for change detection by teaching the model how to construct time difference images, while Reed et al. (2022) demonstrated how to incorporate a sense of geographic scale in MAE learning. Figure 1: Masked Autoencoder Backbone + UperNet Head. The Masked Autoencoder is composed of the typical encoder/decoder structure, where during pre-training, 75% of the image is masked out and we perform a reconstruction loss. Once pre-trained, outputs of the encoder from four separate transformer blocks are stacked to form a feature pyramid, which is then fused via the UperNet architecture for segmentation. However, there is a lack of studies evaluating self-supervised models in segmenting Digital Elevation Models. This is especially true since DEMs have contrasting characteristics to optical images and could be subject to information loss during data pre-processing, for example, the loss of their large dynamic range from local normalization. In addition, it is important to evaluate the need for pre-training MAE on a large corpus of DEMs similar to some of the foundation models in optical remote sensing Sun et al. (2022). ## 3 Model Architecture Details We utilized the base Masked AutoEncoder (MAE) that was pre-trained on ImageNet Russakovsky et al. (2014) as the backbone of our segmentation model. The original images are 224 x 224 pixels and the ViT Patch Embedding module splits this into 196 patches, each with shape 16 x 16 pixels.
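As a concrete illustration of the patchify-and-mask step described here and in the following paragraph, the minimal PyTorch sketch below (our own simplification, not the authors' code) splits a 224 x 224 image into 16 x 16 patches, embeds them, and randomly hides 75% of the tokens:

```python
import torch
import torch.nn as nn

patch, dim, mask_ratio = 16, 768, 0.75
embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # ViT patch embedding

x = torch.randn(1, 3, 224, 224)               # a dummy 3-channel input image
tokens = embed(x).flatten(2).transpose(1, 2)  # (1, 196, 768): 14 x 14 patches

n = tokens.shape[1]
keep = int(n * (1 - mask_ratio))              # 49 visible tokens
idx = torch.randperm(n)[:keep]                # random subset, as in MAE
visible = tokens[:, idx, :]                   # only these enter the encoder
print(visible.shape)                          # torch.Size([1, 49, 768])
```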
Each patch of the image is passed through a linear layer to encode it as a token with embedding dimension of 768. In the masking step, 75% of the tokens are randomly hidden. The Encoder portion of the MAE has 12 transformer blocks and each block has 12 heads in the Multi-Headed Attention computation. The MAE opts for a lightweight decoder (rather than a symmetrical encoder/decoder architecture), so the token embeddings are first reduced in size from 768 to 512. There are 8 transformer blocks in the decoder, each having 16 heads. Another linear layer takes the output from the decoder and returns the tokens back to the original shape (16 x 16) so we can compute the reconstruction loss on the masked tokens. While the MAE backbone is used to encode our images, an additional module is necessary to actually generate segmentation masks. To perform this task we utilize the UperNet head. The UperNet architecture employs a feature pyramid network (FPN) Lin et al. (2016) that fuses information extracted from different layers of the Masked Autoencoder (specifically transformer blocks 3, 5, 7, 11). A similar architecture was previously used in Li et al. (2023) for extraction of permafrost features from the Arctic, although with a convolutional ResNet50 backbone. All implementation was done with the PyTorch Framework Russakovsky et al. (2014) and MMSegmentation Contributors (2020). All models were trained for 3000 iterations (with validation every 150 iterations) with a batch size of 8, and the best validation accuracy is presented. The backbone was frozen and only the UperNet head was tuned for the specific segmentation task. The AdamW optimizer was used with a weight decay of 0.05, along with a Polynomial Learning Rate scheduler to help curtail overfitting and promote smooth learning on small datasets. We also used a pixel-wise categorical cross-entropy loss with added class weights due to the large imbalance of background pixels versus label pixels. For comparison, we additionally trained a UNet Ronneberger et al. (2015) model from scratch for the same small-data tasks to compare against some more traditional methods. Figure 2: Example of missing road segmentation labels in the unfiltered road dataset. This prompted us to hand-curate a set of higher-quality labels for our road segmentation task. ## 4 Results and Discussion We look at how our model, a frozen MAE with a trained UperNet head, compares against the more typical UNet model for this task. We took samples of 450, 200, 50, and 10 images from both the visually-labeled road and building training datasets, with a constant 50 images for validation. The IoU for the predicted class for each model is presented in Table 1. Our methods provide reliable segmentation performance on very few samples. On the building segmentation task, we obtain a performance of 55.5% IoU with only 10 images, roughly double the 27.9% obtained by a UNet. On the road segmentation task, training on 10 images does not yield a dependable model. As we increase the number of training samples in both road and building segmentation, our IoU continues to increase, with the MAE consistently outperforming the UNet. The road segmentation task, specifically, has varying levels of difficulty depending on the nature of the landscape and the elevation difference. The original data were elevation rasters with a high dynamic range. In flat lands with well-defined roads like highways (e.g.
Figure 3), the streets are much more clearly separated after locally normalizing the image between the maximum and minimum elevation. On the other hand, in residential areas (e.g. Figure 4), normalizing with the maximum elevations on top of buildings conceals the roads. Even so, our model detects roads in these challenging conditions. \begin{table} \begin{tabular}{c|c c|c c} \multicolumn{5}{c}{Building and Road Segmentation IoU} \\ \hline Num. & MAE & UNet & MAE & UNet \\ Samples & (B) & (B) & (R) & (R) \\ 10 & 55.5\% & 27.9\% & 37.1\% & 23.4\% \\ 50 & 69.1\% & 55.0\% & 73.2\% & 70.4\% \\ 200 & 76.1\% & 68.2\% & 81.3\% & 77.9\% \\ 450 & 82.1\% & 76.5\% & 82.7\% & 78.4\% \\ \hline \end{tabular} \end{table} Table 1: IoU for Building (B) and Road (R) segmentation. Figure 3: Road segmentation with minimal elevation difference, from a model trained on 450 images. Figure 4: Road segmentation with a high elevation difference, from a model trained on 450 images. We should also notice that although the manually labeled dataset was curated carefully, there are a few errors related to the alignment of the visual label dataset and the tiling, especially around the edges of the manually labeled data. As we can see in Figure 5, labels may be missing for some of the buildings. Nevertheless, our model is capable of predicting them accurately. In addition, road labels are currently drawn with an average width; however, road widths vary within and between cities. Noisy data can sometimes be unavoidable without a significant effort of continuously maintaining labels; therefore, we wanted to test the robustness of our model by training on a large amount of variable-quality road segmentation labels. As shown above, we had roughly 10,000 training images of roads, 500 of which were visually inspected for completeness. We trained on the entire set of images but validated on the visually inspected validation set to allow comparison with the other experiments. As shown in Table 2, the MAE and UNet obtain a 78.2% and 74.4% IoU respectively, lower than our previous training on 450 high-quality images. This offers some additional evidence that using a smaller sample of quality data outperforms having a large corpus of unverified data (weak supervision). \begin{table} \begin{tabular}{c|c} Noisy Labels Segmentation IoU \\ \hline MAE & 78.2\% \\ UNet & 74.4\% \\ \end{tabular} \end{table} Table 2: Results from training each model on all available road data (with noisy labels) and validating on the 50 high-quality images. Figure 5: Missing building segmentations. Even in a curated dataset, there can still be some errors in the labels, leading to a slight under-reporting of true model performance. ## 5 Conclusions We present self-supervised learning as an effective strategy to perform various downstream tasks with limited data resources. We obtained encouraging results using an ImageNet pre-trained backbone, even with the domain discrepancy from DEM. Following this proof-of-concept, we plan to pre-train an MAE on a large corpus of DEM datasets so that the numerical encodings are more salient and provide a substantial performance jump. In the future, this model can be used for various downstream tasks: (1) Segmentation with the UperNet head, (2) Classification with a simple linear head, and (3) Object Detection using MAE as a backbone to the Detectron architecture Li et al. (2022).
Training a DEM-specific MAE is likely to enhance model performance, as there is evidence that scaling the model up provides significant gains He et al. (2021). Access to an adequate amount of DEM data and compute environments will enable pre-training large Geo-AI models to provide a data-efficient DEM encoder and make low-resource learning possible for geospatial tasks. ## 6 Acknowledgements This work is supported by funding from the National Science Foundation, award number 1927729.
2309.04408
Wi-BFI: Extracting the IEEE 802.11 Beamforming Feedback Information from Commercial Wi-Fi Devices
Recently, researchers have shown that the beamforming feedback angles (BFAs) used for Wi-Fi multiple-input multiple-output (MIMO) operations can be effectively leveraged as a proxy of the channel frequency response (CFR) for different purposes. Examples are passive human activity recognition and device fingerprinting. However, even though the BFAs report frames are sent in clear text, there is not yet a unified open-source tool to extract and decode the BFAs from the frames. To fill this gap, we developed Wi-BFI, the first tool that allows retrieving Wi-Fi BFAs and reconstructing the beamforming feedback information (BFI) - a compressed representation of the CFR - from the BFAs frames captured over the air. The tool supports BFAs extraction within both IEEE 802.11ac and 802.11ax networks operating on radio channels with 160/80/40/20 MHz bandwidth. Both multi-user and single-user MIMO feedback can be decoded through Wi-BFI. The tool supports real-time and offline extraction and storage of BFAs and BFI. The real-time mode also includes a visual representation of the channel state that continuously updates based on the collected data. Wi-BFI code is open source and the tool is also available as a pip package.
Khandaker Foysal Haque, Francesca Meneghello, Francesco Restuccia
2023-09-08T16:12:27Z
http://arxiv.org/abs/2309.04408v2
# Wi-BFI: Extracting the IEEE 802.11 Beamforming Feedback Information from Commercial Wi-Fi Devices ###### Abstract. Recently, researchers have shown that the beamforming feedback angles (BFAs) used for Wi-Fi multiple-input multiple-output (MIMO) operations can be effectively leveraged as a proxy of the channel frequency response (CFR) for different purposes. Examples are passive human activity recognition and device fingerprinting. However, even though the BFAs report frames are sent in clear text, there is not yet a unified open-source tool to extract and decode the BFAs from the frames. To fill this gap, we developed Wi-BFI, the first tool that allows retrieving Wi-Fi BFAs and reconstructing the beamforming feedback information (BFI) - a compressed representation of the CFR - from the BFAs frames captured over the air. The tool supports BFAs extraction within both IEEE 802.11ac and 802.11ax networks operating on radio channels with 160/80/40/20 MHz bandwidth. Both multi-user and single-user MIMO feedback can be decoded through Wi-BFI. The tool supports real-time and offline extraction and storage of BFAs and BFI. The real-time mode also includes a visual representation of the channel state that continuously updates based on the collected data. Wi-BFI code is open source and the tool is also available as a pip package1. Footnote 1: [https://github.com/kfoysalhaque/Wi-BFI](https://github.com/kfoysalhaque/Wi-BFI) Wi-Fi, beamforming, compressed beamforming feedback, multiple-input multiple-output (MIMO) For instance, Wi-BFI can simultaneously decode the feedback of an IEEE 802.11ac compliant device operating at 40 MHz (station 1 in Figure 1), and an IEEE 802.11ax compliant device at 160 MHz (station 2 in Figure 1) through a single capture. Wi-BFI does not require access to the monitored devices in the MIMO network. By contrast, extracting the CFR requires physical access to, and firmware modification of, the monitored devices (Bartner et al., 2017). Because of this, state-of-the-art work has shown significant interest in BFI (Bartner et al., 2017; Bartner et al., 2018). Recent work has demonstrated that BFAs/BFI can be successfully used in various applications like wireless human sensing (Bartner et al., 2018) and radio-fingerprinting (Bartner et al., 2018). While in previous work the procedure to extract the BFAs/BFI was customized to the specific network setup, Wi-BFI can be used with any network configuration, thus enabling further research in the area. ### Summary of Paper Contributions This work provides the following contributions. \(\bullet\) We propose Wi-BFI, an open-source Python-based tool to extract 802.11ac/ax BFAs in the wild and reconstruct the BFI. Wi-BFI is compatible with any network configuration and operating system. \(\bullet\) We enable the extraction of BFAs and reconstruction of BFI from multiple devices simultaneously, without any physical access to them. The monitored devices can implement different standards and operate on different bands. \(\bullet\) We provide both real-time plotting and saving of the extracted BFAs and reconstructed BFI from live captures or previously collected traces. \(\bullet\) We present a use case entailing human activity classification with BFAs collected from IEEE 802.11ac devices operating at 80 MHz. By leveraging spatial diversity, the BFAs-based classifier achieves up to 99.28% accuracy. ## 2.
System Architecture of Wi-BFI In the following, we will use the superscripts \(T\) and \(\dagger\) to denote the transpose and the complex conjugate transpose (i.e., the Hermitian). We define with \(\angle\)C the matrix containing the phases of complex-valued matrix C. Moreover, \(\text{diag}(c_{1},\ldots,c_{j})\) indicates the diagonal matrix with elements \((c_{1},\ldots,c_{j})\) on the main diagonal. The \((c_{1},c_{2})\) entry of matrix C is defined by \([\text{C}]_{c_{1},c_{2}}\), while \(\mathbb{I}_{c}\) refers to an identity matrix of size \(c\times c\) and \(\mathbb{I}_{c\times d}\) is a \(c\times d\) generalized identity matrix. ### Wi-BFI Operation Principle Wi-BFI leverages the way MIMO is implemented in Wi-Fi networks following the mechanism standardized in IEEE 802.11. To enable MIMO, the beamformer _pre-codes_ the data packets by linearly combining the signals to be simultaneously transmitted to the different beamformees. To do so, the beamformer uses a precoding matrix \(\mathbf{W}\) which is derived from the CFR matrix \(\mathbf{H}\), describing how the environment modifies the irradiated signals in their path to the receiver. The CFR needs to be estimated by each beamformee and fed back to the beamformer to allow proper pre-coding. The CFR estimation process is called _channel sounding_ and is depicted in Figure 2. The procedure is triggered by the beamformer, which periodically broadcasts a null data packet (NDP) to estimate the MIMO channel between itself and the connected beamformees (**step 1** in Figure 2). Since its purpose is to sound the channel, the NDP _is not beamformed_. This is particularly advantageous as the resulting _CFR estimation is not affected by inter-stream or inter-user interference_. The NDP is a packet containing sequences of bits - named long training fields (LTFs) - the decoded version of which is known by the beamformees. The LTFs are transmitted over the different beamformer antennas in subsequent time slots, thus allowing each beamformee to estimate the CFR of the links between its receiver antennas and the beamformer transmitter antennas. The LTFs are modulated - as the data fields - through orthogonal frequency-division multiplexing (OFDM) by dividing the signal bandwidth into \(K\) partially overlapping and orthogonal sub-channels spaced apart by \(1/T\). The input bits are grouped into OFDM symbols, \(\mathbf{a}=[a_{-K/2},\ldots,a_{K/2-1}]\), where \(a_{k}\) is named OFDM sample. The \(K\) OFDM samples are digitally modulated and transmitted through the \(K\) OFDM sub-channels, thus occupying the channel for \(T\) seconds.

Figure 1. Wi-BFI overview.

Figure 2. Wi-BFI: overall system architecture.

The transmitted LTF signal is \[s_{\text{tx}}(t)=e^{j2\pi f_{\text{c}}t}\sum_{k=-K/2}^{K/2-1}a_{k}e^{j2\pi kt/T}, \tag{1}\] where \(f_{\text{c}}\) is the carrier frequency. The NDP is received and decoded by each beamformee (**step 2** in Figure 2) to estimate the CFR H. The different LTFs are used to estimate the channel over each pair of transmitter (TX) and receiver (RX) antennas, for every OFDM sub-channel. This generates a \(K\times M\times N\) matrix H for each beamformee, where \(M\) and \(N\) are respectively the numbers of TX and RX antennas. Next, the CFR is compressed - to reduce the channel overhead - and fed back to the beamformer. Using \(\mathbf{H}_{k}\) to identify the \(M\times N\) sub-matrix of H containing the CFR samples related to sub-channel \(k\), the _compressed beamforming feedback_ is obtained as follows ([7], Chapter 13).
First, \(\mathbf{H}_{k}\) is decomposed through singular value decomposition (SVD) as \[\mathbf{H}_{k}^{T}=\mathbf{U}_{k}\mathbf{S}_{k}\mathbf{Z}_{k}^{\dagger}, \tag{2}\] where \(\mathbf{U}_{k}\) and \(\mathbf{Z}_{k}\) are, respectively, \(N\times N\) and \(M\times M\) unitary matrices, while the singular values are collected in the \(N\times M\) diagonal matrix \(\mathbf{S}_{k}\). Using this decomposition, the complex-valued beamforming matrix \(\mathbf{V}_{k}\) is defined by collecting the first \(N_{\text{SS}}\leq N\) columns of \(\mathbf{Z}_{k}\). Such a matrix is used by the beamformer to compute the pre-coding weights for the \(N_{\text{SS}}\) spatial streams directed to the beamformee. Then, \(\mathbf{V}_{k}\) is converted into polar coordinates to avoid transmitting the complete matrix. Specifically, \(\mathbf{V}_{k}\) is decomposed as the product of \(\mathbf{D}_{k,i}\) and \(\mathbf{G}_{k,\ell,i}\) matrices, which are defined as \[\mathbf{D}_{k,i}=\begin{bmatrix}\mathbb{I}_{i-1}&0&\dots&\dots&0\\ 0&e^{j\phi_{k,i,i}}&0&\dots&\vdots\\ \vdots&0&\ddots&0&\vdots\\ \vdots&\dots&0&e^{j\phi_{k,M-1,i}}&0\\ 0&\dots&\dots&0&1\end{bmatrix}, \tag{3}\] \[\mathbf{G}_{k,\ell,i}=\begin{bmatrix}\mathbb{I}_{i-1}&0&\dots&\dots&0\\ 0&\cos\psi_{k,\ell,i}&0&\sin\psi_{k,\ell,i}&\vdots\\ \vdots&0&\mathbb{I}_{\ell-i-1}&0&\vdots\\ \vdots&-\sin\psi_{k,\ell,i}&0&\cos\psi_{k,\ell,i}&0\\ 0&\dots&\dots&0&\mathbb{I}_{M-\ell}\end{bmatrix}, \tag{4}\] where the values of the \(\phi\) and \(\psi\) angles are obtained through the procedure detailed in Alg. 1. The number of \(\phi\) and \(\psi\) angles depends on the specific network configuration, i.e., the number of transmitter antennas and spatial streams. Using these matrices and \(\tilde{\mathbf{D}}_{k}\) (see line 2 in Alg. 1), \(\mathbf{V}_{k}\) can be written as \(\mathbf{V}_{k}=\tilde{\mathbf{V}}_{k}\tilde{\mathbf{D}}_{k}\), with \[\tilde{\mathbf{V}}_{k}=\prod_{i=1}^{\min(N_{\text{SS}},M-1)}\left(\mathbf{D}_{k,i}\prod_{\ell=i+1}^{M}\mathbf{G}_{k,\ell,i}^{T}\right)\mathbb{I}_{M\times N_{\text{SS}}}, \tag{5}\] where the products represent matrix multiplications. In the \(\tilde{\mathbf{V}}_{k}\) matrix, the last row - i.e., the feedback for the \(M\)-th transmitter antenna - consists of non-negative real numbers by construction. Using this transformation, the beamformee is only required to transmit the \(\phi\) and \(\psi\) angles to the beamformer, as they allow reconstructing \(\tilde{\mathbf{V}}_{k}\) precisely. Moreover, it has been proved (see [7], Chapter 13) that the beamforming performance is equivalent when using \(\mathbf{V}_{k}\) or \(\tilde{\mathbf{V}}_{k}\) to construct the steering matrix \(\mathbf{W}\). In turn, \(\tilde{\mathbf{D}}_{k}\) is not fed back to the beamformer. To further reduce the channel occupancy, the angles are quantized using \(b_{\phi}\) bits for \(\phi\) and \(b_{\psi}=b_{\phi}-2\) bits for \(\psi\), as follows: \[[\phi,\psi]=\left[\pi\left(\frac{1}{2^{b_{\phi}}}+\frac{q_{\phi}}{2^{b_{\phi}-1}}\right),\pi\left(\frac{1}{2^{b_{\psi}+2}}+\frac{q_{\psi}}{2^{b_{\psi}+1}}\right)\right]. \tag{6}\] In IEEE 802.11ac/ax, \(b_{\phi}=\{9,7\}\) bits and \(b_{\phi}=\{6,4\}\) bits are used for multi-user multi-input multi-output (MU-MIMO) and single-user multi-input multi-output (SU-MIMO) systems, respectively. The quantized values - \(q_{\phi}=\{0,\dots,2^{b_{\phi}}-1\}\) and \(q_{\psi}=\{0,\dots,2^{b_{\psi}}-1\}\) - are packed into the compressed beamforming frame (**step 3** in Figure 2), and such a _BFAs frame_ is transmitted to the beamformer (**step 4** in Figure 2).
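To make the decompression concrete, the sketch below dequantizes the angles per Equation (6) and rebuilds \(\tilde{\mathbf{V}}_{k}\) for one sub-channel via Equations (3)-(5) using numpy. This is an illustrative reading of the equations rather than the Wi-BFI source code; the function names and the dictionary-based angle indexing (keys \((l,i)\) for \(\phi_{l,i}\) and \(\psi_{l,i}\)) are our own choices.

```python
import numpy as np

def dequantize_angles(q_phi, q_psi, b_phi):
    """Invert the quantization of Equation (6); b_psi = b_phi - 2."""
    b_psi = b_phi - 2
    phi = np.pi * (1.0 / 2**b_phi + np.asarray(q_phi) / 2**(b_phi - 1))
    psi = np.pi * (1.0 / 2**(b_psi + 2) + np.asarray(q_psi) / 2**(b_psi + 1))
    return phi, psi

def reconstruct_v_tilde(phi, psi, M, N_ss):
    """Rebuild V~_k for one sub-channel from its angles (Equations (3)-(5)).

    phi and psi are dicts keyed by (l, i), e.g. phi[(2, 1)] is phi_21.
    """
    V = np.eye(M, N_ss, dtype=complex)           # I_{M x N_SS} in Eq. (5)
    for i in range(min(N_ss, M - 1), 0, -1):     # apply right-most factors first
        for l in range(M, i, -1):                # G^T_{M,i} ... G^T_{i+1,i}
            G = np.eye(M, dtype=complex)         # Givens rotation, Eq. (4)
            c, s = np.cos(psi[(l, i)]), np.sin(psi[(l, i)])
            G[i - 1, i - 1], G[i - 1, l - 1] = c, s
            G[l - 1, i - 1], G[l - 1, l - 1] = -s, c
            V = G.T @ V
        D = np.eye(M, dtype=complex)             # phase matrix, Eq. (3)
        for l in range(i, M):
            D[l - 1, l - 1] = np.exp(1j * phi[(l, i)])
        V = D @ V
    return V

# 2x1 example: one phi angle (phi_11) and one psi angle (psi_21).
phi, psi = dequantize_angles([37], [9], b_phi=6)
V = reconstruct_v_tilde({(1, 1): phi[0]}, {(2, 1): psi[0]}, M=2, N_ss=1)
print(np.round(V, 3))
```

A useful sanity check on any such reconstruction is that the last row of the returned matrix is real and non-negative, as noted above.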
The \(b_{\phi}\) and \(b_{\psi}\) can be read in the VHT/HE MIMO control field, together with other information like the number of columns (\(N_{\text{SS}}\)) and rows (\(M\)) in the beamforming matrix and the channel bandwidth (see Figure 3). Each BFI contains \(A\) angles for each of the \(K\) OFDM sub-channels, for a total of \(A\times K\) angles per frame. We remark that, since MU-MIMO requires fine-grained channel sounding - around every 10 milliseconds, according to [8] - it is fundamental to process the BFI in a fast manner at the beamformer. For this reason, and since cryptography would lead to excessive delays, _the angles are currently transmitted over-the-air unencrypted._ Wi-BFI captures the BFAs frames (**step 5** in Figure 2) with the Python-based network monitoring tool Pyshark. However, any other network monitoring tool like tcpdump or Wireshark can also be used with Wi-BFI. Note that the beamformer continuously triggers the channel sounding procedure on the connected beamformees. This makes _the BFI contain very rich, reliable, and spatially diverse information_. Moreover, as BFAs frames are broadcast by the beamformees, the information for all the beamformees _can be collected with a single capture_ by any Wi-Fi-compliant device. Wi-BFI leverages this procedure to collect the BFAs frames from multiple devices operating on different bands with a single capture, and reconstructs the BFI from the extracted BFAs. Thus, Wi-BFI does not need any direct access to the beamformees or any hardware-specific tool, ultimately reducing the complexity of the data collection procedure. Wi-BFI consists of four main steps, as depicted in Figure 4. Firstly, in **step I**, Wi-BFI groups the BFAs frames based on the following arguments: **(i)** the standard: IEEE 802.11ac or IEEE 802.11ax; **(ii)** the bandwidth: 20 MHz, 40 MHz, 80 MHz, or 160 MHz; **(iii)** the device, through the MAC address. After grouping the BFAs frames from any live capture or earlier captured trace, Wi-BFI performs bit-wise parsing through the frames to extract the BFAs (**step II** in Figure 4). For that, we follow the IEEE 802.11 BFAs frame structure, presented in Figure 3. Specifically, a BFAs frame has three main parts: **(i)** _Radiotap Header_, **(ii)** _IEEE 802.11 Action No Ack_, _Flags_, and **(iii)** _IEEE 802.11 Wireless Management_. The fields of each part of a BFAs frame, along with the corresponding byte values, are mentioned in Figure 3. Wi-BFI parses through each of the fields of the BFAs frame to extract the corresponding information, including the BFAs. The BFAs are structured in a matrix of dimension \(P\times K\times A\), where \(P\) represents the total number of packets in the capture. As an example, consider a \(4\times 2\) IEEE 802.11ax SU-MIMO system operating on a 160 MHz bandwidth channel, with the beamformer having four antennas and the beamformee having two antennas: each BFAs frame has \(K=500\) OFDM data sub-channels with \(A=10\) BFAs each. For this setup, Figure 5 depicts the first two BFAs (\(\phi_{11}\) and \(\phi_{21}\)) for each of the OFDM sub-channels (x-axis) for one BFAs frame. Note that, according to the standard, the number of sub-channels changes with the operating standard and transmission bandwidth [9]. Moreover, we recall that the number of BFAs depends on the network configuration, i.e., the number of transmit antennas \(M\) and the number of spatial streams \(N_{\text{SS}}\). The number and order of BFAs for some network configurations are reported in Table 1.
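The bit-wise parsing of step II boils down to walking the angle bit-stream in the order of Table 1, taking \(b_{\phi}\) bits for each \(\phi\) angle and \(b_{\psi}=b_{\phi}-2\) bits for each \(\psi\) angle. The sketch below illustrates the idea; the LSB-first packing assumption and the function names are ours, and a real parser must match the exact bit order of the captured frames.

```python
def angle_order(M, n_ss):
    """BFA order for an M x n_ss feedback, reproducing Table 1:
    for each column i, first the phi_{l,i}, then the psi_{l,i}."""
    order = []
    for i in range(1, min(n_ss, M - 1) + 1):
        order += [('phi', l, i) for l in range(i, M)]          # phi_{i,i}..phi_{M-1,i}
        order += [('psi', l, i) for l in range(i + 1, M + 1)]  # psi_{i+1,i}..psi_{M,i}
    return order

def unpack_bfas(payload, M, n_ss, b_phi):
    """Bit-wise parse the quantized angles of one sub-channel (step II).

    payload: bytes carrying the angle bit-stream. We assume LSB-first
    packing here; the exact bit order must match the captured frames.
    """
    b_psi = b_phi - 2
    stream = int.from_bytes(payload, 'little')
    offset, out = 0, {}
    for kind, l, i in angle_order(M, n_ss):
        width = b_phi if kind == 'phi' else b_psi
        out[(kind, l, i)] = (stream >> offset) & ((1 << width) - 1)
        offset += width
    return out

# A 4x2 feedback carries 10 angles in the Table 1 order (80 bits at b_phi=9).
print(angle_order(4, 2))
print(unpack_bfas(bytes(10), M=4, n_ss=2, b_phi=9)[('phi', 1, 1)])  # 0
```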
Figure 4. Main steps of the Wi-BFI tool.
Figure 3. IEEE 802.11ac/ax BFI report frame structure.
After extracting the BFAs, Wi-BFI reconstructs the \(\tilde{\mathbf{V}}_{k}\) matrix following Equation 5 (see **step III** of Figure 4). The dimension of the reconstructed \(\tilde{\mathbf{V}}_{k}\) matrix is \(P\times K\times M\times N_{\text{SS}}\). For the above-mentioned \(4\times 2\) IEEE 802.11ax system at 160 MHz, the reconstructed \(\tilde{\mathbf{V}}_{k}\) has \(K=500\), \(M=4\), and \(N_{\text{SS}}=2\), making the dimension \(P\times 500\times 4\times 2\). The \(\tilde{\mathbf{V}}_{k}\) matrices reconstructed by Wi-BFI from the captured angles for the first two transmitter antennas are depicted in Figure 6, i.e., for \((m=1,n_{\text{SS}}=1)\), \((m=1,n_{\text{SS}}=2)\), \((m=2,n_{\text{SS}}=1)\), and \((m=2,n_{\text{SS}}=2)\), with \(m\in\{1,\ldots,M\}\) and \(n_{\text{SS}}\in\{1,\ldots,N_{\text{SS}}\}\) being the transmitter antenna and the spatial stream indices, respectively. Figure 6 reports \(\tilde{\mathbf{V}}_{k}\) for a single frame. Figure 7 depicts the time evolution of \(\tilde{\mathbf{V}}_{k}\) (multiple frames). As the final step, as depicted in **step IV** of Figure 4, Wi-BFI either plots or saves the extracted BFAs and the \(\tilde{\mathbf{V}}_{k}\) matrix (BFI). For further visual representations of the BFAs, please refer to our Wi-BFI demonstration video.2 Footnote 2: Wi-BFI demonstration available at [https://youtu.be/0k7vTRCmMBw](https://youtu.be/0k7vTRCmMBw)
## 3. Wi-BFI Use Case: Sensing With BFAs
Beyond connectivity, the unprecedented rise in the number of smartphones, laptops, and many other Wi-Fi-enabled devices (Beng et al., 2018) will pave the way to revolutionary Wi-Fi sensing applications including activity recognition (Beng et al., 2018) and radio-fingerprinting (Beng et al., 2018), among others (Beng et al., 2018). The vast majority of such applications leverage the uncompressed CFR measurements obtained from pilot symbols in the Wi-Fi packets to characterize the propagation channel. Despite leading to good performance, CFR-based techniques require manual extraction and recording of the CFR, which is currently not supported by the IEEE 802.11 standard. This has led to the usage of CFR extractor tools running custom-tailored firmware (Beng et al., 2018; Wang et al., 2018). Even if 802.11 could eventually support CFR extraction in the future, (i) it would require additional device-specific processing to extract it from the chip, thus increasing energy consumption; (ii) multi-device sensing would require tight synchronization among collectors to align the time of the individual CFR measurements.
\begin{table} \begin{tabular}{|c|c|c|} \hline **MIMO** & **No. angles** & **Order of the angles** \\ \hline \(2\times 1\) & 2 & \(\phi_{11}\), \(\psi_{21}\) \\ \hline \(2\times 2\) & 2 & \(\phi_{11}\), \(\psi_{21}\) \\ \hline \(3\times 1\) & 4 & \(\phi_{11}\), \(\phi_{21}\), \(\psi_{21}\), \(\psi_{31}\) \\ \hline \(3\times 2\) & 6 & \(\phi_{11}\), \(\phi_{21}\), \(\psi_{21}\), \(\psi_{31}\), \(\phi_{22}\), \(\psi_{32}\) \\ \hline \(3\times 3\) & 6 & \(\phi_{11}\), \(\phi_{21}\), \(\psi_{21}\), \(\psi_{31}\), \(\phi_{22}\), \(\psi_{32}\) \\ \hline \(4\times 1\) & 6 & \(\phi_{11}\), \(\phi_{21}\), \(\phi_{31}\), \(\psi_{21}\), \(\psi_{31}\), \(\psi_{41}\) \\ \hline \(4\times 2\) & 10 & \(\phi_{11}\), \(\phi_{21}\), \(\phi_{31}\), \(\psi_{21}\), \(\psi_{31}\), \(\psi_{41}\), \(\phi_{22}\), \(\phi_{32}\), \(\psi_{32}\), \(\psi_{42}\) \\ \hline \(4\times 3\) & 12 & \(\phi_{11}\), \(\phi_{21}\), \(\phi_{31}\), \(\psi_{21}\), \(\psi_{31}\), \(\psi_{41}\), \(\phi_{22}\), \(\phi_{32}\), \(\psi_{32}\), \(\psi_{42}\), \(\phi_{33}\), \(\psi_{43}\) \\ \hline \end{tabular} \end{table} Table 1. Order of BFAs in the BFI report frame in Figure 3 for different MIMO configurations.
Figure 5. BFAs (\(\phi_{11}\) and \(\phi_{21}\)) of each sounded sub-channel of a \(4\times 2\) IEEE 802.11ax system at 160 MHz.
Figure 6. Magnitude of \(\tilde{\mathbf{V}}_{k}\) of each sounded sub-channel for different transmit antennas \(m\in\{1,\ldots,M\}\) and spatial streams \(n_{\text{SS}}\in\{1,\ldots,N_{\text{SS}}\}\) of a \(4\times 2\) IEEE 802.11ax system at 160 MHz.
Figure 7. Time evolution of \(\tilde{\mathbf{V}}\). The columns refer to the transmit antenna and rows to the spatial streams.
On top of these, most of the existing CFR tools need direct access to the device and only work on devices implementing specific standards and operating on channels with certain bandwidths. This prevents different applications from leveraging Wi-Fi signals opportunistically and hinders mass adoption at the same time. On the contrary, we argue that the BFI, and, in turn, the BFAs, are more effective metrics for developing sensing applications. In this section, we show how BFAs can be leveraged for human activity classification through IEEE 802.11ac MU-MIMO systems operating on channels with 80 MHz of bandwidth. A comparison of BFAs with the traditional CFR-based approach, along with the domain generalization capability of BFAs, is presented in (Bordes and Kwiepka, 2018).
### Learning Architecture
Owing to its excellent performance, the convolutional neural network (CNN) is a popular choice in a wide range of applications including wireless sensing (Bordes and Kwiepka, 2018; Kwiepka, 2018). The convolutional layer is the basis of the CNN, performing convolution operations on the elements of the input data to extract features. Thus, to demonstrate the potential of BFAs in Wi-Fi sensing tasks like activity classification, we leverage a CNN architecture based on only three VGG-based blocks (Krizhevsky et al., 2014). Specifically, the network stacks three convolutional blocks (conv-blocks) and a max-pooling (MaxPool) layer, as detailed in Figure 8. The output of the MaxPool layer is flattened, and then the Softmax activation function is used to obtain the probability distribution over the activity labels. Each conv-block consists of two 2-dimensional convolutional layers.
Inspired by the design of the VGG block, we use a kernel size of \(3\times 3\) and a stride of 1 in each of the convolutional layers (Krizhevsky et al., 2014). We use batch normalization and the rectified linear unit (ReLU) activation function in each conv-block to avoid exploding or vanishing gradients and to introduce non-linearity into the network, respectively. Our VGG-based CNN consists of three such conv-blocks with 32, 64, and 128 filters, respectively.
### Experimental Setup
The experiments are conducted in three indoor scenarios with three different subjects, as presented in Figure 9. In each of the setups, we have a \(3\times 3\) IEEE 802.11ac MU-MIMO system operating on channel 153 with center frequency \(f_{c}=5.77\) GHz and 80 MHz of bandwidth. Each of the systems has one beamformer and three beamformees (stations (STAs)), with \(M=3\) and \(N=1\) antennas enabled, respectively. We use off-the-shelf Netgear Nighthawk X4S AC2600 routers both as the beamformer and as the beamformees to set up the network. We consider three human subjects performing ten different activities: _walking, standing, rotating, waving, drinking, hands up and down, reading a book, jogging, boxing, and clapping_. UDP data streams are sent from the beamformer to the beamformees in the downlink direction to trigger the channel sounding. We record the BFAs frames of all three stations in a single capture with Wi-BFI and extract the BFAs associated with each beamformee. For training the model, we capture the BFAs frames for 3 minutes for each activity performed by each subject. Then, for each of the beamformees, we divide the extracted BFAs into non-overlapping windows of 10 packets each. The windows are fed one at a time to the learning block. To obtain the ground truth, we also capture the video streams of the subjects performing the activities. The experimental setup and snapshots from the ground-truth video streams with three different subjects and activities are shown in Figure 10.
Figure 8. CNN-based learning architecture.
Figure 9. Experimental setup for activity classification with BFAs.
### Sensing Performance with BFAs
Figure 11 presents the classification performance of BFAs with the baseline CNN when we consider different numbers of STAs. The average classification accuracies are respectively 94.35%, 98.59%, and 99.28% when we use the data from only one STA, two STAs combined, and three STAs combined. It is noticeable that the performance is comparatively poor when we have only one STA. To further investigate this aspect, we report the classification performance of each individual beamformee in Figure 12. We notice that, in different setups, the individual performance of the beamformees varies significantly. For example, in setup 1, STA B performs worst, with an accuracy of around 80%. On the other hand, in setup 2 and setup 3, STA C and STA B perform worst, with accuracies of 82.81% and 92.97%, respectively. This is due to the fact that the physical location of the STA may cause the channel between that particular STA and the beamformer to be in a deep fade, making the model perform poorly. However, this can be improved by considering more than one beamformee for the classification, as presented in Figure 11.
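As a reference point for the architecture described above, a compact PyTorch sketch is given below. The three conv-blocks (each two \(3\times 3\) convolutions with batch normalization and ReLU, with 32/64/128 filters), the MaxPool layer, and the flatten-plus-Softmax head follow the text; everything else - the input window layout, the adaptive pooling used to fix the flattened size, and all names - is our assumption.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Two 3x3 conv layers, each followed by batch norm and ReLU."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.net(x)

class BfaClassifier(nn.Module):
    """Three VGG-style conv-blocks (32/64/128 filters) + MaxPool + linear head."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            ConvBlock(1, 32), ConvBlock(32, 64), ConvBlock(64, 128),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),  # our assumption: fix the flattened size
        )
        self.head = nn.Linear(128 * 4 * 4, n_classes)
    def forward(self, x):
        # x: (batch, 1, 10, K*A) windows of 10 BFAs frames
        return self.head(self.features(x).flatten(1))
        # apply Softmax (or cross-entropy) on the returned logits

# Example window shape: a 3x1 feedback has A=4 angles (Table 1) and 80 MHz
# has K=234 data sub-channels, as used in this section's setup.
x = torch.randn(8, 1, 10, 234 * 4)
print(BfaClassifier()(x).shape)  # torch.Size([8, 10])
```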
To evaluate the impact of the sub-channel granularity on the activity classification accuracy, we report in Figure 13 the performance of the CNN-based model, presented in Figure 8, when we consider different numbers of OFDM sub-channels. The average performance over all the setups, considering all 234 data sub-channels at 80 MHz bandwidth, is 99.19%. Even though the overall performance degrades when the number of sub-channels decreases, the reduction in performance is not significant. On the other hand, a decrease in the number of sub-channels results in a significant reduction of the computational burden. For example, the performance decreases by only 0.44% and 1.83% when the number of sub-channels is decreased to 160 and 80, respectively. However, this reduces the computational burden by 1.46 and 2.92 times, respectively. Considering only 20 sub-channels, the overall performance is 93.58%, with an 11.7 times reduction of the computational burden. This gives us an idea of the trade-off between performance and computational burden. Overall, the performance analysis shows that BFAs-based sensing performs comparably to the CFR-based sensing works in the literature (Gupta et al., 2018). Moreover, BFAs-based sensing can leverage the spatial diversity gained through the MU-MIMO system, which most CFR-based approaches cannot.
Figure 10. Experimental setup for activity classification with BFAs.
Figure 11. Classification performance with BFAs for different numbers of STAs.
Figure 12. Classification performance with BFAs for each individual STA.
## 4. Related Work
Over the last few years, a few efforts have been made to leverage BFAs frames for various applications including radio-fingerprinting (Kanda et al., 2018) and Wi-Fi sensing (Meneghello et al., 2018; Chen et al., 2018; Chen et al., 2019; Chen et al., 2019). Meneghello et al. leveraged \(\tilde{\mathbf{V}}_{k}\) matrices to perform Wi-Fi radio fingerprinting on the move, based on the intuition that imperfections in the transmitter's radio circuitry percolate onto the BFI (Kanda et al., 2018). Kanda et al. extracted \(\tilde{\mathbf{V}}_{k}\) matrices from BFAs frames to perform respiratory rate estimation (Chen et al., 2018). Kondo et al. evaluated the sensing ability of uni-directional and bi-directional BFI and demonstrated that the BFI computed from the access point (uplink beamforming) has better sensitivity in comparison to the BFI computed by the stations (downlink beamforming) (Hahara et al., 2020). Itahara et al. leveraged BFI to estimate the angle of departure for multiple propagation paths and achieved performance comparable to that of CFR-based approaches (Haque et al., 2020). Haque et al. performed sensing by classifying 20 different human activities with compressed BFAs (Haque et al., 2020). Even though there has been increased interest in BFI-based research, there is no unified open-source tool to extract BFAs and reconstruct the BFI from the captured BFAs. This prevents the research community from investigating this field further. Thus, we developed Wi-BFI to extract BFAs and reconstruct the \(\tilde{\mathbf{V}}_{k}\) matrices from any IEEE 802.11 device, with all possible bandwidths and network configurations.
## 5. Conclusions and Remarks
In this paper, we have proposed Wi-BFI, the first tool to extract IEEE 802.11ac/ax beamforming feedback angles (BFAs) and reconstruct the beamforming feedback information (BFI) in the wild from both SU-MIMO and MU-MIMO systems operating at 20 MHz/40 MHz/80 MHz/160 MHz bandwidth and with any network configuration. Wi-BFI operates at multiple bands and with multiple Wi-Fi standards simultaneously, without the need for any direct access to the beamformer or beamformees.
We provided an example of leveraging BFAs for one of the most popular wireless sensing tasks, i.e., human activity recognition, achieving up to 99.28% recognition accuracy. Other BFI use cases include radio-fingerprinting, Wi-Fi localization, and optimization of the resource allocation in MU-MIMO networks, among others. To further promote BFI-based research we have made the Wi-BFI tool compatible with any operating system and we open-sourced it together with a detailed explanation of the operations and practical examples of use. ## Acknowledgments This work is funded in part by the National Science Foundation (NSF) grant CNS-2134973, CNS-2120447, and ECCS-2229472, by the Air Force Office of Scientific Research under contract number FA9550-23-1-0261, by the Office of Naval Research under award number N00014-23-1-2221, and by the Fulbright Schuman Program, administered by the Fulbright Commission in Brussels and jointly financed by the U.S. Department of State, and the Directorate-General for Education, Youth, Sport and Culture (DG.EAC) of the European Commission. The views and opinions expressed in this work are those of the authors and do not necessarily reflect those of the funding institutions.
2309.13174
Robust self-propulsion in sand using simply controlled vibrating cubes
Much of the Earth and many surfaces of extraterrestrial bodies are composed of non-cohesive particulate matter. Locomoting on granular terrain is challenging for common robotic devices, either wheeled or legged. In this work, we discover a robust alternative locomotion mechanism on granular media -- generating movement via self-vibration. To demonstrate the effectiveness of this locomotion mechanism, we develop a cube-shaped robot with an embedded vibratory motor and conduct systematic experiments on diverse granular terrains with various particle properties. We investigate how locomotion changes as a function of vibration frequency/intensity on granular terrains. Compared to hard surfaces, we find that such a vibratory locomotion mechanism enables the robot to move faster and more stably on granular surfaces, facilitated by the interaction between the body and the surrounding granules. The simplicity in the structural design and control of this robotic system indicates that vibratory locomotion can be a valuable alternative way to produce robust locomotion on granular terrains. We further demonstrate that such cube-shaped robots can be used as modular units for morphologically structured vibratory robots with capabilities of maneuverable forward and turning motions, showing potential practical scenarios for robotic systems.
Bangyuan Liu, Tianyu Wang, Velin Kojouharov, Frank L. Hammond III, Daniel I. Goldman
2023-09-22T20:31:07Z
http://arxiv.org/abs/2309.13174v1
# Robust self-propulsion in sand using simply controlled vibrating cubes ###### Abstract Much of the Earth and many surfaces of extraterrestrial bodies are composed of non-cohesive particulate matter. Locomoting on granular terrain is challenging for common robotic devices, either wheeled or legged. In this work, we discover a robust alternative locomotion mechanism on granular media - generating movement via self-vibration. To demonstrate the effectiveness of this locomotion mechanism, we develop a cube-shaped robot with an embedded vibratory motor and conduct systematic experiments on diverse granular terrains with various particle properties. We investigate how locomotion changes as a function of vibration frequency/intensity on granular terrains. Compared to hard surfaces, we find that such a vibratory locomotion mechanism enables the robot to move faster and more stably on granular surfaces, facilitated by the interaction between the body and the surrounding granules. The simplicity in the structural design and control of this robotic system indicates that vibratory locomotion can be a valuable alternative way to produce robust locomotion on granular terrains. We further demonstrate that such cube-shaped robots can be used as modular units for morphologically structured vibratory robots with capabilities of maneuverable forward and turning motions, showing potential practical scenarios for robotic systems. Vibration, granular media, robot, locomotion, modular robot, robophysics
## 1 Introduction
Many terrestrial and extraterrestrial landmasses are composed of soft flowable material, like sand and snow. Among them, granular media, a collection of discrete, solid particles that exhibit energy dissipation upon interaction, is challenging for conventional wheeled and tracked devices to locomote on. To surmount this challenge, diverse robotic systems have been developed to produce controllable movement in granular materials. These systems encompass legged configurations (Liang et al., 2012; Qian et al., 2015; Zhong et al., 2018; Lee et al., 2022), limbless structures (Maladen et al., 2011; Marvi et al., 2014), and rover designs (Knuth et al., 2012; Shrivastava et al., 2020). A common issue faced by diverse robotic systems attempting locomotion within granular media lies in the potential transition of the granular terrain from a solid to a flowing state, which occurs when the force per unit area exerted by the robot exceeds the terrain's yield stress or input energy threshold. The intricate interaction between the robot body and the terrain often engenders a coupled relationship encompassing the robot's movement, the resistive force the robot experiences, and the dynamic changing of the terrain state before, during, and after the robot's movement. Particularly in cases involving shear-based locomotion, the granular material can aggregate into formations that act as obstacles, hindering the robot's progress. A vibration-driven robot leverages periodic oscillations or rotations of internal components to generate motion throughout the entire body (Golitsyna, 2018; Calisti et al., 2017; Reis et al., 2013; Becker et al., 2011). In contrast to conventional wheeled, legged, or elongated limbless robots, vibration-driven robots offer a simpler and more compact body design, without the need for body deformation or exposed actuators and joints. This intrinsic feature effectively prevents potential damage or malfunctions to the robot arising from particle infiltration or obstruction of the actuation mechanisms.
Further, such a mechanism and design allow for a greater contact area between the robot's body and the terrain. Therefore, the yield strain and stress required for supporting the body weight and producing propulsion can be reduced; consequently, the risk of terrain collapse is mitigated. Previous research has designed and studied vibration-driven robots that can locomote on various terrains, such as hard surfaces (Li et al., 2021; Notomista et al., 2019; Rubenstein et al., 2012), fluid surfaces (Cocuzza et al., 2021), and amphibious environments (Wang et al., 2022; Tang et al., 2018). These vibratory robots typically utilize a consistent actuation principle, wherein vibratory actuators induce multi-directional movement through alterations in force orientation, and the accumulated movement direction is dictated by anisotropic friction or resistive forces. These forces stem from specific directional components of periodic internal forces or inherent body features (Golitsyna, 2018; Calisti et al., 2017). However, the performance of such a vibration-driven mechanism on granular media remains unexplored. Due to its structural simplicity, ease of control, and enlarged body-terrain contact area, a vibratory mechanism presents a promising avenue for achieving robust and effective locomotion on granular media. This work introduces the development of a novel vibrating robotic system (Fig. 1) to investigate the locomotion capabilities of such vibration mechanisms within different granular media and identify potential practical applications.
## 2 Materials and Methods
### Robot Design and Maneuver Manipulation
#### 2.1.1 Single Cube
For design simplicity, we used Solidworks to create a cubic box with an open side for the motor and a press-fit lid to close the cube (as shown in Fig. 1). The 4-cm-side-length cube was then 3D printed using a LulzBot TAZ Workhorse 3D printer with polylactic acid (PLA) as the printing material. To generate the vibration, we selected an eccentric rotating mass (ERM) vibration motor, specifically the VJQ24 from Vybronics Inc. (weight 31 g, 2550 rpm at 5 V DC). The rotary axis of the vibration motor lies horizontally in the x direction, parallel to the cube's bottom surface. When power is applied, the uneven mass starts rotating, which leads to rotary oscillation around the x axis. By switching the voltage from positive to negative, the vibration motor's rotation direction changes from counter-clockwise to clockwise (viewed in the positive x direction), which allows the single cube and bi-cube to generate maneuvers. The position of the vibration motor is adjusted to the cube's center, guaranteeing alignment between the cube's center of mass and its geometric center. Inside the cube, we implemented a structure to securely press-fit the motor in place to ensure that it remains stationary while vibrating. Influenced by the simultaneous lateral oscillatory inertia generated by the single-cube system, the cube's forward locomotion is accompanied by turning (shown in Fig. 1). When the input voltage is positive, the counter-clockwise-rotating vibration motor induces a leftward turn, while a negative input voltage induces a rightward turn.
#### 2.1.2 Bi-cube and maneuver control
The bi-cube robot is formed by firmly bonding two identical cubes in the orientation in which the two ERMs' rotary axes are parallel to each other. Such a bi-cube robot can not only move straight forward but can also turn.
Based on the maneuver test, when the vibration motor of the left cube (marked as #1 in Fig. 2) rotates counter-clockwise and that of the right cube (marked as #2) rotates clockwise, the lateral oscillation influences of the two motors cancel out, and thus the bi-cube robot performs a forward motion. When cube #1 is turned off and cube #2 rotates counter-clockwise, the bi-cube robot performs a left turn. Similarly, when cube #2 is turned off and cube #1 rotates clockwise, the bi-cube robot performs a right turn. For the other manipulation combinations, we did not observe stable maneuvers. Additionally, the forward maneuver requires that the input voltage magnitudes applied to cube #1 and cube #2 be the same, which helps the cubes synchronize and resonate. A small divergence between the two sides' voltages greatly degrades the maneuver performance and eventually leads to unstable, arbitrary motion.
Figure 1: Vibratory single cube and bi-cube fabrication and motion. Panels (A), (C), and (E) show the appearance, perspective drawing, and turning maneuver of a single cube. By combining two single cubes, the bi-cube is fabricated. Panels (B), (D), and (F) show the appearance, perspective drawing, and forward-moving maneuver of a bi-cube. The locomotion of the single cube and the bi-cube is recorded in supplementary video S1.
### Experiment Environment
#### 2.2.1 Air-fluidized Testbed
To ensure consistent initial conditions for our experiments on granular media, we implemented a terrain creation and locomotion testing system (as shown in Fig. 3), following the approach described in our previous work (Qian et al., 2013). The system uses an air-fluidized bed, a container (60 cm long, 30 cm wide) filled with granular material (5 cm deep), such that the surface can be flattened at the beginning of each experiment by blowing air through a porous rigid plastic layer. A vacuum (Vacmaster) is used to blow air through the bed. The flow distribution layer, which evenly distributes the air across the bed, is approximately 0.5 cm thick and has randomly distributed pores with a diameter of 50 micrometers. This allows for precise control over the bed's surface and ensures that the initial conditions of each experiment are consistent.
Figure 2: Bi-cube steering mechanism. The bi-cube can execute three maneuvers by controlling the rotational state of vibration motors #1 and #2 inside the left and right cubes: forward motion, left turning, and right turning, from the left column to the right column, with each row illustrating the motor state from the perspective view, the back view, and the corresponding maneuver.
#### 2.2.2 Granular materials
We expect that the locomotor capabilities of our robotic cube can vary depending on the type of granular material it is tested on, as the particle sizes of different materials can affect its movement. To investigate this, we tested robot speed and energy use on three different types of granular material in our experiments: glass particles with an average diameter of approximately 200 micrometers, sand with particle sizes ranging from 500 to 700 micrometers (fine granular media), and sand with particle sizes from 1000 to 1200 micrometers (coarse sand, collected in Yuma County, Arizona, USA). As a comparison, we also tested on a hard wooden board (as shown in Fig. 4D).
### Experiment Protocol
For each of the four terrains we considered, we carried out a series of experiments with the bi-cube robot. We varied the input voltage to the system from 0 to 8 V to test its forward speed and cost of transport (COT). Specifically, we limited the robot's rotational movement using two parallel walls that were 10 cm apart (slightly larger than the robot's 8 cm width). For each input voltage value, we carried out three repeated experiments. In each experiment, we first flattened the granular material surface, then placed the robot at one end of the tank (\(>\)5 cm away from the wall) and recorded the robot's locomotion from the moment the robot started vibrating until it reached the other end of the tank. For the experiments where the robot was unable to reach the other end of the tank, we kept the robot running for at least 30 seconds. To capture the robot's locomotion trajectory and poses, we implemented a motion tracking system (OptiTrack) using 4 OptiTrack Flex 13 cameras. We attached infrared reflective markers to the robot body and tracked their 3D positions using the system at a 120 fps frame rate, so that the progress of the locomotion can be fully reconstructed and analyzed. We tracked the rigid-body position and orientation in the world frame to calculate the forward speed. To calculate the cost of transport, we measured the power consumption of the system using an INA260 precision digital power sensor that monitors power at a 100 Hz frequency.
Figure 3: The automated terrain creation and locomotion testing system. (A) Diagram of the air-fluidized bed for robot locomotion testing. (B) The process of flattening the granular media surface before each experiment and creating a loosely packed state.
## 3 Result and Discussion
### 3.1 Locomotor Performance
#### 3.1.1 Forward velocity
We measured the average speed under various applied voltages to quantify the bi-cube robot's locomotion performance. Utilizing the tracked data, we computed the average forward speed (measured in cm/s) achieved by the bi-cube robot during each experiment. The values were averaged over three separate trials, as depicted in Fig. 4, where error bars represent the standard deviation. In glass particles, the bi-cube robot cannot generate effective locomotion when the input voltage is less than 3 V. With increasing input voltage, the forward locomotion speed increases, reaching a local maximum of \(4.55\pm 1.05\) cm/s at 4.5 V. As the input voltage increases further, up to the 8 V maximum voltage of the ERM, performance gradually declines. Comparable trends in locomotion performance are observed in fine granular media (fine sand) and coarse granular media (coarse sand) as well. However, the minimum input voltage required for effective motion increases to 4.5 V in these cases. Moreover, local maxima in forward speed emerge at higher input voltages. Specifically, in fine granular media (fine sand), the local maximum is \(8.64\pm 0.46\) cm/s at an input voltage of 6 V. In coarse granular media (coarse sand), the local maximum reaches \(9.34\pm 0.54\) cm/s at an input voltage of 6.5 V. For comparative analysis, we conducted locomotion performance tests on a hard wooden surface. The results illustrate that only a narrow input voltage range (from 3.5 V to 4.5 V) yields forward movement for the bi-cube robot on this terrain. Further, the maximum speed achieved is only \(1.27\pm 0.21\) cm/s at 4 V, significantly slower than the velocities achieved in granular media. Through these experimental findings, we validate that the bi-cube robot we developed is capable of achieving effective locomotion across various granular media types (at rates greater than 1 body length per second). Further, it is observed that the optimal operating voltage increases as the grain size increases.
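As a concrete illustration of how the tracked marker data turn into the reported speeds, the short numpy sketch below computes an average forward speed from a 120 fps OptiTrack position series. The array layout, axis convention, and function name are our assumptions, not the authors' analysis code.

```python
import numpy as np

def average_forward_speed(xyz_cm, fps=120.0):
    """Average forward (x) speed in cm/s from tracked rigid-body positions.

    xyz_cm: (n_frames, 3) array of positions in cm, with x assumed to be
    the forward direction along the walled test channel.
    """
    duration_s = (len(xyz_cm) - 1) / fps
    return (xyz_cm[-1, 0] - xyz_cm[0, 0]) / duration_s

# Example: 4 s of motion covering 36 cm corresponds to 9 cm/s.
t = np.linspace(0.0, 4.0, 481)
track = np.column_stack([9.0 * t, np.zeros_like(t), np.zeros_like(t)])
print(average_forward_speed(track))  # ~9.0
```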
#### 3.1.2 Locomotion efficiency
In addition to evaluating the forward locomotion speed, we conducted real-time power consumption measurements using an INA260 power sensor for each experiment. Subsequently, we calculate the cost of transport (COT) following \(\text{COT}=\frac{P}{mgv}\), where \(P\) and \(v\) represent the average power consumption and speed achieved during the locomotion process, respectively, and \(m\) is the mass of the bi-cube robot (147 g). Fig. 5 shows the COT values of the bi-cube robot traversing the four different types of terrain that we tested. The experimental results reveal that the local minima of the COT vary with grain size. Notably, these minimal cost of transport values remain below 10 across all granular terrains: \(9.06\pm 0.66\) at 3.5 V in glass particles, \(8.23\pm 0.42\) at 4.5 V in fine granular media (fine sand), and \(8.31\pm 0.09\) at 5 V in coarse granular media (coarse sand). However, we notice that the local minima of the cost of transport do not coincide with the local maxima of speed. Note that we have excluded data points corresponding to input voltages at which the robot is unable to achieve effective locomotion, resulting in a cost of transport exceeding 100. Our findings demonstrate that the bi-cube robot achieves more efficient locomotion within granular media in comparison to hard surfaces. This highlights its substantial potential for broader applications within granular environments.
Figure 4: Bi-cube velocity test on (A) glass particles, (B) fine granular media (fine sand), (C) coarse granular media (coarse sand), and (D) a hard wooden surface. Each figure in the left column shows the averaged velocity of three experiment trials as a function of input voltage from 0 to 8 V, with the error bar showing the standard deviation. The local maximum velocity on each terrain is marked by the blue dot. The right column shows an example locomotion of the robot operated with the optimal input voltage in each terrain, in which the frames are recorded at t=0 and t=4 s. The black scale bars indicate 5 cm. The performance is recorded in supplementary video S2.
### Motion recording and analysis
To gain insight into the mechanisms governing the bi-cube robot's translation through vibration, and subsequently develop either a kinematic or dynamic model to elucidate its motion, we utilized a high-speed 240 fps camera to capture the rigid-body orientations of the robot. Subsequently, we extracted specific frames to calculate the instantaneous displacements in both the forward (\(x\)) and gravity (\(z\)) directions, as well as the rotational motion within the xz plane (\(\theta\)). In Fig. 6A, we present a sequence of the robot's postures over a period of motion in the quasi-2D setup under 5 V input. Fig. 6B shows the \(x\) displacements of the center of geometry. The \(x\) trajectory exhibits periodic behavior with a pattern: over the course of a single motion cycle, the robot experiences two distinct phases. In one half of the cycle, the robot moves backward, while in the other half, it moves forward. A net forward displacement is achieved because the distance covered during the forward movement phase is greater than that during the backward movement phase.
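The COT defined in Section 3.1.2 is straightforward to compute from the logged power samples. The sketch below shows the calculation, using the 147 g robot mass from the text; the array of INA260 power readings in the example is a placeholder.

```python
import numpy as np

G = 9.81             # gravitational acceleration, m/s^2
ROBOT_MASS = 0.147   # bi-cube mass in kg (147 g)

def cost_of_transport(power_w, speed_cm_s, mass_kg=ROBOT_MASS):
    """COT = P / (m g v), with P the mean of the 100 Hz power samples."""
    v = speed_cm_s / 100.0  # convert cm/s to m/s
    return np.mean(power_w) / (mass_kg * G * v)

# Example: ~1 W average power at 8.23 cm/s gives a COT of about 8.4.
print(cost_of_transport(np.full(3000, 1.0), 8.23))
```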
### Slope climbing
Slope climbing remains challenging for many robots designed to navigate granular environments (Shrivastava et al., 2020), as steep granular slopes tend to be sensitive to stress and shear-force disturbances (Gravish and Goldman, 2014; Tegzes et al., 2003), which can cause avalanches. The intricate nature of these terrains becomes particularly evident during climbing maneuvers, as even minor disturbances introduced by robot motion can trigger avalanches and instigate the transition of the terrain from a solid to a fluid-like state, resulting in failure. To investigate the slope-climbing capabilities of the bi-cube robot, we conducted tests on the testbed with coarse granular media (coarse sand, 1000 to 1200 micrometers), with one side of the testbed elevated, thereby forming granular slopes with inclined angles of 4, 8, and 12 degrees. Fig. 7A shows the average speed and standard deviation of the cube's motion, powered by a 6 V input voltage, averaged over three trials. With increasing slope angle, there is a continuous decrease in the cube's velocity. At a 12-degree slope angle, we observed that even this relatively gentle slope would sometimes collapse during robot locomotion. Consequently, the robot would become stuck shortly after climbing a few body lengths and be unable to escape.
Figure 5: Bi-cube locomotion efficiency test on (A) glass particles, (B) fine granular media (fine sand), (C) coarse granular media (coarse sand), and (D) a hard wooden surface. Each figure shows the averaged cost of transport (COT) of three trials as a function of input voltage from 0 to 8 V, with the error bar showing the standard deviation. The red zigzag dashed arrow marks the region where the cube does not move effectively and thus the COT rises tremendously.
### Escaping from sticking
Given that the testbed is fluidized before each experiment, the loosely compacted granular terrain can unpredictably collapse during robot locomotion. This can lead to the formation of pits in the terrain, which can cause the robot to get stuck. Thus, we carried out experiments to test the bi-cube robot's ability to escape from pits. Through experiments, we verified that with appropriate input voltages (e.g., 5 V in fine granular media), the bi-cube robot can manage to extricate itself from a pit: the robot first engages in a process of crawling, gradually moving the sand pile from its front to its rear; eventually, this enables the robot to successfully exit. We provide a demonstration in supplementary video S5, in which the robot first gets stuck in a pit in fine granular media (fine sand) when actuated at 3 V, and escapes with a 5 V input.
### Maneuverability test
The bi-cube robot exhibits the capacity to execute forward, left, and right turning maneuvers, enabling effective navigation across 2D granular terrains. We illustrate this capability through a maneuver demonstration conducted on a coarse granular media surface (comprising particles ranging from 1000 to 1200 micrometers). Notably, two cylindrical obstacles have been rigidly placed within the terrain, as depicted in Fig. 8. In this demonstration, the robot's maneuvering actions are manually switched among forward moving, left turning, and right turning, as shown in Fig. 2. The cube follows an '\(\alpha\)'-shaped trajectory, avoiding any potential collisions with the cylindrical obstacles. We provide supplementary video S6, which shows the bi-cube robot's agility and maneuverability while traversing complex granular terrains.
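The manual switching used in this demonstration reduces to a small lookup table from commands to the per-motor voltage signs of Fig. 2. A minimal sketch is below; the function name, command strings, and the sign convention (+V driving an ERM counter-clockwise as viewed along +x) are our assumptions.

```python
def bicube_motor_voltages(command: str, V: float = 5.0) -> tuple[float, float]:
    """Map a maneuver command to (cube #1, cube #2) input voltages."""
    table = {
        "forward": (+V, -V),   # #1 counter-clockwise, #2 clockwise:
                               # lateral oscillations cancel, robot goes straight
        "left":    (0.0, +V),  # #1 off, #2 counter-clockwise
        "right":   (-V, 0.0),  # #1 clockwise, #2 off
        "stop":    (0.0, 0.0),
    }
    return table[command]

print(bicube_motor_voltages("forward"))  # (5.0, -5.0)
```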
Figure 6: Bi-cube motion tracking. (A) A sequence of the robot's postures in the abstracted 2D workspace over one complete period; the red dot represents the center of geometry. A slow-motion recording is provided in supplementary video S3. (B) The periodic trajectories of the \(x\) displacement of the center of geometry.
## 4 Conclusion
In this paper, we systematically tested the capability of vibratory locomotion on granular media through experiments. The vibration cube exhibits the capability to navigate across granular terrains of various particle sizes, as well as solid ground. Strikingly, compared to hard ground, such a vibratory locomotion method performs better on granular media surfaces in both velocity and efficiency. We posit that the flowable nature of the granular terrain plays a pivotal role in stabilizing motion and attenuating extraneous vibration energy, thereby amplifying locomotive efficacy. The vibratory locomotion mechanism showcases an inherent affinity for granular terrains, suggesting a harmonious alignment between the mechanism and such environments. A solitary cube only demonstrates the capacity to execute left and right turns. However, through the fusion of two individual cubes, a bi-cube configuration is achieved, facilitating forward motion as well as left and right turning. This amalgamation imparts a notable enhancement to maneuverability. Moreover, the inherent simplicity of the vibration cube underscores its potential to exhibit swarming capabilities on granular terrain. Future work includes revealing the granular vibratory locomotion mechanism via both comprehensive theoretical modeling and experimental validation. Besides, we intend to conduct an in-depth investigation into the influence of particle size and density on locomotion efficiency based on simulation. Additionally, we will upgrade the vibration cube into a swarm robotic system, integrating self-feedback loops and inter-unit communication, for potential future applications in exploration.
Figure 7: Bi-cube slope climbing. (A) The averaged velocity of three trials versus slope angle, with standard deviation bars. The cube is tested on coarse granular media (coarse sand) at 6 V. At the 12-degree slope, we recorded the average speed before the bi-cube got stuck. (B) The bi-cube slope-climbing motion (4, 8, and 12 degrees), recorded at frames of \(t=0\) s and \(t=5\) s. The scale bar is 5 cm. The performance is recorded in supplementary video S4.
2309.05716
New evidence about HW Vir's circumbinary planets from Hipparcos-Gaia astrometry and a reanalysis of the eclipse timing variations using nested sampling
The post common-envelope eclipsing binary HW Virginis has had many circumbinary companions proposed based on eclipse timing variations. Each proposed solution has lacked in predictability and orbital stability, leaving the origin of the eclipse timing variations an active area of research. Leveraging the catalogue of \textit{Hipparcos} and \textit{Gaia} proper motion anomalies, we show there is slight evidence for a circumbinary companion orbiting HW Vir. We place an upper limit in mass for such a companion which excludes some previously claimed companions. We also apply this method to V471 Tauri and confirm the non-detection of a previously claimed brown dwarf. We adapt the {\tt kima} nested sampling code to analyse eclipse timing variations and re-analyse archival data on HW Vir, varying the order of the ephemeris that we fit for and the amount of the data that we use. Although signals are clearly present, we find two signals around 2500 and 4000 day periods that are not coherent between different \textit{chunks} of the data, so are likely to not be of planetary origin. We analyse the whole dataset and find the best solution to contain four signals. Of these four we argue the outermost is the most compatible with astrometry and thus the most likely to be of planetary nature. We posit the other three pseudo-periodic signals are caused by physical processes on the white dwarf. The eventual release of the full \textit{Gaia} epoch astrometry is a promising way to confirm whether circumbinary planets exist around HW Vir (and other similar systems), and explore white dwarf physics.
Thomas A. Baycroft, Amaury H. M. J Triaud, Pierre Kervella
2023-09-11T18:00:06Z
http://arxiv.org/abs/2309.05716v2
# New evidence about HW Vir's circumbinary planets from _Hipparcos-Gaia_ astrometry and a reanalysis of the eclipse timing variations using nested sampling ###### Abstract The post common-envelope eclipsing binary HW Virginis has had many circumbinary companions proposed based on eclipse timing variations. Each proposed solution has lacked in predictability and orbital stability, leaving the origin of the eclipse timing variations an active area of research. Leveraging the catalogue of _Hipparcos_ and _Gaia_ proper motion anomalies, we show there is slight evidence for a circumbinary companion orbiting HW Vir. We place an upper limit in mass for such a companion which excludes some previously claimed companions. We also apply this method to V471 Tauri and confirm the non-detection of a previously claimed brown dwarf. We adapt the kima nested sampling code to analyse eclipse timing variations and re-analyse archival data on HW Vir, varying the order of the ephemeris that we fit for and the amount of the data that we use. Although signals are clearly present, we find two signals around 2500 and 4000 day periods that are not coherent between different _chunks_ of the data, so are likely to not be of planetary origin. We analyse the whole dataset and find the best solution to contain four signals. Of these four we argue the outermost is the most compatible with astrometry and thus the most likely to be of planetary nature. We posit the other three pseudo-periodic signals are caused by physical processes on the white dwarf. The eventual release of the full _Gaia_ epoch astrometry is a promising way to confirm whether circumbinary planets exist around HW Vir (and other similar systems), and explore white dwarf physics. keywords: binaries: close - binaries: eclipsing - astrometry - planets and satellites: detection - stars: individual: HW Vir - stars: subdwarfs.
## 1 Introduction
Although the majority of known exoplanets have been detected around single stars on the main sequence, planetary systems around post-main sequence stars and in binary star systems are known to exist. The first detected exoplanetary system was around a pulsar (Wolszczan and Frail, 1992), and planets orbiting single white dwarfs are known to exist (e.g. Bachelet et al., 2012; Vanderburg et al., 2015, 2020). Many single white dwarf stars have been found to exhibit irregular transit-like and dimming events as well as having atmospheres polluted with heavy elements, both pointing to debris being accreted onto the star, which could potentially have been scattered inwards by an invisible companion (Koester et al., 2014; Farihi et al., 2022). Planetary systems around main-sequence binaries have been detected in transit by Kepler (e.g. Doyle et al., 2011) and _TESS_ (e.g. Kostov et al., 2020), and also in radial velocity (e.g. Standing et al., 2023). Planets are therefore known to orbit main-sequence binaries and are able to survive the evolution of a single star. There have been many claims of planets1 orbiting evolved binaries, but they are yet to be fully confirmed as planets. These candidate planets, orbiting post-common envelope binaries, are currently claimed based on periodic variations of the binary's mid-eclipse times. These variations can arise due to the light travel-time effect (LTTE) from the eclipsing binary orbiting the common center-of-mass between itself and the companion.
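The LTTE delay just mentioned has a standard closed form (the classical formulation often attributed to Irwin 1952): for a binary whose centre of mass orbits the system barycentre with projected semi-major axis \(a\sin i\), eccentricity \(e\), and argument of pericentre \(\omega\), the timing offset is \(\tau=\frac{a\sin i}{c}\left[\frac{(1-e^{2})\sin(f+\omega)}{1+e\cos f}+e\sin\omega\right]\), with \(f\) the true anomaly. A minimal numpy sketch of this model follows; the Kepler-equation solver and all parameter names are ours, not the paper's code.

```python
import numpy as np

def solve_kepler(mean_anom, e, tol=1e-12):
    """Solve Kepler's equation E - e*sin(E) = M by Newton iteration."""
    E = np.array(mean_anom, dtype=float)
    for _ in range(50):
        dE = (E - e * np.sin(E) - mean_anom) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def ltte_delay(t, P, a_sini_au, e, omega, t_peri):
    """LTTE timing delay (seconds) at epochs t (days) for a companion of
    period P (days); a_sini_au is the binary's projected barycentric
    semi-major axis in au."""
    AU_LIGHT_S = 499.004784  # light travel time across 1 au, in seconds
    mean_anom = 2.0 * np.pi * (((t - t_peri) / P) % 1.0)
    E = solve_kepler(mean_anom, e)
    f = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                         np.sqrt(1 - e) * np.cos(E / 2))  # true anomaly
    return a_sini_au * AU_LIGHT_S * (
        (1 - e**2) * np.sin(f + omega) / (1 + e * np.cos(f)) + e * np.sin(omega))

# Illustrative only: a 1 au sin(i) barycentric orbit produces a ~500 s
# (roughly 8 minute) amplitude in the observed mid-eclipse times.
t = np.linspace(0.0, 8000.0, 5)
print(ltte_delay(t, P=4000.0, a_sini_au=1.0, e=0.3, omega=0.5, t_peri=0.0))
```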
These putative planets could be the counterparts of the detected main-sequence circumbinary planets that have lived through the evolution of their host binary (e.g. Columba et al., 2023). Footnote 1: Many of these have masses that would put them above the deuterium-burning limit and should be referred to as brown dwarfs; however, for simplicity we refer to them all as "planets". The existence of these planets has been debated (Mustill et al., 2013). In many cases the claimed planetary solutions fail to predict future eclipses (e.g. Pulley et al., 2022). One candidate, orbiting V471 Tauri (Beavers et al., 1986), was later followed up with direct imaging and was not detected with a high confidence (Hardy et al., 2015). HW Virginis (HW Vir) is one of the most famous examples of post-common envelope binaries with claimed companions, first proposed by Lee et al. (2009). The system consists of an sdB primary of mass \(M_{\rm A}=0.418\pm 0.008\,\rm M_{\odot}\) and an M-dwarf secondary of mass \(M_{\rm B}=0.128\pm 0.004\,\rm M_{\odot}\) in a binary with orbital period \(P_{\rm bin}=0.116719556\pm 7.4\times 10^{-9}\) days2. Eclipses have been precisely measured for over 30 years, with many conflicting solutions proposed (e.g. Beuermann et al., 2012; Esmer et al., 2021), with either one or two planets proposed as the cause of the eclipse timing variations. One major issue is that none of the single-planet solutions fit the data satisfactorily, but none of the better-fitting two-planet solutions appear to be dynamically stable (Brown-Sevilla et al., 2021; Mai and Mutel, 2022). Another issue, as mentioned above, is that all of the proposed solutions very quickly diverge from the data subsequently collected. Footnote 2: Parameters are taken from Esmer et al. (2021); these values are used for the rest of the analysis when the mass of the central binary is needed. Non-planetary explanations have been suggested which can produce eclipse timing variations in short-period binaries such as HW Vir. The period (or apparent period) of the binary could be affected by apsidal precession if it is eccentric, and magnetic braking (Rappaport et al., 1983) or emission of gravitational waves (Paczynski, 1967) could cause the orbit to shrink due to angular momentum loss. Other magnetic effects have also been proposed, such as the Applegate mechanism (Applegate, 1992), or a more recent mechanism requiring less energy, suggested by Lanza (2020). However, in most cases these are insufficient to fully explain the shape or the amplitude of the observed modulations in eclipse time. Many of these candidate planets will need to be confirmed or rejected through other methods. One example of this happening is with V471 Tau. The system consists of a WD primary of mass \(M_{\rm A}=0.797\pm 0.016\) M\({}_{\odot}\) and a K-dwarf secondary of mass \(M_{\rm B}=0.864\pm 0.029\) M\({}_{\odot}\) in a binary with orbital period \(P_{\rm bin}=0.5211834194\pm 7.2\times 10^{-9}\) days (Muirhead et al., 2022). This system shows periodic variations of the mid-eclipse times, which have been used to suggest an orbiting brown dwarf (Beavers et al., 1986; Guinan and Ribas, 2001). The system has since been directly imaged with SPHERE, and these observations resulted in a non-detection (Hardy et al., 2015), thus rejecting the claimed brown dwarf. Planets around ultra-short period evolved binaries such as these may also eventually be detectable in gravitational waves, for example by the Laser Interferometer Space Antenna, LISA (Danielski et al., 2019).
LISA will, however, only be sensitive to binaries of shorter orbital period than HW Vir. Another possibility for confirming or rejecting post-common envelope circumbinary planets is precise astrometry. The space telescope _Gaia_ (Gaia Collaboration et al., 2021) is performing a precise astrometric survey of the whole sky, which will have a baseline of about 10 years. _Gaia_'s astrometric solution will be able to investigate some of these systems without relying on any eclipse timing data (Sahlmann et al., 2015). However, individual astrometric measurements are expected to be released around late 2025. In the meantime, we can only rely on the proper motion anomaly method (Kervella et al., 2019). Before _Gaia_, _Hipparcos_ (ESA, 1997) also performed an astrometric survey, but of a much smaller sample of stars, at a lower precision. HW Vir is within that sample, as is V471 Tau. The proper motion anomaly method combines positions and proper motions of a star from _Hipparcos_ and one of _Gaia_'s recent data releases to estimate the effect of an orbiting companion on the proper motion of the star. This method has been applied to single stars and, combined with other techniques such as radial velocity, has led to the detection and characterisation of several planetary companions (e.g. Mesa et al., 2022; Rickman et al., 2022). In this paper we present a new piece of astrometric information, in the form of the proper motion anomaly, to the puzzle that is HW Vir, and we perform a new fit of the eclipse timing data utilising nested sampling and analysing different _chunks_ of data separately. We report on the lack of consistency of signals in the eclipse times and present the model that we find to fit best to the whole dataset, providing a suggestion of which signal is favoured to be planetary, since not all the detected signals can be. The paper is set out as follows. We describe the proper motion anomaly method, and apply it to both V471 Tau and HW Vir, in Section 2. In Section 3, the use of kima to fit eclipse times is described. Section 4 details the eclipse timing data used, and the results from the analysis of the data. We discuss the results and implications and conclude in Section 5.
## 2 Astrometry: using the proper motion anomaly method
We firstly explain the method of the astrometric proper motion anomaly, and secondly apply this to both V471 Tau and HW Vir. We compare the results from the astrometric proper motion anomaly to some of the previously proposed planetary solutions.
### How does the proper motion anomaly work?
The proper motion anomaly analysis method is described in detail in Kervella et al. (2019). Using positional measurements from _Gaia_ and _Hipparcos_, we determine the long-term, mean proper motion vector of the system \(\mu_{\rm HG}\) by dividing the observed change in position by the time baseline \(\delta t_{\rm HG}\) between the two measurements (that is, 24.75 years between _Hipparcos_ and _Gaia_ DR3). For the nearest stars, second-order effects must be taken into account in this computation, but they are negligible for the systems discussed in the present paper. Thanks to the long time baseline, and assuming that the orbital period of the companion is significantly shorter than \(\delta t_{\rm HG}\), \(\mu_{\rm HG}\) essentially traces the proper motion of the center of mass of the system.
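In code, the long-term proper motion is simply a finite difference of the two catalogue positions over the 24.75-year baseline. The numpy sketch below illustrates the computation for a single star; it ignores the second-order terms mentioned above, and all numbers in the example are placeholders rather than catalogue values.

```python
import numpy as np

MAS_PER_DEG = 3.6e6  # milliarcseconds per degree

def long_term_pm(ra_hip, dec_hip, ra_dr3, dec_dr3, baseline_yr=24.75):
    """Mean Hipparcos-to-Gaia proper motion (mu_HG, in mas/yr).

    Positions are in degrees; the RA offset is scaled by cos(dec) so that
    both components are true angular rates on the sky.
    """
    cosd = np.cos(np.radians(0.5 * (dec_hip + dec_dr3)))
    pm_ra = (ra_dr3 - ra_hip) * cosd * MAS_PER_DEG / baseline_yr
    pm_dec = (dec_dr3 - dec_hip) * MAS_PER_DEG / baseline_yr
    return pm_ra, pm_dec

# Placeholder example: a 5 mas/yr westward drift over the full baseline.
ra0, dec0 = 191.0840, -8.6710
ra1 = ra0 - 5.0 * 24.75 / (MAS_PER_DEG * np.cos(np.radians(dec0)))
print(long_term_pm(ra0, dec0, ra1, dec0))  # ~(-5.0, 0.0) mas/yr
```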
Separately, the short-term proper motion measurements \(\mu_{\rm Hip}\) and \(\mu_{\rm DR3}\) (obtained respectively by _Hipparcos_ and _Gaia_) trace the vector sum of 1) the linear proper motion of the center of mass and 2) the orbital motion \(\mu_{\rm orbit}\) of the photocentre of the system around the center of mass. Figure 1 visually presents the different vector quantities considered in these computations. When considering a planetary companion, the photocentre is located very close to the geometrical center of the star. In this configuration, subtracting the long-term proper motion \(\mu_{\rm HG}\) from the _Gaia_ DR3 short-term proper motion \(\mu_{\rm DR3}\) gives access to the proper motion anomaly of the star \(\Delta\mu\), which traces the orbital motion of the star around the center of mass of the system. The quantity \(\Delta\mu\) can be scaled to a linear tangential velocity anomaly \(v_{\rm tan}\) using the parallax. This is the two-dimensional counterpart of the radial velocity that is traditionally employed to detect exoplanets.

Corrective terms must be considered to interpret the measured proper motion anomaly in terms of companion properties. Firstly, the \(\mu_{\rm orbit}\) quantity is an average over the _Gaia_ integration window, which has a duration of \(\delta t_{\rm DR3}=34\) months. This averaging implies that the measured proper motion anomaly will be smeared, reducing the sensitivity in terms of companion mass. This loss of sensitivity is particularly strong for orbital periods shorter than \(\delta t_{\rm DR3}\). Secondly, the time baseline \(\delta t_{\rm HG}\) between _Hipparcos_ and _Gaia_ DR3 results in the subtraction of part of the proper motion signature of very long period companions (\(P>3\times\delta t_{\rm HG}\)) during the computation of the \(\mu_{\rm HG}\) quantity. This effect induces a loss of sensitivity to such very long period companions. These two effects determine the companion mass sensitivity function of the proper motion anomaly method.

We now derive the equation for the tangential velocity caused by a companion if measured instantaneously. This equation can then be combined with the sensitivity function calculated numerically. The proper motion is usually divided into its components in right ascension (ra) and declination (dec),

\[\mu=\mu_{\rm ra}\mathbf{e}_{\rm ra}+\mu_{\rm dec}\mathbf{e}_{\rm dec}, \tag{1}\]

with \(\mathbf{e}_{\rm ra}\) and \(\mathbf{e}_{\rm dec}\) the basis vectors in ra and dec. We then subtract the long-term HG proper motion from the _Gaia_ DR3 proper motion and take the magnitude of this vector to get the tangential velocity anomaly:

\[\mathbf{\Delta}\mu=(\mu_{\rm DR3,ra}-\mu_{\rm HG,ra})\mathbf{e}_{\rm ra}+(\mu_{\rm DR3,dec}-\mu_{\rm HG,dec})\mathbf{e}_{\rm dec}, \tag{2}\]

\[v_{\rm tan}=\frac{1}{\varpi}\sqrt{(\mu_{\rm DR3,ra}-\mu_{\rm HG,ra})^{2}+(\mu_{\rm DR3,dec}-\mu_{\rm HG,dec})^{2}}, \tag{3}\]

where \(\varpi\) is the parallax. Now, given an inner mass \(M\) and an outer mass \(m\), the relative orbital velocity is

\[V=\sqrt{G(M+m)}\left(\frac{2}{r}-\frac{1}{a}\right)^{1/2}, \tag{4}\]

where \(G\) is the gravitational constant, \(a\) the semi-major axis of the relative orbit, and \(r\) the relative orbital distance at the measured time. The distance is given by

\[r=\frac{a(1-e^{2})}{1+e\cos f}, \tag{5}\]

with \(e\) and \(f\) the eccentricity and true anomaly of the orbit at the measured time (they are the same for the relative orbit or the orbit of one of the components).
Combining these two equations and using the fact that the velocity of the inner body (i.e. the luminous one) relates to the relative velocity by

\[v_{0}=\frac{m}{M+m}V, \tag{6}\]

gives us

\[v_{\rm tan}=\sqrt{\frac{Gm^{2}}{a(M+m)}}\left(\frac{2(1+e\cos f)}{(1-e^{2})}-1\right)^{1/2}, \tag{7}\]

or, in terms of the orbit of the luminous body,

\[v_{\rm tan}=\sqrt{\frac{Gm^{3}}{a_{1}(M+m)^{2}}}\left(\frac{2(1+e_{1}\cos f_{1})}{(1-e_{1}^{2})}-1\right)^{1/2}, \tag{8}\]

where \(a_{1}\) is the semi-major axis of the luminous body's orbit, which relates to that of the relative orbit by \(a_{1}=\frac{m}{M+m}a\).

This derivation is valid for an instantaneous measurement of \(v_{\rm tan}\), which would correspond to an instantaneous measurement of \(\mu_{\rm DR3}\) and an infinitely long baseline for \(\mu_{\rm HG}\). This is, of course, not the case. As described above, we must also include a sensitivity function. This function has been numerically calculated by Kervella et al. (2019). This leads to the sensitivity curve for the proper motion anomaly at different periods which is used in the following section. These curves give the areas of period-mass space that are consistent with a measured proper motion anomaly, under the assumption of a circular orbit.

### Applying the proper motion anomaly to V471 Tau and HW Vir

The long-term proper motion between _Hipparcos_ and _Gaia_ DR3 has been calculated and compiled into catalogues by both Kervella et al. (2022) and Brandt (2021), using different combinations of the main _Hipparcos_ reductions. The values for the tangential velocity for HW Vir and V471 Tau are shown in Table 1. We note that the values are in good agreement between both catalogues and choose arbitrarily to use the Kervella et al. (2022) value from now on.

#### 2.2.1 V471 Tau

For V471 Tau, we have a proper motion anomaly between 2-\(\sigma\) and 3-\(\sigma\). Figure 2 shows the sensitivity curve of the proper motion anomaly method associated with the value for V471 Tau from Kervella et al. (2022). The dark green line shows the curve on which a body needs to lie to produce the observed tangential velocity value; the darker and lighter shaded regions show the 1\(\sigma\) and 3\(\sigma\) regions around that line. The spikes towards shorter orbital periods are the result of the sensitivity function as described above. The slope at longer orbital periods is produced when only a small fraction of an orbital arc is covered and hence the efficiency function is small. In between lies a region of highest sensitivity. This provides an upper bound that is far below the mass of the proposed solutions by Beavers et al. (1986) and Guinan and Ribas (2001). This re-affirms the conclusion of Hardy et al. (2015), which did not find evidence of the proposed brown dwarf, and confirms that the variations in the mid-eclipse times must be coming from some other source.

We numerically estimate the proper motion anomaly that would be caused by the binary. For a given set of parameters (\(M_{0}\), \(M_{1}\), \(P\)) we perform a bisection algorithm, suggesting values for \(v_{\rm tan}\) and comparing the value of \(M_{1}\) obtained (given \(M_{0}\) and \(P\)) to the given value, until the masses agree to 0.001 M\({}_{\rm jup}\). We repeat this for 1000 realisations of the binary parameters to obtain the median and 1\(\sigma\) values of \(32^{+13}_{-21}\) m s\({}^{-1}\). This is entirely consistent with the tentative signal that is seen. The proper motion anomaly is sensing the smeared binary motion and so does not suggest an orbiting companion.
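To make these relations concrete, the following minimal Python sketch evaluates the tangential velocity anomaly of equations 2-3 from catalogue proper motions and inverts equation 8 (circular-orbit case) for the companion mass by bisection, as described above. It deliberately omits the numerically computed smearing/sensitivity function of Kervella et al. (2019), so it reproduces only the instantaneous relations; the input values are quoted in this paper, but the function names and bracketing masses are our own assumptions.

```python
import numpy as np

G, MSUN, MJUP = 6.674e-11, 1.989e30, 1.898e27  # SI units

def vtan_anomaly(pm_dr3, pm_hg, parallax_mas):
    """Eqs. (2)-(3): tangential velocity anomaly (m/s) from the Gaia DR3 and
    long-term Hipparcos-Gaia proper motions (mas/yr) and the parallax (mas)."""
    dmu = np.hypot(pm_dr3[0] - pm_hg[0], pm_dr3[1] - pm_hg[1])
    return 4740.47 * dmu / parallax_mas   # 4.74 km/s per (arcsec/yr)/arcsec

def vtan_companion(m, M, P_days):
    """Eq. (8) for a circular orbit (e = 0): instantaneous v_tan (m/s) of the
    luminous body of mass M (kg) induced by a companion of mass m (kg)."""
    P = P_days * 86400.0
    a = (G * (M + m) * P**2 / (4 * np.pi**2)) ** (1 / 3)  # relative semi-major axis
    return np.sqrt(G * m**2 / (a * (M + m)))

def companion_mass(vtan, M, P_days, lo=1e-3 * MJUP, hi=1e3 * MJUP):
    """Bisection, as in the text: v_tan is monotonic in m, so find the
    companion mass whose predicted v_tan matches the measured one."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if vtan_companion(mid, M, P_days) < vtan else (lo, mid)
    return 0.5 * (lo + hi)

# Illustration with the HW Vir values quoted in this paper:
# v_tan = 214 m/s (Kervella et al. 2022) and M_A + M_B = 0.546 Msun.
m = companion_mass(vtan=214.0, M=0.546 * MSUN, P_days=4000.0)
print(f"a ~{m / MJUP:.0f} Mjup companion at P = 4000 d reproduces 214 m/s")
```

Applied to the binary itself (m = M_B, P = P_bin), this instantaneous relation returns tens of km s\({}^{-1}\); the few m s\({}^{-1}\) values quoted in the text emerge only once the strong smearing over the 34-month _Gaia_ window is folded in.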
#### 2.2.2 HW Vir

For HW Vir, the tangential velocity is distinct from zero at around 2\(\sigma\) confidence in both catalogues. We cannot therefore conclude from the proper motion anomaly that there is definitely an orbiting body, but this _Gaia-Hipparcos_ combined measurement brings new evidence that suggests such a body is more likely to exist than not. First, we validate that this tentative proper motion anomaly is not caused by the smeared orbital motion of the binary. In the same way as for V471 Tau, we numerically estimate the proper motion anomaly that would be induced by the binary and obtain the median and \(1\sigma\) values of \(2.52^{+0.80}_{-1.60}\) m s\({}^{-1}\). The excess tangential velocity is therefore not caused by the HW Vir binary.

Figure 1: Diagram of the proper motion anomaly method.

The top panel of Figure 3 shows the sensitivity curve of the proper motion anomaly method associated with HW Vir. The curve has the same shape as in Figure 2 (since all the spikes are primarily related to the _Gaia_ 34-month observing window) but is zoomed in on the area of best sensitivity. We overplot the locations of the orbiting bodies proposed by three previous studies (Beuermann et al., 2012; Brown-Sevilla et al., 2021; Esmer et al., 2021). We note that four of the proposed solutions include one orbiting body above the \(3\sigma\) line. These solutions are disfavoured3 by the observed proper motion anomaly, which is too weak to have been produced by these putative objects. This plot also shows the locations of the four components from our best-fitting model4, which we describe in Section 4.2.2. Footnote 3: They may still be possible in reality, if we have a very eccentric orbit for the companion and we observe it close to apastron (where the motion is slower). Footnote 4: We do not claim that all four of the signals are indeed planets.

This tentative proper motion anomaly is an extra piece of information about the HW Vir system which provides astrometric evidence that there may be an orbiting circumbinary companion. The catalogues of accelerations mentioned above rely on _Gaia_ EDR3 (Gaia Collaboration et al., 2021). The same analysis was done earlier using _Gaia_ DR2 (Gaia Collaboration et al., 2018), and the value from Kervella et al. (2019) for HW Vir is \(309\pm 200\) m s\({}^{-1}\). From this we infer that the astrometric signal is becoming more significant as more _Gaia_ data become available. This implies that if there is indeed a signal there, it should be detectable from future _Gaia_ data releases.

## 3 Fitting eclipse timing variations with kima

Whilst verifying whether proposed solutions for the HW Vir system were compatible with the proper motion anomaly, we also decided to re-analyse the eclipse times of HW Vir with a nested sampler, which we believe has not been attempted yet. Most of the literature uses \(\chi^{2}\) maps or reduced \(\chi^{2}_{\nu}\) to make inferences about the number of signals present in the data, but none have conducted a Bayesian model comparison in this way yet. Amongst Bayesian methods, nested sampling has the advantage of leaving free some key parameters that are usually fixed in other types of analyses. In our case, the number of orbiting planets, \(N_{\rm p}\), is a free parameter, which allows for a robust model comparison based on a ratio of Bayesian evidences. All planetary signals are adjusted at once, and models with 0, 1, 2... planets are constantly compared to one another.
kima is an orbital fitting algorithm originally designed for application to radial velocities (Faria et al., 2018). We adapt it to fit mid-eclipse times instead, to then apply it to HW Vir. kima leverages nested sampling using DNEST4 (Brewer and Foreman-Mackey, 2018) to explore parameter space and calculate the likelihood of proposed samples. Using the trans-dimensional sampling in kima, the number of Keplerian signals, \(N_{\rm p}\), being fit is a free parameter as described above. This allows a comparison of the different numbers of signals present in the data with a Bayes Factor. The Bayes Factor for a \(N_{\rm p}=n\) model5 compared to a \(N_{\rm p}=n-1\) model is the ratio of the evidence \(Z\) for each model. Footnote 5: \(N_{\rm p}\) meaning number of planets in the model.

The evidence is the primary output of nested sampling and is the integral of the likelihood over the prior mass. In nested sampling this integral is calculated as a weighted sum, with the weights being associated with the change in prior mass between consecutive samples (Skilling, 2006). In this case the evidence for an \(N_{\rm p}=n\) model is the sum of the _weights_ of all the samples with \(n\) planets, and the Bayes Factor is then \(BF=\frac{Z_{n}}{Z_{n-1}}\). We use a detection threshold of 150 as recommended by Kass and Raftery (1995). A Bayes Factor larger than 150 is taken as very strong evidence for the more complex model over the less complex model (and is roughly equivalent to a p-value of 0.001). It is common that a nested sampler finds that the sum of the _weights_ of all the samples is highest for the highest \(N_{\rm p}\) explored by the sampler (Faria et al., 2018; Standing et al., 2022). However, so long as the ratio is not \(>150\), those more complex solutions, whilst providing a better fit to the data, do not contain enough statistical evidence to warrant the extra number of parameters.

As a by-product of the nested sampling used to calculate the evidences, we can obtain posterior samples for the various parameters from kima. These allow us to perform parameter estimation on any detected signals, assuming a light travel-time effect (LTTE) model with a companion on a Keplerian orbit. kima has already been used to detect and test the detectability of circumbinary planets with radial velocities (Triaud et al., 2022; Standing et al., 2022, 2023). We redirect the reader to these publications for more thorough explanations of how the model comparison works.
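As a concrete illustration of this model comparison step, the sketch below (in Python, with synthetic stand-ins for the per-sample planet counts and nested-sampling weights that a kima/DNEST4 run would actually provide) sums the posterior weights per value of \(N_{\rm p}\) to form the evidences and applies the Bayes Factor threshold of 150.

```python
import numpy as np

def preferred_n_planets(n_per_sample, weight_per_sample, n_max, threshold=150.0):
    """Evidence Z_n = sum of the nested-sampling weights of samples with
    N_p = n; accept n while BF(n : n-1) = Z_n / Z_{n-1} exceeds the threshold."""
    Z = np.array([weight_per_sample[n_per_sample == n].sum()
                  for n in range(n_max + 1)])
    n_detected = 0
    for n in range(1, n_max + 1):
        if Z[n - 1] > 0 and Z[n] / Z[n - 1] > threshold:
            n_detected = n
        else:
            break
    return n_detected, Z

# Synthetic example only: weights built so that each model up to N_p = 4 is
# strongly favoured over the previous one, as found for HW Vir.
rng = np.random.default_rng(0)
n_per_sample = rng.integers(0, 7, size=50_000)
weight_per_sample = rng.exponential(size=50_000) * 10.0 ** (3 * np.minimum(n_per_sample, 4))
n_det, Z = preferred_n_planets(n_per_sample, weight_per_sample, n_max=6)
print(f"favoured model: N_p = {n_det}")   # -> 4 for this synthetic input
```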
\begin{table} \begin{tabular}{l c c c} \hline \hline & HW Vir & V471 Tau & \\ \hline Kervella et al. (2022) & \(214\pm 111\) & \(38\pm 13\) & m s\({}^{-1}\) \\ Brandt (2021) & \(226\pm 111\) & \(28\pm 13\) & m s\({}^{-1}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Tangential velocities from the proper motion anomaly between _Gaia_ DR3 and the _Hipparcos-Gaia_ long-term vector, for HW Vir and V471 Tau. Values are reported using both the Kervella et al. (2022) and Brandt (2021) catalogues of accelerations.

Figure 2: Sensitivity curve for proper motion anomaly applied to V471 Tau. Green shows the mean, 1-\(\sigma\) region and 3-\(\sigma\) region of parameter space that could correspond to an orbiting body giving rise to the proper motion anomaly. The coloured dots show locations of claimed solutions by two previous papers. The dashed lines show the locations of the hydrogen and deuterium fusing limits.

We fit the eclipse times in kima with a number of Keplerian signals as well as an ephemeris function. We allow fitting one of a linear, quadratic, or cubic ephemeris. These are shown in equation 9 below:

\[T(E)=T_{0}+P_{0}E+\frac{1}{2}\dot{P}_{0}P_{0}E^{2}+\frac{1}{6}\ddot{P}_{0}P_{0}^{2}E^{3}+\sum_{i}\tau_{i}(E), \tag{9}\]

where \(E\) is the epoch of an eclipse (i.e. the number of the eclipse, with the first eclipse being 0), \(T(E)\) is the time of that eclipse in our model, \(T_{0}\) is the reference time (time at epoch 0 here), \(P_{0}\) is the period of the eclipsing binary at the reference time, \(\dot{P}_{0}\) and \(\ddot{P}_{0}\) are the first and second time-derivatives of the binary period (at the reference time), and \(\tau_{i}\) is the time delay due to the LTTE of an orbiting body. The middle three terms are a Taylor series: if we ignore the terms of order \(\geq E^{2}\) we are using a linear ephemeris, if we ignore terms of order \(\geq E^{3}\) we are using a quadratic ephemeris, and using all the terms above is a cubic ephemeris.

The functional form for the LTTE due to an orbiting body is as in Irwin (1952):

\[\tau(t)=\frac{K}{\sqrt{1-e^{2}\cos^{2}\omega}}\left(\frac{1-e^{2}}{1+e\cos\nu(t)}\sin(\nu(t)+\omega)+e\sin\omega\right), \tag{10}\]

where \(e\) and \(\omega\) are the eccentricity and argument of periastron of the orbiting body, \(\nu(t)\) its true anomaly at time \(t\), and \(K\) the semi-amplitude of the signal:

\[K=\frac{m\sin i}{c(M+m)}\left(\frac{G(M+m)}{4\pi^{2}}\right)^{1/3}P^{2/3}, \tag{11}\]

where \(m\) and \(P\) are the mass and orbital period of the orbiting body, \(M\) the total mass of the eclipsing binary, \(i\) the inclination of the planetary orbit to the line of sight, and \(c\) and \(G\) the speed of light and the gravitational constant. In equation 10, \(\tau\) and \(\nu\) are functions of \(t\) (time). The orbital period of an orbiting body is much greater than the time difference contributed by all other terms, so we use \(t\approx P_{0}E\) as a first-order approximation.
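A minimal Python sketch of this model follows. The Newton solver for Kepler's equation and the convention that \(\phi_{0}\) is the mean anomaly at \(t=0\) are our assumptions (kima's internal implementation may differ); \(K\) must be supplied in the same units as the output times (days here).

```python
import numpy as np

def solve_kepler(mean_anom, e, n_iter=50):
    """Solve Kepler's equation Ea - e sin(Ea) = M for the eccentric anomaly
    Ea by Newton iteration (adequate for the eccentricities in Table 5)."""
    Ea = np.array(mean_anom, dtype=float)
    for _ in range(n_iter):
        Ea -= (Ea - e * np.sin(Ea) - mean_anom) / (1 - e * np.cos(Ea))
    return Ea

def ltte_delay(t, P, K, e, w, phi0):
    """Eq. (10): light travel-time delay due to one companion."""
    Ea = solve_kepler(2 * np.pi * t / P + phi0, e)
    nu = 2 * np.arctan2(np.sqrt(1 + e) * np.sin(Ea / 2),
                        np.sqrt(1 - e) * np.cos(Ea / 2))   # true anomaly
    return (K / np.sqrt(1 - e**2 * np.cos(w)**2)
            * ((1 - e**2) / (1 + e * np.cos(nu)) * np.sin(nu + w) + e * np.sin(w)))

def eclipse_times(epochs, T0, P0, Pdot=0.0, Pddot=0.0, planets=()):
    """Eq. (9): cubic ephemeris plus the sum of LTTE terms, using the
    first-order approximation t ~ P0 * E from the text; times in days."""
    E = np.asarray(epochs, dtype=float)
    t = P0 * E
    T = T0 + P0 * E + 0.5 * Pdot * P0 * E**2 + Pddot * P0**2 * E**3 / 6.0
    for (P, K, e, w, phi0) in planets:        # K converted to days by the caller
        T = T + ltte_delay(t, P, K, e, w, phi0)
    return T

# Illustrative call: one mildly eccentric companion with K = 21.2 s.
T = eclipse_times(np.arange(0, 100_000, 5_000), T0=2_445_730.5567, P0=0.11672,
                  planets=[(3710.0, 21.2 / 86400.0, 0.09, 1.74, 0.5)])
```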
## 4 Fitting Eclipse Times

In this section we first detail where the eclipse timing data are obtained from, and then present the results from the analysis using kima.

### Data for HW Vir

We use archival data for HW Vir eclipse times (considering only the primary eclipse). We use the data from Brown-Sevilla et al. (2021), which collated data from Kilkenny et al. (1994), Lee et al. (2009), and Beuermann et al. (2012), as well as their own data. To this we add the data from Baran et al. (2018), Esmer et al. (2021), and Mai and Mutel (2022). Of the data reported in Baran et al. (2018), we find that the data taken with SAAO have a small offset of \(\sim\) 80 sec from the rest of the datasets (including the other data reported in the same publication). Since there is still good coverage without this, we exclude these data from our analysis.

We perform the analysis on the whole dataset, but also divide it into smaller _chunks_ to assess how consistent any signals that appear are. This way we can assess whether, although the overall model does not have good predictive power, a subset of the signals might be predictably and consistently present. We divide the dataset into _chunks_ of approximately 1/3 and 2/3 the length of the whole dataset, with epochs as shown in Table 2. The _chunk_ "tier3" is extended back an extra 10 000 epochs and overlaps with "tier2". The different _chunks_ are also visualised alongside the data in the top panel of Figure 4.

### Results from eclipse timing variation fits

In this section we present the results from a reanalysis of the mid-eclipse time data, analysed using kima. The nested sampling implementation requires a prior distribution for each parameter; these are detailed in Table 3. The Kumaraswamy distribution (Kumaraswamy, 1980) approximates the beta distribution, and the shape parameters shown in Table 3 are those that Kipping (2013) argues best represent the distribution of exoplanetary eccentricities, based on exoplanets detected with the radial velocity method. The analysis is performed with each of a linear, quadratic, and cubic ephemeris for each _chunk_ as well as for the whole dataset.

#### 4.2.1 Results from analysing different chunks

A LTTE signal due to an orbiting companion should be coherent in time. The analysis of different _chunks_, using a Keplerian prescription, would therefore be expected to lead to posteriors that are consistent across _chunks_. Our analysis of the different _chunks_ shows a lack of consistency and therefore casts doubt on the ETV signals being solely due to an orbiting companion (or several).

\begin{table} \begin{tabular}{l c} \hline _chunk_ & epoch range \\ \hline tier1 & \(0\leq E<40000\) \\ tier2 & \(40000\leq E<80000\) \\ tier3 & \(70000\leq E\) \\ tier1-2 & \(0\leq E<80000\) \\ tier2-3 & \(40000\leq E\) \\ tier1-3 & \(E<40000\) or \(80000\leq E\) \\ full & \(0\leq E\) \\ \hline \end{tabular} \end{table} Table 2: Division of the eclipse time data into _chunks_; the names of the _chunks_ are specified as well as the corresponding epoch ranges included in each one. The data is effectively partitioned into thirds.

\begin{table} \begin{tabular}{c c c} \hline \hline Parameter & Unit & Prior distribution \\ \hline ephemeris parameters & & \\ \(P_{0}\) & day & \(\mathcal{N}(0.11672,0.00001)\) \\ \(\dot{P}_{0}\) & & \(\mathcal{N}(0,0.00001)\) \\ \(\ddot{P}_{0}\) & day\({}^{-1}\) & \(\mathcal{N}(0,0.0000001)\) \\ \hline planet parameters & & \\ \(P\) & day & \(\mathcal{LU}(500,20000)\) \\ \(K\) & s & \(\mathcal{MLU}(0.1,10000)\) \\ \(e\) & & \(\mathcal{K}(0.867,3.03)\) \\ \(\omega\) & rad & \(\mathcal{U}(0,2\pi)\) \\ \(\phi_{0}\) & rad & \(\mathcal{U}(0,2\pi)\) \\ \hline other parameters & & \\ \(N_{\rm p}\) & & \(\mathcal{U}(0,3)\)* \\ \(\sigma_{\rm jit}\) & s & \(\mathcal{MLU}(0.01,1000)\) \\ \hline \end{tabular} \end{table} Table 3: Prior distributions for the nested sampling analysis. \(\mathcal{N}\), \(\mathcal{U}\), \(\mathcal{LU}\), \(\mathcal{MLU}\), \(\mathcal{K}\) refer to Normal, Uniform, log-Uniform, Modified log-Uniform (with a knee and an upper limit), and Kumaraswamy distributions, each taking two parameters. *This prior for \(N_{\rm p}\) is used in all cases except the analysis of the full dataset, where instead the prior used is \(\mathcal{U}(0,6)\).

Throughout the analysis of the different _chunks_ of data, recurring signals are seen around two periods: 4000 days and 2500 days. Longer period (and higher amplitude) signals do exist in many of the _chunks_; however, they are far from consistent. To assess the consistency of the signals at the recurring periods, the clustering algorithm HDBSCAN (McInnes et al., 2017) is used to identify clusters in the P-K plane from the kima posterior samples. Each cluster is then visually associated with one of the two recurring periods, or with neither.
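A minimal sketch of this clustering step is shown below, with synthetic stand-ins for the posterior samples of one _chunk_; the choice of `min_cluster_size` and the log-space scaling are our own.

```python
import numpy as np
import hdbscan  # McInnes et al. (2017); pip install hdbscan

# Synthetic stand-ins for one chunk's posterior: over-densities near the
# recurring ~2500 d and ~4000 d periodicities, plus unclustered scatter.
rng = np.random.default_rng(1)
P = np.concatenate([rng.normal(2550, 60, 600), rng.normal(3950, 120, 600),
                    rng.uniform(500, 20000, 300)])
K = np.concatenate([rng.normal(8, 1, 600), rng.normal(21, 2, 600),
                    rng.uniform(1, 150, 300)])

# Cluster in (log P, log K) so the two periodicities separate cleanly.
labels = hdbscan.HDBSCAN(min_cluster_size=50).fit_predict(
    np.column_stack([np.log10(P), np.log10(K)]))

for lab in sorted(set(labels) - {-1}):        # label -1 marks noise
    sel = labels == lab
    print(f"cluster {lab}: P = {np.median(P[sel]):.0f} d, "
          f"K = {np.median(K[sel]):.1f} s ({sel.sum()} samples)")
```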
The lower panels of Figure 4 show the clusters of posterior density around 2500 and around 4000 days from runs of kima on different _chunks_ of data6. These are shown as corner plots (Foreman-Mackey, 2016) between the period \(P\) and semi-amplitude \(K\). While there is a cluster of posterior density around these periods in each7 of these runs, the periods and amplitudes of the signals vary between the runs. Footnote 6: tier3 did not show signature of a detectable signal around 2500 days, so only 5 _chunks_ are shown. Footnote 7: tier3 notwithstanding.

The lack of consistency of these signals points to them not being due to a Keplerian LTTE orbit. These signals may then have a non-periodic or quasi-periodic source. If this is the case, then attempting to fit them with strictly periodic Keplerian signals is not ideal. This is exemplified in the upper panel of Figure 4, where we show the best-fitting model from the run where tier1-3 is analysed using a linear ephemeris. The best model, a sum of three Keplerian orbits, is woefully incorrect for the middle section. This shows that Keplerian LTTE models are not only unsuccessful at predicting future eclipse times, they are also unsuccessful at interpolating. In the future, a better approach might be to use Gaussian Processes (Rasmussen & Williams, 2006) to model the shorter eclipse timing variation signals. These tools are particularly good at modelling quasi-periodic signals, for instance stellar activity (in photometry and spectroscopy; Barros et al., 2020; Faria et al., 2016), and would seem appropriate in the case of HW Vir.

#### 4.2.2 Results from analysing the full dataset

We now show the results from an analysis of the full dataset. We allow kima to fit freely up to \(N_{\rm p}=6\) signals along with a quadratic ephemeris. One advantage of using kima is its ability to assess the number of signals present using Bayesian model comparison.

Figure 3: Top: Sensitivity curve for proper motion anomaly applied to HW Vir. Green shows the mean, 1-\(\sigma\) region and 3-\(\sigma\) region of parameter space that could correspond to an orbiting body giving rise to the proper motion anomaly. The coloured dots show locations of claimed solutions by 3 previous analyses as well as the best-fitting solution from this work. The dashed lines show the locations of the hydrogen and deuterium fusing limits. Middle: posterior density histogram of the periods of planets suggested in all \(N_{\rm p}=4\) posterior samples from the analysis of the full dataset. Dashed lines show the locations of the best-fitting 4-planet solution.

In this case a four-signal solution is favoured, as it is the highest number of planets with a significant Bayes Factor over a model with one fewer planet. The respective Bayes Factors can be seen in Table 4. Four signals are more than found by most other analyses, which identify at most two. The two signals already discussed (around 2500 and 4000 days) are both present in the best-fitting solution. We know this because a large fraction of the posterior samples congregate at these two orbital periods (as shown in the middle panel of Figure 3). The other two signals are not nearly as well constrained and do not correspond to any clear over-density in the posterior, likely because these are longer signals that have not had the chance to repeat yet, making their parameters uncertain. Past analyses have regularly identified a signal corresponding to the one we find around 4000 days (e.g. Beuermann et al., 2012; Esmer et al., 2021); however, none have identified a signal near 2500 days.

We do not consider any formal stability arguments.
Many previous studies have found that multiple-planet solutions are unstable, and since we have a strong reason to doubt that the detected signals within the eclipse times are produced by an orbiting planet, we feel a stability analysis is meaningless.

Figure 4: Top: Best-fitting solution from the analysis of _chunk_ tier1-3, which is all the data except that which is between the dashed grey lines. The best-fitting model is shown in red; the full dataset (including the middle _chunk_ that was not included in the fit) is shown in blue. Highlighted in colour are the names and spans of data covered by the other _chunks_. The x-axis is the Epoch \(E\) as in equation 9. Bottom: Posterior density smoothed corner plots showing the period and semi-amplitude of signals found around 2500 days (left) and 4000 days (right). The different colours correspond to the analysis of different _chunks_ of the data.

The upper-left diagram of Figure 5 shows the orbital configuration of planets corresponding to all four signals. The inner two of them are reasonably circular, the outer two are more eccentric, and the outermost crosses the other orbits. Clearly not all four signals can be from orbiting bodies. Ignoring the outermost, eccentric orbit, the lower-left diagram in Figure 5 shows the orbital configuration of planets corresponding to the inner three signals. While these three signals do not cross into each other's orbits, they present a very compact configuration that would likely not be stable either.

The astrometric tangential velocity implies it is more likely than not that there is one orbiting companion to the HW Vir binary. Of the four signals, if one is of planetary nature, we favour the fourth and outermost signal. The analysis reported in Section 4.2.1 casts strong doubts on the inner two signals, since they appear only quasi-periodic. The third signal has too long a period to assess its consistency with the chunking method, but its 'orbital parameters' are similar to those of the inner two signals, with a small amplitude and mild eccentricity. Compared to all others, the outermost signal lies closest to where the median value of the proper motion anomaly predicts (the dark line on Figure 3). We note that this candidate planet signal is of a similar mass and period to components of the solutions by Esmer et al. (2021) and Brown-Sevilla et al. (2021). These all likely correspond to the same signal but vary in orbital period due to the data having not covered multiple cycles yet.

\begin{table} \begin{tabular}{c c} \hline Number of planets compared & Bayes Factor \\ \hline 1 : 0 & \(>1.8\times 10^{308}\) \\ 2 : 1 & \(1.8\times 10^{30}\) \\ 3 : 2 & \(4.0\times 10^{44}\) \\ 4 : 3 & \(1.1\times 10^{5}\) \\ 5 : 4 & 2.1 \\ 6 : 5 & 1.5 \\ \hline \end{tabular} \end{table} Table 4: Bayes Factors produced by kima when analysing the entire eclipse timing dataset.

Figure 5: Orbital configurations shown on the left. Top-left: all four signals from the best-fitting model obtained from the analysis of the whole eclipse timing dataset. Bottom-left: the eccentric outer orbit is removed and the remaining three signals are shown, along with dashed lines showing circular orbits at apo- and peri-centre. The best-fitting model obtained from the analysis of the whole eclipse timing dataset, as well as the data itself, is shown on the right (ephemeris removed). Top-right: all four signals included; the most massive is shown in green and the sum of all four in blue; this signal in green is the one we claim as the most likely candidate for being a planet. The x-axis is the Epoch \(E\) as in equation 9. Bottom-right: the inner three Keplerian functions are shown with the sum of these three in blue. The data is represented with the fourth, large-amplitude signal removed. These three signals are most likely not of planetary nature.

The plots on the right-hand side of Figure 5 show the model curves for each of the signals along with the combined model and data.
The upper-right panel shows the full model in the background and the outer-orbit Keplerian signal in green; the lower-right panel shows the other three individual signals in shades of purple, as well as their sum in the background. The three signals shown together in the lower-right panel are those we claim to be most likely not produced by a planet (especially the two at shorter periods); these might be better modelled together as a Gaussian Process.

While we know that this four-component solution cannot correspond to four orbiting companions, to allow future comparison with our work we still report the parameters of the orbits as if they were real. The parameters are detailed in Table 5. The uncertainties associated with the parameters of the inner two orbits are well-defined, as they are associated with clear clusters of posterior density (clustering using HDBSCAN is also used here). The outer two orbits do not belong to large clusters, so while they can be associated with clusters found by HDBSCAN, the uncertainty on the parameters extracted from these is likely underestimated. This is because there has not been enough data for the signal to repeat; in the case of the outer signal, the data has not even covered a whole phase yet. This also causes a degeneracy between the orbital parameters of the outer signals and the ephemeris terms. To partly address the underestimation, we take two analysis runs with kima, one using a linear ephemeris and one using a quadratic. The uncertainties on the amplitude and period are then taken as the difference between the values from the two models, with the quoted value remaining the value from the analysis with a quadratic ephemeris. This is to keep the whole table representing a coherent solution. Corresponding planetary masses for the outer two signals are then produced in a Monte Carlo way. It should be noted that this is therefore not a statistically derived uncertainty, but a rough representation of the uncertainty from the fitting procedure.
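A sketch of this Monte Carlo propagation is given below, inverting equation 11 for \(m\sin i\) by fixed-point iteration and assuming Gaussian draws of the quoted values for the outermost signal; the iteration scheme and sample sizes are our own choices.

```python
import numpy as np

G, c = 6.674e-11, 2.998e8                       # SI
MSUN, MJUP, DAY = 1.989e30, 1.898e27, 86400.0

def msini_from_K(K, P, M, n_iter=25):
    """Invert eq. (11), K = m sin i / (c (M+m)) (G(M+m)/4pi^2)^(1/3) P^(2/3);
    m sin i appears on both sides through (M+m), so iterate with m ~ m sin i."""
    m = MJUP
    for _ in range(n_iter):
        m = K * c * (M + m) * (G * (M + m) / (4 * np.pi**2)) ** (-1 / 3) * P ** (-2 / 3)
    return m

rng = np.random.default_rng(2)
N = 20_000
K = rng.normal(148.0, 45.0, N)                          # s (signal 4, Table 5)
P = np.clip(rng.normal(15_600.0, 3_500.0, N), 1e3, None) * DAY
M = rng.normal(0.546, 0.009, N) * MSUN                  # binary mass
m = msini_from_K(K, P, M)                               # vectorised over draws
lo, med, hi = np.percentile(m / MJUP, [16, 50, 84])
print(f"m sin i = {med:.1f} (+{hi - med:.1f} / -{med - lo:.1f}) Mjup")
```

With the Table 5 inputs this recovers a median near 17 M\({}_{\rm Jup}\), consistent with the quoted \(M_{4}\sin i_{4}\).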
## 5 Discussion and Conclusion

Our analysis of the eclipse times of HW Vir does not find a single conclusive solution. This is in agreement with past work, since every published solution has subsequently diverged from new data acquired afterwards (e.g. Pulley et al., 2022). We have shown that there are two strong periodicities in the full dataset which are also seen independently in some of the smaller _chunks_. However, though signals can be found near these periods in most of the _chunks_, their posterior distributions in \(P\) and \(K\) are not completely coherent in time, nor statistically consistent with one another. While past analyses have identified the periodicity around 4000 days, none have identified the signal around 2500 days. We have presented our solution from the analysis of the whole eclipse timing dataset, performed in a fully Bayesian way using nested sampling within kima. This 4-component model includes signals at both of the strong periodicities.

It is abundantly clear that not all four of the signals in this model are due to orbiting companions; in fact, it is possible that none of them are. There must likely be some other mechanism involved in the variation of the eclipse times, one possibility being a magnetic effect which is not yet fully understood (e.g. Lanza, 2020). We suggest that using a Keplerian prescription for non-planetary, quasi-periodic signals such as these appear to be is insufficient, and that a Gaussian Process method may work better (as in Faria et al., 2016). We propose that of the four signals, if one is produced by a planet, it is most likely to be the outermost one. This signal also best fits the astrometric evidence and has a signature that looks most different from the other three.

We have applied the proper motion anomaly method to V471 Tau and confirmed the non-detection of a previously proposed orbiting brown dwarf. We have also applied it to HW Vir and shown that there is a tentative 2-\(\sigma\) signal of an acceleration due to an orbiting body. From the upper limit this poses, we can discount some previously proposed companions, which are too massive to be consistent with the proper motion anomaly. Comparing the astrometric signal with the four signals we extract from the eclipse timings in Fig. 3, we find that the outermost signal is the most consistent. If correct, this corresponds to a 17 \(\,\mathrm{M}_{\mathrm{jup}}\), 16 000 day, highly eccentric companion.

\begin{table} \begin{tabular}{l c c} \hline Parameter & Value & Units \\ \hline _Assumed parameters_ & & \\ \(M_{0}+M_{1}\) & \(0.54\pm 0.0089\) & \(\,\mathrm{M}_{\odot}\) \\ \hline _Binary parameters_ & & \\ \(P_{0}\) & \(0.116719590^{+1.0\mathrm{e}-8}_{-1.8\mathrm{e}-8}\) & days \\ \(\dot{P}_{0}\) & \(-1.03\mathrm{e}{-11}^{+4.4\mathrm{e}-12}_{-2.0\mathrm{e}-12}\) & days/day \\ \hline _Keplerian parameters_ & & \\ \(P_{1}\) & \(2612^{+43}_{-44}\) & days \\ \(K_{1}\) & \(7.66^{+0.80}_{-0.81}\) & s \\ \(e_{1}\) & \(<0.1\) & \\ \(\omega_{1}+\phi_{0,1}\)\({}^{\mathrm{a}}\) & \(1.97\pm 0.38\) & rad \\ \(M_{1}(\sin i_{1})\) & \(2.88^{+0.22}_{-0.29}\) & \(\,\mathrm{M}_{\mathrm{Jup}}\) \\ \(P_{2}\) & \(3710^{+58}_{-1.7}\) & days \\ \(K_{2}\) & \(21.2^{+1.7}_{-1.6}\) & s \\ \(e_{2}\) & \(0.089^{+0.03}_{-0.03}\) & \\ \(\omega_{2}\) & \(1.74^{+0.34}_{-0.34}\) & rad \\ \(T_{\mathrm{peri},2}\) & \(2\,442\,500^{+380}_{-350}\) & BJD \\ \(M_{2}(\sin i_{2})\) & \(6.34^{+0.45}_{-0.36}\) & M\({}_{\mathrm{Jup}}\) \\ \(P_{3}\)\({}^{\mathrm{c}}\) & \(8400\pm 1600\) & days \\ \(K_{3}\)\({}^{\mathrm{c}}\) & \(23\pm 10\) & s \\ \(e_{3}\)\({}^{\mathrm{b}}\) & \(<0.45\) & \\ \(\omega_{3}\)\({}^{\mathrm{b}}\) & \(4.5\pm 1.3\) & rad \\ \(T_{\mathrm{peri},3}\)\({}^{\mathrm{b}}\) & \(2\,443\,600\pm 2000\) & BJD \\ \(M_{3}(\sin i_{3})\)\({}^{\mathrm{c}}\) & \(4.0\pm 1.9\) & M\({}_{\mathrm{Jup}}\) \\ \(P_{4}\)\({}^{\mathrm{c}}\) & \(15\,600\pm 3500\) & days \\ \(K_{4}\)\({}^{\mathrm{c}}\) & \(148\pm 45\) & s \\ \(e_{4}\)\({}^{\mathrm{b}}\) & \(0.6867\pm 0.013\) & \\ \(\omega_{4}\)\({}^{\mathrm{b}}\) & \(1.77^{+0.06}_{-1.4}\) & rad \\ \(T_{\mathrm{peri},4}\)\({}^{\mathrm{b}}\) & \(2\,433\,100^{+140}_{-120}\) & BJD \\ \(M_{4}(\sin i_{4})\)\({}^{\mathrm{c}}\) & \(17.4^{+6.5}_{-5.7}\) & M\({}_{\mathrm{Jup}}\) \\ \hline _Other fit parameters_ & & \\ Jitter & \(0.92\pm 0.23\) & s \\ \(T_{0}\) & \(2\,445\,730.556669\) & BJD \\ \hline \end{tabular} \end{table} Table 5: Parameters from the analysis of the full dataset, with a quadratic ephemeris. The Keplerian parameters for each signal are shown as if it were a Keplerian LTTE orbit. \({}^{\mathrm{a}}\) For the circular orbit we combine the \(\omega\) and \(\phi_{0}\) parameters together, since otherwise they are extremely correlated and no information can be gained. \({}^{\mathrm{b}}\) For the two outer signals, the posterior density is not well constrained, so clusters around the best-fitting solutions are used for some of the parameters. We note the uncertainties are likely too small to represent the true uncertainty in the model. \({}^{\mathrm{c}}\) For these periods and amplitudes, the uncertainty is reported as the difference between the value of the parameter when a linear ephemeris is fit and when a quadratic ephemeris is fit. The median of the posterior density cluster from the quadratic ephemeris fit is retained as the quoted value. The mass distribution is then propagated in a Monte Carlo way.

Thanks to additional data, a longer baseline, and an improved astrometric solution, the full epoch astrometry from _Gaia_ will (circa 2025) likely be able to help resolve whether the HW Vir binary is indeed host to an orbiting circumbinary companion.
The _Gaia_ baseline will still be much shorter than the most likely planet's orbital period, so while the whole period would not be covered, astrometry may still tell us whether or not such a planet exists, independently of the ETVs. This will help identify which (if any) of the varying eclipse timing signals is actually caused by that orbiting body. In turn, this will help isolate the functional form of the potential new physics causing the other signals (for instance the 2500 and 4000 day signals). As described in Sahlmann et al. (2015), thanks to _Gaia_'s final solution, other post-common envelope circumbinary systems will be solved astrometrically, with our paper being the first attempt at doing so.

## Acknowledgements

This paper is dedicated to the memory of Tom Marsh, whose kindness, curiosity and many discussions inspired much interest in circumbinary planets, and more specifically in those in post-common envelope systems. We thank Annelies Mortier and Lalitha Siriam for productive discussions and insightful comments on this work. The research leading to these results is supported by two grants from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreements n\({}^{\circ}\) 803193/BEBOP and 951549/UniverScale). The computations described in this paper were performed using the University of Birmingham's BlueBEAR HPC service.

## Data Availability

Proper motion anomaly and eclipse timing data are available from the original publications cited in this work and described in Sections 2 and 4.1. The version of kima used to analyse the eclipse timing data is available at [https://github.com/TomAB99/kima/tree/etvs](https://github.com/TomAB99/kima/tree/etvs)
2309.13612
Digital Twins and the Future of their Use Enabling Shift Left and Shift Right Cybersecurity Operations
Digital Twins (DTs) optimize operations and monitor performance in Smart Critical Systems (SCS) domains like smart grids and manufacturing. DT-based cybersecurity solutions are in their infancy, lacking a unified strategy to overcome challenges spanning the next three to five decades. These challenges include reliable data accessibility from Cyber-Physical Systems (CPS) operating in unpredictable environments. Reliable data sources are pivotal for intelligent cybersecurity operations aided with underlying modeling capabilities across the SCS lifecycle, necessitating a DT. To address these challenges, we propose Security Digital Twins (SDTs) collecting realtime data from CPS, requiring the Shift Left and Shift Right (SLSR) design paradigm for SDT to implement both design time and runtime cybersecurity operations. Incorporating virtual CPS components (VC) in Cloud/Edge, data fusion to SDT models is enabled with high reliability, providing threat insights and enhancing cyber resilience. VC-enabled SDT ensures accurate data feeds for security monitoring for both design and runtime. This design paradigm shift propagates innovative SDT modeling and analytics for securing future critical systems. This vision paper outlines intelligent SDT design through innovative techniques, exploring hybrid intelligence with data-driven and rule-based semantic SDT models. Various operational use cases are discussed for securing smart critical systems through underlying modeling and analytics capabilities.
Ahmad Mohsin, Helge Janicke, Surya Nepal, David Holmes
2023-09-24T11:20:58Z
http://arxiv.org/abs/2309.13612v1
Digital Twins and the Future of their Use Enabling Shift Left and Shift Right Cybersecurity Operations

###### Abstract

Digital Twins (DTs) optimize operations and monitor performance in Smart Critical Systems (SCS) domains like smart grids and manufacturing. DT-based cybersecurity solutions are in their infancy, lacking a unified strategy to overcome challenges spanning the next three to five decades. These challenges include reliable data accessibility from Cyber-Physical Systems (CPS) operating in unpredictable environments. Reliable data sources are pivotal for intelligent cybersecurity operations aided with underlying modeling capabilities across the SCS lifecycle, necessitating a DT. To address these challenges, we propose Security Digital Twins (SDTs) collecting realtime data from CPS, requiring the Shift Left and Shift Right (SLSR) design paradigm for SDT to implement both design time and runtime cybersecurity operations. Incorporating virtual CPS components (VC) in Cloud/Edge, data fusion to SDT models is enabled with high reliability, providing threat insights and enhancing cyber resilience. VC-enabled SDT ensures accurate data feeds for security monitoring for both design and runtime. This design paradigm shift propagates innovative SDT modeling and analytics for securing future critical systems. This vision paper outlines intelligent SDT design through innovative techniques, exploring hybrid intelligence with data-driven and rule-based semantic SDT models. Various operational use cases are discussed for securing smart critical systems through underlying modeling and analytics capabilities.

Security Digital Twins, Cybersecurity Operations, Smart Critical Systems

## I Introduction

Modern smart systems are composed of Operational Technology (OT), with increased integration of IT systems. These systems are collectively referred to as Smart Critical Systems (SCS), and consist of multiple Cyber-Physical Systems (CPS)1. The SCSs manage physical and operational processes across domains like smart manufacturing, autonomous production systems, and critical public infrastructures. Examples of their applications include smart grids, transportation systems, and water systems [1, 2]. SCS facilitate realtime monitoring and control of industrial processes, ensuring safety and operational efficiency in critical environments. The continued convergence of IT/OT systems has significantly increased cybersecurity threats. Specifically, SCSs have emerged as primary targets for cyberattacks, capitalizing on vulnerabilities amplified by the expanded connectivity between OT/IT system components. Therefore, SCSs require cyber-resilient security solutions that can effectively protect these systems over the next three to five decades. Recent reports [3] indicate an escalating threat landscape for SCSs. This is attributed to unpatched vulnerabilities and unpredictable software and hardware supply chains, further compounded by inherent system complexity, insecure design, and operational silos. Footnote 1: A CPS in SCSs has cyber and physical components integrated, such as Supervisory Control and Data Acquisition (SCADA), Programmable Logic Controllers (PLCs)/Remote Terminal Units (RTUs), and Industrial Internet of Things (IIoT). During a cyber attack, system operators often remain unaware of malicious activities [4] for a prolonged time. Earlier detection and containment can significantly reduce the impact of an attack.
Cyber attacks on critical systems are often carried out by Advanced Persistent Threat (APT) groups. These APTs use 'pivoting' tactics to successfully breach networks across SCSs. For example, the 2021 Colonial Oil Pipeline ransomware attack in the US [5], the 2020 SolarWinds software supply chain attack affecting numerous businesses [6], and the 2017 Triton malware-driven attack on a Saudi chemical plant [7] that disrupted safety systems and forced operations to halt all highlight the seriousness of emerging security threats. These attack examples demonstrate that SCSs are under constant threat [8, 9] from actors who successfully target these systems and their complex interactions and interdependencies. International security standards, such as IEC 62443 [10] and publications from the National Institute of Standards and Technology (NIST), including the NIST Cyber Security Framework (CSF) [11], emphasize the importance of integrating security throughout the entire lifecycle of systems, from design to incident detection, response, and recovery. Despite this emphasis on cybersecurity operations implementation and compliance obligations, current security tools like Intrusion Detection (ID) Systems and Security Incident and Event Management Systems (SIEMs) often fall short in capturing core assets, vulnerability scanning and identification, and correlating incidents with threats and vulnerabilities. Consequently, these tools struggle to safeguard critical systems effectively or respond to cyber incidents promptly. Likewise, their enabling technologies, such as Artificial Intelligence (AI) or knowledge-based systems, alone are not capable of managing cybersecurity operations effectively.

Digital Twins (DTs) are promising technological solutions which can effectively tackle the present and anticipated cybersecurity challenges over the next three to five decades of future critical systems. DTs hold potential for risk mitigation, providing platforms for security evaluation through simulations, testing the efficiency of cybersecurity measures, and even predicting realtime security threats. A DT is a virtual counterpart of a physical system that mirrors its behavior, synchronized through data connections, enabling dynamic analysis, predictive insights, and informed decision-making for enhanced operational efficiency and innovation across industries [12]. A DT monitors the physical processes and the environmental and operational parameters of an SCS, with simulations allowing optimization and predictive maintenance analytics [13, 14]. DTs are commonly developed using rule-based, semantic annotation, and Machine Learning (ML)/AI-based approaches to facilitate advanced analysis and simulations.

We propose that just as DTs are effectively employed across Industry 4.0, spanning tasks like asset monitoring and product lifecycle management involving automated and human-assisted interventions, a similar cybersecurity operations strategy can be adapted for SCSs [15]. The main thrust here is to improve the cybersecurity operations of SCSs by employing the Shift Left and Shift Right (SLSR) design paradigm at both design time and runtime, developing their security-driven DTs to cope with system dynamics and complexities while improving cybersecurity operations, enhancing system resilience and trustworthiness at scale. The Security Digital Twin (SDT) is a software-driven solution focused on solving SCS cybersecurity tasks.
An SDT can ensure separation of concerns: cybersecurity operations are first initialized and fixed in a virtual replica and can later be reflected in the SCS without interrupting core business and safety operations. A perspective SDT with enabled cybersecurity operations following the NIST CSF is visualized in Figure 1. Considering the attack on the Saudi chemical plant [7], in which attackers targeted plant safety systems by manipulating OT systems through advanced malware, an SDT could be used to emulate and monitor plant safety operation states, sending realtime instructions to maintain required safety measures if certain events occurred in the DT models of the plant. An SDT could also help reveal an attacker's presence in the network early during a cyber breach, which might otherwise remain undetected for many months. The SDT ensures reliable and synchronized data flows from the CPS to orchestrate behaviors for effective security modeling and analysis. Therefore, physical control components such as PLCs and IIoTs should have more reliable connections and data streams for SDTs to model and simulate their respective behaviors in a DT environment to orchestrate cybersecurity operations. To overcome these challenges, we explore the introduction of Virtual Components (VCs), replacing components of the CPS with virtualised counterparts2 that directly manipulate and control physical processes from Edge or Cloud environments and feed reliable data to SDTs, enabling their seamless integration, design, deployment, and testing for intelligent cybersecurity operations. The VCs, as building blocks in the SDT, then support the cybersecurity of SCSs throughout the lifecycle employing the Shift Left and Shift Right (SLSR) design paradigm. The SLSR design paradigm emphasizes the importance of security starting from the very beginning of system design (Shift Left) and continuing as the systems operate (Shift Right) using the SDT. This ensures SCSs are more resilient and dependable against emerging cyber threats. Footnote 2: Throughout this paper, a virtual CPS component (VC) represents different critical components such as PLCs/RTUs and IIoTs.

We propose innovative SDT-driven cybersecurity operations for future SCSs with the new SLSR design paradigm employing VCs. With this architectural shift in design, we explore and discuss the enhancement of SDT modeling capabilities incorporating a mix of state-based modeling, semantic technologies, Knowledge Graphs (KGs), and emerging ML and generative AI approaches. We also identify research challenges that need to be overcome for the implementation of DT-driven cybersecurity of future systems.

The paper is organized as follows: Section II introduces the concept of virtual twins using SDTs and the SLSR design paradigm. In Section III, core aspects of security-driven DTs are presented as SDTs for vital cybersecurity operations. Section IV explores emerging modeling and simulation methods for building advanced analytics capabilities for cybersecurity using SDTs. Section V covers related techniques, and Section VI outlines research challenges. Finally, Section VII concludes with future research directions.

## II Shift Left and Shift Right Design with Virtual Components

This section introduces the concept of the SLSR design paradigm for SDTs using VCs as building blocks across the SCS security lifecycle. We first establish the rationale for VCs replacing components of the CPS, and then describe how the SLSR design paradigm works with SDTs.
Fig. 1: Security Digital Twins-based cybersecurity operations.

### _Virtual Components_

**Motivation:** The existing class of SCSs consists of a variety of diverse CPS hardware and software components. These components function across different levels following the Purdue model [16], and as such the consistent and reliable interaction of these components, as CPS feeding DTs, is always at risk during the monitoring of core system properties in operation. For example, consider a scenario involving a smart grid, where CPS components such as PLCs play a pivotal role in overseeing and controlling substations. The primary responsibility of these PLCs is the continuous monitoring and control of critical system processes like voltage control and power flows, while communicating critical system information and other related factors as they work in a closed loop with DTs [17, 18]. When addressing the implementation of DTs in SCS, security needs to be considered as the key driver to ensure system resilience. DTs must have secure and consistent connectivity to the CPS for effective monitoring and intelligence gathering. Establishing and maintaining an uninterrupted connection and synchronization of data between the physical and digital counterparts can be a significant challenge. Meaningful data ingestion by the DT is vital for its operational resilience. These challenges persist from the early stages of design, development, testing, and security analysis through to deployment and operationalization.

With the advent of high-performance wireless connectivity using 5G/6G and available protocols3 [19], physical CPS components can be replaced with Virtual Components of similar functionality that have the ability to integrate with SDTs and SCSs. These VCs, hosted in edge or cloud computing environments, reside closely with SDTs and send operational data to DTs based upon reference architectures such as the Purdue model [16]. A VC is a programmed software component that replicates the logic and behavior of the CPS and is integrated with physical components (sensors and actuators) at the device level to directly interact with and control physical processes; refer to Figure 2. The introduction of VCs into SCSs such as smart grids enables data capture from constrained sensors and actuators in the Edge environment [20, 21], addresses interoperability issues for data exchange between the SCS and SDT, and delivers a more structured data framework providing advanced analytics for improved security operations. Footnote 3: Communication and security protocols in use between control devices and field devices, i.e., OPC UA and Modbus for PLCs/RTUs, and HTTPS/MQTT for IIoTs when interacting with sensors and actuators.

With VCs, the likelihood of data loss for the SDT is minimized, and a more reliable source of truth is enabled with increased availability of data to the SDT. The scalability of security operations using VCs is more flexible and easier to manage, as additional components can be added to enrich SDT features to efficiently monitor and support SCS functionality. A VC supporting interactions with these devices and software components can gather contextual information about vulnerabilities and threats, which can be fed into the SDT for improved security operations and is otherwise not available for collection from the traditional CPS.
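As an illustration, a VC can be as small as a software scan cycle that mirrors a PLC (read sensors, apply control logic, drive actuators) while streaming every state snapshot to the SDT. The following Python sketch is purely illustrative: the class, field names, voltage band, and `sdt_feed` callback are our assumptions, not a reference implementation or a real protocol binding.

```python
import json
import random
import time

class VirtualPLC:
    """Minimal Virtual Component sketch: replicate a PLC's scan cycle and
    feed every state snapshot to the SDT as structured JSON."""

    def __init__(self, sdt_feed, v_band=(210.0, 240.0)):
        self.sdt_feed = sdt_feed      # callable receiving JSON state records
        self.v_band = v_band          # permitted grid-voltage band (V)

    def read_sensor(self):
        # Stand-in for a field-device read (Modbus/OPC UA in practice).
        return 225.0 + random.uniform(-20.0, 20.0)

    def scan_cycle(self):
        voltage = self.read_sensor()
        # Control logic: trip the (virtual) breaker outside the safe band.
        breaker_open = not (self.v_band[0] <= voltage <= self.v_band[1])
        state = {"ts": time.time(), "voltage": round(voltage, 2),
                 "breaker_open": breaker_open, "limits": self.v_band}
        self.sdt_feed(json.dumps(state))   # reliable data feed to the SDT
        return state

plc = VirtualPLC(sdt_feed=print)           # print stands in for the SDT link
for _ in range(3):
    plc.scan_cycle()
```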
**Advantages:** (1) More reliable data integration to SDTs enables constant information feeds to build SDT behaviors, with the ability to store historical and realtime data on edge and cloud platforms, enhancing the scalability of SDTs for application in various security use cases which would otherwise not have been possible. (2) The SDT can seamlessly intervene in VCs when certain changes occur or conditions for normal operations are violated and indicated in SDTs. The respective business safety and security operations can be managed by re-adjusting VC parameters and program logic with minimal impact. This reduces overall downtime in case of anomalous behaviors in field devices and physical equipment failures during operations. (3) Remote security operations with VCs can be achieved, where each VC receives alerts and recommendations from an SDT regarding security incidents, with an intervention functionality to improve upon security controls through the implementation of identity and access management, minimizing security threats and improving incident response reaction time compared to an SCS where cyber incursions may remain undetected for months.

Fig. 2: Design Paradigm Shift: Virtual Components based on Security Digital Twins enabling cybersecurity operations with Shift Left and Shift Right.

### _Shift Left and Shift Right Design Paradigm_

Figure 2 outlines the SLSR paradigm for cybersecurity operations in SDTs. Shift left design based on SDTs ensures the integration of security design tasks at design time, enabling early SCS security testing, simulation, and the identification and mitigation of vulnerabilities. In rare cases, the shift left design approach is used for evaluating functional correctness and performing certain simulations and validation using DTs, while shift left design for cybersecurity brings new opportunities for security teams to evaluate system security aspects using SDTs early at design time. The shift left approach alone only covers design-time security using SDTs, while SCSs need to be secured, monitored, and analyzed for their security throughout the lifecycle after they go into the production phase. To this end, we propose the use of shift right, ensuring runtime/operational cybersecurity of SCSs by employing SDT modeling capabilities in various operational cybersecurity use cases. Security testing, monitoring, simulation, and analysis are considered for shift right SDT-based cybersecurity of critical systems. We discuss the potential of SDT-supported shift left and shift right cybersecurity capabilities in the next section.

## III Digital Twins for Cybersecurity Operations

Cybersecurity operations for SCSs require diverse perspectives on overall system security. This facilitates evaluating and preparing for potential threats, vulnerabilities, and attacks, particularly when these systems operate in complex environments. Current SIEMs, together with endpoint detection and IDS tools, offer either limited intrusion detection/prevention capabilities or a very basic ability to monitor specific SCS components in networks. Moreover, these security tools and other state-of-the-art security modeling solutions are unable to augment and analyze data for timely incident response when security incidents occur. An SDT capability offers many opportunities and features for security teams and researchers to mitigate security risks early in the security design phase while ensuring resilience to cyber attacks at runtime.
In an SDT, the fusion of historical data with realtime data offers various multi-dimensional features such as simulation/testing of critical assets, various types of security analytics, security controls optimization, and interventions, depending upon security incident severity and context. Figure 3 depicts VCs integrated with field devices (sensors and actuators) to enable the availability of different types of data states for security-driven DTs. These are used to design intelligent and robust security models supporting broad security use cases by delivering protection, detection, response, and recovery at both design time and runtime of SCSs.

As visualized in Figure 3, the VC obtains physical process operational information through sensor outputs from field devices and feeds this data to the SDTs for cybersecurity operations monitoring. The current state of various physical processes is captured through VCs, which then defines their functional behavior as a collection of states over a period of time within SDTs. The data and process states contain information about critical systems, their subsystems, and interactions (what type of sensor data VCs read and what inputs they send to actuators, as well as to SCADA and HMIs) to carry out physical/business processes. The data can be specified in XML/JSON formats such that it describes the CPS business operation conditions, process control logic, and threshold values for certain operations; for example, the voltage values and power flow of a power generation device in a smart grid should not cross certain thresholds, to ensure safe operations. All this information from VCs and their physical counterparts is used to enrich SDT models. The SDT models are further leveraged with ML and Large Language Model (LLM) capabilities to provide various cybersecurity operations use cases. Similarly, semantic models, knowledge representation, and reasoning capabilities enable hybrid intelligence; see Figure 3 for SLSR SDT-based modeling capabilities. In the following, we discuss various SLSR cybersecurity operations using SDT-driven capabilities.

### _Security Digital Twins-Enabled Systems Security with Shift Left and Shift Right Design Paradigm_

#### III-A1 Protection

The protection of SCSs is vital to safeguard against rising threats and attacks. The protection use cases enabled through SDTs are described in detail below, aligned with the SLSR paradigm.

**Addressing Security Misconfigurations:** The SCS, with diverse components and interconnections, faces many security misconfigurations, such as software-, hardware- and network-related misconfigurations. Since SDTs are virtual replicas of exact system components, they can easily simulate and mimic a component's configuration details. For example, a virtual component of a CPS with an SDT can provide network connection details and communication protocols, and present the way internal components, such as a virtual PLC, interact with sensors and actuators. This way, the SDT can simulate network configurations that are specified and compared with baseline configurations. If network or hardware/software configurations have gone through certain changes, these can be detected in the SDT's simulated environment to identify misconfigurations and fix them, providing security teams with the opportunity to address these issues early in the lifecycle [22].
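A minimal sketch of such a baseline comparison follows; the configuration keys and values are hypothetical, and in practice the baseline would come from the SDT's replica while the observed snapshot would come from the live CPS or its VC.

```python
# Hypothetical baseline (from the SDT replica) vs. observed (from the live
# CPS/VC) configurations; keys and values are illustrative only.
baseline = {"fw_rules": ["allow 10.0.0.0/24:502"], "protocol": "OPC UA",
            "plc_logic_hash": "a3f1", "open_ports": [502]}
observed = {"fw_rules": ["allow 10.0.0.0/24:502", "allow 0.0.0.0/0:23"],
            "protocol": "OPC UA", "plc_logic_hash": "9c42",
            "open_ports": [502, 23]}

def config_drift(baseline, observed):
    """Flag every key whose observed value departs from the baseline."""
    return {k: (baseline.get(k), observed.get(k))
            for k in set(baseline) | set(observed)
            if baseline.get(k) != observed.get(k)}

for key, (expected, actual) in config_drift(baseline, observed).items():
    print(f"MISCONFIGURATION {key}: expected {expected}, observed {actual}")
```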
Within the SDT environment, CPS configurations vital to system security can be tested by generating multiple security scenarios, verifying and evaluating each configuration against certain attacks and exploitable vulnerabilities both at design time and runtime to further enhance the security posture of systems.

**Threat Modeling:** Threat modeling of SCSs is a vital cybersecurity feature for an SDT. An SDT can model and simulate the internal and external interfaces of a CPS through VCs to expose a malicious actor's entry points into the system, helping to identify system vulnerabilities and associated risks, especially at design time. In this way, threat modeling can help explore various interactions between interdependent system components of complex SCSs at early stages of development. For example, threat modeling techniques such as STRIDE and DREAD [23] can be applied to identify vulnerabilities at each interface, both internal (between VCs, field devices, and HMIs/SCADA systems) and external (OT systems providing access to third-party users and applications). The vulnerabilities can then be analyzed and mapped to CVE4 entries for multi-stage attacks on SCSs, where each CVE can be assigned to different attacks. Take the example of PLC-related vulnerabilities found in program logic, which can be mapped to CVE and CWE5 entries for risk ratings and severity levels. The analytics capability in an SDT employing ML [24] in a simulation environment can help visualize asset threats and their interdependencies for addressing associated risks at various stages of critical systems. Both at design time and runtime, SDTs can leverage semantic models and KGs for effectively building threat models showing the relationships among potential threats, vulnerabilities, and their origins.

Footnote 4: Common Vulnerabilities and Exposures, [https://cve.mitre.org/](https://cve.mitre.org/)

Footnote 5: Common Weakness Enumeration, [https://cwe.mitre.org/](https://cwe.mitre.org/)

**Vulnerabilities Fixing:** At design time, SDTs can help fix and patch vulnerabilities through threat modeling and security testing of SCS components early in the lifecycle. This helps security teams minimize potential threats, so that the system carries fewer vulnerabilities when it eventually goes into the operational phase.

**Intrusion Prevention:** With the shift right perspective on operational security, SCSs can be protected against attacks at runtime by employing SDTs. Existing DTs with behavior-specific security patterns using data-driven and semantic technologies [25, 26] provide certain ID capabilities; however, ID alone is not sufficient for the protection of CPS and related components with the increased convergence of IT/OT systems. In relation to the SDT shift right approach, Intrusion Prevention (IP) models must be designed to observe dynamic system states that capture physical system properties, for example via CPS state-space-based methods for behavior change prediction. With these predictions, the SDT can determine whether particular physical process-related sensor values violate threshold values and reach unsafe or undesired states [27]. With the advanced prediction of unwanted behaviors, certain security incidents can be avoided. The SDT can identify relevant assets from these predictions to activate security countermeasures, such as improving security controls around PLCs/HMIs against unauthorized access to safeguard against false data injection and man-in-the-middle types of attacks.
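The sketch below gives a deliberately simplified picture of such a shift right IP check: a one-step linear extrapolation stands in for the state-space-based behavior prediction discussed above, and the threshold and sensor values are invented for illustration.

```python
# Minimal sketch (our simplification, not a specific IP product): the SDT
# extrapolates a sensor trajectory one step ahead and raises a countermeasure
# before the physical process reaches an unsafe threshold.
def predict_next(history: list[float]) -> float:
    """One-step linear extrapolation from the last two observations."""
    if len(history) < 2:
        return history[-1]
    return history[-1] + (history[-1] - history[-2])

def prevent_if_unsafe(history: list[float], threshold: float) -> bool:
    predicted = predict_next(history)
    if predicted > threshold:
        # e.g., tighten access controls around the PLC/HMI and hold actuators
        print(f"predicted value {predicted:.1f} exceeds threshold {threshold}; "
              "activating countermeasures")
        return True
    return False

voltage_readings = [120.0, 124.0, 129.0]  # possible false-data-injection ramp
prevent_if_unsafe(voltage_readings, threshold=132.0)  # 134.0 > 132.0 -> True
```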
#### III-A2 Detection

Intrusion and anomaly detection using SDTs show promise for accurate and reliable detection of security anomalies across physical and virtual components of critical systems [25].

**Intrusion Detection:** By utilizing realtime and historical data repositories from physical and virtual components, intelligent solutions are created to enhance the features of ID systems using SDTs, enabling malware and unauthorized access detection at different levels of the system. For instance, specific unusual patterns, such as changes in system configurations in the SDT, can be observed using knowledge-based detection with static data (historical data) [28] from sensor devices, system logs, and network traffic. The SCS behavior models within the SDT can be updated with ongoing changes in the complex SCS environment to simulate malicious activities and prospective attacks. Similarly, certain rules can be applied to oversee the behavior of Virtual Components for security and safety incidents. By employing ML techniques [29, 30], dynamic ID can be facilitated using both historical and realtime data through behavior refinement. This allows for continuous monitoring of unusual activities from DTs to CPS.

Fig. 3: Security Digital Twins: Cybersecurity Design Time and Runtime Modeling Capabilities with Shift Left and Shift Right (SLSR) for Smart Critical Systems. Virtual components work closely with SDTs while providing realtime and historical data about SCS CPS components. Modeling capabilities for SDTs can be replicated from design time to runtime for improving the cybersecurity of SCSs.

**Digital Twins-based Security Tools:** Shift right security operations features from SIEM and Extended Detection and Response (XDR) tools can be simulated and tested for SCSs before such systems are actually implemented, to assess their suitability for various security use cases. Deploying and testing SIEM tools directly on critical systems is expensive and cumbersome, especially when verifying the correctness of incident detection, alert generation, and response. SIEM and XDR systems can gradually be tested with critical CPS components within SDTs, covering the complete life cycle from event ingestion and pre-processing to detection and mitigation testing in SDTs [31], for better configuration and performance. Once selected security functions with the SDT are validated against selected CPS components and passed through scenarios, the gradual deployment of SIEM/XDR can be carried out across real-world SCSs. This enables the verification of security functions' applicability and avoids security failures in production.

#### III-A3 Response and Recovery

Delayed response and recovery of business and safety-related operations seriously affect an organization's reputation and financial revenues in the event of cybersecurity incidents. From an SCS perspective, response and recovery from cyberattacks are not straightforward. Take the example of the SolarWinds attack: it is believed to have started in 2020 [6], was discovered in late December 2020, and initial response and recovery procedures started in mid-2021. Security Orchestration, Automation, and Response (SOAR), aided with ML and rule-based advanced analytics, is used for incident response activities. However, SOAR applicability for critical systems has challenges with additional OT components, IT/enterprise systems, impacted stakeholders, and business function differences and operational silos, adding technical complexities that lead to delays in security incident response and recovery.
One way to overcome these challenges is to test and evaluate incident response and recovery in actual CPS environments across the OT/IT landscape, which is usually very expensive and nearly impossible. SDTs allow security teams and security analysts to test/simulate various response scenarios [26]. SDTs with enough data from the CPS can be used as effective incident response training platforms for critical systems. A digital twin-based SOAR for critical systems provides innovative ways to respond to and recover from advanced and complex attacks [22].

**Response Scenarios Simulations:** Cyber attack incident response scenarios can be simulated within SDTs before implementing them in the real world. Penetration testing of the CPS is conducted in SDTs to evaluate the impact of various attacks on actual systems and their tailored responses. During an attack, the impacted IT/OT system components can be isolated or quarantined to trace the spread of malicious code or artifacts within the SDT. Security teams can effectively test various containment, remediation, and mitigation strategies within the SDT before these strategies are implemented in actual critical system environments in case of such incidents. During response simulations, discovered security vulnerabilities can be prioritized and fixed through digital twin automated recommendations early in the lifecycle.

## IV Towards Intelligent and Reliable Security Digital Twins

The SDT-driven cybersecurity operations elaborated in the section above use a variety of modeling techniques to bring intelligent security analysis capabilities to smart critical systems. The modeling capabilities of SDTs mostly rely on rule-based, physics-based, and machine-learning methodologies [24, 32, 33]. However, these methodologies are often explored and implemented independently within DTs without cohesive support for various cybersecurity operations in the context of SLSR design. Consequently, the security use cases commonly lack centralized analytics employing integration and automation of SDTs, which is essential for providing comprehensive support to cybersecurity operations. Security researchers and practitioners should adopt innovative approaches for designing the architecture of physical/Digital Twins, enhancing the security intelligence capabilities of SDTs, and enabling dependable and seamless automated support for decision-making throughout the system lifecycle.

**Integrated and Automated Cybersecurity:** An integrated and automated approach leveraging intelligent modeling paradigms such as AI/ML, rule-based, knowledge-based, and ontological approaches can enable innovative cybersecurity operations use cases within SDTs. Following such an approach, integrated security modeling within SDTs can prove useful, where each SDT model uses a variety of CPS-generated data to augment cybersecurity operations. As visualized in Figure 3, realtime and historical data are fed to digital twins for developing SDT models. The data types ingested through physical and virtual components can be further categorized into static descriptive data, dynamic critical assets data, dynamic operational environment data, and semantic data. Each of these data categories is aligned with an SDT modeling capability for cybersecurity operations. These modeling capabilities can be categorized into state-based behavioral security models, ML-based security models, and knowledge-based hybrid security models.
These security models potentially add advanced analytics, simulation, and descriptive security capabilities for cybersecurity operations within SDTs. In the sections below, we explore and discuss the development and enrichment methods of these modeling capabilities for digital twin-driven cybersecurity operations.

### _AI Capability for Security Digital Twins_

Digital Twins based on AI/ML modeling techniques are well suited to leverage big data for advanced analytics, operational intelligence, and optimization. Enhancing SDT-driven cybersecurity operations involves leveraging various ML/DL algorithms and design strategies such as anomaly detection, security prediction modeling, and attack forecasting with state-based and historical data [34]. While several research endeavors have focused on designing DTs for specific security tasks, many techniques primarily rely on static data states, using historical data to train ML models for identifying security anomalies and intrusion detection. Instead of these techniques, we require a bottom-up approach. Here, ML/DL models for SDTs receive realtime and historical data from SCSs, which helps establish baseline behaviors for the CPS. During model training, this data is correlated to detect outliers and predict potential threats and malicious activities; see details in Figure 3. For critical systems, SDTs can collect large amounts of data from CPSs for training AI/ML models, which can be tailored to particular cybersecurity operations by combining classification algorithms, e.g., Support Vector Machines (SVM), Artificial Neural Networks (ANN), Generative Adversarial Networks (GANs), and Gradient Boosting (GB), for intrusion and anomaly detection related cybersecurity tasks [29, 30]. The underlying SDT models can take advantage of AI models to improve security risk management through threat modeling, incident response management, and similar use cases implementing the SLSR approach.

### _Generative AI-based Security Digital Twins_

**Attacks Simulations through Generative AI:** Generative AI-based attack scenario generation using GANs for testing and evaluation of CPS security can assist in determining their ability to resist sophisticated attacks and improve the overall cybersecurity posture. An SDT using generative AI can generate attack simulations representing real-world attackers' tactics. For example, a simulation of a cyber attack targeting CPS components such as an HMI/Engineering Workstation can reveal the tactics, techniques, and procedures used to breach OT systems and can assist security teams in proactively formulating an effective response strategy to mitigate cybersecurity threats and incidents. As depicted in Figure 3, Large Language Models (LLMs) are an emerging type of generative AI [35] used in combination with traditional ML/DL models to evaluate and support security operations using DT technology. Simulating malicious code and its infiltration within the CPS supports meaningful impact analysis of AI-generated code exploiting system vulnerabilities. As a result, gaps/weaknesses in current security controls of SCSs, policies, and vulnerabilities can be preemptively addressed through the utilisation of the SDT.

**Generative AI for Incident Response and Recovery:** By employing LLM-based pre-trained models such as OpenAI ChatGPT [36] and Microsoft Security Copilot6 fine-tuned models into the SDT landscape, security researchers can leverage the intelligence and agility of LLM agents for cybersecurity operations augmentation.
This enhances security team agility by simplifying security operations through LLM SDT agents that interpret and relate the context of security incidents, including VCs and the physical environment. LLM-based simulated behaviors of SCSs are supported by large dataset feeds of realtime and historical observations from physical and virtual counterparts (see Figure 3).

Footnote 6: [https://www.bleepingcomputer.com/news/microsoft/microsoft-brings-gpt-4-powered-security-copilot-to-incident-response/](https://www.bleepingcomputer.com/news/microsoft/microsoft-brings-gpt-4-powered-security-copilot-to-incident-response/)

Furthermore, these LLM-based SDT agent datasets seamlessly incorporate security appliance data sourced from the CPS, encompassing data from firewalls, network logs, SIEM, and EDR logs. This holistic approach provides comprehensive insight into the system architecture and the flow of information traffic, empowering incident response efforts. At scale, security teams can effectively pinpoint and comprehend potential threats by employing natural language queries to ask LLMs about security vulnerabilities, threats, and the overall security stance of critical systems. This, in turn, equips them with the knowledge needed to proactively prepare for and respond to potential attacks. SDTs, powered by LLM security agents, enable concurrent investigation of ongoing attacks for quicker containment, remediation, and mitigation, reducing the mean time to protect, detect, respond, and recover critical systems.

### _Semantic Technologies Capability for Security Digital Twins_

Semantically enriched ontologies and KGs [37] can be leveraged to create an intelligent and context-aware SDT modeling capability through the integration of metadata feeds from CPS devices, sensors, and actuators. A semantically enriched SDT modeling capability is used to analyze and simulate the behavior of the system under different operational circumstances and facilitate meaningful predictions for critical system security concerns.

**Knowledge Representation and Security Modeling:** From a security Digital Twins perspective, ontological reasoning and KGs can further facilitate the execution of complex security analysis using entity resolution, clustering, and classification. Machine reasoning in support of cybersecurity operations can be implemented using rulesets and inference techniques. New threats are identified from the existing knowledge base, which can further be used for in-progress attack detection and a greater understanding of attack paths and pivot points. For security ontology development, ontology languages such as the Resource Description Framework (RDF) and the Web Ontology Language (OWL) are commonly used [37]. SDTs enrich the contextual data of the CPS with the attributes of underlying ontologies on threats and vulnerabilities to develop situation-aware cybersecurity operations. Data models of different types from the CPS, including physical and virtual assets, network logs, and incident reports, can be aggregated to develop multiple security ontologies, each with a specific objective for cybersecurity operations [33].

**Knowledge Graphs for Cybersecurity Operations:** Constructing KGs for SDTs in the context of cybersecurity operations involves combining metadata from various sources to create a structured representation of the relationships, entities, and attributes within the DT environment. Incident detection can be performed intelligently through KG capabilities.
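As a minimal sketch of how such a graph might be assembled and queried, the following snippet uses the rdflib library (one possible RDF toolkit) to link an incident, a virtual component, and a physical asset as triples; the ontology terms under the example.org namespace are hypothetical.

```python
# Minimal sketch (hypothetical ontology names): a security knowledge graph for
# an SDT built with rdflib; incidents, VCs, and assets are linked as triples
# and retrieved with a SPARQL query.
from rdflib import Graph, Namespace, RDF, Literal

SDT = Namespace("http://example.org/sdt#")
g = Graph()

g.add((SDT.incident42, RDF.type, SDT.SecurityIncident))
g.add((SDT.incident42, SDT.observedOn, SDT.plc3))
g.add((SDT.incident42, SDT.relatedCVE, Literal("CVE-2021-22681")))  # illustrative
g.add((SDT.plc3, RDF.type, SDT.VirtualComponent))
g.add((SDT.plc3, SDT.controls, SDT.powerGenUnit1))

# Which physical assets are reachable from any recorded incident?
query = """
SELECT ?incident ?asset WHERE {
    ?incident a <http://example.org/sdt#SecurityIncident> ;
              <http://example.org/sdt#observedOn> ?vc .
    ?vc <http://example.org/sdt#controls> ?asset .
}
"""
for incident, asset in g.query(query):
    print(incident, "->", asset)
```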
Detected cyber incidents within SDTs can be correlated to infer and track the origin of attacks, discover vulnerabilities, and trace their paths across interrelated components using KG data models. The KGs, enriched with CPS states and asset specification data with underlying ontologies, can be used for incident response and management. In this regard, KG-based annotations aided by the MITRE ATT&CK framework provide valuable insights into cyber threat techniques and the respective Indicators of Compromise (IoCs) for mapping attacks on IT/OT systems7. This mapping can be aligned with attack vectors for building automated attack paths, enabling automated IoC identification against each attack tactic by employing semantic modeling and KGs within the SDT capability. These MITRE ATT&CK-based models can then be used in incident playbooks for devising better response and mitigation strategies.

Footnote 7: [https://attack.mitre.org/](https://attack.mitre.org/)

## V Related Techniques

In this section, we evaluate the commonalities and differences of techniques, with their pros and cons, for the cybersecurity operations of critical systems in relation to digital twins. Closely related to SDTs, hardware- and software-based emulators and simulators are used for CPS testing and validating cybersecurity operations [38]. However, such cybersecurity modeling and simulation methods lack the capability to monitor system security operations with realtime synchronization. SDTs with shift left and shift right security testing using intelligent modeling capabilities are both flexible in implementation and effective in operation, remaining consistent throughout the system lifecycle. Cyber ranges are useful platforms for training and for identifying system vulnerabilities, associated threats, and attackers' tactics for the exploitation of vulnerabilities using tailored scenarios [39, 40]. Security defenders can test and validate security hardening techniques by utilizing different security appliances, including EDRs, SIEMs, and firewalls designed to detect and protect against bespoke attacks. In comparison to SDT-based security operations, cyber ranges operate mostly with hybrid system components, only offering security testing and training capabilities, which are mostly manually simulated. In contrast, SDT technology offers automated features such as security-driven behavior state modeling, simulation, replication, and advanced diagnostic analytics support. Security system design, prototyping, deployment, commissioning, and standards compliance phases each receive increased capabilities when using SDTs. The use of SDT technology enhances routine security administrative tasks when performing system security testing and analysis activities. A number of AI/ML modeling approaches [29, 30, 34] have been developed as standalone prototypes as well as integrated into existing security appliances. As discussed in Section IV, these techniques have a certain level of intelligence for security incident prediction and detection. However, unlike an SDT, they do not possess the essential contextual knowledge and situational awareness necessary for the meaningful visualization of cybersecurity incidents, the tracking of incursion sources, and the interaction activity between critical system components. Moreover, standalone AI/ML-based security tools and appliances are not integrated in a way that streamlines security operations with design and operational scenarios to test and evaluate the cybersecurity of SCSs.
SDTs do not necessarily replace existing security appliances and tools, but they enhance the existing ability to understand the complexities of SCSs, with state-based models and semantically enriched KGs improving the understanding of the context of security incidents through simulated analytical techniques which conventional security modeling tools alone cannot achieve.

## VI Research Challenges

While Digital Twins show promise for driving cybersecurity operations, there are challenges in their design, architecture, and enabling technologies. These issues require attention from security researchers and practitioners. In the following, we highlight some of the challenges that demand further exploration and research endeavors from academic and industry practitioners:

**Virtual Components Security:** The addition of VCs suggests many significant security benefits but also introduces associated risks and security architecture challenges. The prospective VCs, embedded in either edge or cloud environments, may expose particular security weaknesses and provide entry points for attackers to gain access to the SCSs if not designed securely. In this regard, security architecture patterns, both for edge- and cloud-hosted DTs, should be designed to cater for the incorporation of security controls at various levels [21]. Specialized security verification and validation methods should be used to evaluate the security of VCs and their environments. Data transfers from field devices to the edge, pre-processing, and further distribution to the cloud and SDTs require the exploration of new data encryption methods across the channels, while the shared responsibility models between edge and cloud should be analyzed and understood by stakeholders.

**Digital Twins Security:** The SLSR integration of SDTs into SCSs introduces certain risks, as the left and right interactions increase the opportunities for adversaries. One of the heightened security risks that an SDT introduces to the CPS is that of additional attack vector vulnerabilities. As a result of its digitalization of the physical hardware, the SDT's digital hardware and software components add to the security exploitation opportunities for malicious actors. For example, through unauthorized access to the SDT of a smart grid, an attacker can gain full visibility of core grid functions and then pivot into access to critical assets. Exploiting security weaknesses in the SDT, a malicious actor could potentially launch advanced attacks on the smart grid, leading to the disruption of essential services and compromising the safety and security of the system. To address these issues, security researchers and system architects should review methods of system security and modeling integration with a view to developing secure cybersecurity frameworks with the implementation of multi-layer protocols prioritizing cybersecurity [15]. Emerging technologies such as Blockchains [41] can be employed for SDT identity and access management across SLSR interfaces to overcome such issues.

**Interoperability and Communication Issues:** The interacting twins, i.e. SDTs and their CPS physical counterparts, use different data formats and protocols for communication. Integration of both twins is a complex and challenging task due to the non-existence of standardized interfaces and governance. This adds performance overheads and delays in data processing and system response times, potentially affecting control and decision making in the CPS via SDTs.
To address these challenges, researchers should focus on developing standardized communication protocols and data representation formats, as well as semantic data integration techniques, to facilitate a common understanding of data. The design of dynamic protocols and middleware solutions can bridge the gap in realtime data synchronization to facilitate better CPS-SDT integration.

**Data and Modeling Issues in Digital Twins:** Using diverse CPS data with SDTs adds data complexity and heterogeneity when training AI/ML and semantic models. In this regard, data curation, i.e. data labeling and distribution with sampling bias, has security implications. The integration of sensor source data and synthetic data should be combined with techniques such as transfer learning and active learning to minimize false positives/false negatives, resulting in more robust models [42]. To address the constraints of limited labeled data, automated methods can be used effectively. This involves generating additional simulated data based on existing labeled data, introducing variations, noise, or anomalies to increase the diversity of the data. The robustness of AI/ML models should be enhanced with emerging semantic and knowledge-based integration enabling context-aware models. These models go beyond basic detection and prediction techniques to include increasingly refined machine learning and machine reasoning technologies. Tailoring techniques to meet the specific requirements of cybersecurity within the digital twin operational environment can meaningfully contribute to the efficient handling of data and improved robust model development.

**Trustworthy Modeling for Digital Twins:** The absence of transparency and interpretability (black box behavior) in current AI/ML algorithms when used in Digital Twins erodes the trust of security teams in the models employed to counteract cyberattacks. Foundational models utilized for prediction, detection, and analysis need to instill a measurable level of confidence among stakeholders regarding their relevance, reliability, and accuracy. However, most of these AI/ML models function as black boxes, offering limited insight into the reasoning behind their output. To address this shortcoming, Explainable AI (XAI) methods [43] should be developed and integrated into cyber defense, attack analysis, and cybersecurity incident response stages. SDTs, empowered by XAI methods, should offer unambiguous transparency, interpretability, and clarity of understanding at each phase of automated cybersecurity operations.

## VII Conclusion and Future Research Directions

In this vision paper, we present innovative concepts for crafting intelligent security DTs, poised to enhance cybersecurity operations by applying the SLSR design paradigm for securing systems. These security-focused Digital Twins aim to bolster trust and resilience within critical smart systems, encompassing IT/OT in critical infrastructure and Industry 4.0 operational domains. Future systems cybersecurity is envisioned with virtual components and the SLSR design paradigm enabling reliable monitoring of CPS, fostering robust security twin model development across the system lifecycle. Future security-driven DTs shall have integrated support for diverse cybersecurity use cases employing AI/ML, semantic structures, and KGs with continuous automation and integration.
By harnessing generative AI combined with KGs and modeling strategies, SDTs shall pave the way for innovative cybersecurity solutions for critical systems of the future. For future research, a number of issues are identified for exploration and development, as outlined in the research challenges section. Data curation, management, and processing for SDT consumption require further application refinement and extensive research. The integration of generative AI for cybersecurity within the context of DTs holds significant promise. However, the trustworthy and responsible design of these AI/ML models requires additional research before they can be seamlessly integrated into security Digital Twins.

## Acknowledgment

The work has been supported by the Cyber Security Research Centre Limited, whose activities are partially funded by the Australian Government's Cooperative Research Centres Program.
2309.00108
Laplacian-Former: Overcoming the Limitations of Vision Transformers in Local Texture Detection
Vision Transformer (ViT) models have demonstrated a breakthrough in a wide range of computer vision tasks. However, compared to the Convolutional Neural Network (CNN) models, it has been observed that the ViT models struggle to capture high-frequency components of images, which can limit their ability to detect local textures and edge information. As abnormalities in human tissue, such as tumors and lesions, may greatly vary in structure, texture, and shape, high-frequency information such as texture is crucial for effective semantic segmentation tasks. To address this limitation in ViT models, we propose a new technique, Laplacian-Former, that enhances the self-attention map by adaptively re-calibrating the frequency information in a Laplacian pyramid. More specifically, our proposed method utilizes a dual attention mechanism via efficient attention and frequency attention while the efficient attention mechanism reduces the complexity of self-attention to linear while producing the same output, selectively intensifying the contribution of shape and texture features. Furthermore, we introduce a novel efficient enhancement multi-scale bridge that effectively transfers spatial information from the encoder to the decoder while preserving the fundamental features. We demonstrate the efficacy of Laplacian-former on multi-organ and skin lesion segmentation tasks with +1.87\% and +0.76\% dice scores compared to SOTA approaches, respectively. Our implementation is publicly available at https://github.com/mindflow-institue/Laplacian-Former
Reza Azad, Amirhossein Kazerouni, Babak Azad, Ehsan Khodapanah Aghdam, Yury Velichko, Ulas Bagci, Dorit Merhof
2023-08-31T19:56:14Z
http://arxiv.org/abs/2309.00108v1
# Laplacian-Former: Overcoming the Limitations of Vision Transformers in Local Texture Detection

###### Abstract

Vision Transformer (ViT) models have demonstrated a breakthrough in a wide range of computer vision tasks. However, compared to the Convolutional Neural Network (CNN) models, it has been observed that the ViT models struggle to capture high-frequency components of images, which can limit their ability to detect local textures and edge information. As abnormalities in human tissue, such as tumors and lesions, may greatly vary in structure, texture, and shape, high-frequency information such as texture is crucial for effective semantic segmentation tasks. To address this limitation in ViT models, we propose a new technique, Laplacian-Former, that enhances the self-attention map by adaptively re-calibrating the frequency information in a Laplacian pyramid. More specifically, our proposed method utilizes a dual attention mechanism via efficient attention and frequency attention, while the efficient attention mechanism reduces the complexity of self-attention to linear while producing the same output, selectively intensifying the contribution of shape and texture features. Furthermore, we introduce a novel efficient enhancement multi-scale bridge that effectively transfers spatial information from the encoder to the decoder while preserving the fundamental features. We demonstrate the efficacy of Laplacian-Former on multi-organ and skin lesion segmentation tasks with +1.87% and +0.76% dice scores compared to SOTA approaches, respectively. Our implementation is publicly available at GitHub.

Keywords: Deep Learning, Texture, Segmentation, Laplacian, Transformer

## 1 Introduction

The recent advancements in Transformer-based models have revolutionized the field of natural language processing and have also shown great promise in a wide range of computer vision tasks [5, 15]. As a notable example, the Vision Transformer (ViT) model utilizes Multi-head Self-Attention (MSA) blocks to globally model the interactions between semantic tokens created by treating local image patches as individual elements [7]. This approach stands in contrast to CNNs, which hierarchically increase their receptive field from local to global to capture a global semantic representation. Nevertheless, recent studies [25, 3] have shown that ViT models struggle to capture high-frequency components of images, which can limit their ability to detect local textures, information that is vital for many diagnostic and prognostic tasks. This weakness in local representation can be attributed to the way in which ViT models process images. ViT models split an image into a sequence of patches and model their dependencies using a self-attention mechanism, which may not be as effective as the convolution operation used in CNN models in extracting local features within receptive fields. This difference in how ViTs and CNNs process images may explain the superior performance of CNN models in local feature extraction [9, 1]. Innovative approaches have been proposed in recent years to address the insufficient local texture representation within Transformer models. One such approach is the integration of CNN and ViT features through complementary methods, aimed at seamlessly blending the strengths of both in order to compensate for any shortcomings in local representation [5, 16].
**Transformers as a Complement to CNNs:** TransUNet [5] is one of the earliest approaches incorporating Transformer layers into the CNN bottleneck to model both local and global dependencies using the combination of CNN and ViT models. Heidari et al. [12] proposed a novel solution called HiFormer, which leverages a Swin Transformer module and a CNN-based encoder to generate two multi-scale feature representations, which are then integrated via a Double-Level Fusion module. UNETR [11] used a Transformer to create a powerful encoder with a CNN decoder for 3D medical image segmentation. By bridging the CNN-based encoder and decoder with a Transformer, CoTr [31] and TransBTS [27] improved segmentation performance in low-resolution stages. Despite these advances, there remain some limitations in these methods, such as computational inefficiency (e.g., the TransUNet model), the requirement of a heavy CNN backbone (e.g., HiFormer), and the lack of consideration for multi-scale information. These limitations have resulted in less effective network learning in the field of medical image segmentation.

**New Attention Models:** The redesign of the self-attention mechanism within pure Transformer models is another method aiming to augment feature representation and ultimately enhance local feature representation [14]. In this direction, Swin-Unet [4] utilizes the linear-computational-complexity Swin Transformer [18] block in a U-shaped structure as a multi-scale backbone. MISSFormer [13], besides exploring the Efficient Transformer [30] counterpart to diminish the parameter overhead of vision transformers, applies a non-invertible down-sampling operation on the input of its transformer blocks to reduce parameters. D-Former [29] is a pure transformer-based pipeline that comprises a double attention module to capture locally fine-grained attention and interaction with different units in a dilated manner through its mechanism.

**Drawbacks of Transformers:** Recent research has revealed that traditional self-attention mechanisms, while effective in addressing local feature discrepancies, have a tendency to overlook important high-frequency information such as texture and edge details [26]. This is especially problematic for tasks like tumor detection and cancer-type identification through radiomics analysis, as well as treatment response assessment, where abnormalities often manifest in texture. Moreover, self-attention mechanisms have a quadratic computational complexity and may produce redundant features [23].

**Our Contributions:** We propose Laplacian-Former, a novel approach that includes a new efficient attention (EF-ATT) consisting of two sub-attention mechanisms: _efficient attention_ and _frequency attention_. The efficient attention mechanism reduces the complexity of self-attention to linear while producing the same output. The frequency attention mechanism is modeled using a Laplacian pyramid to selectively emphasize the contribution of each frequency band. Then, a parametric frequency attention fusion strategy balances the importance of shape and texture features by recalibrating the frequency features. These two attention mechanisms work in parallel. We also introduce a novel efficient enhancement multi-scale bridge that effectively transfers spatial information from the encoder to the decoder while preserving the fundamental features.
Our method not only alleviates the aforementioned problems of the traditional self-attention mechanism, but also surpasses all of its counterparts in terms of different evaluation metrics for medical image segmentation tasks.

## 2 Methods

In our proposed network, illustrated in Figure 1, an input image \(X\in\mathbb{R}^{H\times W\times C}\) with spatial dimensions \(H\) and \(W\), and \(C\) channels, is first passed through a patch embedding module to obtain overlapping patch tokens of size \(4\times 4\) from the input image. The proposed model comprises four encoder blocks, each containing two efficient enhancement Transformer layers and a patch merging layer that downsamples the features by merging \(2\times 2\) patch tokens and increasing the channel dimension. The decoder is composed of three efficient enhancement Transformer blocks and four patch-expanding blocks, followed by a segmentation head to retrieve the final segmentation map. Laplacian-Former then employs a novel efficient enhancement multi-scale bridge to capture local and global correlations of different scale features and effectively transfer the underlying features from the encoder to the decoder.

Figure 1: Architecture of our proposed Laplacian-Former.

### Efficient Enhancement Transformer Block

In medical imaging, it is important to distinguish different structures and tissues, especially when tissue boundaries are ill-defined. This is often the case for accurate segmentation of small abnormalities, where high-frequency information plays a critical role in defining boundaries by capturing both textures and edges. Inspired by this, we propose an Efficient Enhancement Transformer Block that incorporates an Efficient Frequency Attention (EF-ATT) mechanism to capture contextual information of an image while recalibrating the representation space within an attention mechanism and recovering high-frequency details. Our efficient enhancement Transformer block first takes a LayerNorm (LN) of the input \(x\). Then it applies the EF-ATT mechanism to capture contextual information and selectively include various types of frequency information, using the Laplacian pyramid to balance the importance of shape and texture features. Next, \(x\) and diversity-enhanced shortcuts are added to the output of the attention mechanism to increase the diversity of features. It is proved in [24] that as Transformers become deeper, their features become less varied, which restrains their representation capacity and prevents them from attaining optimal performance. To address this issue, we have implemented an _augmented shortcut_ method from [10], a Diversity-Enhanced Shortcut (DES), employing a Kronecker decomposition-based projection. This approach involves inserting additional paths with trainable parameters alongside the original shortcut \(x\), which enhances feature diversity and improves performance while requiring minimal hardware resources. Finally, we apply LayerNorm and MiX-FFN [30] to the resulting feature representation to enhance its power. This final step completes our efficient enhancement Transformer block, as illustrated in Figure 2.
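A minimal PyTorch-style sketch of the block's control flow as just described follows; the EF-ATT, DES, and MiX-FFN submodules are passed in as placeholders since their internals are detailed elsewhere, so the wiring, not the submodules, is what the sketch asserts.

```python
# Minimal sketch of the efficient enhancement Transformer block's wiring.
import torch
import torch.nn as nn

class EnhancementBlock(nn.Module):
    """LN -> EF-ATT, plus the plain and diversity-enhanced shortcuts,
    followed by LN -> MiX-FFN with a residual connection."""

    def __init__(self, dim, attn, des, ffn):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.attn, self.des, self.ffn = attn, des, ffn

    def forward(self, x):
        # x: (batch, tokens, dim)
        y = self.attn(self.norm1(x)) + x + self.des(x)  # attention + shortcuts
        return self.ffn(self.norm2(y)) + y              # MiX-FFN with residual

# Identity stand-ins keep the sketch runnable without the paper's internals.
block = EnhancementBlock(64, nn.Identity(), nn.Identity(), nn.Identity())
out = block(torch.randn(2, 196, 64))  # -> shape (2, 196, 64)
```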
### Efficient Frequency Attention (EF-ATT)

The traditional self-attention block computes the attention score \(S\) using query (**Q**) and key (**K**) values, normalizes the result using Softmax, and then multiplies the normalized attention map with value (**V**): \[S(\textbf{Q},\textbf{K},\textbf{V})=Softmax\left(\frac{\textbf{Q}\textbf{K}^{\textbf{T}}}{\sqrt{d_{k}}}\right)\textbf{V}, \tag{1}\] where \(d_{k}\) is the embedding dimension. One of the main limitations of the dot-product mechanism is that it generates redundant information, resulting in unnecessary computational complexity. Shen et al. [23] proposed to represent the context more effectively by reducing the computational burden from \(\mathcal{O}(n^{2})\) to the linear form \(\mathcal{O}(d^{2}n)\): \[E(\textbf{Q},\textbf{K},\textbf{V})=\rho_{\textbf{q}}(\textbf{Q})\left(\rho_{\textbf{k}}(\textbf{K})^{\textbf{T}}\textbf{V}\right). \tag{2}\] Their approach involves applying the Softmax function (\(\rho\)) to the key and query vectors to obtain normalized scores and formulating the global context by multiplying the key and value matrices. They demonstrate that efficient attention \(E\) can provide an equivalent representation of self-attention while being computationally efficient. By adopting this approach, we can alleviate the issues of feature redundancy and computational complexity associated with self-attention.

Figure 2: The structure of our frequency enhancement Transformer block.

Wang et al. [26] explored another major limitation of the self-attention mechanism, where they demonstrated through theoretical analysis that self-attention operates as a low-pass filter that erases high-frequency information, leading to a loss of feature expressiveness in the model's deep layers. The authors found that the Softmax operation causes self-attention to keep low-frequency information and lose fine details. Motivated by this, we propose a new frequency recalibration technique to address the limitations of self-attention, which only focuses on low-frequency information (which contains shape information) while ignoring the higher frequencies that carry texture and edge information. First, we construct a Laplacian pyramid to determine the different frequency levels of the feature maps. The process begins by extracting \((L+1)\) Gaussian representations from the encoded feature using different variance values of the Gaussian function: \[\mathbf{G}_{l}(\mathbf{X})=\mathbf{X}*\frac{1}{\sigma_{l}\sqrt{2\pi}}e^{-\frac{i^{2}+j^{2}}{2\sigma_{l}^{2}}}, \tag{3}\] where \(\mathbf{X}\) refers to the input feature map, \((i,j)\) corresponds to the spatial location within the encoded feature map, the variable \(\sigma_{l}\) denotes the variance of the Gaussian function for the \(l\)-th scale, and the symbol \(*\) represents the convolution operator. The pyramid is then built by subtracting the \((l+1)\)-th Gaussian output from the \(l\)-th output (\(\mathbf{G}_{l}-\mathbf{G}_{l+1}\)) to encode frequency information at different scales. The Laplacian pyramid is composed of multiple levels, each level containing distinct types of information. To ensure a balanced distribution of low and high-frequency information in the model, it is necessary to efficiently aggregate the features from all levels of the frequency domain.
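The following sketch (our own simplification) renders Eq. (2) and Eq. (3) in PyTorch: a linear-complexity efficient attention and a depthwise Gaussian pyramid whose consecutive differences form the Laplacian levels; the kernel size, variances, and tensor shapes are illustrative choices, not the paper's exact settings.

```python
# Minimal sketch of linear efficient attention (Eq. 2) and a Laplacian
# pyramid over feature maps (Eq. 3); shapes and variances are illustrative.
import torch
import torch.nn.functional as F

def efficient_attention(q, k, v):
    # q, k, v: (batch, tokens, dim). Softmax over tokens for K and over the
    # feature dim for Q, as in Eq. (2); cost is linear in the token count.
    context = torch.softmax(k, dim=1).transpose(1, 2) @ v  # (batch, dim, dim)
    return torch.softmax(q, dim=-1) @ context              # (batch, tokens, dim)

def gaussian_kernel(sigma, size=5):
    ax = torch.arange(size, dtype=torch.float32) - size // 2
    xx, yy = torch.meshgrid(ax, ax, indexing="ij")
    kern = torch.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return kern / kern.sum()

def laplacian_pyramid(x, sigmas=(1.0, 2.0, 4.0)):
    # x: (batch, channels, H, W); band-pass levels are G_l - G_{l+1}
    c = x.shape[1]
    blurred = []
    for s in sigmas:
        k = gaussian_kernel(s).repeat(c, 1, 1, 1)            # depthwise kernel
        blurred.append(F.conv2d(x, k, padding=2, groups=c))  # G_l(X)
    return [blurred[l] - blurred[l + 1] for l in range(len(sigmas) - 1)]

feats = torch.randn(2, 8, 32, 32)
bands = laplacian_pyramid(feats)  # two band-pass levels: G_0-G_1, G_1-G_2
attn = efficient_attention(*(torch.randn(2, 196, 64) for _ in range(3)))
```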
Hence, we present a frequency attention that multiplies the key and value of each level (\(\mathbf{X}_{l}\)) to calculate the attention score and then fuses the resulting attention scores of all levels using a fusion module, which performs summation. The resulting attention score is multiplied by the query (\(\mathbf{Q}\)) to obtain the final frequency attention result, which is subsequently concatenated with the efficient attention result; a depth-wise convolution with a kernel size of \(2\times 1\times 1\) then aggregates both sources of information and recalibrates the feature map, thus allowing for the retrieval of high-frequency information.

### Efficient Enhancement Multi-scale Bridge

It is widely known that effectively integrating multi-scale information can lead to improved performance [13]. Thus, we introduce the Efficient Enhancement Multi-scale Bridge as an alternative to simply concatenating the features from the encoder and decoder layers. The proposed bridge, depicted in Figure 1, delivers spatial information to each decoder layer, enabling the recovery of intricate details while generating output segmentation masks. In this approach, we aim to calculate the efficient attention mechanism for each level and fuse the multi-scale information in their context; thus, it is important that all levels' embedding dimensions are of the same size. Therefore, in order to calculate the global context (\(\mathbf{G}_{i}\)), we parametrize the key and value of each level using a \(1\times 1\) convolution that takes \(mC\) channels as input and outputs \(C\) channels, where \(m\) equals 1, 2, 5, and 8 for the first to fourth levels, respectively. We multiply the new key and value with each other to attain the global context. We then use a summation module to aggregate the global contexts of all levels and reshape the query for matrix multiplication with the augmented global context. Taking the second level with the dimension of \(\frac{H}{8}\times\frac{W}{8}\times 2C\), the key and value are mapped to \((\frac{H}{8}\frac{W}{8})\times C\), and the query to \((2\frac{H}{8}\frac{W}{8})\times C\). The augmented global context with the shape of \(C\times C\) is then multiplied by the query, resulting in an enriched feature map with the shape of \((2\frac{H}{8}\frac{W}{8})\times C\). We reshape the obtained feature map into \(\frac{H}{8}\times\frac{W}{8}\times 2C\) and feed it through an LN and MiX-FFN module with a skip connection to empower the feature representations. The resulting output is combined with the expanded feature map, and then projected using a linear layer onto the same size as the encoder block corresponding to that level.

## 3 Results

Our proposed technique was developed using the PyTorch library and executed on a single RTX 3090 GPU. A batch size of 24 and a stochastic gradient descent algorithm with a base learning rate of 0.05, a momentum of 0.9, and a weight decay of 0.0001 were utilized during the training process, which was carried out for 400 epochs. For the loss function, we used both cross-entropy and Dice losses (\(Loss=\gamma\cdot L_{dice}+(1-\gamma)\cdot L_{ce}\)), with \(\gamma\) set to 0.6 empirically.

**Datasets:** We tested our model using the _Synapse_ dataset [17], which comprises 30 cases of contrast-enhanced abdominal clinical CT scans (a total of 3,779 axial slices). Each CT scan consists of 85 \(\sim\) 198 slices of the in-plane size of \(512\times 512\) and has annotations for eight different organs.
We followed the same data preparation preferences as [5]. We also followed the experiments of [2, 8] to evaluate our method on the ISIC 2018 skin lesion dataset [6] with 2,694 images.

**Synapse Multi-Organ Segmentation:** Table 1 presents a comparison of our proposal with previous SOTA methods using the DSC and HD metrics across eight abdominal organs. Laplacian-Former clearly outperforms SOTA CNN-based methods. We extensively evaluated EfficientFormer (EffFormer) plus another variant of Laplacian-Former without utilizing the bridge connections to endorse the superiority of Laplacian-Former.

Table 1: Comparison with previous SOTA methods on the _Synapse_ dataset (number of parameters, average DSC and HD, and per-organ DSC for aorta, gallbladder, left/right kidney, liver, pancreas, spleen, and stomach).

Laplacian-Former exhibits superior learning ability on the Dice score metric compared to other transformer-based models, achieving an increase of +1.59% and +2.77% in Dice scores compared to HiFormer and Swin-Unet, respectively. Figure 3 illustrates qualitative results of our method for different organ segmentations; specifically, we can observe that Laplacian-Former produces precise boundary segmentations for the gallbladder, liver, and stomach. It is noteworthy that our pipeline, as a pure transformer-based architecture trained from scratch without pretrained weights, outperforms all previously presented network architectures.

**Skin Lesion Segmentation:** Table 2 shows the comparison results of our proposed method, Laplacian-Former, against leading methods on the skin lesion segmentation benchmark. Our approach outperforms the other competitors across most evaluation metrics, indicating its excellent generalization ability across different datasets. In particular, our approach performs better than hybrid methods such as TMU-Net [19] and pure transformer-based methods such as Swin-Unet [4]. Our method achieves superior performance by utilizing frequency attention at pyramid scales to model local textures. Specifically, our frequency attention emphasizes the fine details and texture characteristics that are indicative of skin lesion structures and amplifies regions with significant intensity variations, thus accentuating the texture patterns present in the image and resulting in better performance. In addition, we provide the spectral response of Laplacian-Former vs. the standard Transformer at identical layers in Table 2. It is evident that the frequency response of the standard design attenuates more in the deep layers of the structure than that of Laplacian-Former, a visual endorsement of Laplacian-Former's ability to preserve high-frequency details. The supplementary material provides more visualization results.
Figure 3: Segmentation results of the proposed method on the _Synapse_ dataset. Our Laplacian-Former shows finer boundaries (high-frequency details) for the region of the stomach and fewer false positive predictions for the pancreas.

## 4 Conclusion

In this paper, we introduce Laplacian-Former, a novel standalone transformer-based U-shaped architecture for medical image analysis. Specifically, we address the transformer's inability to capture local context in the form of high-frequency details, e.g., edges and boundaries, by developing a new design within the scaled dot-product attention block. Our pipeline benefits from the multi-resolution Laplacian module to compensate for the lack of frequency attention in transformers. Moreover, while our design takes advantage of the efficiency of transformer architectures, it keeps the parameter count low.
2309.09892
Chaotic properties for billiards in circular polygons
We study billiards in domains enclosed by circular polygons. These are closed $C^1$ strictly convex curves formed by finitely many circular arcs. We prove the existence of a set in phase space, corresponding to generic sliding trajectories close enough to the boundary of the domain, in which the return billiard dynamics is semiconjugate to a transitive subshift on infinitely many symbols that contains the full $N$-shift as a topological factor for any $N \in \mathbb{N}$, so it has infinite topological entropy. We prove the existence of uncountably many asymptotic generic sliding trajectories approaching the boundary with optimal uniform linear speed, give an explicit exponentially big (in $q$) lower bound on the number of $q$-periodic trajectories as $q \to \infty$, and present an unusual property of the length spectrum. Our proofs are entirely analytical.
Andrew Clarke, Rafael Ramírez-Ros
2023-09-18T15:55:52Z
http://arxiv.org/abs/2309.09892v2
# Chaotic properties of billiards in circular polygons ###### Abstract We study billiards in domains enclosed by circular polygons. These are closed \(C^{1}\) strictly convex curves formed by finitely many circular arcs. We prove the existence of a set in phase space, corresponding to generic sliding trajectories close enough to the boundary of the domain, in which the return billiard dynamics is semiconjugate to a transitive subshift on infinitely many symbols that contains the full \(N\)-shift as a topological factor for any \(N\in\mathbb{N}\), so it has infinite topological entropy. We prove the existence of uncountably many asymptotic generic sliding trajectories approaching the boundary with optimal uniform linear speed, give an explicit exponentially big (in \(q\)) lower bound on the number of \(q\)-periodic trajectories as \(q\to\infty\), and present an unusual property of the length spectrum. Our proofs are entirely analytical. **Keywords:** Billiards, circular polygons, symbolic dynamics, periodic trajectories, length spectrum ## 1 Introduction A _billiard problem_ concerns the motion of a particle inside the domain bounded by a closed plane curve \(\Gamma\) (or the domain bounded by a hypersurface of some higher-dimensional Euclidean space). The motion in the interior of the domain is along straight lines, with elastic collisions at the boundary according to the optical law of reflection: the angles of incidence and reflection are equal. These dynamical systems were first introduced by Birkhoff [9]. See [41, 57, 18] for a general description. In the case of dispersing billiards (i.e. when the boundary is a union of concave components), the dynamics is chaotic [55]; indeed, such billiards exhibit ergodicity, the Bernoulli property, sensitive dependence on initial conditions, and so forth. In fact, it was believed for some years that billiards without any dispersing walls could not display chaos. Thus it came as a surprise when Bunimovich, in his famous paper, detailed a proof that the billiard in a stadium exhibits the Bernoulli property [12]. The boundary of the stadium billiard consists of two straight parallel lines connected at either end by semicircles. Stadia are \(C^{1}\) and convex, but not \(C^{2}\) or strictly convex. We study the class of \(C^{1}\)_strictly_ convex billiards bounded by finitely many circular arcs. No billiard in this class satisfies the celebrated _B-condition_--that is, all circular arcs can be completed to a disk within the billiard domain--, which is the hallmark for the _defocusing mechanism_ in billiards whose focusing boundaries are all circular arcs [18, Section 8.3]. In spite of it, we observe several chaotic phenomena. A closed curve \(\Gamma\) in \(\mathbb{R}^{2}\) is called a _circular polygon_ if it is a union of a finite number of circular arcs such that \(\Gamma\) is \(C^{1}\) and strictly convex. We do not consider circumferences as circular polygons. A _circular \(k\)-gon_ is a circular polygon with exactly \(k\) arcs. If \(\Gamma\) is a circular \(k\)-gon, then \(k\geq 4\). The phase space \(\mathcal{M}\) is a 2-dimensional cylinder; the circular component of the cylinder is a parameter on the curve \(\Gamma\), and the height component is the angle of incidence/reflection. We denote by \(f:\mathcal{M}\to\mathcal{M}\) the billiard map (i.e. the collision map of the billiard flow with the boundary \(\Gamma\) of the domain; see Section 4 for a precise definition). In what follows we give heuristic statements of our main results. 
**Theorem A**.: _If \(\Gamma\) is a circular polygon, then there is a set \(\mathcal{J}\subset\mathcal{M}\) accumulating on \(\partial\mathcal{M}\) such that the return map \(F:\mathcal{J}\to\mathcal{J}\) of \(f\) to \(\mathcal{J}\) is topologically semiconjugate to a transitive subshift on infinitely many symbols that contains the full \(N\)-shift as a topological factor for any \(N\in\mathbb{N}\), so it has infinite topological entropy._ See Proposition 27 in Section 5 and Theorem 31 in Section 6 for a precise formulation of this theorem. Be aware that the map with infinite entropy is the return map \(F\), not the billiard map \(f\). The _final sliding motions_ are the possible qualitative behaviors that a sliding billiard trajectory possesses as the number of impacts tends to infinity, forward or backward. Every forward counter-clockwise sliding billiard trajectory \((\varphi_{n},\theta_{n})=f^{n}(\varphi,\theta)\), where \(\varphi_{n}\) are the angles of impact and \(\theta_{n}\) are the angles of incidence/reflection, belongs to exactly one of the following three classes: * _Forward bounded_ (\(\mathcal{B}^{+}_{0}\)): \(\inf_{n\geq 0}\theta_{n}>0\); * _Forward oscillatory_ (\(\mathcal{O}^{+}_{0}\)): \(0=\liminf_{n\rightarrow+\infty}\theta_{n}<\limsup_{n\rightarrow+\infty} \theta_{n}\); and * _Forward asymptotic_ (\(\mathcal{A}^{+}_{0}\)): \(\lim_{n\rightarrow+\infty}\theta_{n}=0\). This classification also applies to backward counter-clockwise sliding trajectories when \(n\leq-1\) and \(n\rightarrow-\infty\), in which case we write a superindex \(-\) instead of \(+\) in each of the classes: \(\mathcal{B}^{-}_{0}\), \(\mathcal{O}^{-}_{0}\) and \(\mathcal{A}^{-}_{0}\). And it also applies to (backward or forward) clockwise sliding trajectories, in which case we replace \(\theta_{n}\) with \(|\theta_{n}-\pi|\) in the definitions above and we write a subindex \(\pi\) instead of \(0\) in each of the classes: \(\mathcal{B}^{\pm}_{\pi}\), \(\mathcal{O}^{\pm}_{\pi}\) and \(\mathcal{A}^{\pm}_{\pi}\). The terms _bounded_ and _oscillatory_ are borrowed from Celestial Mechanics. See, for instance, [32]. In our billiard setting, bounded means bounded away from \(\theta=\pi\) in the clockwise case and bounded away from \(\theta=0\) in the counter-clockwise case. That is, a sliding billiard trajectory is bounded when it does not approach \(\partial\mathcal{M}\). The following corollary is an immediate consequence of Theorem A, see Section 6. **Corollary B**.: _If \(\Gamma\) is a circular polygon, then \(\mathcal{X}^{-}_{\lambda}\cap\mathcal{Y}^{+}_{\lambda}\neq\emptyset\) for \(\mathcal{X},\mathcal{Y}\in\{\mathcal{B},\mathcal{O},\mathcal{A}\}\) and \(\lambda\in\{0,\pi\}\)._ From now on, we focus on counter-clockwise sliding trajectories. Corollary B does not provide data regarding the maximal _speed_ of diffusion for asymptotic trajectories. What is the fastest way in which \(\theta_{n}\to 0\) for asymptotic sliding trajectories? The answer is provided in the following theorem. **Theorem C**.: _If \(\Gamma\) is a circular polygon, then there are uncountably many asymptotic generic sliding billiard trajectories that approach the boundary with uniform linear speed. That is, there are constants \(0<a<b\) such that if \(\{\theta_{n}\}_{n\in\mathbb{Z}}\) is the corresponding sequence of angles of incidence/reflection of any of these uncountably many asymptotic generic sliding billiard trajectories, then_ \[a|n|\leq 1/\theta_{n}\leq b|n|,\qquad\forall|n|\gg 1.\] _Linear speed is optimal. 
That is, there is no billiard trajectory such that_

\[\lim_{n\rightarrow+\infty}n\theta_{n}=0.\]

See Theorem 35 in Section 7, where we also get uncountably many one-parameter families (paths) of forward asymptotic generic sliding billiard trajectories, for a more detailed version of Theorem C. The term _sliding_ in Theorem C means that our trajectories do not skip any arc in any of their infinite turns around \(\Gamma\). The definition of _generic_ billiard trajectories is a bit technical, see Definition 13 and Remark 14. The term _uniform_ means that the constants \(0<a<b\) do not depend on the billiard trajectory. The term _linear_ means that \(1/\theta_{n}\) is bounded between two positive multiples of \(|n|\). _Optimality_ comes as no surprise, since \(\sum_{n\geq 0}\theta_{n}=+\infty\) for any billiard trajectory in any circular polygon--or, for that matter, in any strictly convex billiard table whose billiard flow is defined for all time [33]. Optimality is proved in Proposition 36 in Section 7.

There are two key insights (see Section 4) behind this theorem. First, when we iterate \(f\) along one of the circular arcs of the circular polygon \(\Gamma\), the angle of reflection \(\theta\) is constant, so \(\theta\) can drop only when the trajectory crosses the singularities between consecutive circular arcs. Second, the maximal drop corresponds to multiplying \(\theta\) by a uniform (in \(\theta\)) factor smaller than one. As we must iterate the map many (order \(\theta^{-1}\)) times to fully slide along each circular arc, we cannot approach the boundary with a faster than linear speed.

As Theorem A gives us only a topological _semi_conjugacy to symbolic dynamics, it does not immediately provide us with the abundance of periodic orbits that the shift map possesses. However, our techniques enable us to find many periodic sliding billiard trajectories. We state in the following theorem that the number of such trajectories in circular polygons grows exponentially with respect to the period. In contrast, Katok [38] showed that the numbers of isolated periodic billiard trajectories and of parallel periodic billiard trajectories grow subexponentially in any (linear) polygon.

Given any integers \(1\leq p<q\), let \(\Pi(p,q)\) be the set of \((p,q)\)-periodic billiard trajectories in \(\Gamma\). That is, the set of periodic trajectories that close after \(p\) turns around \(\Gamma\) and \(q\) impacts in \(\Gamma\), so they have rotation number \(p/q\). Let \(\Pi(q)=\cup_{1\leq p<q}\Pi(p,q)\) be the set of periodic billiard trajectories with period \(q\). The symbol \(\#\) denotes the _cardinality_ of a set.

**Theorem D**.: _If \(\Gamma\) is a circular \(k\)-gon and \(p\in\mathbb{N}\), there are constants \(c_{\star}(p),M_{\star},h_{\star}>0\) such that_

* \(\#\Pi(p,q)\geq c_{\star}(p)q^{kp-1}+\mathrm{O}(q^{kp-2})\) _as_ \(q\to\infty\) _for any fixed_ \(p\in\mathbb{N}\)_; and_
* \(\#\Pi(q)\geq M_{\star}\mathrm{e}^{h_{\star}q}/q\) _as_ \(q\to+\infty\)_._

We give explicit expressions for \(c_{\star}(p)\), \(M_{\star}\) and \(h_{\star}\) in Section 8. The optimal value of \(c_{\star}(p)\) is equal to the volume of a certain \((kp-1)\)-dimensional compact convex polytope with an explicitly known half-space representation, see Proposition 39. We do not give optimal values of \(M_{\star}\) and \(h_{\star}\). The relation between the optimal value of \(h_{\star}\) and the topological entropy of the billiard map \(f\) is an open problem.
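The exponent \(kp-1\) in Theorem D has a simple lattice-counting origin (made precise in Section 8): admissible symbol sequences correspond to integer points with coordinate sum \(q\) in a polytope that dilates linearly with \(q\). The toy Python computation below is ours, and the hypercube standing in for the actual polytope is an assumption for illustration only; it shows the resulting \(q^{d-1}\) growth in dimension \(d\).

```python
from itertools import product
from math import ceil, floor

def count_points(d, a, b, q):
    """Integer vectors n with a*q <= n_i <= b*q and n_1 + ... + n_d = q.

    The admissible region is the dilation by q of a fixed slice of a
    hypercube, so the count grows like (slice volume) * q**(d - 1).
    """
    lo, hi = ceil(a * q), floor(b * q)
    return sum(1 for n in product(range(lo, hi + 1), repeat=d)
               if sum(n) == q)

d, a, b = 3, 0.2, 0.5           # need a < 1/d < b for a nonempty slice
for q in (20, 40, 80):          # counts scale roughly like q**2
    print(q, count_points(d, a, b, q))
```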
We acknowledge that some periodic trajectories in \(\Pi(p,q)\) may have period less than \(q\) when \(\gcd(p,q)\neq 1\), but they are a minority, so the previous lower bounds capture the growth rate of the number of periodic trajectories with rotation number \(p/q\) and minimal period \(q\) even when \(p\) and \(q\) are not coprime.

If the circular billiard has some symmetry, we can perform the corresponding natural reduction to count the number of symmetric sliding periodic trajectories, but then the exponent \(kp-1\) in the first lower bound would be smaller because there are fewer reduced arcs than original arcs. Exponent \(h_{\star}\) would be smaller too. See [16, 28] for samples of symmetric periodic trajectories in other billiards. The first reference deals with axial symmetries. The second one deals with rotational symmetries.

Let \(|\Gamma|\) be the length of \(\Gamma\). If \(g=\{z_{0},\ldots,z_{q-1}\}\subset\Gamma\) is a \(q\)-periodic billiard trajectory, let \(L(g)=|z_{1}-z_{0}|+|z_{2}-z_{1}|+\cdots+|z_{0}-z_{q-1}|\) be its _length_. If \((g_{q})_{q}\) is any sequence such that \(g_{q}\in\Pi(1,q)\), then \(\lim_{q\to+\infty}L(g_{q})=|\Gamma|\). There are so many generic sliding \((1,q)\)-periodic billiard trajectories inside circular polygons that we can find sequences \((g_{q})_{q}\) such that the differences \(L(g_{q})-|\Gamma|\) have rather different asymptotic behaviors as \(q\to+\infty\).

**Theorem E**.: _If \(\Gamma\) is a circular polygon, then there are constants \(c_{-}<c_{+}<0\) such that for any fixed \(c\in[c_{-},c_{+}]\) there exists a sequence \((g_{q})_{q}\), with \(g_{q}\in\Pi(1,q)\), such that_

\[L(g_{q})=|\Gamma|+c/q^{2}+\mathrm{O}(1/q^{3}),\quad\text{as $q\to+\infty$}.\]

_Consequently, there exists a sequence \((h_{q})_{q}\), with \(h_{q}\in\Pi(1,q)\), such that_

\[c_{-}=\liminf_{q\to+\infty}\big{(}(L(h_{q})-|\Gamma|)q^{2}\big{)}<\limsup_{q\to+\infty}\big{(}(L(h_{q})-|\Gamma|)q^{2}\big{)}=c_{+}.\]

_Besides, \(c_{-}\leq-\pi^{2}|\Gamma|/6\) and \(c_{+}=-\frac{1}{24}\left[\int_{\Gamma}\kappa^{2/3}(s)\,\mathrm{d}s\right]^{3}\), where \(\kappa(s)\) is the curvature of \(\Gamma\) as a function of an arc-length parameter \(s\in[0,|\Gamma|)\)._

Let us put these results into perspective by comparing them with the observed behavior in sufficiently smooth (say \(C^{6}\)) and strictly convex billiards, which for the purpose of this discussion we refer to as _Birkhoff billiards_. Lazutkin's theorem (together with a refinement due to Douady) implies that Birkhoff billiards possess a family of caustics\({}^{1}\) accumulating on the boundary [23, 42]. These caustics divide the phase space into invariant regions, and therefore guarantee a certain _regularity_ of the dynamics near the boundary, in the sense that the conclusion of Theorem A never holds for Birkhoff billiards. Not only does the conclusion of Theorem C not hold for Birkhoff billiards, but in such systems there are no trajectories approaching the boundary asymptotically, as the orbits remain in invariant regions bounded by the caustics. As for Theorem D, a well-known result of Birkhoff [9] implies that Birkhoff billiards have \(\#\Pi(p,q)\geq 2\) for each coprime pair \(p,q\) such that \(1\leq p<q\).
This lower bound turns out to be sharp, in the sense that for any such pair \(p,q\), there exist Birkhoff billiards with exactly two geometrically distinct periodic orbits of rotation number \(p/q\) [51]; a simple example is that the billiard in a non-circular ellipse has two periodic orbits of rotation number \(1/2\), corresponding to the two axes of symmetry. It follows that the conclusion of Theorem D does not hold in general for Birkhoff billiards. Finally, as for Theorem E, a well-known result of Marvizi-Melrose [45] implies that if \((g_{q})_{q}\), with \(g_{q}\in\Pi(1,q)\), is any sequence of periodic billiard trajectories in a Birkhoff billiard \(\Gamma\), then

\[L(g_{q})=|\Gamma|+c_{+}/q^{2}+\mathrm{O}(1/q^{4}),\quad\text{as $q\to+\infty$},\]

where \(c_{+}=-\frac{1}{24}\left[\int_{\Gamma}\kappa^{2/3}(s)\,{\rm d}s\right]^{3}\). Hence, \((1,q)\)-periodic billiard trajectories in circular polygons are _asymptotically shorter_ than the ones in Birkhoff billiards.

Footnote 1: A closed curve \(\gamma\) contained in the interior of the region bounded by \(\Gamma\) is called a _caustic_ if it has the following property: if one segment of a billiard trajectory is tangent to \(\gamma\), then so is every segment of that trajectory.

An interesting question in general that has been considered to a significant extent in the literature is: what happens to the caustics of Lazutkin's theorem, and thus the conclusions of Theorems A and C, if we loosen the definition of a Birkhoff billiard? Without altering the basic definition of the billiard map \(f\), there are three ways that we can generalize Birkhoff billiards: (i) by relaxing the strict convexity hypothesis, (ii) by relaxing the smoothness hypothesis, or (iii) by increasing the dimension of the ambient Euclidean space.

* Mather proved that if the boundary is convex and \(C^{r}\) for \(r\geq 2\), but has at least one point of zero curvature, then there are no caustics and there exist trajectories which come arbitrarily close to being positively tangent to the boundary and also come arbitrarily close to being negatively tangent to the boundary [46]. Although this result is about finite segments of billiard trajectories, there are also infinite trajectories tending to the boundary both forward and backward in time in such billiards: \(\mathcal{A}_{0}^{\pm}\cap\mathcal{A}_{\pi}^{\mp}\neq\emptyset\), see [47].
* Despite six continuous derivatives being the standard smoothness requirement for Lazutkin's theorem [23, 42], there is some uncertainty regarding what happens for \(C^{5}\) boundaries, and in fact it is generally believed that 4 continuous derivatives should suffice. Halpern constructed billiard tables that are strictly convex and \(C^{1}\) but not \(C^{2}\) such that the billiard particle experiences an infinite number of collisions in finite time [33]; that is to say, the billiard flow is incomplete. This construction does not apply to our case, as our billiard boundaries have only a finite number of singularities (points where the boundary is only \(C^{1}\) and not \(C^{2}\)), whereas Halpern's billiards have infinitely many. The case of boundaries that are strictly convex and \(C^{1}\) but not \(C^{2}\) and have only a finite number (one, for example) of singularities was first considered by Hubacher [36], who proved that such billiards have no caustics in a neighborhood of the boundary. This result opens the door for our analysis.
* It has been known since the works of Berger and Gruber that in the case of strictly convex and sufficiently smooth billiards in higher dimension (i.e. the billiard boundary is a codimension 1 submanifold of \(\mathbb{R}^{d}\) where \(d\geq 3\)), only ellipsoids have caustics [7, 30]. However, Gruber also observed that in this case, even in the absence of caustics, the Liouville measure of the set of trajectories approaching the boundary asymptotically is zero [29]. The question of existence of such trajectories was thus left open. It was proved in [20] (combined with results of [19]) that generic strictly convex analytic billiards in \(\mathbb{R}^{3}\) (and 'many' such billiards in \(\mathbb{R}^{d}\) for \(d\geq 4\)) have trajectories approaching the boundary asymptotically. It is believed that the meagre set of analytic strictly convex billiard boundaries in \(\mathbb{R}^{d}\), \(d\geq 3\), for which these trajectories do not exist consists entirely of ellipsoids, but the perturbative methods of [20] do not immediately extend to such a result.

Billiards in circular polygons have been studied numerically in the literature [6, 24, 34, 35, 44]. In the paper [4] the authors use numerical simulations and semi-rigorous arguments to study billiards in a 2-parameter family of circular polygons. They conjecture that, for certain values of the parameters, the billiard is ergodic. In addition they provide heuristic arguments in favor of this conjecture.

A related problem is the lemon-shaped billiard, which is known to display chaos [11, 17, 37]. These billiards are strictly convex but not \(C^{1}\), so the billiard map is well-defined only on a proper subset of the phase space.

The _elliptic flowers_ recently introduced by Bunimovich [13] are closed \(C^{0}\) curves formed by finitely many pieces of ellipses. _Elliptic polygons_ are elliptic flowers that are \(C^{1}\) and strictly convex, so they are a natural generalization of circular polygons. One can obtain a 1-parameter family of elliptic polygons with the string construction from any convex (linear) polygon. The _string construction_ consists of wrapping an inelastic string around the polygon and tracing a curve around it by keeping the string taut. Billiards in elliptic polygons can be studied with the techniques presented here for circular polygons. We believe that all results previously stated in this introduction, with the possible exception of the inequality \(c_{-}\leq-\pi^{2}|\Gamma|/6\) given in Theorem E, hold for generic elliptic polygons. However, there are elliptic polygons that are globally \(C^{2}\), and not just \(C^{1}\), the most celebrated example being the _hexagonal string billiard_ first studied by Fetter [25]. We do not know how to deal with \(C^{2}\) elliptic polygons because jumps in the curvature of the boundary are a key ingredient in our approach to get chaotic billiard dynamics. Fetter suggested that the hexagonal string billiard could be integrable, in which case it would be a counterexample to the Birkhoff conjecture\({}^{2}\). However, such integrability was numerically put in doubt in [8]. Later on it was analytically proved that a string billiard generated from a convex polygon with at least three sides cannot have caustics near the boundary [14].

Footnote 2: It is well-known that billiards in ellipses are integrable. The so-called _Birkhoff conjecture_ says that elliptical billiards are in fact the only integrable billiards. This conjecture, in its full generality, remains open.
In what follows we describe the main ideas of our proofs. Let \(\Gamma\) be a circular \(k\)-gon. It is well-known that the angle of incidence/reflection is a constant of motion for billiards in circles. Therefore, for the billiard in \(\Gamma\), the angle of incidence/reflection can change only when we pass from one circular arc to another, and not when the billiard has consecutive impacts on the same circular arc.

The main tool that we use to prove our theorems is what we call the _fundamental lemma_ (Lemma 18 below), which describes how trajectories move up and down after passing from one circular arc to the next. The phase space \(\mathcal{M}\) of the billiard map \(f\) is a cylinder, with coordinates \((\varphi,\theta)\), where \(\varphi\in\mathbb{T}\) is a parameter on the boundary \(\Gamma\), and where \(\theta\in[0,\pi]\) is the angle of incidence/reflection. We consider two vertical segments \(\mathcal{L}_{j}\) and \(\mathcal{L}_{j+1}\) in \(\mathcal{M}\) corresponding to consecutive singularities of \(\Gamma\), and sufficiently small values of \(\theta\). The index \(j\) that labels the singularities is defined modulo \(k\). The triangular region \(\mathcal{D}_{j}\) bounded by \(\mathcal{L}_{j}\) and \(f(\mathcal{L}_{j})\) is a fundamental domain of the billiard map \(f\); that is, a set with the property that sliding trajectories have exactly one point in \(\mathcal{D}_{j}\) on each turn around \(\Gamma\). Consider now the sequence of backward iterates \(\left\{f^{-n}(\mathcal{L}_{j+1})\right\}\) of \(\mathcal{L}_{j+1}\). This sequence of slanted segments divides the fundamental domain \(\mathcal{D}_{j}\) into infinitely many quadrilaterals, which we call _fundamental quadrilaterals_. The fundamental lemma describes which fundamental quadrilaterals in \(\mathcal{D}_{j+1}\) we can visit if we start in a given fundamental quadrilateral in \(\mathcal{D}_{j}\).

In order to prove Theorem A, we apply the fundamental lemma iteratively to describe how trajectories visit different fundamental quadrilaterals consecutively in each of the \(k\) fundamental domains \(\mathcal{D}_{j}\) in \(\mathcal{M}\). A particular coding of possible sequences of \(k\) fundamental quadrilaterals that trajectories can visit gives us our symbols. We then use a method due to Papini and Zanolin [49, 50] (extended to higher dimensions by Pireddu and Zanolin [52, 53, 54]) to prove that the billiard dynamics is semiconjugate to a shift map on the sequence space of this set of symbols; this method is called _stretching along the paths_. Observe that we could equally have used the method of correctly aligned windows [3, 27, 59], or the crossing number method [40]; note however that the latter would not have provided us with the large number of periodic orbits that the other two methods do. We note that, although Theorem A provides us with a topological semiconjugacy to symbolic dynamics, we expect that this could be improved to a full conjugacy by using other methods.

Once the proof of Theorem A is completed, Theorems C, D, and E are proved by combining additional arguments with the symbolic dynamics we have constructed. With respect to the symbolic dynamics, we choose a coding of the fundamental quadrilaterals visited by a trajectory that corresponds to \(\theta\) tending to \(0\) in the fastest way possible. We then prove that the corresponding billiard trajectories satisfy the conclusion of Theorem C.
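The linear speed in Theorem C can be previewed with a toy recursion, which is our own caricature of the mechanism just described and not the actual proof: suppose every singularity crossing multiplies \(\theta\) by a fixed factor \(\mu<1\) (the extreme drop), while sliding along an arc of central angle \(\delta\) keeps \(\theta\) constant and costs about \(\delta/(2\theta)\) bounces. Then the product \(n\,\theta_{n}\) of the bounce count with the current angle stabilizes, i.e. \(1/\theta_{n}\) grows linearly.

```python
# Toy caricature of a sliding orbit in a circular polygon: theta is
# constant along each arc and is multiplied by mu < 1 at every arc
# transition; one arc of central angle delta takes ~ delta/(2*theta)
# bounces, since each bounce advances the polar angle by 2*theta.
mu, delta = 0.8, 1.0            # illustrative values, not from the paper
theta, bounces = 0.1, 0

for crossing in range(1, 41):
    bounces += max(1, round(delta / (2.0 * theta)))
    theta *= mu                 # maximal drop at the singularity
    if crossing % 10 == 0:
        print(f"after {crossing} arcs: n = {bounces}, "
              f"n * theta_n = {bounces * theta:.3f}")
```

In this caricature \(n\,\theta_{n}\) converges to \(\delta\mu/(2(1-\mu))\), so \(1/\theta_{n}\) is indeed trapped between two positive multiples of \(n\), in agreement with Theorem C.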
As for Theorem D, the method of stretching along the paths guarantees the existence of a periodic billiard trajectory for every periodic sequence of symbols. Consequently, the proof of Theorem D amounts to counting the number of sequences of symbols that are periodic with period \(p\) (because each symbol describes one full turn around the table; see Section 5 for details) such that the corresponding periodic sliding billiard trajectories after \(p\) turns around the table have rotation number \(p/q\). It turns out that this reduces to counting the number of integer points whose coordinates sum to \(q\) in a certain \(kp\)-dimensional convex polytope. We do this by proving that the given convex polytope contains a hypercube with sides of a certain length, and finally by counting the number of integer points whose coordinates sum to \(q\) in that hypercube.

The structure of this paper is as follows. In Section 2 we describe the salient features of circular polygons. We summarize the _stretching along the paths_ method in Section 3. Section 4 is concerned with the definition of fundamental quadrilaterals, as well as the statement and proof of the fundamental lemma. Symbols are described in Section 5. Chaotic dynamics, and thus the proofs of Theorem A and Corollary B, is established in Section 6, whereas Section 7 contains the proof of Theorem C. In Section 8, we count the periodic orbits, thus proving Theorem D. Finally, Theorem E is proved in Section 9. Some technical proofs are relegated to appendices.

## 2 Circular polygons

In this section we define our relevant curves, construct their suitable parametrisations, and introduce notations that will be extensively used in the rest of the paper.

A _piecewise-circular curve_ (or _PC curve_ for short) is given by a finite sequence of circular arcs in the Euclidean plane \(\mathbb{R}^{2}\), with the endpoint of one arc coinciding with the beginning point of the next. PC curves have been studied by several authors. See [5, 15] and the references therein. Lunes and lemons (two arcs), yin-yang curves, arbelos and PC cardioids (three arcs), salinons, Moss's eggs and pseudo-ellipses (four arcs), and Reuleaux polygons (arbitrary number of arcs) are celebrated examples of simple closed PC curves [22, 1]. A simple closed PC curve is a PC curve not crossing itself such that the endpoint of its last arc coincides with the beginning point of its first arc.

All simple closed PC curves are Jordan curves, so we could study the billiard dynamics in any domain enclosed by a simple closed PC curve. However, such domains are too general for our purposes. We will only deal with strictly convex domains without corners or cusps. Strict convexity is useful, because then any ordered pair of points on the boundary defines a unique billiard trajectory. Absence of cusps and corners implies that the corresponding billiard map is a global homeomorphism in the phase space \(\mathcal{M}\), see Section 4. Therefore, we will only consider _circular polygons_, defined as follows.

**Definition 1**.: A _circular \(k\)-gon_ is a simple closed strictly convex curve in \(\mathbb{R}^{2}\) formed by the concatenation of \(k>1\) circular arcs, in such a way that the curve is \(C^{1}\), but not \(C^{2}\), at the intersection points of any two consecutive circular arcs. The _nodes_ of a circular polygon are the intersection points of each pair of consecutive circular arcs.
Reuleaux polygons, lemons, lunes, yin-yang curves, arbelos, salinons and PC cardioids are not circular polygons, but pseudo-ellipses and Moss's eggs (described later on; see also [22, Section 1.1]) are. We explicitly ask that consecutive arcs always have different radii, so the curvature has jump discontinuities at all nodes. We do not consider circumferences as circular polygons since circular billiards are completely integrable.

Let \(\Gamma\) be a circular \(k\)-gon with arcs \(\Gamma_{1},\ldots,\Gamma_{k}\), listed in the order in which they are concatenated, moving in a counter-clockwise direction. Each arc \(\Gamma_{j}\) is completely determined by its _center_ \(O_{j}\), its _radius_ \(r_{j}>0\) and its _angular range_ \([a_{j},b_{j}]\subset\mathbb{T}\). Then \(\delta_{j}=b_{j}-a_{j}\) is the _central angle_ of \(\Gamma_{j}\). Using the standard identification \(\mathbb{R}^{2}\simeq\mathbb{C}\), let

\[A_{j}=O_{j}+r_{j}\mathrm{e}^{\mathrm{i}a_{j}},\qquad B_{j}=O_{j}+r_{j}\mathrm{e}^{\mathrm{i}b_{j}}\]

be the two nodes of arc \(\Gamma_{j}\). We denote by

\[\Gamma_{\star}=\{A_{1},\ldots,A_{k}\}=\{B_{1},\ldots,B_{k}\} \tag{1}\]

the _set of nodes_ of \(\Gamma\).

**Notation 2**.: The index \(j\) that labels the arcs of any circular \(k\)-gon is defined modulo \(k\). Hence, \(\Gamma_{j}=\Gamma_{j\mod k}\), \(r_{j}=r_{j\mod k}\), \(a_{j}=a_{j\mod k}\) and so forth. In particular, \(\Gamma_{k+1}=\Gamma_{1}\).

**Definition 3**.: The _polar parametrisation_ of \(\Gamma\) is the counter-clockwise parametrisation

\[z:\mathbb{T}\to\Gamma\subset\mathbb{R}^{2}\simeq\mathbb{C},\qquad z(\varphi)=O_{j}+r_{j}\mathrm{e}^{\mathrm{i}\varphi},\qquad\forall\varphi\in[a_{j},b_{j}].\]

The points \(a_{1},\ldots,a_{k}\) are the _singularities_ of \(\Gamma\).

This parametrisation is well-defined because, by definition, \(B_{j}=A_{j+1}\) (the endpoint of any arc coincides with the beginning point of the next), and \(b_{j}=a_{j+1}\) (two consecutive arcs have the same oriented tangent line at their intersecting node). From now on, the reader should keep in mind that the singularities \(a_{1},\ldots,a_{k}\) are always ordered in such a way that

\[a_{1}<b_{1}=a_{2}<b_{2}=a_{3}<\cdots<b_{k-1}=a_{k}<b_{k}=a_{1}+2\pi. \tag{2}\]

As far as we know, all the billiards in circular polygons that have been studied in the past correspond to cases with _exactly_ four arcs [4, 6, 24, 34, 35, 44]. It turns out that this is the simplest case, in the context of the next lemma.

**Lemma 4**.: _Let \(\Gamma\) be a circular \(k\)-gon with radii \(r_{j}>0\), singularities \(a_{j}\in\mathbb{T}\) (or \(b_{j}=a_{j+1}\)) and central angles \(\delta_{j}=b_{j}-a_{j}\in(0,2\pi)\) for \(j\mod k\). Set \(w_{j}=\mathrm{e}^{\mathrm{i}b_{j}}-\mathrm{e}^{\mathrm{i}a_{j}}\in\mathbb{C}\). Then \(\Gamma\) has at least four arcs: \(k\geq 4\), and_

\[\sum_{j=1}^{k}\delta_{j}=2\pi,\qquad\sum_{j=1}^{k}r_{j}w_{j}=0. \tag{3}\]

Proof.: Clearly, \(\sum_{j=1}^{k}\delta_{j}=\sum_{j=1}^{k}(b_{j}-a_{j})=b_{k}-a_{1}=2\pi\). It is known that a bounded measurable function \(\rho\in L^{\infty}(\mathbb{T})\) is the radius of curvature of a closed curve if and only if

\[\int_{a_{1}}^{b_{k}}\rho(\varphi)\mathrm{e}^{\mathrm{i}\varphi}\,\mathrm{d}\varphi=\int_{0}^{2\pi}\rho(\varphi)\mathrm{e}^{\mathrm{i}\varphi}\,\mathrm{d}\varphi=0. \tag{4}\]

Since the radius of curvature of \(\Gamma\) is the piecewise constant function \(\rho|_{(a_{j},b_{j})}\equiv r_{j}\), the general condition (4) becomes \(-\mathrm{i}\sum_{j=1}^{k}r_{j}w_{j}=0\).
Note that \(\sum_{j=1}^{k}w_{j}=0\), \(w_{j}\neq 0\) for all \(j\), and \(\dim_{\mathbb{R}}[w_{1},\ldots,w_{k}]=2\) when \(k\geq 3\). If \(\Gamma\) has just two arcs: \(k=2\), then

\[r_{1}w_{1}+r_{2}w_{2}=0,\qquad w_{1}+w_{2}=0,\qquad w_{1},w_{2}\neq 0.\]

This implies that \(r_{1}=r_{2}\) and contradicts our assumption about radii of consecutive arcs. If \(\Gamma\) has just three arcs: \(k=3\), then

\[r_{1}w_{1}+r_{2}w_{2}+r_{3}w_{3}=0,\qquad w_{1}+w_{2}+w_{3}=0,\qquad\dim_{\mathbb{R}}[w_{1},w_{2},w_{3}]=2.\]

This implies that \(r_{1}=r_{2}=r_{3}\) and we reach the same contradiction.

Necessary conditions (2) and (3) are sufficient ones too. To be precise, if the radii \(r_{j}>0\), the angular ranges \([a_{j},b_{j}]\subset\mathbb{T}\) and the central angles \(\delta_{j}=b_{j}-a_{j}\in(0,2\pi)\) satisfy (2) and (3), then there exists a \(2\)-parameter family of circular \(k\)-gons sharing all those elements. To be precise, all circular \(k\)-gons in this family are the same modulo translations. Let us prove this claim. Once we put the center \(O_{1}\) at an arbitrary location, all other centers are recursively determined by imposing that \(A_{j+1}=B_{j}\), which implies (since \(b_{j}=a_{j+1}\)) that

\[O_{j+1}=O_{j}+(r_{j}-r_{j+1})\mathrm{e}^{\mathrm{i}b_{j}},\qquad j=1,\ldots,k-1.\]

The obtained PC curve \(\Gamma=z(\mathbb{T})\), where \(z(\varphi)\) is the polar parametrisation introduced in Definition 3, is closed by (4) and it is \(C^{1}\) and strictly convex by construction. Hence, \(\Gamma\) is a circular \(k\)-gon. This means that any circular \(k\)-gon is completely determined once we know its first center \(O_{1}\), its first singularity \(a_{1}\), its radii \(r_{1},\ldots,r_{k}\), and its central angles \(\delta_{1},\ldots,\delta_{k}\).

The above discussion shows that circular \(k\)-gons form, modulo translations and rotations, a \((2k-3)\)-parameter family. To be precise, if we set \(O_{1}=(0,0)\) and \(a_{1}=0\) by means of a translation and a rotation, then the parameters \(r_{1},\ldots,r_{k},\delta_{1},\ldots,\delta_{k}\) are restricted by (3), which has codimension three. If, in addition, we normalize somehow (in the literature one can find many different choices) our circular \(k\)-gons with a scaling, we get that they form a \((2k-4)\)-parameter family modulo similarities. The reader can find a complete geometric description, modulo similarities, of the four-parameter family of (convex and nonconvex, symmetric and nonsymmetric) closed \(C^{1}\) PC curves with four arcs in [34], whose goal was to numerically exhibit the richness of the billiard dynamics in those \(C^{1}\) PC curves.

For brevity, we only give a few simple examples of symmetric and non-symmetric circular polygons with four and six arcs. We skip many details. _Pseudo-ellipses_ are the simplest examples. We may define them as the circular \(4\)-gons with a \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\)-symmetry. They form, modulo translations and rotations, a three-parameter family. The radii and central angles of any pseudo-ellipse have the form

\[r_{1}=r_{3}=r,\qquad r_{2}=r_{4}=R,\qquad\delta_{1}=\delta_{3}=\alpha,\qquad\delta_{2}=\delta_{4}=\pi-\alpha,\]

for some free parameters \(\alpha\in(0,\pi)\) and \(r,R>0\). We will assume that \(0<r<R\) for convenience. We will denote by \(E_{\alpha,r,R}\) the corresponding pseudo-ellipse. Given any pseudo-ellipse \(E_{\alpha,r,R}\), its centers form a _rhombus_ (4 equal sides) and its nodes form a _rectangle_ (4 equal angles).
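The recursion for the centers and conditions (2)-(3) are easy to check numerically. The short Python sketch below is ours and is meant only as an illustration: it builds the centers of the pseudo-ellipse \(E_{\alpha,r,R}\) from its radii and central angles, and verifies the closure condition \(\sum_{j}r_{j}w_{j}=0\) of Lemma 4.

```python
import numpy as np

def build_centers(radii, central_angles, O1=0j, a1=0.0):
    """Centers O_j of a circular k-gon, from the recursion
    O_{j+1} = O_j + (r_j - r_{j+1}) * e^{i b_j}, with b_j = a_{j+1}."""
    b = a1 + np.cumsum(central_angles)          # singularities b_j
    O = [O1]
    for j in range(len(radii) - 1):
        O.append(O[-1] + (radii[j] - radii[j + 1]) * np.exp(1j * b[j]))
    return np.array(O), b

# Pseudo-ellipse E_{alpha,r,R}: radii (r, R, r, R) and central angles
# (alpha, pi - alpha, alpha, pi - alpha).
alpha, r, R = np.pi / 4, 1.0, 2.0
radii = np.array([r, R, r, R])
central_angles = np.array([alpha, np.pi - alpha, alpha, np.pi - alpha])

O, b = build_centers(radii, central_angles)
a = np.concatenate(([0.0], b[:-1]))             # a_1 = 0, a_{j+1} = b_j
w = np.exp(1j * b) - np.exp(1j * a)             # w_j = e^{i b_j} - e^{i a_j}

print("sum of central angles - 2*pi:", central_angles.sum() - 2 * np.pi)
print("closure sum r_j * w_j:", (radii * w).sum())   # should be ~ 0
```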
If \(\alpha=\pi/2\), then \(\delta_{1}=\delta_{2}=\delta_{3}=\delta_{4}=\pi/2\) and we say that \(E_{\pi/2,r,R}\) is a _squared pseudo-ellipse_. The term _squared_ comes from the fact that the centers of such pseudo-ellipses form a square. See Figure 1. The nodes of a squared pseudo-ellipse still form a rectangle, not a square. On the contrary, the celebrated Benettin-Strelcyn ovals, whose billiard dynamics was numerically studied in [6, 35, 44], are pseudo-ellipses whose nodes form a square, but whose centers only form a rhombus. Later on, the extent of chaos in billiards associated to general pseudo-ellipses was numerically studied in [24].

Another celebrated example of circular \(4\)-gon is _Moss's egg_ [22, Section 1.1], whose radii and central angles have the form

\[r_{1}=r,\quad r_{2}=2r=r_{4},\quad r_{3}=(2-\sqrt{2})r,\quad\delta_{1}=\pi,\quad\delta_{2}=\pi/4=\delta_{4},\quad\delta_{3}=\pi/2,\]

for some free parameter \(r>0\), called the radius of the egg. All Moss's eggs are congruent modulo similarities. They have a \(\mathbb{Z}_{2}\)-symmetry, so their nodes form an _isosceles trapezoid_ (2 pairs of consecutive equal angles) and their centers form a _kite_ (2 pairs of adjacent equal-length sides). In fact, this kite is a bit degenerate, since it looks like a triangle. See Figure 2. Billiards in a 2-parameter family of circular \(4\)-gons with \(\mathbb{Z}_{2}\)-symmetry, but not containing Moss's egg, were considered in [4]. The heuristic analysis of sliding trajectories contained in Section 4.5 of that paper is closely related to our study.

Next, we describe a way to construct some circular \(6\)-gons. Fix a triangle \(\triangle ABC\) with vertexes \(A\), \(B\) and \(C\) ordered in the _clockwise_ direction. Let \(\alpha\), \(\beta\) and \(\gamma\) be its internal angles. Let \(a\), \(b\) and \(c\) be the lengths of its sides, following the standard convention. That is, \(a\) refers to the side opposite vertex \(A\) and so forth. Then we look for circular \(6\)-gons with centers \(O_{1}=O_{4}=A\), \(O_{2}=O_{5}=B\), \(O_{3}=O_{6}=C\) and central angles \(\delta_{1}=\delta_{4}=\alpha\), \(\delta_{2}=\delta_{5}=\beta\) and \(\delta_{3}=\delta_{6}=\gamma\). In this setting, all radii are determined by the choice of the first one. Namely, we can take

\[r_{1}=r,\quad r_{2}=r+c,\quad r_{3}=r+c-a,\quad r_{4}=r+c-a+b,\quad r_{5}=r+b-a,\quad r_{6}=r+b,\]

for any \(r>\max\{0,a-c,a-b\}\). Therefore, we obtain a one-parameter family of parallel circular \(6\)-gons, parameterized by the first radius \(r_{1}=r\). See Figure 2 for a non-symmetric sample with \(A=(3,-1)\), \(B=(-1,-1)\), \(C=(0,1)\) and \(r=1\).

One can draw circular polygons with many arcs by applying similar constructions, but that challenge is beyond the scope of this paper. The interested reader can look for inspiration in Bunimovich's nice construction of elliptic flowers [13]. To end this section, we emphasize that all our theorems are general. They can be applied to any circular polygon. Thus, we do not need to deal with concrete circular polygons.

Figure 1: Left: Pseudo-ellipse \(E_{\pi/4,1,2}\). Right: Squared pseudo-ellipse \(E_{\pi/2,1,2}\).
Pseudo-ellipses are represented with thick lines, their pairs of symmetry lines with thin continuous lines, their centers \(O_{j}\) with solid dots, the circumferences of radii \(r_{j}\) centered at \(O_{j}\) with dashed thin lines, their angular ranges \([a_{j},b_{j}]\) with dash-dotted thin lines, and their nodes are the intersections of the thick and dash-dotted thin lines.

## 3 The "stretching along the paths" method

In this section, we present the main ideas of the _stretching along the paths_ method developed by Papini and Zanolin [49, 50], and extended by Pireddu and Zanolin [52, 53, 54]. It is a technical tool to establish the existence of _topological chaos_; that is, chaotic dynamics in continuous maps. We present a simplified version of the method because we work in the two-dimensional annulus \(\mathcal{M}=\mathbb{T}\times[0,\pi]\) and our maps are _homeomorphisms_ on \(\mathcal{M}\). We also change some terminology because our maps stretch along _vertical_ paths, instead of _horizontal_ paths. The reader interested in more general statements about higher dimensions, finding fixed and periodic points in smaller compact sets, the study of crossing numbers, non-invertible maps, and maps not defined in the whole space \(\mathcal{M}\), is referred to the original references.

Let \(\mathcal{M}=\mathbb{T}\times[0,\pi]\). By a _continuum_ we mean a compact connected subset of \(\mathcal{M}\). _Paths_ and _arcs_ are the continuous and the homeomorphic images of the unit interval \([0,1]\), respectively. Most definitions below are expressed in terms of paths, but we could also use arcs or continua, see [49, Table 3.1]. _Cells_ are the homeomorphic image of the unit square \([0,1]^{2}\), so they are simply connected and compact. The Jordan-Schoenflies theorem implies that any simply connected compact subset of \(\mathcal{M}\) bounded by a Jordan curve is a cell.

**Definition 5**.: An _oriented cell_ \(\widetilde{\mathcal{Q}}\) is a cell \(\mathcal{Q}\subset\mathcal{M}\) where we have chosen four different points \(\widetilde{\mathcal{Q}}_{\mathrm{bl}}\) (base-left), \(\widetilde{\mathcal{Q}}_{\mathrm{br}}\) (base-right), \(\widetilde{\mathcal{Q}}_{\mathrm{tr}}\) (top-right) and \(\widetilde{\mathcal{Q}}_{\mathrm{tl}}\) (top-left) over the boundary \(\partial\mathcal{Q}\) in a counter-clockwise order. The _base side_ of \(\widetilde{\mathcal{Q}}\) is the arc \(\widetilde{\mathcal{Q}}_{\mathrm{b}}\subset\partial\mathcal{Q}\) that goes from \(\widetilde{\mathcal{Q}}_{\mathrm{bl}}\) to \(\widetilde{\mathcal{Q}}_{\mathrm{br}}\) in the counter-clockwise orientation. Similarly, \(\widetilde{\mathcal{Q}}_{\mathrm{l}}\), \(\widetilde{\mathcal{Q}}_{\mathrm{r}}\) and \(\widetilde{\mathcal{Q}}_{\mathrm{t}}\) are the _left_, _right_ and _top sides_ of \(\widetilde{\mathcal{Q}}\). Finally, \(\widetilde{\mathcal{Q}}_{\mathrm{h}}=\widetilde{\mathcal{Q}}_{\mathrm{b}}\cup\widetilde{\mathcal{Q}}_{\mathrm{t}}\) and \(\widetilde{\mathcal{Q}}_{\mathrm{v}}=\widetilde{\mathcal{Q}}_{\mathrm{l}}\cup\widetilde{\mathcal{Q}}_{\mathrm{r}}\) are the _horizontal_ and _vertical sides_ of \(\widetilde{\mathcal{Q}}\). All our cells will have line segments as vertical sides, some being even quadrilaterals.

**Definition 6**.: Let \(\widetilde{\mathcal{Q}}\) be an oriented cell.
A path \(\gamma:[a,b]\to\mathcal{Q}\) is _vertical_ (respectively, _horizontal_) in \(\widetilde{\mathcal{Q}}\) when it connects the two horizontal (respectively, vertical) sides of \(\widetilde{\mathcal{Q}}\) and \(\gamma(t)\not\in\widetilde{\mathcal{Q}}_{\mathrm{h}}\) (respectively, \(\gamma(t)\not\in\widetilde{\mathcal{Q}}_{\mathrm{v}}\)) for all \(t\in(a,b)\). We say that an oriented cell \(\widetilde{\mathcal{K}}\) is a _horizontal slab_ in \(\widetilde{\mathcal{Q}}\) and write

\[\widetilde{\mathcal{K}}\subset_{\mathrm{h}}\widetilde{\mathcal{Q}}\]

when \(\mathcal{K}\subset\mathcal{Q}\) and either \(\widetilde{\mathcal{K}}_{\mathrm{l}}\subset\widetilde{\mathcal{Q}}_{\mathrm{l}}\) and \(\widetilde{\mathcal{K}}_{\mathrm{r}}\subset\widetilde{\mathcal{Q}}_{\mathrm{r}}\), or \(\widetilde{\mathcal{K}}_{\mathrm{l}}\subset\widetilde{\mathcal{Q}}_{\mathrm{r}}\) and \(\widetilde{\mathcal{K}}_{\mathrm{r}}\subset\widetilde{\mathcal{Q}}_{\mathrm{l}}\). If, in addition, \(\mathcal{K}\cap\widetilde{\mathcal{Q}}_{\mathrm{h}}=\emptyset\), then we say that \(\widetilde{\mathcal{K}}\) is a _strict horizontal slab_ in \(\widetilde{\mathcal{Q}}\) and write

\[\widetilde{\mathcal{K}}\varsubsetneq_{\mathrm{h}}\widetilde{\mathcal{Q}}.\]

_Vertical slabs_ can be analogously defined. Note that \(\widetilde{\mathcal{K}}\varsubsetneq_{\mathrm{h}}\widetilde{\mathcal{Q}}\) is a much stronger condition than \(\widetilde{\mathcal{K}}\subset_{\mathrm{h}}\widetilde{\mathcal{Q}}\) and \(\mathcal{K}\varsubsetneq\mathcal{Q}\).

Figure 2: Left: Moss's egg. Right: A nonsymmetric circular \(6\)-gon with centers \(O_{1}=O_{4}=(3,-1)\), \(O_{2}=O_{5}=(-1,-1)\) and \(O_{3}=O_{6}=(0,1)\), which form a triangle. Circular polygons are represented with thick lines, the symmetry line of Moss's egg with a thin continuous line, their centers \(O_{j}\) with solid dots, the circumferences of radii \(r_{j}\) centered at \(O_{j}\) with dashed thin lines, their angular ranges \([a_{j},b_{j}]\) with dash-dotted thin lines, and their nodes are the intersections of the thick and dash-dotted thin lines.

**Definition 7**.: Let \(g:\mathcal{M}\to\mathcal{M}\) be a homeomorphism. Let \(\widetilde{\mathcal{Q}}\) and \(\widetilde{\mathcal{Q}}^{\prime}\) be oriented cells in \(\mathcal{M}\). We say that \(g\) _stretches_ \(\widetilde{\mathcal{Q}}\) to \(\widetilde{\mathcal{Q}}^{\prime}\) _along vertical paths_ and write

\[g:\widetilde{\mathcal{Q}}\rightsquigarrow\widetilde{\mathcal{Q}}^{\prime}\]

when every path \(\gamma:[a,b]\to\mathcal{Q}\) that is vertical in \(\widetilde{\mathcal{Q}}\) contains a _subpath_ \(\gamma^{\prime}=\gamma|_{[s,t]}\) for some \(a\leq s<t\leq b\) such that the image path \(g\circ\gamma^{\prime}:[s,t]\to\mathcal{Q}^{\prime}\) is vertical in \(\widetilde{\mathcal{Q}}^{\prime}\).

This stretching condition does not imply that \(g(\mathcal{Q})\subset\mathcal{Q}^{\prime}\). In fact, we see \(\mathcal{Q}^{\prime}\) as a 'target set' that we want to 'visit', and not as a codomain. If \(\gamma:[a,b]\to\mathcal{M}\) is a path, we also use the notation \(\gamma\) to mean the set \(\gamma([a,b])\subset\mathcal{M}\). This allows us to state the stretching condition more succinctly. Namely, we ask that every path \(\gamma\) vertical in \(\widetilde{\mathcal{Q}}\) contains a subpath \(\gamma^{\prime}\subset\gamma\) such that the image path \(g(\gamma^{\prime})\) is vertical in \(\widetilde{\mathcal{Q}}^{\prime}\).

**Definition 8**.: Let \(f:\mathcal{M}\to\mathcal{M}\) be a homeomorphism.
Let \((\mathcal{Q}_{i};n_{i})_{i\in I}\) be a two-sided sequence (\(I=\mathbb{Z}\)), a one-sided sequence (\(I=\mathbb{N}_{0}=\mathbb{N}\cup\{0\}\)), a \(p\)-periodic sequence (\(I=\mathbb{Z}/p\mathbb{Z}\)), or a finite sequence (\(I=\{0,1,\ldots,k\}\)) with \(\mathcal{Q}_{i}\subset\mathcal{M}\) and \(n_{i}\in\mathbb{N}\). Let \(x\in\mathcal{Q}_{0}\). We say that the point \(x\) \(f\)_-realizes_ the sequence \((\mathcal{Q}_{i};n_{i})_{i\in I}\) when

\[f^{-(n_{-1}+\cdots+n_{-i})}(x)\in\mathcal{Q}_{-i},\qquad f^{n_{0}+\cdots+n_{i-1}}(x)\in\mathcal{Q}_{i},\qquad\forall i\geq 1.\]

Clearly, condition \(f^{-(n_{-1}+\cdots+n_{-i})}(x)\in\mathcal{Q}_{-i}\) does not apply in the case of one-sided or finite sequences. A subset of \(\mathcal{Q}_{0}\) \(f\)_-realizes_ the sequence \((\mathcal{Q}_{i};n_{i})_{i\in I}\) when all its points do so.

The subsets \(\mathcal{Q}_{i}\) in this definition do not have to be cells, but that is the case considered in the following powerful 3-in-1 theorem about the existence of points and paths of the phase space \(\mathcal{M}\) that \(f\)-realize certain two-sided, one-sided, and periodic sequences of oriented cells.

**Theorem 9** (Papini & Zanolin [50]).: _Let \(f:\mathcal{M}\to\mathcal{M}\) be a homeomorphism. Let \((\widetilde{\mathcal{Q}}_{i};n_{i})_{i\in I}\) be a two-sided, one-sided or \(p\)-periodic sequence where \(\widetilde{\mathcal{Q}}_{i}\) are oriented cells with \(\mathcal{Q}_{i}\subset\mathcal{M}\) and \(n_{i}\in\mathbb{N}\). If_

\[f^{n_{i}}:\widetilde{\mathcal{Q}}_{i}\rightsquigarrow\widetilde{\mathcal{Q}}_{i+1},\qquad\forall i,\]

_then the following versions hold._

* (**T**) _If_ \(I=\mathbb{Z}\)_, there is a point_ \(x\in\mathcal{Q}_{0}\) _that_ \(f\)_-realizes the two-sided sequence_ \((\mathcal{Q}_{i};n_{i})_{i\in\mathbb{Z}}\)_._
* (**O**) _If_ \(I=\mathbb{N}_{0}\)_, there is a path_ \(\gamma\) _horizontal in_ \(\widetilde{\mathcal{Q}}_{0}\) _that_ \(f\)_-realizes the sequence_ \((\mathcal{Q}_{i};n_{i})_{i\geq 0}\)_._
* (**P**) _If_ \((\widetilde{\mathcal{Q}}_{i+p};n_{i+p})=(\widetilde{\mathcal{Q}}_{i};n_{i})\) _for all_ \(i\in\mathbb{Z}\) _and_ \(n=n_{0}+\cdots+n_{p-1}\)_, there is a point_ \(x\in\mathcal{Q}_{0}\) _such that_ \(f^{n}(x)=x\) _and_ \(x\) \(f\)_-realizes the_ \(p\)_-periodic sequence_ \((\mathcal{Q}_{i};n_{i})_{i\in\mathbb{Z}/p\mathbb{Z}}\)_._

**Remark 10**.: We believe that the following finite version (**F**) also holds: "If \(I=\{0,\ldots,k\}\), there is a horizontal slab \(\widetilde{\mathcal{K}}\subset_{\mathrm{h}}\widetilde{\mathcal{Q}}_{0}\) such that \(\mathcal{K}\) \(f\)-realizes the finite sequence \((\mathcal{Q}_{i};n_{i})_{i=0,\ldots,k}\)", but we have not found such a statement in the literature. Therefore, we will not use it.

We refer to Theorem 2.2 in [50] for a more general statement which deals with sequences of maps that are not power iterates of a single map. Version (**T**) of Theorem 9 is the key tool to obtain orbits that follow prescribed itineraries, so that we can establish the existence of topological chaos and we can construct a suitable symbolic dynamics in Section 6. We will use version (**O**) of Theorem 9 to prove the existence of 'paths' of generic sliding billiard trajectories that approach the boundary asymptotically with optimal uniform speed in Section 7. Finally, we will establish several lower bounds on the number of periodic billiard trajectories from version (**P**) of Theorem 9 in Section 8.

## 4 The fundamental lemma for circular polygons

To begin with, we list some notations used throughout this section.
Let \(\Gamma\) be a circular \(k\)-gon with arcs \(\Gamma_{j}\), centers \(O_{j}\), radii \(r_{j}\), singularities \(a_{j}\), and central angles \(\delta_{j}\). Set \(b_{j}=a_{j+1}\) and \(\mu_{j}=\sqrt{r_{j}/r_{j+1}}\). Recall that the index \(j\) is defined modulo \(k\). Let \(z:\mathbb{T}\to\Gamma\) be the polar parametrisation of \(\Gamma\) introduced in Definition 3.

Let \(\mathcal{M}=\mathbb{T}\times[0,\pi]\), and let \((\varphi,\theta)\in\operatorname{Int}\mathcal{M}\). Write \(z=z(\varphi)\) and \(v=R_{\theta}z^{\prime}(\varphi)\) where \(R_{\theta}\) is the standard \(2\times 2\) counter-clockwise rotation matrix by an angle \(\theta\). The straight line \(L=L(\varphi,\theta)\) passing through \(z\) in the direction \(v\) has exactly two points of intersection with \(\Gamma\) since \(\theta\in(0,\pi)\). One of these is \(z\); denote by \(\bar{z}\) the other. Then there is a unique \(\bar{\varphi}\in\mathbb{T}\) such that \(\bar{z}=z(\bar{\varphi})\). Denote by \(\bar{\theta}\) the angle between \(L\) and \(z^{\prime}(\bar{\varphi})\) in the counter-clockwise direction. The _billiard map_ \(f:\operatorname{Int}\mathcal{M}\to\operatorname{Int}\mathcal{M}\) is defined by \(f(\varphi,\theta)=(\bar{\varphi},\bar{\theta})\), see Figure 3. Note that \(f\) is continuous since \(\Gamma\) is \(C^{1}\) and strictly convex. The billiard map can be extended continuously to \(\partial\mathcal{M}\) by setting \(f(\varphi,0)=(\varphi,0)\) and \(f(\varphi,\pi)=(\varphi,\pi)\) for each \(\varphi\in\mathbb{T}\). The billiard map \(f:\mathcal{M}\to\mathcal{M}\) is a homeomorphism; indeed, the map \(f^{-1}=I\circ f\circ I\) is a continuous inverse, where the involution \(I:\mathcal{M}\to\mathcal{M}\) is defined by \(I(\varphi,\theta)=(\varphi,\pi-\theta)\).

A key geometric property of the billiard dynamics in the case of impacts in consecutive arcs was presented in [36]. Later on, a more detailed description was given in [4]. Both results follow from trigonometric arguments. Recall that \(\delta_{j}=b_{j}-a_{j}\) and \(b_{j}=a_{j+1}\).

**Lemma 11**.: _The billiard map \(f:\mathcal{M}\to\mathcal{M}\) satisfies the following properties._

(a) _If_ \(a_{j}\leq\varphi\leq\varphi+2\theta\leq b_{j}\)_, then_ \((\bar{\varphi},\bar{\theta})=f(\varphi,\theta)=(\varphi+2\theta,\theta)\)_._

(b) _Let_ \(g(\theta;\mu)=\operatorname{acos}\bigl{(}(1-\mu^{2})+\mu^{2}\cos\theta\bigr{)}\)_. If_ \(0<\theta\leq\delta_{j}\) _and_ \(\bar{\theta}=g(\theta;\mu_{j})\leq\delta_{j+1}\)_, then_

\[f(b_{j}-\theta,\theta)=(a_{j+1}+\bar{\theta},\bar{\theta})\qquad\text{and}\qquad\begin{cases}\bar{\theta}<\mu_{j}\theta,&\text{when }\mu_{j}<1,\\ \bar{\theta}>\mu_{j}\theta,&\text{when }\mu_{j}>1.\end{cases} \tag{5}\]

(c) _Given any_ \(\epsilon>0\) _there exists_ \(\psi=\psi(\epsilon)>0\) _such that_

\[\begin{array}{l}f(\varphi,\theta)=(\bar{\varphi},\bar{\theta})\text{ with }0<\theta\leq\psi\\ \text{ and }a_{j}\leq\varphi\leq a_{j+1}\leq\bar{\varphi}\leq a_{j+2}\end{array}\Longrightarrow\begin{cases}\bar{\theta}>(\mu_{j}-\epsilon)\theta,&\text{when }\mu_{j}<1,\\ \bar{\theta}<(\mu_{j}+\epsilon)\theta,&\text{when }\mu_{j}>1.\end{cases}\]

Proof.:

(a) If \(a_{j}\leq\varphi\leq\varphi+2\theta\leq b_{j}\), then \(z(\varphi),z(\varphi+2\theta)\in\Gamma_{j}\), so \(f\) behaves as a circular billiard map, in which case it is well-known that \(f(\varphi,\theta)=(\varphi+2\theta,\theta)\).

(b) Set \(\varphi=b_{j}-\theta\) and \((\bar{\varphi},\bar{\theta})=f(\varphi,\theta)\). Condition \(0<\theta\leq\delta_{j}\) implies that \(z(\varphi)\in\Gamma_{j}\).
Identity \(\varphi+\theta=b_{j}\) implies that the lines \(L=L(\varphi,\theta)\) and \(N_{j}\) are perpendicular, where \(N_{j}\) is the normal to \(\Gamma\) at \(z(b_{j})\). If, in addition, \(z(\bar{\varphi})\in\Gamma_{j+1}\), then Hubacher proved (5) in [36, page 486]. Finally, we note that \(\bar{\theta}\leq\delta_{j+1}\) implies that \(z(\bar{\varphi})\in\Gamma_{j+1}\).

(c) Balint _et al._ [4] proved the following generalization of Hubacher's computation. Set \(f(\varphi,\theta)=(\bar{\varphi},\bar{\theta})\). If \(a_{j}\leq\varphi\leq a_{j+1}\leq\bar{\varphi}\leq a_{j+2}\), so that \(z(\varphi)\in\Gamma_{j}\) and \(z(\bar{\varphi})\in\Gamma_{j+1}\), then there exist angles \(\varphi^{+}\in[0,2\theta]\) and \(\varphi^{-}\in[0,2\theta]\) such that

\[\varphi=b_{j}-\varphi^{+},\qquad\bar{\varphi}=a_{j+1}+\varphi^{-},\qquad\varphi^{+}+\varphi^{-}=\theta+\bar{\theta},\]
\[\bar{\theta}=\,\mathrm{acos}\left((1-\mu_{j}^{2})\cos(\theta-\varphi^{+})+\mu_{j}^{2}\cos\theta\right).\]

Hubacher's computation corresponds to the case \(\varphi^{+}=\theta\) and \(\varphi^{-}=\bar{\theta}\). A straightforward computation with Taylor expansions shows that

\[\bar{\theta}=\Omega_{j}(s)\theta+\mathrm{O}(\theta^{3}),\quad\Omega_{j}(s)=\sqrt{\mu_{j}^{2}+(1-\mu_{j}^{2})s^{2}},\quad s=1-\varphi^{+}/\theta\in[-1,1], \tag{6}\]

as \(\theta\to 0^{+}\). The function \(\Omega_{j}(s)\) is even, with \(\Omega_{j}(0)=\mu_{j}\) and \(\Omega_{j}^{2}(s)=\mu_{j}^{2}+(1-\mu_{j}^{2})s^{2}\). If \(\mu_{j}<1\), then \(\Omega_{j}(s)\) increases for \(s>0\) and decreases for \(s<0\), so \(\Omega_{j}(s)\geq\mu_{j}\) for all \(s\in[-1,1]\). If \(\mu_{j}>1\), then \(\Omega_{j}(s)\) decreases for \(s>0\) and increases for \(s<0\), so \(\Omega_{j}(s)\leq\mu_{j}\) for all \(s\in[-1,1]\).

If \(a_{j}\leq\varphi<\varphi+2\theta=b_{j}\), then \(z(\varphi)\in\Gamma_{j}\) and \(\bar{\varphi}=b_{j}=a_{j+1}\), so \(z(\bar{\varphi})\in\Gamma_{j}\cap\Gamma_{j+1}\), but part (a) of Lemma 11 still applies, because the tangents to \(\Gamma_{j}\) and \(\Gamma_{j+1}\) agree at \(z(\bar{\varphi})\) by the definition of circular polygon. This fact will be used in Proposition 50 to construct some special periodic _nodal_ billiard trajectories in _rational_ circular polygons, which are introduced in Definition 49. Similar nodal periodic billiard trajectories were constructed in [10] to answer a question about length spectrum and rigidity.

Lemma 11 and the above observation describe two rather different ways in which the angle \(\theta\) can vary as a sliding billiard trajectory jumps from one arc to the next. On the one hand, if the trajectory impacts at the corresponding node, there is no change: \(\bar{\theta}=\theta\). On the other hand, if the billiard trajectory is perpendicular to the normal line at the corresponding node, we have the largest possible change: \(\bar{\theta}<\mu_{j}\theta\) for \(\mu_{j}<1\) or \(\bar{\theta}>\mu_{j}\theta\) for \(\mu_{j}>1\). The great contrast between these two situations is the crucial fact behind the non-existence of caustics near the boundary obtained by Hubacher [36]. It is also the main ingredient to obtain all the chaotic properties stated in the introduction.

Next, we introduce the main geometric subsets of the phase space \(\mathcal{M}=\mathbb{T}\times[0,\pi]\). All of them are denoted with calligraphic letters.

**Definition 12**.: The _\(j\)-singularity segment_ is the vertical segment \(\mathcal{L}_{j}=\{a_{j}\}\times[0,\pi]\subset\mathcal{M}\).
Given any \(s>0\), the _\((j,\pm s)\)-singularity segments_ are the slanted segments

\[\mathcal{L}_{j}^{-s}=\big{\{}(\varphi,\theta)\in\mathcal{M}:a_{j-1}\leq\varphi=a_{j}-2\theta s\big{\}},\quad\mathcal{L}_{j}^{s}=\big{\{}(\varphi,\theta)\in\mathcal{M}:\varphi=a_{j}+2\theta s\leq a_{j+1}\big{\}}.\]

The _\(j\)-fundamental domain_ is the triangular domain

\[\mathcal{D}_{j}=\{(\varphi,\theta)\in\mathcal{M}:a_{j}\leq\varphi\leq a_{j}+2\theta\leq a_{j+1}\}\,.\]

Finally, \(\mathcal{L}=\bigcup_{j=1}^{k}\left(\mathcal{L}_{j}\cup\mathcal{L}_{j}^{1/2}\cup\mathcal{L}_{j}^{1}\right)\) is the _extended singularity set_.

Note that \(\mathcal{L}_{j}^{n}\subset f^{n}(\mathcal{L}_{j})\) for all \(n\in\mathbb{Z}\), so \(\mathcal{L}_{j}^{s}\) is a generalization of the forward and backward iterates under the billiard map of the \(j\)-singularity segments when \(s\not\in\mathbb{Z}\). We will only need the segments \(\mathcal{L}_{j}^{s}\) for values \(s=n\) and \(s=n+1/2\) with \(n\in\mathbb{Z}\). The left (respectively, right) side of the triangle \(\mathcal{D}_{j}\) is contained in the vertical segment \(\mathcal{L}_{j}\) (respectively, coincides with the slanted segment \(\mathcal{L}_{j}^{1}\)).

We have used the term 'sliding' in a clumsy way until now. Let us clarify its precise meaning. Let \(\Pi_{\varphi}:\mathcal{M}\to\mathbb{T}\) and \(\Pi_{\theta}:\mathcal{M}\to[0,\pi]\) be the projections \(\Pi_{\varphi}(\varphi,\theta)=\varphi\) and \(\Pi_{\theta}(\varphi,\theta)=\theta\). Let \(J:\mathcal{M}\setminus\mathcal{L}\to\mathbb{Z}/k\mathbb{Z}\) be the piece-wise constant map defined by \(a_{j}<\Pi_{\varphi}(x)<b_{j}\Rightarrow J(x)=j\). This map is well-defined since \(\Pi_{\varphi}(x)\not\in\{a_{1},\dots,a_{k}\}=\{b_{1},\dots,b_{k}\}\) when \(x\not\in\mathcal{L}\).

**Definition 13**.: A billiard orbit is _(counter-clockwise) sliding_ when any two consecutive impact points are either in the same arc or in consecutive arcs in the counter-clockwise direction. An orbit is _generic_ when it avoids the extended singularity set. We denote by \(\mathcal{S}_{0}\) the set of all initial conditions that give rise to generic counter-clockwise sliding orbits. That is,

\[\mathcal{S}_{0}=\big{\{}x\in\mathcal{M}:J(f^{n+1}(x))-J(f^{n}(x))\in\{0,1\}\text{ and }f^{n}(x)\not\in\mathcal{L}\text{ for all }n\in\mathbb{Z}\big{\}}\,.\]

The _(counter-clockwise) generic sliding set_ \(\mathcal{S}_{0}\) is \(f\)-invariant. The term _glancing_ (see, for instance, [46]) is also used in the literature, but sliding is the most widespread term. A consequence of part (a) of Lemma 11 is that any generic sliding billiard orbit has exactly one point in \(\mathrm{Int}\,\mathcal{D}_{j}\) on each turn around \(\Gamma\). This fact establishes the _fundamental_ character of \(\mathcal{D}_{j}\). Following the notation used in the introduction, \(\mathcal{S}_{\pi}\) is the clockwise generic sliding set, but we are not going to deal with it.

**Remark 14**.: If \(x\in\mathcal{M}\) is a point such that \(x_{i}=(\varphi_{i},\theta_{i})=f^{i}(x)\in\mathcal{L}\) for some \(i\in\mathbb{Z}\), then its billiard trajectory \(\big{(}z_{n}=z(\Pi_{\varphi}(f^{n}(x)))\big{)}_{n\in\mathbb{Z}}\) has some impact point \(z_{m}\in\Gamma_{\star}\), where \(\Gamma_{\star}\) is the set of nodes (1), or has two consecutive impact points \(z_{m}\in\Gamma_{j}\) and \(z_{m+1}\in\Gamma_{j+1}\) such that the segment from \(z_{m}\) to \(z_{m+1}\) is perpendicular to the normal to \(\Gamma\) at the node \(\Gamma_{j}\cap\Gamma_{j+1}\).
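Before moving on, the key angle-drop formula of Lemma 11(b) admits a quick numerical sanity check; the sketch below is ours and is not part of the proofs. For \(\mu<1\) one sees that \(\bar{\theta}=\operatorname{acos}\bigl{(}(1-\mu^{2})+\mu^{2}\cos\theta\bigr{)}\) satisfies \(\bar{\theta}<\mu\theta\), with \(\bar{\theta}/(\mu\theta)\to 1\) as \(\theta\to 0^{+}\), in agreement with the expansion (6) at \(s=0\).

```python
import numpy as np

def g(theta, mu):
    """Angle after the extreme crossing of a singularity (Lemma 11(b)):
    g(theta; mu) = acos((1 - mu**2) + mu**2 * cos(theta))."""
    return np.arccos((1.0 - mu**2) + mu**2 * np.cos(theta))

mu = 0.7        # mu_j = sqrt(r_j / r_{j+1}) < 1, i.e. r_j < r_{j+1}
for theta in (0.5, 0.1, 0.01):
    tb = g(theta, mu)
    print(f"theta = {theta}:  g = {tb:.6f},  mu*theta = {mu * theta:.6f},"
          f"  g/(mu*theta) = {tb / (mu * theta):.6f}")
```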
**Lemma 15**.: _Let \(s,t\geq 0\) and \(j\mod k\) such that \(s+t\geq\delta_{j}/2\pi\). Then \(\mathcal{L}_{j}^{s}\cap\mathcal{L}_{j+1}^{-t}\neq\emptyset\) and_

\[\Pi_{\varphi}\left(\mathcal{L}_{j}^{s}\cap\mathcal{L}_{j+1}^{-t}\right)=a_{j}+\frac{s\delta_{j}}{s+t},\qquad\Pi_{\theta}\left(\mathcal{L}_{j}^{s}\cap\mathcal{L}_{j+1}^{-t}\right)=\frac{\delta_{j}}{2s+2t}.\]

Proof.: By definition, \((\varphi,\theta)\in\mathcal{L}_{j}^{s}\cap\mathcal{L}_{j+1}^{-t}\Leftrightarrow a_{j}\leq a_{j}+2\theta s=\varphi=a_{j+1}-2\theta t\leq a_{j+1}\). Identity \(a_{j}+2\theta s=a_{j+1}-2\theta t\) implies that \(2\theta=\delta_{j}/(s+t)\). Then inequality \(s+t\geq\delta_{j}/2\pi\) implies that \(\theta\leq\pi\). Finally, \(a_{j}+2\theta s\leq a_{j+1}\) and \(a_{j+1}-2\theta t\geq a_{j}\) because \(s/(s+t),t/(s+t)\leq 1\).

This lemma implies that the segments \(\mathcal{L}_{j+1}^{-n+1}\) and \(\mathcal{L}_{j+1}^{-n}\) intersect the segments \(\mathcal{L}_{j}\) and \(\mathcal{L}_{j}^{1}\) for any integer \(n\geq 2>1+\delta_{j}/2\pi\), so the following definition makes sense. See Figure 4.

**Definition 16**.: Let \(n\) be an integer such that \(n\geq 2\). The \((j,n)\)-_fundamental quadrilateral_ is the oriented cell \(\widetilde{\mathcal{Q}}_{j,n}\subset\mathcal{M}\) bounded by \(\mathcal{L}_{j}\) (left side), \(\mathcal{L}_{j+1}^{-n}\) (base side), \(\mathcal{L}_{j}^{1}\) (right side) and \(\mathcal{L}_{j+1}^{-n+1}\) (top side). We split \(\mathcal{Q}_{j,n}\) in two by means of the segment \(\mathcal{L}_{j+1}^{-n+1/2}\), which gives rise to two smaller oriented cells: \(\widetilde{\mathcal{Q}}_{j,n}^{-}\) (the lower one) and \(\widetilde{\mathcal{Q}}_{j,n}^{+}\) (the upper one), whose left and right sides are still contained in \(\mathcal{L}_{j}\) and \(\mathcal{L}_{j}^{1}\), respectively. We say that \(\widetilde{\mathcal{Q}}_{j,n}^{\pm}\) is the \((\pm,j,n)\)-_fundamental quadrilateral_.

In order to find sufficient conditions for \(f^{n}:\widetilde{\mathcal{Q}}_{j,n}^{\varsigma}\rightsquigarrow\widetilde{\mathcal{Q}}_{j+1,n^{\prime}}^{\varsigma^{\prime}}\), we need the extreme values of \(\Pi_{\theta}(f^{n}(x))\) when \(x\) moves on the horizontal sides of \(\widetilde{\mathcal{Q}}_{j,n}^{\varsigma}\) and the extreme values of \(\Pi_{\theta}(x)\) when \(x\in\mathcal{Q}_{j+1,n^{\prime}}=\mathcal{Q}_{j+1,n^{\prime}}^{-}\cup\mathcal{Q}_{j+1,n^{\prime}}^{+}\). These extreme values are estimated below.

**Lemma 17**.: _Fix any \(j\mod k\). With the above notations, if \(\chi_{j}\geq 2\) is a large enough integer, then the following properties hold for all \(n\geq\chi_{j}\)._

(a) \(\nu_{j,n}:=\min_{x\in\mathcal{Q}_{j,n}}\Pi_{\theta}(x)=\delta_{j}/(2n+2)\) _and_ \(\omega_{j,n}:=\max_{x\in\mathcal{Q}_{j,n}}\Pi_{\theta}(x)=\delta_{j}/(2n-2)\)_._

(b) _If_ \(\nu^{s}_{j,n}:=\min_{x\in\mathcal{Q}_{j,n}\cap\mathcal{L}_{j+1}^{-n+s}}\Pi_{\theta}(f^{n}(x))\) _and_ \(\omega^{s}_{j,n}:=\max_{x\in\mathcal{Q}_{j,n}\cap\mathcal{L}_{j+1}^{-n+s}}\Pi_{\theta}(f^{n}(x))\) _for_ \(s\in\{0,1/2,1\}\)_, then_

(i) \(\nu^{0}_{j,n}=\delta_{j}/(2n+2)\)_,_ \(\nu^{1}_{j,n}=\delta_{j}/2n=\omega^{0}_{j,n}\) _and_ \(\omega^{1}_{j,n}=\delta_{j}/(2n-2)\)_;_
(ii) \(\omega^{1/2}_{j,n}<\mu_{j}\delta_{j}/(2n-1)\) _when_ \(\mu_{j}<1\)_; and_

(iii) \(\nu^{1/2}_{j,n}>\mu_{j}\delta_{j}/(2n+1)\) _when_ \(\mu_{j}>1\)_._

Proof.: The fundamental quadrilateral \(\mathcal{Q}_{j,n}\) is only well-defined for \(n\geq 2\). The reader must keep in mind Lemmas 11 and 15. See Figure 4 for a visual guide.

(a) The minimum and maximum values are attained at the intersections \(\mathcal{L}_{j}^{1}\cap\mathcal{L}_{j+1}^{-n}\) and \(\mathcal{L}_{j}\cap\mathcal{L}_{j+1}^{-n+1}\), respectively.

(b)(i) If \(x\in\mathcal{Q}_{j,n}\cap\mathcal{L}_{j+1}^{-n}\) or \(x\in\mathcal{Q}_{j,n}\cap\mathcal{L}_{j+1}^{-n+1}\), then \(\Pi_{\theta}(f^{n}(x))=\Pi_{\theta}(x)\) by part (a) of Lemma 11. Therefore, the four extreme values \(\nu^{0}_{j,n}\), \(\nu^{1}_{j,n}\), \(\omega^{0}_{j,n}\) and \(\omega^{1}_{j,n}\) are attained at the four intersections \(\mathcal{L}_{j}^{1}\cap\mathcal{L}_{j+1}^{-n}\), \(\mathcal{L}_{j}^{1}\cap\mathcal{L}_{j+1}^{-n+1}\), \(\mathcal{L}_{j}\cap\mathcal{L}_{j+1}^{-n}\) and \(\mathcal{L}_{j}\cap\mathcal{L}_{j+1}^{-n+1}\), respectively.

(b)(ii) First, the value \(\max_{x\in\mathcal{Q}_{j,n}\cap\mathcal{L}_{j+1}^{-n+1/2}}\Pi_{\theta}(x)\) is attained at \(\mathcal{L}_{j}\cap\mathcal{L}_{j+1}^{-n+1/2}\). Second, if \(x\in\mathcal{Q}_{j,n}\cap\mathcal{L}_{j+1}^{-n+1/2}\) and \(\mu_{j}<1\), then \(\Pi_{\theta}(f^{n}(x))<\mu_{j}\Pi_{\theta}(x)\) by part (b) of Lemma 11. We need the hypotheses \(0<\theta\leq\delta_{j}\) and \(\bar{\theta}=g(\theta;\mu_{j})\leq\delta_{j+1}\) to apply Lemma 11. In order to guarantee them, it suffices to take \(n\geq\chi_{j}\) with

\[\chi_{j}\geq 1+[\mu_{j}\delta_{j}/2\delta_{j+1}], \tag{7}\]

since then \(\chi_{j}\geq 2\) and \(2\chi_{j}-2\geq\mu_{j}\delta_{j}/\delta_{j+1}\), so \(\theta\leq\omega_{j,n}\leq\delta_{j}/(2\chi_{j}-2)\leq\delta_{j}/2<\delta_{j}\) and \(\bar{\theta}<\mu_{j}\theta\leq\mu_{j}\delta_{j}/(2\chi_{j}-2)\leq\delta_{j+1}\). Here \([\cdot]\) denotes the _ceil_ function.

(b)(iii) First, the value \(\min_{x\in\mathcal{Q}_{j,n}\cap\mathcal{L}_{j+1}^{-n+1/2}}\Pi_{\theta}(x)\) is attained at \(\mathcal{L}_{j}^{1}\cap\mathcal{L}_{j+1}^{-n+1/2}\). Second, if \(x\in\mathcal{Q}_{j,n}\cap\mathcal{L}_{j+1}^{-n+1/2}\) and \(\mu_{j}>1\), then \(\Pi_{\theta}(f^{n}(x))>\mu_{j}\Pi_{\theta}(x)\) by part (b) of Lemma 11. We still need the hypotheses \(0<\theta\leq\delta_{j}\) and \(\bar{\theta}=g(\theta;\mu_{j})\leq\delta_{j+1}\) in Lemma 11. In order to guarantee them, it suffices to take \(n\geq\chi_{j}\) for some large enough integer \(\chi_{j}\), since \(\lim_{n\to+\infty}\omega_{j,n}=0\) and \(\lim_{\theta\to 0^{+}}g(\theta;\mu_{j})=0\).

The following lemma (which we refer to as the _fundamental lemma_) is the key step in constructing generic sliding billiard trajectories that approach the boundary in optimal time, and in constructing symbolic dynamics. It describes which fundamental quadrilaterals in \(\mathcal{D}_{j+1}\) we can 'nicely' visit if we start in a given fundamental quadrilateral in \(\mathcal{D}_{j}\). See Figure 5 for a visual guide.

**Lemma 18** (Fundamental Lemma).: _With the above notations, let_

\[\Upsilon_{j}=\left\{(n,n^{\prime})\in\mathbb{N}^{2}:\alpha_{j}^{-}n+\beta_{j}^{-}\leq n^{\prime}\leq\alpha_{j}^{+}n-\beta_{j}^{+},\,n\geq\chi_{j},\,n^{\prime}\geq\chi_{j+1}\right\}, \tag{8}\]

_where \(\alpha_{j}^{-}=\delta_{j+1}/(\delta_{j}\max\{1,\mu_{j}\})\), \(\alpha_{j}^{+}=\delta_{j+1}/(\delta_{j}\min\{1,\mu_{j}\})\) and \(\beta_{j}^{\pm}=\alpha_{j}^{\pm}+1\) for all \(j\mod k\).
Then \(f^{n}:\widetilde{\mathcal{Q}}_{j,n}^{\varsigma}\rightsquigarrow\widetilde{\mathcal{Q}}_{j+1,n^{\prime}}^{\varsigma^{\prime}}\) for all \(j\mod k\), \((n,n^{\prime})\in\Upsilon_{j}\) and \(\varsigma,\varsigma^{\prime}\in\{-,+\}\)._

Proof.: Fix an index \(j\mod k\) such that \(\mu_{j}<1\) and let \((n,n^{\prime})\in\Upsilon_{j}\). First, we consider the case \(\varsigma=-\), so \(\gamma^{-}(a)\in\mathcal{Q}_{j,n}^{-}\cap\mathcal{L}_{j+1}^{-n}\) and \(\gamma^{-}(b)\in\mathcal{Q}_{j,n}^{-}\cap\mathcal{L}_{j+1}^{-n+1/2}\) for any path \(\gamma^{-}\) vertical in \(\widetilde{\mathcal{Q}}_{j,n}^{-}\). Lemma 17 implies that \(\Pi_{\theta}\big(f^{n}(\gamma^{-}(a))\big)\geq\nu^{0}_{j,n}=\delta_{j}/(2n+2)\) and \(\Pi_{\theta}\big(f^{n}(\gamma^{-}(b))\big)\leq\omega^{1/2}_{j,n}<\mu_{j}\delta_{j}/(2n-1)\). The above results and identities \(\nu_{j+1,n^{\prime}}=\delta_{j+1}/(2n^{\prime}+2)\) and \(\omega_{j+1,n^{\prime}}=\delta_{j+1}/(2n^{\prime}-2)\) imply that if inequalities \[\mu_{j}\delta_{j}/(2n-1)\leq\delta_{j+1}/(2n^{\prime}+2),\qquad\delta_{j+1}/(2n^{\prime}-2)\leq\delta_{j}/(2n+2)\tag{9}\] hold, then \(f^{n}(\gamma^{-}(a))\) and \(f^{n}(\gamma^{-}(b))\) are above and below \(\mathcal{Q}_{j+1,n^{\prime}}\), respectively, so they are in different connected components of \(\overline{\mathcal{D}_{j+1}\setminus\mathcal{Q}_{j+1,n^{\prime}}}\). Second, we consider the case \(\varsigma=+\), so \(\gamma^{+}(a)\in\mathcal{Q}_{j,n}^{+}\cap\mathcal{L}_{j+1}^{-n+1/2}\) and \(\gamma^{+}(b)\in\mathcal{Q}_{j,n}^{+}\cap\mathcal{L}_{j+1}^{-n+1}\). Similar arguments show that if inequalities \[\mu_{j}\delta_{j}/(2n-1)\leq\delta_{j+1}/(2n^{\prime}+2),\qquad\delta_{j+1}/(2n^{\prime}-2)\leq\delta_{j}/2n\tag{10}\] hold, then \(f^{n}(\gamma^{+}(a))\) and \(f^{n}(\gamma^{+}(b))\) are below and above \(\mathcal{Q}_{j+1,n^{\prime}}\), respectively, so they are in different connected components of \(\overline{\mathcal{D}_{j+1}\setminus\mathcal{Q}_{j+1,n^{\prime}}}\) for any path \(\gamma^{+}\) vertical in \(\widetilde{\mathcal{Q}}_{j,n}^{+}\). Finally, after a straightforward algebraic manipulation, we check that inequalities (9) and (10) hold for any \((n,n^{\prime})\in\Upsilon_{j}\). This ends the proof for the case \(\mu_{j}<1\). The case \(\mu_{j}>1\) follows from similar arguments. We skip the details.

No inequality in (8) is strict. However, we need some strict inequalities for a technical reason. Let us explain it. We will use the objects defined above and the fundamental lemma to construct our symbolic dynamics in Section 6. However, if we were to try to construct our symbolic dynamics directly with the symbol sets being the fundamental quadrilaterals \(\mathcal{Q}_{j,n}^{\varsigma}\), we would run into problems at the boundaries, where neighboring quadrilaterals intersect. To be precise, \(\mathcal{Q}_{j,n}^{+}\) and \(\mathcal{Q}_{j,n}^{-}\) have a common side contained in \(\mathcal{L}_{j+1}^{-n+1/2}\), whereas \(\mathcal{Q}_{j,n}^{-}\) and \(\mathcal{Q}_{j,n+1}^{+}\) have a common side contained in \(\mathcal{L}_{j+1}^{-n}\). The following corollary to Lemma 18 solves this problem by establishing the existence of pairwise disjoint strict horizontal slabs \(\widetilde{\mathcal{K}}_{j,n}^{\varsigma}\subsetneqq\widetilde{\mathcal{Q}}_{j,n}^{\varsigma}\) with _exactly_ the same stretching properties as the original fundamental quadrilaterals \(\widetilde{\mathcal{Q}}_{j,n}^{\varsigma}\).
This requires some strict inequalities.

**Corollary 19** (Fundamental Corollary).: _With the above notations, let_ \[\Xi_{j}=\left\{(n,n^{\prime})\in\mathbb{N}^{2}:\alpha_{j}^{-}n+\beta_{j}^{-}<n^{\prime}<\alpha_{j}^{+}n-\beta_{j}^{+},\;n\geq\chi_{j},\;n^{\prime}\geq\chi_{j+1}\right\},\] _where \(\alpha_{j}^{-}=\delta_{j+1}/\big(\delta_{j}\max\{1,\mu_{j}\}\big)\), \(\alpha_{j}^{+}=\delta_{j+1}/\big(\delta_{j}\min\{1,\mu_{j}\}\big)\) and \(\beta_{j}^{\pm}=\alpha_{j}^{\pm}+1\) for all \(j\mod k\). There are pairwise disjoint strict horizontal slabs_ \[\widetilde{\mathcal{K}}^{\varsigma}_{j,n}\subsetneqq\widetilde{\mathcal{Q}}^{\varsigma}_{j,n},\qquad\forall j\mod k,\;n\geq\chi_{j},\;\varsigma\in\{-,+\},\] _such that:_

1. \(f^{n}:\widetilde{\mathcal{K}}^{\varsigma}_{j,n}\rightsquigarrow\widetilde{\mathcal{K}}^{\varsigma^{\prime}}_{j+1,n^{\prime}}\)_; and_

2. \(f^{n}(\mathcal{K}^{\varsigma}_{j,n})\cap\mathcal{K}^{\varsigma^{\prime}}_{j+1,n^{\prime}}\cap\mathcal{L}=\emptyset\)_,_

_for all \(j\mod k\), \((n,n^{\prime})\in\Xi_{j}\) and \(\varsigma,\varsigma^{\prime}\in\{-,+\}\). (See Definition 12 for the meaning of \(\mathcal{L}\).)_

Proof.: Inequalities (9) and (10) become _strict_ for any \((n,n^{\prime})\in\Xi_{j}\). We consider the oriented cells \(\widetilde{\mathcal{R}}_{j+1,n}\), where \[\mathcal{R}_{j+1,n}=\bigcup_{(n,n^{\prime})\in\Xi_{j}}\mathcal{Q}_{j+1,n^{\prime}}\subset\mathcal{D}_{j+1},\] and orientations are chosen in such a way that the left and right sides of these big oriented cells are still contained in \(\mathcal{L}_{j+1}\) and \(\mathcal{L}^{1}_{j+1}\), respectively. Note that \(\mathcal{D}_{j+1}\setminus\mathcal{R}_{j+1,n}\) has a connected component above and another one below \(\mathcal{R}_{j+1,n}\).

Fix an index \(j\mod k\) such that \(\mu_{j}<1\). Let \(n\geq\chi_{j}\). We refer to Figure 5 for a visual guide. The reader should imagine that the blue quadrilateral shown in that figure is our whole cell \(\mathcal{R}_{j+1,n}\). _Strict_ versions of inequalities (9) and (10) imply that the images by \(f^{n}\) of both the base side of \(\widetilde{\mathcal{Q}}^{-}_{j,n}\) and the top side of \(\widetilde{\mathcal{Q}}^{+}_{j,n}\) are _strictly_ above \(\mathcal{R}_{j+1,n}\); whereas the image by \(f^{n}\) of the top side of \(\widetilde{\mathcal{Q}}^{-}_{j,n}\), which coincides with the base side of \(\widetilde{\mathcal{Q}}^{+}_{j,n}\), is _strictly_ below \(\mathcal{R}_{j+1,n}\). Thus, there are strict horizontal slabs \(\widetilde{\mathcal{K}}^{\varsigma}_{j,n}\subsetneqq\widetilde{\mathcal{Q}}^{\varsigma}_{j,n}\), with \(\varsigma\in\{-,+\}\), such that the images by \(f^{n}\) of both the base side of \(\widetilde{\mathcal{K}}^{-}_{j,n}\) and the top side of \(\widetilde{\mathcal{K}}^{+}_{j,n}\) are strictly above \(\mathcal{R}_{j+1,n}\); whereas the images by \(f^{n}\) of both the top side of \(\widetilde{\mathcal{K}}^{-}_{j,n}\) and the base side of \(\widetilde{\mathcal{K}}^{+}_{j,n}\) are strictly below \(\mathcal{R}_{j+1,n}\).
Consequently, \(f^{n}(\mathcal{K}^{\varsigma}_{j,n})\cap\mathcal{R}_{j+1,n}\cap\mathcal{L}=\emptyset\) and \(f^{n}:\widetilde{\mathcal{K}}^{\varsigma}_{j,n}\rightsquigarrow\widetilde{\mathcal{R}}_{j+1,n}\), which implies that \(f^{n}(\mathcal{K}^{\varsigma}_{j,n})\cap\mathcal{K}^{\varsigma^{\prime}}_{j+1,n^{\prime}}\cap\mathcal{L}=\emptyset\) and \(f^{n}:\widetilde{\mathcal{K}}^{\varsigma}_{j,n}\rightsquigarrow\widetilde{\mathcal{K}}^{\varsigma^{\prime}}_{j+1,n^{\prime}}\) for all \((n,n^{\prime})\in\Xi_{j}\) and \(\varsigma,\varsigma^{\prime}\in\{-,+\}\), since \(\widetilde{\mathcal{K}}^{\varsigma^{\prime}}_{j+1,n^{\prime}}\subsetneqq\widetilde{\mathcal{R}}_{j+1,n}\) for all \((n,n^{\prime})\in\Xi_{j}\) and \(\varsigma^{\prime}\in\{-,+\}\). This ends the proof of the stretching and intersecting properties when \(\mu_{j}<1\). The case \(\mu_{j}>1\) follows from similar arguments. We omit the details. Finally, these strict horizontal slabs are necessarily pairwise disjoint because the original fundamental quadrilaterals \(\widetilde{\mathcal{Q}}^{\varsigma}_{j,n}\) only share some of their horizontal sides.

## 5 Symbols, shift spaces and shift maps

In this section, we define an alphabet \(\boldsymbol{Q}\subset\mathbb{Z}^{k}\) with infinitely many symbols, then we consider two shift spaces \(\mathfrak{Q}^{+}\subset\boldsymbol{Q}^{\mathbb{N}_{0}}\) and \(\mathfrak{Q}\subset\boldsymbol{Q}^{\mathbb{Z}}\) of admissible one-sided and two-sided sequences, and finally we study some properties of the shift map \(\sigma:\mathfrak{Q}\to\mathfrak{Q}\). We present these objects in a separate section, minimizing their relation with circular billiards, since we believe that they will be useful in future works about other problems. For brevity, we will use the term shift instead of subshift, but \(\mathfrak{Q}^{+}\varsubsetneq\boldsymbol{Q}^{\mathbb{N}_{0}}\) and \(\mathfrak{Q}\varsubsetneq\boldsymbol{Q}^{\mathbb{Z}}\).

The sets \(\boldsymbol{Q}\), \(\mathfrak{Q}^{+}\) and \(\mathfrak{Q}\) are defined in terms of some positive factors \(\alpha_{j}^{\pm}\), some positive addends \(\beta_{j}^{\pm}\) and some integers \(\chi_{j}\geq 2\) for \(j=1,\ldots,k\), with \(k\geq 1\). (There are interesting billiard problems that will require \(k<3\), or even \(k=1\).) We only assume three hypotheses about these factors and integers:

(**A**) \(0<\alpha_{j}^{-}<\alpha_{j}^{+}\) for \(j=1,\ldots,k\);

(**B**) \(\alpha:=\alpha^{+}>1\) and \(\alpha^{+}\alpha^{-}=1\), where \(\alpha^{\pm}=\prod_{j=1}^{k}\alpha_{j}^{\pm}\); and

(**X**) the integers \(\chi_{2},\ldots,\chi_{k}\geq 2\) are large enough and \(\chi_{1}\gg\chi_{2},\ldots,\chi_{k}\).

There is no assumption on the addends. Factors \(\alpha_{j}^{\pm}=\alpha_{j\bmod k}^{\pm}\), addends \(\beta_{j}^{\pm}=\beta_{j\bmod k}^{\pm}\) and integers \(\chi_{j}=\chi_{j\bmod k}\) are extended cyclically. Clearly, all arcs \(\Gamma_{1},\ldots,\Gamma_{k}\) of the circular polygon \(\Gamma\) are equally important, so \(\chi_{1}\gg\chi_{2},\ldots,\chi_{k}\) is a purely technical hypothesis. It is used only once, at the beginning of the proof of Lemma 24, and it is needed just to establish the topological transitivity of the shift map. The rest of Theorem A, as well as Theorems C, D and E, do not need it.

We remark two facts related to billiards in circular polygons, although we forget about billiards in the rest of this section. The first remark is a trivial verification.
**Lemma 20**.: _The factors \(\alpha_{j}^{\pm}\) defined in Lemma 18 satisfy hypotheses_ (**A**) _and_ (**B**)_._

Proof.: Hypothesis (**A**) follows from properties \(\mu_{j}\neq 1\). Hypothesis (**B**) follows from the telescopic products \[\prod_{j=1}^{k}\frac{\delta_{j}}{\delta_{j+1}}=1,\qquad\prod_{j=1}^{k}\mu_{j}=\prod_{j=1}^{k}\sqrt{\frac{r_{j}}{r_{j+1}}}=1,\] which are easily obtained from the cyclic identities \(\delta_{k+1}=\delta_{1}\) and \(r_{k+1}=r_{1}\).

The second remark is that we prefer to encode in a single symbol all information related to each complete turn around \(\Gamma\), although we could construct our symbolic dynamics directly with the disjoint sets \(\widetilde{\mathcal{K}}_{j,n}^{\varsigma}\) as symbols. That is, if a generic sliding orbit follows, along a complete turn around the boundary \(\Gamma\), the itinerary \[\mathcal{K}_{1,n_{1}}^{\varsigma_{1}}\subset\mathcal{D}_{1},\ \mathcal{K}_{2,n_{2}}^{\varsigma_{2}}\subset\mathcal{D}_{2},\ \ldots,\ \mathcal{K}_{k,n_{k}}^{\varsigma_{k}}\subset\mathcal{D}_{k},\] where \(\widetilde{\mathcal{K}}_{j,n}^{\varsigma}\) are the pairwise disjoint horizontal slabs described in Corollary 19, then we construct the symbol \[\boldsymbol{q}=(q_{1},\ldots,q_{k})\in\mathbb{Z}^{k},\qquad|q_{j}|=n_{j},\qquad\operatorname{sign}(q_{j})=\varsigma_{j},\] which motivates the following definition.

**Definition 21**.: The _alphabet of admissible symbols_ is the set \[\boldsymbol{Q}=\left\{\boldsymbol{q}=(q_{1},\ldots,q_{k})\in\mathbb{Z}^{k}:\begin{array}{l}\alpha_{j}^{-}|q_{j}|+\beta_{j}^{-}<|q_{j+1}|<\alpha_{j}^{+}|q_{j}|-\beta_{j}^{+},\,\forall j=1,\ldots,k-1\\ |q_{j}|\geq\chi_{j},\,\forall j=1,\ldots,k\end{array}\right\}.\] This alphabet has infinitely many symbols by hypothesis (**A**). With the billiard motivation behind these symbols in mind, we ask that symbols associated to consecutive turns around \(\Gamma\) satisfy the following admissibility condition.

**Definition 22**.: We say that a finite, one-sided, or two-sided sequence of admissible symbols \(\mathfrak{q}=(\boldsymbol{q}^{i})_{i\in I}\subset\boldsymbol{Q}\), with \(\boldsymbol{q}^{i}=(q_{1}^{i},\ldots,q_{k}^{i})\), is _admissible_ if and only if \[\alpha_{k}^{-}|q_{k}^{i}|+\beta_{k}^{-}<|q_{1}^{i+1}|<\alpha_{k}^{+}|q_{k}^{i}|-\beta_{k}^{+},\qquad\forall i.\]

Admissible sequences are written with Fraktur font: \(\mathfrak{q}\). Their vector symbols are written with boldface font and labeled with superscripts: \(\boldsymbol{q}^{i}\). Components of admissible symbols are written with the standard font and labeled with subscripts: \(q_{j}^{i}\) or \(q_{j}\).

**Definition 23**.: The _shift spaces of admissible sequences_ are \[\mathfrak{Q}^{+}=\left\{\mathfrak{q}=(\boldsymbol{q}^{i})_{i\in\mathbb{N}_{0}}\in\boldsymbol{Q}^{\mathbb{N}_{0}}:\mathfrak{q}\text{ is admissible}\right\},\qquad\mathfrak{Q}=\left\{\mathfrak{q}=(\boldsymbol{q}^{i})_{i\in\mathbb{Z}}\in\boldsymbol{Q}^{\mathbb{Z}}:\mathfrak{q}\text{ is admissible}\right\}.\] If \(\boldsymbol{q}=(q_{1},\ldots,q_{k})\in\boldsymbol{Q}\), then we write \(|\boldsymbol{q}|=|q_{1}|+\cdots+|q_{k}|\).
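To fix ideas, consider the following hypothetical data, chosen for arithmetic convenience and not computed from any particular circular polygon: \(k=1\), \(\alpha_{1}^{-}=1/2\), \(\alpha_{1}^{+}=2\), \(\beta_{1}^{-}=3/2\), \(\beta_{1}^{+}=3\) and \(\chi_{1}=2\). Hypotheses (**A**) and (**B**) clearly hold. Then \(\boldsymbol{Q}=\{q\in\mathbb{Z}:|q|\geq 2\}\) and a sequence \(\mathfrak{q}=(q^{i})_{i}\) is admissible if and only if \[|q^{i}|/2+3/2<|q^{i+1}|<2|q^{i}|-3,\qquad\forall i.\] For instance, a symbol with \(|q^{i}|=10\) can be followed exactly by the symbols with \(|q^{i+1}|\in\{7,\ldots,16\}\), with both signs allowed.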
We equip \(\mathfrak{Q}\) with the topology defined by the metric \[d_{\mathfrak{Q}}:\mathfrak{Q}\times\mathfrak{Q}\to[0,+\infty),\qquad d_{\mathfrak{Q}}(\mathfrak{p},\mathfrak{q})=\sum_{i\in\mathbb{Z}}\frac{1}{2^{|i|}}\frac{|\boldsymbol{p}^{i}-\boldsymbol{q}^{i}|}{1+|\boldsymbol{p}^{i}-\boldsymbol{q}^{i}|}.\]

We want to estimate the size of the maxima (sometimes, the minima as well) of the sets \[\Xi_{j}^{i}(n)=\left\{n_{j}^{i}\in\mathbb{N}:\exists\mathfrak{q}\in\mathfrak{Q}\text{ such that }|q_{1}^{0}|=n\text{ and }|q_{j}^{i}|=n_{j}^{i}\right\},\quad i\in\mathbb{Z},\ j=1,\ldots,k,\tag{11}\] when \(n\geq\chi_{1}\) or \(n\gg 1\). We ask in (11) for the existence of some \(\mathfrak{q}\in\mathfrak{Q}\), that is, of some two-sided infinite sequence, but this is not restrictive. We would obtain exactly the same sets just by asking for the existence of some finite sequence \(q_{1}^{0},\ldots,q_{k}^{0},q_{1}^{1},\ldots,q_{k}^{1},\ldots,q_{1}^{i},\ldots,q_{j}^{i}\) that satisfies the corresponding admissibility conditions. Several estimates about the maxima and minima of the sets (11) are listed below. Their proofs have been postponed to Appendix A.

**Lemma 24**.: _We assume hypotheses_ (**A**), (**B**) _and_ (**X**)_. Set \(\zeta_{j}^{i}(n)=\min\Xi_{j}^{i}(n)\) and \(\xi_{j}^{i}(n)=\max\Xi_{j}^{i}(n)\). Let \(\rho^{0}(n)=n\) and \(\rho^{i}(n)=\sum_{j=1}^{k}\sum_{m=0}^{i-1}\xi_{j}^{m}(n)\) for all \(i\geq 1\)._

(a) _There are positive constants \(\nu<\lambda\), \(\nu^{\prime}<\lambda^{\prime}\), \(\tau<1\) and \(\gamma^{\pm}\), which depend on the factors \(\alpha_{j}^{\pm}\) and the addends \(\beta_{j}^{\pm}\) but not on the integers \(\chi_{j}\), such that the following properties hold._

(ai) \(\nu n\leq\xi_{j}^{0}(n)\leq\lambda n\) _for all \(j=1,\ldots,k\) and \(n\geq\chi_{1}\);_

(aii) \(\tau\alpha^{|i|}\xi_{j}^{0}(n)\leq\xi_{j}^{i}(n)\leq\alpha^{|i|}\xi_{j}^{0}(n)\) _for all \(j=1,\ldots,k\), \(i\in\mathbb{Z}\) and \(n\geq\chi_{1}\);_

(aiii) \(\nu^{\prime}\xi_{j}^{i}(n)\leq\rho^{i}(n)\leq\rho^{i+1}(n)\leq\lambda^{\prime}\xi_{j}^{i}(n)\) _for all \(j=1,\ldots,k\), \(i\geq 0\) and \(n\geq\chi_{1}\);_

(aiv) \(\zeta_{1}^{1}(n)\leq\max\{\chi_{1},n/\alpha+\gamma^{-}\}\leq n-1<n+1\leq\alpha n-\gamma^{+}\leq\xi_{1}^{1}(n)\) _for all \(n>\chi_{1}\), and \(\zeta_{1}^{1}(n)=n<n+1\leq\alpha n-\gamma^{+}\leq\xi_{1}^{1}(n)\) for \(n=\chi_{1}\); and_

(av) _once fixed any \(N\in\mathbb{N}\), we have that_ \[\chi_{1}\leq\zeta_{1}^{1}(n)\leq n/\alpha+\gamma^{-}\leq n-N<n+N\leq\alpha n-\gamma^{+}\leq\xi_{1}^{1}(n),\quad\forall n\gg 1.\]

(b) \(\Xi_{j}^{i}(n)=[\zeta_{j}^{i}(n),\xi_{j}^{i}(n)]\cap\mathbb{N}\) _for all \(j\mod k\), \(i\in\mathbb{Z}\) and \(n\geq\chi_{1}\); that is, \(\Xi_{j}^{i}(n)\) has no gaps in \(\mathbb{N}\). Besides, \(\big[\max\{\chi_{1},n-|i|\},n+|i|\big]\cap\mathbb{N}\subset\Xi_{1}^{i}(n)\) for all \(i\in\mathbb{Z}\) and \(n\geq\chi_{1}\)._

**Corollary 25**.: _We assume hypotheses_ (**A**), (**B**) _and_ (**X**)_._

(a) _Given any \(\boldsymbol{q}^{-},\boldsymbol{q}^{+}\in\boldsymbol{Q}\) there is an admissible sequence of the form \(\big(\boldsymbol{q}^{-},\boldsymbol{q}^{1},\ldots,\boldsymbol{q}^{l},\boldsymbol{q}^{+}\big)\) for some \(l\in\mathbb{N}\) and \(\boldsymbol{q}^{1},\ldots,\boldsymbol{q}^{l}\in\boldsymbol{Q}\)._

(b) _Given any \(N\in\mathbb{N}\), there is a subset \(\boldsymbol{Q}_{N}\subset\boldsymbol{Q}\), with \(\#\boldsymbol{Q}_{N}=N\), such that the short sequence \((\boldsymbol{q},\boldsymbol{q}^{\prime})\) is admissible for all \(\boldsymbol{q},\boldsymbol{q}^{\prime}\in\boldsymbol{Q}_{N}\)._

(c) \(\mathfrak{Q}\neq\emptyset\)_._

Proof.: (a) Let \(l=\big||q_{1}^{-}|-|q_{1}^{+}|\big|-1\).
Part (b) of Lemma 24 implies that \(|q_{1}^{+}|\in\Xi_{1}^{l+1}(|q_{1}^{-}|)\). Therefore, we can construct iteratively such a sequence \(\boldsymbol{q}^{1},\ldots,\boldsymbol{q}^{l}\).

(b) Fix \(N\in\mathbb{N}\). Part (av) of Lemma 24 implies that if \(\boldsymbol{q},\boldsymbol{q}^{\prime}\in\boldsymbol{Q}\) with \(\big||q_{1}|-|q_{1}^{\prime}|\big|\leq N\) and \(|q_{1}|,|q_{1}^{\prime}|\gg 1\), then \(\big(\boldsymbol{q},\boldsymbol{q}^{\prime}\big)\) is admissible. So, we can take any subset \(\boldsymbol{Q}_{N}=\{\boldsymbol{q}^{1},\ldots,\boldsymbol{q}^{N}\}\subset\boldsymbol{Q}\) such that \(|q_{1}^{n}|=|q_{1}^{1}|+n-1\) with \(|q_{1}^{1}|\gg 1\). Clearly, \(\#\boldsymbol{Q}_{N}=N\).

(c) Let \(\mathfrak{q}=(\boldsymbol{q}^{i})_{i\in\mathbb{Z}}\) with \(\boldsymbol{q}^{i}=(q_{1}^{i},\ldots,q_{k}^{i})\in\boldsymbol{Q}\) such that \(\big||q_{1}^{i+1}|-|q_{1}^{i}|\big|\leq 1\). Then \(\mathfrak{q}\in\mathfrak{Q}\) by part (b) of Lemma 24.

**Definition 26**.: The _shift map_ \(\sigma:\mathfrak{Q}\to\mathfrak{Q}\), \(\mathfrak{p}=\sigma(\mathfrak{q})\), is given by \(\boldsymbol{p}^{i}=\boldsymbol{q}^{i+1}\) for all \(i\in\mathbb{Z}\).

The following proposition tells us some important properties of the shift map. Note that by _topological transitivity_ we mean that for any nonempty open sets \(U,V\subset\mathfrak{Q}\) there is \(n\in\mathbb{N}\) such that \(\sigma^{n}(U)\cap V\neq\emptyset\). If \(N\in\mathbb{N}\), \(\Sigma_{N}=\{1,\ldots,N\}^{\mathbb{Z}}\) and the shift map \(\sigma_{N}:\Sigma_{N}\to\Sigma_{N}\), \((t_{i})_{i\in\mathbb{Z}}=\sigma_{N}\big((s_{i})_{i\in\mathbb{Z}}\big)\), is given by \(t_{i}=s_{i+1}\) for all \(i\in\mathbb{Z}\), then we say that \(\sigma_{N}:\Sigma_{N}\to\Sigma_{N}\) is the _full \(N\)-shift_. We denote by \(h_{\mathrm{top}}(f)\) the _topological entropy_ of a continuous self-map \(f\).

**Proposition 27**.: _We assume hypotheses_ (**A**), (**B**) _and_ (**X**)_. The shift map \(\sigma:\mathfrak{Q}\to\mathfrak{Q}\) exhibits topological transitivity and sensitive dependence on initial conditions, has infinite topological entropy, and contains the full \(N\)-shift as a topological factor for any \(N\in\mathbb{N}\). Besides, the subshift space of periodic admissible sequences_ \[\mathfrak{P}=\{\mathfrak{q}\in\mathfrak{Q}:\exists p\in\mathbb{N}\text{ such that }\sigma^{p}(\mathfrak{q})=\mathfrak{q}\}\] _is dense in the shift space \(\mathfrak{Q}\)._

Proof.: On the one hand, part (a) of Corollary 25 implies that the shift map \(\sigma:\mathfrak{Q}\to\mathfrak{Q}\) is equivalent to a transitive topological Markov chain. It is well-known that such objects exhibit topological transitivity, sensitive dependence on initial conditions, and density of periodic points. See, for example, Sections 1.9 and 3.2 of [39]. On the other hand, let \(\boldsymbol{Q}_{N}=\{\boldsymbol{q}^{1},\ldots,\boldsymbol{q}^{N}\}\) be the set provided in part (b) of Corollary 25. Set \(\mathfrak{Q}_{N}=(\boldsymbol{Q}_{N})^{\mathbb{Z}}\). We consider the bijection \[g=(g^{i})_{i\in\mathbb{Z}}:\mathfrak{Q}_{N}\to\Sigma_{N},\quad g^{i}(\boldsymbol{q}^{n})=n,\qquad\forall i\in\mathbb{Z},\;\forall n\in\{1,\ldots,N\}.\] Then \(\mathfrak{Q}_{N}\) is a subshift space of \(\mathfrak{Q}\). That is, \(\sigma(\mathfrak{Q}_{N})=\mathfrak{Q}_{N}\). Besides, \(g\circ\sigma_{|\mathfrak{Q}_{N}}=\sigma_{N}\circ g\), so \(h_{\mathrm{top}}(\sigma)\geq h_{\mathrm{top}}(\sigma_{|\mathfrak{Q}_{N}})=h_{\mathrm{top}}(\sigma_{N})=\log N\) for all \(N\in\mathbb{N}\). This means that \(\sigma\) has infinite topological entropy.
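As an illustration, part (b) of Corollary 25 becomes completely explicit in the hypothetical \(k=1\) example introduced after Definition 23 (\(\alpha_{1}^{-}=1/2\), \(\alpha_{1}^{+}=2\), \(\beta_{1}^{-}=3/2\), \(\beta_{1}^{+}=3\)). Both admissibility constraints \(n/2+3/2<n^{\prime}<2n-3\) hold for all \(n,n^{\prime}\in\{n_{0},\ldots,n_{0}+N-1\}\) as soon as \(n_{0}>N+2\), so we may take, say with positive signs, \[\boldsymbol{Q}_{N}=\{n_{0},n_{0}+1,\ldots,n_{0}+N-1\},\qquad n_{0}\geq N+3.\] Feeding these sets into the proof of Proposition 27 gives \(h_{\mathrm{top}}(\sigma)\geq\log N\) for every \(N\in\mathbb{N}\), and so infinite topological entropy.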
## 6 Chaotic motions

In this section, we detail the construction of a domain accumulating on the boundary of the phase space on which the dynamics is semiconjugate to a shift on infinitely many symbols, thus proving Theorem A; in fact, we reformulate Theorem A in the form of Theorem 31 below. The proof uses the method of _stretching along the paths_ summarized in Section 3, the fundamental corollary obtained in Section 4 and the shift map described in Section 5.

First, we list all notations and conventions used in this section. Let \(\Gamma\) be a circular \(k\)-gon with arcs \(\Gamma_{j}\), radii \(r_{j}\) and central angles \(\delta_{j}\). Set \(\mu_{j}=\sqrt{r_{j}/r_{j+1}}\). Then \(\alpha_{j}^{\pm}\), \(\beta_{j}^{\pm}=\alpha_{j}^{\pm}+1\) and \(\chi_{j}\geq 2\) are the factors, addends and integers introduced in Lemma 17 and Corollary 19. These factors satisfy hypotheses (**A**) and (**B**). Besides, we assume hypothesis (**X**), so Proposition 27 holds. The index \(j\) is defined modulo \(k\). Let \(\mathcal{M}=\mathbb{T}\times[0,\pi]\) be the phase space of the billiard map \(f:\mathcal{M}\to\mathcal{M}\). Let \(\mathcal{S}_{0}\) be the generic sliding set, see Definition 13. Let \(\widetilde{\mathcal{K}}_{j,n}^{\varsigma}\), with \(\mathcal{K}_{j,n}^{\varsigma}\subset\mathcal{D}_{j}\), be the pairwise disjoint oriented cells introduced in Corollary 19, where \(j\mod k\), \(n\in\mathbb{N}\) with \(n\geq\chi_{j}\) and \(\varsigma\in\{-,+\}\). Let \(\sigma:\mathfrak{Q}\to\mathfrak{Q}\) be the shift map studied in Section 5. Finally, the reader should be aware of the conventions about calligraphic, Fraktur, boldface and standard fonts.

**Definition 28**.: _Partial sums \(s^{i},s^{i}_{j}:\mathfrak{Q}\to\mathbb{Z}\) for \(i\in\mathbb{Z}\) and \(j=1,\ldots,k\) are defined by_ \[s^{i}(\mathfrak{q})=\begin{cases}\quad\sum_{m=0}^{i-1}\sum_{j=1}^{k}|q_{j}^{m}|,&\text{ for }i\geq 0,\\ -\sum_{m=i}^{-1}\sum_{j=1}^{k}|q_{j}^{m}|,&\text{ for }i<0,\end{cases}\qquad s^{i}_{j}(\mathfrak{q})=s^{i}(\mathfrak{q})+\sum_{m=1}^{j-1}|q_{m}^{i}|.\] Partial sums \(s^{i},s^{i}_{j}:\mathfrak{Q}^{+}\to\mathbb{N}_{0}\) are analogously defined for \(i\geq 0\) and \(j=1,\ldots,k\).

The following proposition gives the relationship between some types of admissible sequences (two-sided: \(\mathfrak{q}\in\mathfrak{Q}\); one-sided: \(\mathfrak{q}\in\mathfrak{Q}^{+}\); and periodic: \(\mathfrak{q}\in\mathfrak{P}\)) and orbits of \(f\) with prescribed itineraries in the set of pairwise disjoint cells \(\mathcal{K}_{j,n}^{\varsigma}\) with \(j=1,\ldots,k\), \(n\geq\chi_{j}\) and \(\varsigma\in\{-,+\}\). It is the key step in obtaining chaotic properties.

**Proposition 29**.: _We have the following three versions._

* (**T**) _If \(\mathfrak{q}\in\mathfrak{Q}\), then there is \(x\in\mathcal{D}_{1}\) such that_ \[f^{s^{i}_{j}(\mathfrak{q})}(x)\in\mathcal{K}_{j,|q^{i}_{j}|}^{\operatorname{sign}(q^{i}_{j})},\qquad\forall i\in\mathbb{Z},\;\forall j=1,\ldots,k.\tag{12}\]

* (**O**) _If \(\mathfrak{q}\in\mathfrak{Q}^{+}\), then there is a path \(\gamma\subset\mathcal{D}_{1}\) such that:_ (i) \(f^{s^{i}_{j}(\mathfrak{q})}(\gamma)\subset\mathcal{K}_{j,|q^{i}_{j}|}^{\operatorname{sign}(q^{i}_{j})}\) _for all \(i\geq 0\) and \(j=1,\ldots,k\); and_ (ii) \(\gamma\) _is horizontal in \(\mathcal{D}_{1}\) (that is, \(\gamma\) connects the left side \(\mathcal{L}_{1}\) with the right side \(\mathcal{L}_{1}^{1}\))._

* (**P**) _If \(\mathfrak{q}\in\mathfrak{P}\) has period \(p\), then there is a point \(x\in\mathcal{D}_{1}\) such that:_
(i) \(f^{s_{j}^{i}(\mathfrak{q})}(x)\in\mathcal{K}^{\operatorname{sign}(q_{j}^{i})}_{j,|q_{j}^{i}|}\) _for all \(i\in\mathbb{Z}\) and \(j=1,\ldots,k\); and_ (ii) \(f^{s^{p}(\mathfrak{q})}(x)=x\)_, so \(x\) is a \((p,s^{p}(\mathfrak{q}))\)-periodic point of \(f\) with period \(s^{p}(\mathfrak{q})\) and rotation number \(p/s^{p}(\mathfrak{q})\)._

_All these billiard orbits are contained in the generic sliding set \(\mathcal{S}_{0}\). In particular, they have no points in the extended singularity set \(\mathcal{L}=\bigcup_{j}\left(\mathcal{L}_{j}\cup\mathcal{L}_{j}^{1/2}\cup\mathcal{L}_{j}^{1}\right)\). Obviously, these claims only hold for forward orbits in version_ (**O**)_._

Proof.: It is a direct consequence of Theorem 9, Corollary 19, the definitions of admissible symbols and admissible sequences, and the definition of rotation number.

To adapt the language of [49, 50, 52] to our setting, one could say that Proposition 29 implies that the billiard map _induces chaotic dynamics on infinitely many symbols_.

**Remark 30**.: The partial sum \(s^{i}(\mathfrak{q})\), with \(i\geq 0\), introduced in Definition 28 counts the number of impacts that any of its corresponding sliding billiard trajectories has after the first \(i\) turns around \(\Gamma\). Analogously, \(s^{i}_{j}(\mathfrak{q})\), with \(i\geq 0\), adds to the previous count the number of impacts in the first \(j-1\) arcs at the \((i+1)\)-th turn. There is no ambiguity in these counts, because generic sliding billiard trajectories have no impacts on the set of nodes \(\Gamma_{*}\), see Remark 14. The partial sums with \(i<0\) store information about the backward orbit.

Let us introduce four subsets of the first fundamental domain that will be invariant under a return map \(F\) yet to be defined. First, we consider the _fundamental generic sliding set_ \[\mathcal{R}=\mathcal{S}_{0}\cap\mathcal{D}_{1}\subset\mathrm{Int}\,\mathcal{D}_{1}.\] Any \(f\)-orbit that begins in \(\mathcal{R}\) returns to \(\mathcal{R}\) after a finite number of iterations of the billiard map \(f\). Let \(\tau:\mathcal{R}\to\mathbb{N}\) be the _return time_ defined as \(\tau(x)=\min\{n\in\mathbb{N}:f^{n}(x)\in\mathcal{R}\}\). Then \(F:\mathcal{R}\to\mathcal{R}\), \(F(x)=f^{\tau(x)}(x)\), is the promised _return map_. The return map \(F:\mathcal{R}\to\mathcal{R}\) is a homeomorphism since the billiard map \(f:\mathcal{M}\to\mathcal{M}\) is a homeomorphism and \(\mathcal{R}\) is contained in the interior of the fundamental set \(\mathcal{D}_{1}\). Next, we define the sets \[\mathcal{I}=\big\{x\in\mathcal{M}:\exists\mathfrak{q}\in\mathfrak{Q}\text{ such that the prescribed itinerary (12) takes place}\big\},\qquad\mathcal{P}=\big\{x\in\mathcal{I}:\exists p\in\mathbb{N}\text{ such that }F^{p}(x)=x\big\},\] and the map \(h:\mathcal{I}\to\mathfrak{Q}\), \(h(x)=\mathfrak{q}\), where \(\mathfrak{q}\) is the unique admissible sequence such that the prescribed itinerary (12) takes place. It is well-defined because the cells \(\mathcal{K}^{\varsigma}_{j,n}\) are pairwise disjoint. This is the topological semiconjugacy we were looking for. Clearly, \[\tau(x)=s^{1}(\mathfrak{q})=|q_{1}^{0}|+\cdots+|q_{k}^{0}|,\qquad\forall x\in\mathcal{I},\ \mathfrak{q}=h(x),\tag{13}\] where \(\tau(x)\) is the return time and the partial sum \(s^{1}(\mathfrak{q})\) counts the number of impacts after the first turn around \(\Gamma\) of the billiard orbit starting at \(x\).
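Let us illustrate Definition 28 and identity (13) with some hypothetical numbers, chosen only for arithmetic convenience and not claimed to be admissible for any concrete polygon. Suppose that \(k=2\) and that \(\mathfrak{q}=h(x)\) satisfies \((|q_{1}^{0}|,|q_{2}^{0}|)=(5,8)\) and \((|q_{1}^{1}|,|q_{2}^{1}|)=(6,9)\). Then \[s^{0}(\mathfrak{q})=0,\quad s_{2}^{0}(\mathfrak{q})=5,\quad s^{1}(\mathfrak{q})=13,\quad s_{2}^{1}(\mathfrak{q})=19,\quad s^{2}(\mathfrak{q})=28,\] so the billiard orbit of \(x\) makes \(5\) impacts on \(\Gamma_{1}\) and \(8\) impacts on \(\Gamma_{2}\) during its first turn around \(\Gamma\), and \(\tau(x)=s^{1}(\mathfrak{q})=13\), in agreement with (13).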
**Theorem 31**.: _The sets \(\mathcal{P}\), \(\mathcal{J}:=\overline{\mathcal{P}}\), \(\mathcal{I}\) and \(\mathcal{R}\) are \(F\)-invariant:_ \[F(\mathcal{P})=\mathcal{P},\qquad F(\mathcal{J})=\mathcal{J},\qquad F(\mathcal{I})\subset\mathcal{I},\qquad F(\mathcal{R})=\mathcal{R}.\] _Besides, \(\emptyset\neq\mathcal{P}\subsetneq\mathcal{J}\subset\mathcal{I}\subset\mathcal{R}\). The maps \(h:\mathcal{I}\to\mathfrak{Q}\), \(h_{|\mathcal{J}}:\mathcal{J}\to\mathfrak{Q}\) and \(h_{|\mathcal{P}}:\mathcal{P}\to\mathfrak{P}\) are continuous surjections, and the three diagrams_ (14) _commute. Periodic points of \(F_{|\mathcal{J}}\) are dense in \(\mathcal{J}\). Given any \(\mathfrak{q}\in\mathfrak{P}\) with period \(p\), there is at least one \(x\in(h_{|\mathcal{P}})^{-1}(\mathfrak{q})\subset\mathcal{P}\) such that \(f^{s^{p}(\mathfrak{q})}(x)=F^{p}(x)=x\)._

Proof.: Properties \(F(\mathcal{P})=\mathcal{P}\), \(F(\mathcal{R})=\mathcal{R}\) and \(\mathcal{P}\subset\mathcal{I}\) are trivial, by construction. Inclusion \(\mathcal{I}\subset\mathcal{R}\) follows from the definitions of both sets and property (b) of Corollary 19.

Let us prove that \(h:\mathcal{I}\to\mathfrak{Q}\) is continuous and surjective. Surjectivity follows directly from version (**T**) of Proposition 29. Choose any \(x\in\mathcal{I}\) and \(\epsilon>0\). Choose \(l\in\mathbb{N}\) such that \(\sum_{|i|>l}2^{-|i|}<\epsilon\). Let \(\mathfrak{q}=(\boldsymbol{q}^{i})_{i\in\mathbb{Z}}=h(x)\) with \(\boldsymbol{q}^{i}=(q_{1}^{i},\ldots,q_{k}^{i})\). Using that the compact sets \(\mathcal{K}_{j,n}^{\varsigma}\) are mutually disjoint, \(F\) is a homeomorphism and condition (12), we can find \(\delta_{j}^{i}>0\) for each \(|i|\leq l\) and \(j=1,\ldots,k\) such that \[f^{s_{j}^{i}(\mathfrak{q})}\big(\mathcal{B}_{\delta_{j}^{i}}(x)\cap\mathcal{I}\big)\subset\mathcal{K}_{j,|q_{j}^{i}|}^{\operatorname{sign}(q_{j}^{i})},\qquad\forall|i|\leq l,\;\forall j=1,\ldots,k.\] Here, \(\mathcal{B}_{\delta}(x)\) is the disc of radius \(\delta\) centered at \(x\). If \(\delta=\min\{\delta_{j}^{i}:|i|\leq l,\;j=1,\ldots,k\}\), then \[d(x,y)<\delta\text{ and }\mathfrak{p}=(\boldsymbol{p}^{i})_{i\in\mathbb{Z}}=h(y)\Longrightarrow\boldsymbol{p}^{i}=\boldsymbol{q}^{i}\text{ for each }|i|\leq l.\] Therefore, \[d_{\mathfrak{Q}}\big(h(y),h(x)\big)=d_{\mathfrak{Q}}(\mathfrak{p},\mathfrak{q})=\sum_{|i|>l}\frac{1}{2^{|i|}}\frac{|\boldsymbol{p}^{i}-\boldsymbol{q}^{i}|}{1+|\boldsymbol{p}^{i}-\boldsymbol{q}^{i}|}<\sum_{|i|>l}\frac{1}{2^{|i|}}<\epsilon,\] which implies that \(h:\mathcal{I}\to\mathfrak{Q}\) is continuous.

Next, we prove simultaneously that \(F(\mathcal{I})\subset\mathcal{I}\) and that \(\sigma\circ h=h\circ F_{|\mathcal{I}}\). Let \(x\in\mathcal{I}\), \(y=F(x)\in\mathcal{R}\), \(\mathfrak{q}=(\boldsymbol{q}^{i})_{i\in\mathbb{Z}}=h(x)\) with \(\boldsymbol{q}^{i}=(q_{1}^{i},\ldots,q_{k}^{i})\), and \(\mathfrak{p}=(\boldsymbol{p}^{i})_{i\in\mathbb{Z}}=\sigma(\mathfrak{q})\in\mathfrak{Q}\) with \(\boldsymbol{p}^{i}=(p_{1}^{i},\ldots,p_{k}^{i})\), so \(\boldsymbol{p}^{i}=\boldsymbol{q}^{i+1}\) and \(p_{j}^{i}=q_{j}^{i+1}\).
The prescribed itinerary (12) and relation (13) imply that \[f^{s_{j}^{i}(\mathfrak{p})}(y)=f^{s_{j}^{i}(\sigma(\mathfrak{q}))}(F(x))=f^{s_{j}^{i+1}(\mathfrak{q})-s^{1}(\mathfrak{q})}\big(f^{s^{1}(\mathfrak{q})}(x)\big)=f^{s_{j}^{i+1}(\mathfrak{q})}(x)\in\mathcal{K}_{j,|q_{j}^{i+1}|}^{\operatorname{sign}(q_{j}^{i+1})}=\mathcal{K}_{j,|p_{j}^{i}|}^{\operatorname{sign}(p_{j}^{i})}\] for all \(i\in\mathbb{Z}\) and \(j=1,\ldots,k\), so \(h(F(x))=h(y)=\mathfrak{p}=\sigma(\mathfrak{q})=\sigma(h(x))\) and \(F(x)=y\in h^{-1}(\mathfrak{p})\subset\mathcal{I}\) for all \(x\in\mathcal{I}\), as we wanted to prove. Hence, the first diagram in (14) defines a topological semiconjugacy.

Let us check that \(\mathcal{J}:=\overline{\mathcal{P}}\subset\mathcal{I}\) and \(F(\mathcal{J})=\mathcal{J}\). We have \(\mathcal{J}\subset\mathcal{I}\), because \(\mathcal{I}=h^{-1}(\mathfrak{Q})\) is closed (continuous preimage of a closed set). Besides, on the one hand we have \(\mathcal{P}=F(\mathcal{P})\subset F(\overline{\mathcal{P}})=F(\mathcal{J})\), implying that \(\mathcal{J}=\overline{\mathcal{P}}\subset\overline{F(\mathcal{J})}=F(\mathcal{J})\), while on the other we have \(F(\mathcal{J})=F(\overline{\mathcal{P}})\subset\overline{F(\mathcal{P})}=\overline{\mathcal{P}}=\mathcal{J}\).

To establish that the second diagram in (14) is still a topological semiconjugacy, we must prove that \(h(\mathcal{J})=\mathfrak{Q}\). We clearly have \(h(\mathcal{J})\subset\mathfrak{Q}\), since \(\mathcal{J}\subset\mathcal{I}\) and \(h_{|\mathcal{I}}\) is a semiconjugacy. Meanwhile, since \(\mathcal{J}\) is compact (closed by definition and contained in the bounded set \(\mathcal{D}_{1}\)) so too is \(h(\mathcal{J})\); moreover \(h(\mathcal{J})\) contains \(h(\mathcal{P})=\mathfrak{P}\), which is dense in \(\mathfrak{Q}\) by Proposition 27, and so we obtain \(\mathfrak{Q}=\overline{\mathfrak{P}}\subset h(\mathcal{J})\). Therefore \(h(\mathcal{J})=\mathfrak{Q}\).

To complete the proof of the theorem, notice that periodic points of \(F_{|\mathcal{J}}\) are dense in \(\mathcal{J}\) by construction, and the last claim of Theorem 31 follows from version (**P**) of Proposition 29, which also implies \(\mathcal{P}\neq\emptyset\).

Proposition 27 and Theorem 31 imply Theorem A as stated in the introduction, which is the first step in proving Theorems C, D and E. For instance, upon combining Theorem 31 with the topological transitivity of the shift map guaranteed by Proposition 27, we already obtain the existence of trajectories approaching the boundary asymptotically. It remains to determine the _optimal_ rate of diffusion. This is done in Section 7 by analyzing the sequences \(\mathfrak{q}=(\boldsymbol{q}^{i})_{i\geq 0}\in\mathfrak{Q}^{+}\) for which \(s^{i}(\mathfrak{q})\) increases in the _fastest_ possible way as \(i\to+\infty\). Lemma 24 plays a role in that analysis.

We end this section with three useful corollaries. First, we prove Corollary B on final sliding motions.

Proof of Corollary B.: The clockwise case is a by-product of the counter-clockwise one, because if we concatenate the arcs \(\Gamma_{1},\ldots,\Gamma_{k}\) of the original circular polygon \(\Gamma\) in the reverse order \(\Gamma_{k},\ldots,\Gamma_{1}\), then we obtain the reversed circular polygon \(\Gamma^{\prime}\) with the property that counter-clockwise sliding billiard trajectories in \(\Gamma^{\prime}\) are in 1-to-1 correspondence with clockwise sliding billiard trajectories in \(\Gamma\). Thus, it suffices to consider the counter-clockwise case.
Symbols \(\boldsymbol{q}=(q_{1},\ldots,q_{k})\in\boldsymbol{Q}\subset\mathbb{Z}^{k}\) keep track of the proximity of the fundamental quadrilaterals \(\mathcal{Q}_{j,|q_{j}|}\) to the inferior boundary of \(\mathcal{M}\). That is, the larger the absolute value \(|q_{j}|\), the smaller the angle of reflection \(\theta\) for any \(x=(\varphi,\theta)\in\mathcal{Q}_{j,|q_{j}|}\). For this reason, by construction, if one considers a bounded sequence \(\mathfrak{q}\in\mathfrak{Q}\) (respectively, a sequence \(\mathfrak{q}\in\mathfrak{Q}\) such that \(\chi_{j}\leq\min_{i\in\mathbb{Z}}|q_{j}^{i}|<\limsup_{|i|\to+\infty}|q_{j}^{i}|=+\infty\) for all \(j=1,\ldots,k\); respectively, a sequence \(\mathfrak{q}\in\mathfrak{Q}\) such that \(\lim_{|i|\to+\infty}|q_{j}^{i}|=+\infty\) for all \(j=1,\ldots,k\)), the corresponding sliding orbit in \(\mathcal{J}\subset\mathcal{M}\) belongs to \(\mathcal{B}_{0}^{-}\cap\mathcal{B}_{0}^{+}\) (respectively, \(\mathcal{O}_{0}^{-}\cap\mathcal{O}_{0}^{+}\); respectively, \(\mathcal{A}_{0}^{-}\cap\mathcal{A}_{0}^{+}\)). By considering two-sided sequences \(\mathfrak{q}\in\mathfrak{Q}\) which have different behaviors at each side, one can construct trajectories which belong to \(\mathcal{X}_{0}^{-}\cap\mathcal{Y}_{0}^{+}\neq\emptyset\) for any prescribed choice \(\mathcal{X},\mathcal{Y}=\mathcal{B},\mathcal{O},\mathcal{A}\) such that \(\mathcal{X}\neq\mathcal{Y}\). The existence of all these sequences comes from part (b) of Lemma 24, since we can control the size of \(|q_{j}^{i}|\) just from the size of \(|q_{1}^{i}|\).

**Corollary 32**.: _With the notation as in Theorem 31, the following properties are satisfied._

1. _The return map \(F|_{\mathcal{J}}\) has infinite topological entropy._

2. _There is a compact \(F\)-invariant set \(\mathcal{K}\subset\mathcal{J}\) such that \(F|_{\mathcal{K}}\) is topologically semiconjugate to the shift \(\sigma:\mathfrak{Q}\to\mathfrak{Q}\) via the map \(h|_{\mathcal{K}}\) in the sense of (14); it is topologically transitive; and it has sensitive dependence on initial conditions._

Proof.: 1. It follows from the fact that \(\sigma:\mathfrak{Q}\to\mathfrak{Q}\) has infinite topological entropy and it is a topological factor of \(F:\mathcal{J}\to\mathcal{J}\). 2. It is a direct consequence of our Theorem 31 and a theorem of Auslander and Yorke. See [52, Item (v) of Theorem 2.1.6] for details.

Given any integers \(1\leq p<q\), let \(\Pi(p,q)\) be the set of \((p,q)\)-periodic billiard trajectories in the circular \(k\)-gon \(\Gamma\). That is, the set of periodic trajectories that close after \(p\) turns around \(\Gamma\) and \(q\) impacts in \(\Gamma\), so they have rotation number \(p/q\). The symbol \(\#\) denotes the _cardinality_ of a set. Let \(2^{\mathbb{R}^{n+1}}\) be the power set of \(\mathbb{R}^{n+1}\). Let \(G_{q}:2^{\mathbb{R}^{n+1}}\to\mathbb{N}_{0}\) be the function \[G_{q}(K)=\#\left\{\boldsymbol{x}=(x_{1},\ldots,x_{n+1})\in K\cap\mathbb{Z}^{n+1}:x_{1}+\cdots+x_{n+1}=q\right\}\] that counts the integer points in any subset \(K\subset\mathbb{R}^{n+1}\) whose coordinates sum to \(q\in\mathbb{N}\).

**Corollary 33**.: _Let \(\alpha_{j}^{\pm}\), \(\beta_{j}^{\pm}=\alpha_{j}^{\pm}+1\) and \(\chi_{j}\) be the quantities defined in Lemma 17 and Corollary 19.
If \(p,q\in\mathbb{N}\) with \(1\leq p<q\), then_ \[\#\Pi(p,q)\geq 2^{n+1}G_{q}\big(P^{(p)}\big),\tag{15}\] _where \(n+1=kp\) and_ \[P^{(p)}=\left\{\boldsymbol{x}\in\mathbb{R}^{n+1}:\begin{array}{ll}\alpha_{j}^{-}x_{j}+\beta_{j}^{-}<x_{j+1}<\alpha_{j}^{+}x_{j}-\beta_{j}^{+},\quad\forall j=1,\ldots,n\\ \alpha_{n+1}^{-}x_{n+1}+\beta_{n+1}^{-}<x_{1}<\alpha_{n+1}^{+}x_{n+1}-\beta_{n+1}^{+},\\ x_{j}\geq\chi_{j},\quad\forall j=1,\ldots,n+1\end{array}\right\}\tag{16}\] _is an unbounded convex polytope of \(\mathbb{R}^{n+1}\)._

Proof.: Let \(p,q\in\mathbb{N}\) such that \(1\leq p<q\). Set \(n+1=kp\). Let \(\mathfrak{P}_{p}\) be the set of admissible periodic sequences of period \(p\). We consider the map \(\psi_{p}:\mathfrak{P}_{p}\to\mathbb{N}^{n+1}\) defined by \[\psi_{p}(\mathfrak{q})=\boldsymbol{x}=(x_{1},\ldots,x_{n+1})=\left(|q_{1}^{0}|,\ldots,|q_{k}^{0}|,|q_{1}^{1}|,\ldots,|q_{k}^{1}|,\ldots,|q_{1}^{p-1}|,\ldots,|q_{k}^{p-1}|\right),\] where \(\mathfrak{q}=(\boldsymbol{q}^{i})_{i\in\mathbb{Z}}\in\mathfrak{P}_{p}\) and \(\boldsymbol{q}^{i}=(q_{1}^{i},\ldots,q_{k}^{i})\in\boldsymbol{Q}\). Note that \(s^{p}(\mathfrak{q})=x_{1}+\cdots+x_{n+1}\) when \(\boldsymbol{x}=\psi_{p}(\mathfrak{q})\). Besides, \(\psi_{p}(\mathfrak{P}_{p})\subset P^{(p)}\cap\mathbb{Z}^{n+1}\) and the map \(\psi_{p}:\mathfrak{P}_{p}\to P^{(p)}\cap\mathbb{Z}^{n+1}\) is \(2^{n+1}\)-to-\(1\) by construction. Therefore, each point \(\boldsymbol{x}\in P^{(p)}\cap\mathbb{Z}^{n+1}\) whose coordinates sum to \(q\) gives rise to at least \(2^{n+1}\) different generic sliding \((p,q)\)-periodic billiard trajectories, see version (**P**) of Proposition 29.

Lower bound (15) is far from optimal, since it does not take into account the periodic billiard trajectories that are not generic or not sliding. But we think that it captures with great accuracy the growth rate of \(\#\Pi(p,q)\) when \(p/q\) is relatively small and \(q\to+\infty\). It will be the first step in proving Theorem D in Section 8.

## 7 Optimal linear speed for asymptotic sliding orbits

In this section we establish the existence of uncountably many _points_ in the fundamental domain \(\mathcal{D}_{1}\) that give rise to generic asymptotic sliding billiard trajectories (that is, those trajectories in the intersection \(\mathcal{A}_{0}^{-}\cap\mathcal{A}_{0}^{+}\subset\mathcal{S}_{0}\) described in the introduction) that approach the boundary asymptotically with optimal uniform linear speed as \(|n|\to+\infty\). We also look for trajectories just in \(\mathcal{A}_{0}^{+}\subset\mathcal{S}_{0}\), in which case we obtain uncountably many _horizontal paths_ (not points) in \(\mathcal{D}_{1}\). The dynamic feature that distinguishes such trajectories is that they approach the boundary in the fastest way possible among all trajectories that give rise to admissible sequences of symbols. We believe that the union of all these horizontal paths (respectively, all these points) is a Cantor set times an interval (respectively, the product of two Cantor sets). However, in order to prove this rigorously, we would need to prove that our semiconjugacy \(h_{|\mathcal{J}}:\mathcal{J}\to\mathfrak{Q}\), see (14), is, indeed, a full conjugacy. Both sets are \(F\)-invariant and they accumulate on the first node of the circular polygon. Obviously, there are similar sets for each one of the other nodes.
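Before entering into the formal construction, let us record a heuristic, not a proof, for why _linear_ speed is the natural optimum. By Lemma 17, a generic sliding orbit crossing \(\mathcal{D}_{j}\) with angle of reflection \(\theta\) makes roughly \(\delta_{j}/2\theta\) impacts on the arc \(\Gamma_{j}\), so one complete turn around \(\Gamma\) at angle \(\theta\) costs about \[\sum_{j=1}^{k}\frac{\delta_{j}}{2\theta}=\frac{\delta_{1}+\cdots+\delta_{k}}{2\theta}=\frac{\pi}{\theta}\] impacts, because the central angles add up to \(2\pi\). Hence, after \(N\) impacts the angle can not have dropped much below a quantity of order \(1/N\). Proposition 36 below turns this heuristic into a rigorous obstruction, and Theorem 35 shows that the linear rate is actually attained.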
The reader must keep in mind the notations listed at the beginning of Section 6, the estimates in Lemma 24, and the interpretation of the partial sums \(s^{i},s^{i}_{j}\), with \(i\in\mathbb{Z}\) and \(j=1,\ldots,k\), presented in Remark 30.

**Definition 34**.: The uncountably infinite _sign spaces_ are \[\mathfrak{T}^{+}=\left\{\mathfrak{t}=(\boldsymbol{t}^{i})_{i\geq 0}:\boldsymbol{t}^{i}=(t^{i}_{1},\ldots,t^{i}_{k})\in\{-,+\}^{k}\right\},\qquad\mathfrak{T}=\left\{\mathfrak{t}=(\boldsymbol{t}^{i})_{i\in\mathbb{Z}}:\boldsymbol{t}^{i}=(t^{i}_{1},\ldots,t^{i}_{k})\in\{-,+\}^{k}\right\}.\]

To avoid any confusion, be aware that the dynamical index of the iterates of asymptotic generic sliding trajectories was called \(n\in\mathbb{Z}\) in Theorem C, but it is called \(l\in\mathbb{Z}\) in Theorem 35 below.

**Theorem 35**.: _There are constants \(0<d_{-}<d_{+}\) such that the following properties hold._

1. _There are pairwise disjoint paths \(\gamma^{\mathfrak{t}}_{n}\subset\mathcal{K}^{t^{0}_{1}}_{1,n}\subset\mathcal{D}_{1}\) for any \(n\geq\chi_{1}\) and \(\mathfrak{t}\in\mathfrak{T}^{+}\), 'horizontal' since they connect the left side \(\mathcal{L}_{1}\) with the right side \(\mathcal{L}^{1}_{1}\), such that_ \[\left.\begin{array}{ll}\Pi_{\theta}\big(f^{l}(x)\big)=\Pi_{\theta}(x),&\forall l=0,\ldots,n-1\\ nd_{-}\Pi_{\theta}(x)\leq l\,\Pi_{\theta}\big(f^{l}(x)\big)\leq nd_{+}\Pi_{\theta}(x),&\forall l\geq n\end{array}\right\}\quad\forall x\in\gamma^{\mathfrak{t}}_{n},\;\forall n\geq\chi_{1},\;\forall\mathfrak{t}\in\mathfrak{T}^{+}.\]

2. _There are pairwise distinct points \(x^{\mathfrak{t}}_{n}\in\mathcal{K}^{t^{0}_{1}}_{1,n}\subset\mathcal{D}_{1}\) for any \(n\geq\chi_{1}\) and \(\mathfrak{t}\in\mathfrak{T}\) such that_ \[\left.\begin{array}{ll}\Pi_{\theta}\big(f^{l}(x^{\mathfrak{t}}_{n})\big)=\Pi_{\theta}\big(x^{\mathfrak{t}}_{n}\big),&\forall l=0,\ldots,n-1\\ \Pi_{\theta}\big(f^{l}(x^{\mathfrak{t}}_{n})\big)=\Pi_{\theta}\big(f^{-1}(x^{\mathfrak{t}}_{n})\big),&\forall l=-1,\ldots,-m\\ nd_{-}\Pi_{\theta}(x^{\mathfrak{t}}_{n})\leq|l|\,\Pi_{\theta}\big(f^{l}(x^{\mathfrak{t}}_{n})\big)\leq nd_{+}\Pi_{\theta}(x^{\mathfrak{t}}_{n}),&\forall l\geq n\ \text{or}\ l<-m\end{array}\right\}\quad\forall n\geq\chi_{1},\;\forall\mathfrak{t}\in\mathfrak{T},\] _where \(m=\xi^{-1}_{k}(n)\in\mathbb{N}\)._

Proof.: 1. Identity \(\Pi_{\theta}\big(f^{l}(x)\big)=\Pi_{\theta}(x)\) for all \(x\in\mathcal{Q}_{1,n}\) and \(l=0,\ldots,n-1\) is trivial, because these first impacts are all over the first arc \(\Gamma_{1}\), so the angle of reflection remains constant. Henceforth, we just deal with the case \(l\geq n\). Fix \(n\geq\chi_{1}\) and \(\mathfrak{t}=(\boldsymbol{t}^{i})_{i\geq 0}\in\mathfrak{T}^{+}\) with \(\boldsymbol{t}^{i}=(t^{i}_{1},\ldots,t^{i}_{k})\). Let \(\mathfrak{n}=(\boldsymbol{n}^{i})_{i\geq 0}\in\big(\mathbb{N}^{k}\big)^{\mathbb{N}_{0}}\) with \(\boldsymbol{n}^{i}=(n^{i}_{1},\ldots,n^{i}_{k})\in\mathbb{N}^{k}\) be the sequence given by \[n^{i}_{j}:=\xi^{i}_{j}(n)=\max\Xi^{i}_{j}(n),\] where \(\Xi^{i}_{j}(n)\subset\mathbb{N}\) is the set (11). We view \(n^{0}_{1}=n\) as the 'starting' value, since the sequence \(\mathfrak{n}\) is completely determined by \(n\). However, we do not make this dependence on \(n\) explicit, for the sake of brevity.
Let \(\rho^{0}=n\), \[\rho^{i}=s^{i}(\mathfrak{n})=\sum_{m=0}^{i-1}\sum_{j=1}^{k}n^{m}_{j},\quad\forall i>0,\qquad\rho^{i}_{j}=s^{i}_{j}(\mathfrak{n})=s^{i}(\mathfrak{n})+\sum_{m=1}^{j-1}n^{i}_{m},\quad\forall j\mod k,\;\forall i\geq 0.\] Note that \(\rho_{1}^{i}=\rho^{i}\). We use the convention \(\rho_{k+1}^{i}=\rho^{i+1}\). There is \(\mathfrak{q}=(\boldsymbol{q}^{i})_{i\geq 0}\in\mathfrak{Q}^{+}\) with \(\boldsymbol{q}^{i}=(q_{1}^{i},\ldots,q_{k}^{i})\) such that \(\operatorname{sign}(\mathfrak{q})=\mathfrak{t}\) and \(|\mathfrak{q}|=\mathfrak{n}\) by definition. Note that \(s_{j}^{i}(\mathfrak{q})=\rho_{j}^{i}\) for any \(i\geq 0\) and \(j=1,\ldots,k\). Version (**O**) of Proposition 29 implies that there is a path \(\gamma_{n}^{\mathfrak{t}}\subset\mathcal{D}_{1}\), horizontal in the sense that it connects the left side \(\mathcal{L}_{1}\) with the right side \(\mathcal{L}_{1}^{1}\), such that \[f^{\rho_{j}^{i}}(x)\in\mathcal{K}_{j,n_{j}^{i}}^{t_{j}^{i}},\qquad\forall x\in\gamma_{n}^{\mathfrak{t}},\quad\forall i\geq 0,\quad\forall j=1,\ldots,k.\] In particular, \(\gamma_{n}^{\mathfrak{t}}\subset\mathcal{K}_{1,n}^{t_{1}^{0}}\). Paths \(\gamma_{n}^{\mathfrak{t}}\) are pairwise disjoint, because so are the cells \(\mathcal{K}_{j,n}^{\varsigma}\).

Fix \(x=(\varphi,\theta)\in\gamma_{n}^{\mathfrak{t}}\) and \(l\geq n\). Set \((\varphi_{l},\theta_{l})=f^{l}(\varphi,\theta)\). Our goal is to prove that \[nd_{-}\leq l\theta_{l}/\theta\leq nd_{+},\tag{17}\] for some constants \(0<d_{-}<d_{+}\) that do not depend on the choices of the starting value \(n\geq\chi_{1}\), the sign sequence \(\mathfrak{t}\in\mathfrak{T}^{+}\), the point \(x\in\gamma_{n}^{\mathfrak{t}}\) or the forward iterate \(l\geq n\).

Let \(i\geq 0\) be the number of complete turns around \(\Gamma\) that this billiard trajectory performs from the \(0\)-th impact to the \(l\)-th impact, and let \(j\in\{1,\ldots,k\}\) be the arc index where the \(l\)-th impact lands, so \(\rho^{i}\leq\rho_{j}^{i}\leq l<\rho_{j+1}^{i}\leq\rho^{i+1}\). Set \(r=\rho_{j}^{i}\). Then \((\varphi_{r},\theta_{r})\in\mathcal{K}_{j,n_{j}^{i}}^{t_{j}^{i}}\subset\mathcal{Q}_{j,n_{j}^{i}}\), and so, since the orbit segment \((\varphi_{r},\theta_{r}),(\varphi_{r+1},\theta_{r+1}),\ldots,(\varphi_{l-1},\theta_{l-1}),(\varphi_{l},\theta_{l})\) remains in the circular arc \(\Gamma_{j}\) without crossing the singularity segment \(\mathcal{L}_{j+1}\), we have \[\frac{\delta_{j}}{2n_{j}^{i}+2}=\min_{y\in\mathcal{Q}_{j,n_{j}^{i}}}\Pi_{\theta}(y)\leq\theta_{l}=\theta_{r}\leq\max_{y\in\mathcal{Q}_{j,n_{j}^{i}}}\Pi_{\theta}(y)=\frac{\delta_{j}}{2n_{j}^{i}-2},\] see Lemma 17. From \(x=(\varphi,\theta)\in\gamma_{n}^{\mathfrak{t}}\subset\mathcal{K}_{1,n}^{t_{1}^{0}}\subset\mathcal{Q}_{1,n}\), we also have \[\frac{\delta_{1}}{2n+2}=\min_{y\in\mathcal{Q}_{1,n}}\Pi_{\theta}(y)\leq\theta\leq\max_{y\in\mathcal{Q}_{1,n}}\Pi_{\theta}(y)=\frac{\delta_{1}}{2n-2}.\] By combining the last three displayed sets of inequalities, we get that \[\frac{\delta_{j}}{\delta_{1}}\frac{n-1}{n_{j}^{i}+1}\rho^{i}\leq l\theta_{l}/\theta\leq\frac{\delta_{j}}{\delta_{1}}\frac{n+1}{n_{j}^{i}-1}\rho^{i+1}.\tag{18}\] Let \(\nu^{\prime}<\lambda^{\prime}\) be the positive constants that appear in part (aiii) of Lemma 24, so \[\nu^{\prime}n_{j}^{i}\leq\rho^{i}\leq\rho^{i+1}\leq\lambda^{\prime}n_{j}^{i}.\tag{19}\]
Bound (17) follows from (18) and (19) if we take \[d_{+}=\frac{\lambda^{\prime}}{\delta_{1}}\max\{\delta_{1},\ldots,\delta_{k}\}\max\left\{\frac{(n+1)n_{j}^{i}}{(n_{j}^{i}-1)n}:n\geq\chi_{1},\;n_{j}^{i}\geq\chi_{j},\;j=1,\ldots,k\right\}=\frac{\lambda^{\prime}}{\delta_{1}}\max\{\delta_{1},\ldots,\delta_{k}\}\max\left\{\frac{(\chi_{1}+1)\chi_{j}}{(\chi_{j}-1)\chi_{1}}:j=1,\ldots,k\right\},\] \[d_{-}=\frac{\nu^{\prime}}{\delta_{1}}\min\{\delta_{1},\ldots,\delta_{k}\}\min\left\{\frac{(n-1)n_{j}^{i}}{(n_{j}^{i}+1)n}:n\geq\chi_{1},\;n_{j}^{i}\geq\chi_{j},\;j=1,\ldots,k\right\}=\frac{\nu^{\prime}}{\delta_{1}}\min\{\delta_{1},\ldots,\delta_{k}\}\min\left\{\frac{(\chi_{1}-1)\chi_{j}}{(\chi_{j}+1)\chi_{1}}:j=1,\ldots,k\right\}.\]

2. The proof is similar, but using version (**T**) of Proposition 29. We omit the details. We just stress that if \(x\in\mathcal{Q}_{1,n}\), \(h(x)=\mathfrak{q}=(\boldsymbol{q}^{i})_{i\in\mathbb{Z}}\) and \(m:=|q_{k}^{-1}|=\xi_{k}^{-1}(n)\), then the first \(m\) backward iterates of the point \(x\) impact on the last arc \(\Gamma_{k}\).

Constants \(0<a<b\) in Theorem C are directly related to constants \(0<d_{-}<d_{+}\) in Theorem 35. To be precise, we can take \[a=\min_{n\geq\chi_{1}}\frac{1}{nd_{+}\max_{x\in\mathcal{Q}_{1,n}}\Pi_{\theta}(x)}=\min_{n\geq\chi_{1}}\frac{2n-2}{nd_{+}\delta_{1}}=\frac{2\chi_{1}-2}{\chi_{1}\delta_{1}d_{+}}>0,\qquad b=\max_{n\geq\chi_{1}}\frac{1}{nd_{-}\min_{x\in\mathcal{Q}_{1,n}}\Pi_{\theta}(x)}=\max_{n\geq\chi_{1}}\frac{2n+2}{nd_{-}\delta_{1}}=\frac{2\chi_{1}+2}{\chi_{1}\delta_{1}d_{-}}>a.\]

The sequences \(\big(\cup_{\mathfrak{t}\in\mathfrak{T}^{+}}\gamma_{n}^{\mathfrak{t}}\big)_{n\geq\chi_{1}}\) and \(\big(\cup_{\mathfrak{t}\in\mathfrak{T}}x_{n}^{\mathfrak{t}}\big)_{n\geq\chi_{1}}\) are composed of uncountable sets of horizontal paths and points, respectively, with the desired optimal uniform linear speed. The index \(n\geq\chi_{1}\) of the sequence counts the number of impacts that the corresponding billiard trajectories have in the first arc \(\Gamma_{1}\) at the beginning. The fundamental quadrilaterals \(\mathcal{Q}_{1,n}\) tend to the first node as \(n\to+\infty\): \(\lim_{n\to+\infty}\mathcal{Q}_{1,n}=(a_{1},0)\), so we conclude that both sequences accumulate on that node when \(n\to+\infty\). Let us justify the optimality of linear speed.

**Proposition 36**.: _There is no billiard trajectory in a circular polygon such that_ \[\lim_{n\to+\infty}n\theta_{n}=0.\]

Proof.: We have already proved that all asymptotic billiard trajectories that give rise to admissible sequences of symbols satisfy an upper bound of the form \[1/\theta_{n}\leq b|n|,\qquad\forall|n|\gg 1,\] for some uniform constant \(b>0\). The problem is that there could be some _slightly faster_ billiard trajectories that _do not_ give rise to admissible sequences. For instance, if we look at the fundamental quadrilateral \(\mathcal{Q}_{j,n}\) displayed in Figure 4 and its image \(f^{n}(\mathcal{Q}_{j,n})\) displayed in Figure 5, we see that all points \(x\in\mathcal{Q}_{j,n}\) close enough to \(\mathcal{L}_{j+1}^{-n+1/2}\) have an image \(f^{n}(x)\) below the lowest admissible fundamental quadrilateral \(\mathcal{Q}_{j+1,m}\) with \(m=\max\{n^{\prime}\in\mathbb{N}:(n,n^{\prime})\in\Xi_{j}\}\). Therefore, since we only deal with admissible sequences of symbols, we have 'lost' the lower non-admissible portion of the red quadrilateral with parabolic shape in Figure 5.
However, part (c) of Lemma 11 shows that, once fixed any \(\epsilon\in\big(0,\min\{\mu_{1},\ldots,\mu_{k}\}\big)\), we have \[\Pi_{\theta}\big(f^{n}(x)\big)\geq(\mu_{j}-\epsilon)\Pi_{\theta}(x),\qquad\forall x\in\mathcal{Q}_{j,n},\quad\forall j\mod k,\quad\forall n\gg 1,\] provided \(\mu_{j}<1\), so these lower non-admissible portions can not be much lower than the ones that we have already taken into account. This means that if we repeat the computations of all constants that appear along our proofs, but replacing \(\mu_{j}\) with \(\mu_{j}-\epsilon\) provided \(\mu_{j}<1\), then we obtain a new uniform constant \(\hat{b}\in(b,+\infty)\) such that \[1/\theta_{n}\leq\hat{b}|n|,\qquad\forall|n|\gg 1,\] for all billiard trajectories, with no exceptions.

## 8 On the number of periodic trajectories

In this section, we construct exponentially large (in \(q\)) lower bounds on the number of periodic trajectories of period \(q\), thus proving Theorem D. The strategy of the proof is to use the lower bound (15) provided in Corollary 33. In Section 8.1 we state the main results. Then Section 8.2 contains the proof of a general polynomial lower bound, from which we deduce the asymptotic exponential lower bound in Section 8.3.

### 8.1 Statement of the results

Let \(\Gamma\) be a circular \(k\)-gon with arcs \(\Gamma_{j}\), radii \(r_{j}\) and central angles \(\delta_{j}\). Set \(\mu_{j}=\sqrt{r_{j}/r_{j+1}}\). Factors \(\alpha_{j}^{\pm}\), addends \(\beta_{j}^{\pm}=\alpha_{j}^{\pm}+1\) and integers \(\chi_{j}\geq 2\) were introduced in Lemma 17 and Corollary 19. All quantities computed from \(\Gamma\) depend on \(j\) modulo \(k\) by the cyclic nature of the problem: \(\alpha_{j}^{\pm}=\alpha_{j\bmod k}^{\pm}\), \(\beta_{j}^{\pm}=\beta_{j\bmod k}^{\pm}\), \(\chi_{j}=\chi_{j\bmod k}\) and so on. Let \(P^{(p)}\subset\mathbb{R}^{n+1}\), with \(p\in\mathbb{N}\) and \(n+1=kp\), be the unbounded convex polytope introduced in Corollary 33. Factors \(\alpha_{j}^{\pm}\) satisfy hypotheses (**A**) and (**B**), see Lemma 20.

Throughout this section we do not increase the size of \(\chi_{j}\). Indeed, we no longer need the estimates contained in Lemma 24, although we still need the ones contained in Lemma 17. So, we may consider significantly smaller integers \(\chi_{j}\). For instance, we may take \(\chi_{j}\) as in (7) when \(\mu_{j}<1\).

Let \(\Pi(p,q)\) be the set of \((p,q)\)-periodic billiard trajectories for any \(1\leq p<q\). Let \(\Pi(q)=\cup_{1\leq p<q}\Pi(p,q)\) be the set of all periodic trajectories with period \(q\). We state three lower bounds on the number of periodic billiard trajectories in the theorem below. First, a polynomial general lower bound of \(\#\Pi(p,q)\). Second, an exponential asymptotic lower bound of \(\#\Pi(q)\) as \(q\to+\infty\). Third, a polynomial asymptotic lower bound of \(\#\Pi(p,q)\) as \(q\to+\infty\), for any fixed \(p\in\mathbb{N}\). The symbol \(\#\) denotes the _cardinality_ of a set. _Floor_ and _ceil_ functions are denoted with symbols \(\lfloor\cdot\rfloor\) and \(\lceil\cdot\rceil\).

**Theorem 37**.: _If \(\Gamma\) is a circular \(k\)-gon and \(p\in\mathbb{N}\), there are constants \(a_{\star},b_{\star},h_{\star},x_{\star},M_{\star},c_{\star}(p)>0\) such that the following three lower bounds hold:_

1. \(\#\Pi(p,q)\geq 2\left(a_{\star}q/kp-b_{\star}\right)^{kp-1}/kp\) _for all \(q>b_{\star}kp/a_{\star}\)._

2. \(\#\Pi(q)\geq\#\Pi(p,q)\geq M_{\star}\mathrm{e}^{h_{\star}q}/q\) _when \(p=\lfloor x_{\star}q/k\rfloor\) and \(q\to+\infty\)._

3.
\(\#\Pi(p,q)\geq c_{\star}q^{kp-1}+\mathrm{O}(q^{kp-2})\) _as \(q\to+\infty\) for any fixed \(p\in\mathbb{N}\)._

**Remark 38**.: We give explicit expressions for all involved constants. We can take \[a_{\star}=4\min\left\{\frac{(\alpha_{1}-\alpha_{1}^{-})A_{1}}{(1+\alpha_{1}^{-})A},\frac{(\alpha_{1}^{+}-\alpha_{1})A_{1}}{(1+\alpha_{1}^{+})A},\dots,\frac{(\alpha_{k}-\alpha_{k}^{-})A_{k}}{(1+\alpha_{k}^{-})A},\frac{(\alpha_{k}^{+}-\alpha_{k})A_{k}}{(1+\alpha_{k}^{+})A}\right\},\] \[b_{\star}=6+4\max\{\chi_{1},\dots,\chi_{k}\},\qquad h_{\star}=a_{\star}W_{0}(b_{\star}/\mathrm{e})/b_{\star},\qquad x_{\star}=a_{\star}W_{0}(b_{\star}/\mathrm{e})/\big((1+W_{0}(b_{\star}/\mathrm{e}))b_{\star}\big),\] \[M_{\star}=2(a_{\star}/x_{\star}-b_{\star})^{-k-1}/x_{\star},\qquad c_{\star}(p)=2(a_{\star})^{kp-1}/(kp)^{kp},\] where \(\alpha_{j}=\sqrt{\alpha_{j}^{-}\alpha_{j}^{+}}\), \(A_{j}=\prod_{i=1}^{j-1}\alpha_{i}\), \(A=\frac{1}{k}\sum_{j=1}^{k}A_{j}\) and \(W_{0}:[-1/\mathrm{e},+\infty)\to[-1,+\infty)\) is the restriction to the reals of the principal branch of the Lambert \(W\) function. Note that \(A_{k+1}=A_{1}=1\) by hypothesis (**B**). Therefore, \(A_{j}=A_{j\bmod k}\). Function \(W_{0}(x)\) is implicitly determined by the relations \(W_{0}(x\mathrm{e}^{x})=x\) for all \(x\geq-1\) and \(W_{0}(x)\mathrm{e}^{W_{0}(x)}=x\) for all \(x\geq-1/\mathrm{e}\), see [21].

The exponent \(h_{\star}=a_{\star}W_{0}(b_{\star}/\mathrm{e})/b_{\star}>0\) in the exponentially large lower bound is the most important constant in Theorem 37. It is 'proportional' to \(a_{\star}\). We note that there is \(i\in\{1,\dots,k\}\) such that \(A=\frac{1}{k}\sum_{j=1}^{k}A_{j}\geq A_{i}\), so \(a_{\star}<4\). Exponent \(h_{\star}\) also depends on \(b_{\star}\) through the Lambert function \(W_{0}\). It is known that \(W_{0}(x)/x\) is decreasing for \(x>0\), \(\lim_{x\to 0^{+}}W_{0}(x)/x=W_{0}^{\prime}(0)=1\) and \(W_{0}(x)/x\) is asymptotic to \(\frac{\log x}{x}\) as \(x\to+\infty\). Hence, \(h_{\star}<a_{\star}/\mathrm{e}<4/\mathrm{e}\) for any \(\Gamma\). We conclude that the expression \(h_{\star}=a_{\star}W_{0}(b_{\star}/\mathrm{e})/b_{\star}\) is, by no means, optimal. If \(\Gamma\) tends to a circle, then \(\alpha_{j}^{-}\) and \(\alpha_{j}^{+}\) become closer and closer, so \(h_{\star}\) tends to zero.

The optimal constant \(c_{\star}(p)\) that satisfies the third bound can be much larger than the crude value \(c_{\star}(p)=2(a_{\star})^{kp-1}/(kp)^{kp}\) obtained directly from the first bound. We give a way to compute the optimal value \(c_{\star}(p)=2^{kp}\lim_{q\to+\infty}q^{1-kp}G_{q}\big(P^{(p)}\big)\) in Proposition 39, whose proof is postponed to Appendix B.

If \(P\) is a Jordan measurable set of \(\mathbb{R}^{n}\), let \(\mathrm{V}(P)\) be its _\(n\)-dimensional volume_. Let \(H_{n+1}=\big\{\boldsymbol{x}\in\mathbb{R}^{n+1}:x_{1}+\cdots+x_{n+1}=1\big\}\). Let \(\Pi_{n+1}:\mathbb{R}^{n+1}\to\mathbb{R}^{n}\) be the projection \[\boldsymbol{x}=(x_{1},\dots,x_{n+1})\mapsto\tilde{\boldsymbol{x}}=(x_{1},\dots,x_{n}).\] Projected objects onto \(\mathbb{R}^{n}\) are distinguished with a tilde. Recall that \(n+1=kp\).

**Proposition 39**.: 1.
_If_ \(\Gamma\) _is a circular_ \(k\)_-gon and_ \(p\in\mathbb{N}\)_, then_ \[\#\Pi(p,q)\geq 2^{kp}G_{q}\big{(}P^{(p)}\big{)}\geq 2^{kp}V\big{(}\tilde{K}_{ \infty}^{(p)}\big{)}q^{kp-1}+\mathrm{O}(q^{kp-2})\quad\text{ as }q\to+\infty,\] _where_ \(\tilde{K}_{\infty}^{(p)}=\overline{\lim_{q\to+\infty}\tilde{P}_{q}^{(p)}}\) _is the closure of the limit of the bounded convex polytopes_ \[\tilde{P}_{q}^{(p)}=\Pi_{n+1}\big{(}P_{q}^{(p)}\big{)},\qquad P_{q}^{(p)}=P^{(p) }/q\cap H_{n+1},\qquad P^{(p)}/q=\{\boldsymbol{x}/q:\boldsymbol{x}\in P^{(p) }\},\] _which are computed by_ \(q\)_-contraction, section with hyperplane_ \(H_{n+1}\) _and projection by_ \(\Pi_{n+1}\) _of the unbounded convex polytope_ \(P^{(p)}\) _defined in (16)._ 2. _This lower bound is optimal in the sense that_ \[\lim_{q\to+\infty}q^{1-kp}G_{q}\big{(}P^{(p)}\big{)}=\mathrm{V}\,\big{(}\tilde{K}_ {\infty}^{(p)}\big{)}.\] 3. _The half-space representation of the limit compact convex polytope is_ \[\tilde{K}_{\infty}^{(p)}=\overline{\lim_{q\to+\infty}\tilde{P}_{q}^{(p)}}=\left\{ \begin{array}{ll}&\alpha_{j}^{-}x_{j}\leq x_{j+1}\leq\alpha_{j}^{+}x_{j},\quad \forall j=1,\ldots,n-1\\ &\alpha_{n}^{-}x_{n}\leq 1-\varsigma(\tilde{\mathbf{x}})\leq\alpha_{n}^{+}x_{n} \\ &\alpha_{n+1}^{-}(1-\varsigma(\tilde{\mathbf{x}}))\leq x_{1}\leq\alpha_{n+1}^{+}(1- \varsigma(\tilde{\mathbf{x}}))\\ &x_{j}\geq 0,\quad\forall j=1,\ldots,n\\ &\varsigma(\tilde{\mathbf{x}})\leq 1\end{array}\right\},\] (20) _where_ \(\varsigma(\tilde{\mathbf{x}})=x_{1}+\cdots+x_{n}\)_._ There exist several algorithms to compute the volume of compact convex polytopes from their half-space representations, so expression (20) can be used to compute \(V\big{(}\tilde{K}_{\infty}^{(p)}\big{)}\). ### Proof of the polynomial general lower bound Recall that \(P^{(p)}\) is the unbounded convex polytope (16). We will introduce a cube \[K=\{\mathbf{x}\in\mathbb{R}^{n+1}:|\mathbf{x}-\mathbf{o}|_{\infty}\leq t\}, \tag{21}\] which is the ball centered at the point \(\mathbf{o}\in\mathbb{R}^{n+1}\) of radius \(t\) in the infinity norm \(|\cdot|_{\infty}\). Its center \(\mathbf{o}=(o_{1},\ldots,o_{n+1})\) will have three key properties: 1) \(\mathbf{o}\in P^{(p)}\), 2) \(\sum_{j=1}^{n+1}o_{j}=q\), and 3) \(o_{j}=o_{j\mod k}\). Then, radius \(t\) is taken as the largest value such that \(K\subset P^{(p)}\). For convenience, we will not make explicit the dependence of \(K\) on the integers \(1\leq p<q\). **Lemma 40**.: _Let \(k,n,p,q\in\mathbb{N}\) such that \(1\leq p<q\) and \(n+1=kp\). Recall constants listed in Remark 38. If \(\kappa_{\star}=a_{\star}/4\), \(\tau_{\star}=\max\{\chi_{1},\ldots,\chi_{k}\}\), \(0\leq t<t_{\star}=\kappa_{\star}q/(n+1)-\tau_{\star}\) and_ \[\mathbf{o}=(o_{1},\ldots,o_{n+1})\in\mathbb{R}^{n+1},\qquad o_{j}=\frac{qA_{j\bmod k}}{(n+1)A},\] _then \(o_{1}+\cdots+o_{n+1}=q\) and \(K=\{\mathbf{x}\in\mathbb{R}^{n+1}:|\mathbf{x}-\mathbf{o}|_{\infty}\leq t\}\subset P^{(p)}\)._ Proof.: Clearly, \(o_{1}+\cdots+o_{n+1}=\frac{q}{(n+1)A}\sum_{j=1}^{n+1}A_{j}=\frac{qp}{(n+1)A} \sum_{j=1}^{k}A_{j}=\frac{kp}{n+1}q=q\). If \(\mathbf{x}\in K\), then \(\mathbf{x}=\mathbf{o}+t\mathbf{u}\) for some \(\mathbf{u}\in\mathbb{R}^{n+1}\) such that \(|\mathbf{u}|_{\infty}\leq 1\).
With a suitable choice of the radius \(t\), the point \(\mathbf{x}\) satisfies the following three sets of inequalities that define the unbounded convex polytope \(P^{(p)}\) given in (16): * _First set (with \(2n\) inequalities)._ Since \(o_{j+1}=\alpha_{j}o_{j}\) for all \(j=1,\ldots,n\), we see that \[\alpha_{j}^{-}x_{j}+\beta_{j}^{-}<x_{j+1}<\alpha_{j}^{+}x_{j}-\beta_{j}^{+} \Leftrightarrow\left\{\begin{array}{ll}(\alpha_{j}^{-}u_{j}-u_{j+1})t<( \alpha_{j}-\alpha_{j}^{-})o_{j}-\beta_{j}^{-}\\ (u_{j+1}-\alpha_{j}^{+}u_{j})t<(\alpha_{j}^{+}-\alpha_{j})o_{j}-\beta_{j}^{+} \end{array}\right.\] * _Second set (with \(2\) inequalities)._ Since \(A_{1}=1\) and \(A_{n+1}=A_{k}=\prod_{j=1}^{k-1}\alpha_{j}=1/\alpha_{k}\), we get that \(o_{1}=\alpha_{k}o_{n+1}=\alpha_{k}o_{k}\). Besides, \(\beta_{n+1}^{\pm}=\beta_{k}^{\pm}\) and \(\alpha_{n+1}^{\pm}=\alpha_{k}^{\pm}\). Hence, \[\alpha_{n+1}^{-}x_{n+1}+\beta_{n+1}^{-}<x_{1}<\alpha_{n+1}^{+}x_{n+1}-\beta_{n +1}^{+}\Leftrightarrow\left\{\begin{array}{ll}(\alpha_{k}^{-}u_{n+1}-u_{1} )t<(\alpha_{k}-\alpha_{k}^{-})o_{k}-\beta_{k}^{-}\\ (u_{1}-\alpha_{k}^{+}u_{n+1})t<(\alpha_{k}^{+}-\alpha_{k})o_{k}-\beta_{k}^{+} \end{array}\right.\] * _Third set (with \(n+1\) inequalities)._ \(x_{j}\geq\chi_{j}\Leftrightarrow-u_{j}t\leq o_{j}-\chi_{j}\). Let us analyze the RHS and LHS of the above \(3n+3\) inequalities. Coordinates \(o_{j}\) can be as large as needed if we take \(q/(n+1)\gg 1\), because quotients \(A_{j\mod k}/A\) do not depend on \(p\), \(q\) or \(n\). Thus, using that \(\alpha_{j}^{-}<\alpha_{j}<\alpha_{j}^{+}\) for all \(j=1,\ldots,k\), all RHS can be made positive if we take \(q/(n+1)\gg 1\). On the other hand, we can bound the LHS as follows: \[(\alpha_{j}^{-}u_{j}-u_{j+1})t\leq(1+\alpha_{j}^{-})t,\qquad(u_{j+1}-\alpha_{j }^{+}u_{j})t\leq(1+\alpha_{j}^{+})t,\qquad-u_{j}t\leq t,\] because \(|u_{j}|\leq|\mathbf{u}|_{\infty}\leq 1\) for all \(j=1,\ldots,n+1\) and \(t\geq 0\). Therefore, these \(3n+3\) inequalities hold when we take any \(t\in[0,t_{\star})\) with \[t_{\star} =\min\left\{\frac{(\alpha_{j}-\alpha_{j}^{-})o_{j}-\beta_{j}^{-}} {1+\alpha_{j}^{-}},\frac{(\alpha_{j}^{+}-\alpha_{j})o_{j}-\beta_{j}^{+}}{1+ \alpha_{j}^{+}},o_{j}-\chi_{j}:j=1,\ldots,n+1\right\}\] \[=\min\left\{\kappa_{j}^{-}q/(n+1)-\tau_{j}^{-},\kappa_{j}^{+}q/(n+1 )-\tau_{j}^{+},\kappa_{j}q/(n+1)-\tau_{j}:j=1,\ldots,k\right\},\] where \[\kappa_{j}^{\pm}=\frac{|\alpha_{j}-\alpha_{j}^{\pm}|A_{j}}{(1+\alpha_{j}^{\pm})A },\qquad\kappa_{j}=\frac{A_{j}}{A},\qquad\tau_{j}^{\pm}=\frac{\beta_{j}^{\pm}}{ 1+\alpha_{j}^{\pm}}=1,\qquad\tau_{j}=\chi_{j}.\] All these arguments imply that \(K\subset P^{(p)}\) provided that \(0\leq t<t_{\star}:=\kappa_{\star}q/(n+1)-\tau_{\star}\), where \[\kappa_{\star}=\min\left\{\kappa_{1}^{-},\kappa_{1}^{+},\kappa_{1},\ldots, \kappa_{k}^{-},\kappa_{k}^{+},\kappa_{k}\right\}>0,\quad\tau_{\star}=\max\left\{ \tau_{1}^{-},\tau_{1}^{+},\tau_{1},\ldots,\tau_{k}^{-},\tau_{k}^{+},\tau_{k} \right\}>0.\] These constants \(\kappa_{\star}\) and \(\tau_{\star}\) do not depend on \(p\), \(q\) or \(n\). We note that \((\alpha_{j}^{+}-\alpha_{j})/(1+\alpha_{j}^{+})<1\). Hence, \(\kappa_{j}^{+}<\kappa_{j}\) and we can take \[\kappa_{\star}=\min\left\{\kappa_{1}^{-},\kappa_{1}^{+},\ldots,\kappa_{k}^{-},\kappa_{k}^{+}\right\}=a_{\star}/4,\] see Remark 38. We look for a lower bound on \(G_{q}(K)\), where \(K\) is a cube of the form (21) such that \(\sum_{j=1}^{n+1}o_{j}=q\).
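Since every constant appearing in Lemma 40 and Remark 38 is given in closed form, they are straightforward to evaluate numerically. The following minimal Python sketch does exactly that; the values of \(\alpha_{j}^{\pm}\) and \(\chi_{j}\) are illustrative assumptions (chosen so that \(\prod_{j}\alpha_{j}=1\), i.e. hypothesis (B) holds), not data coming from a concrete circular polygon.

```python
import numpy as np
from scipy.special import lambertw  # principal branch W_0

# Illustrative factors for a circular 3-gon (assumed, not from a real polygon).
# Taking alpha_p = 1/alpha_m forces alpha_j = sqrt(alpha_j^- alpha_j^+) = 1,
# so hypothesis (B), prod_j alpha_j = 1, holds and A_j = A = 1.
alpha_m = np.array([0.80, 0.90, 0.85])      # alpha_j^-
alpha_p = 1.0 / alpha_m                     # alpha_j^+
chi = np.array([3, 2, 2])                   # integers chi_j >= 2

alpha = np.sqrt(alpha_m * alpha_p)                      # alpha_j
A_j = np.concatenate(([1.0], np.cumprod(alpha)[:-1]))   # A_j = prod_{i<j} alpha_i
A = A_j.mean()                                          # A = (1/k) sum_j A_j

# Constants of Remark 38.
a_star = 4 * min(((alpha - alpha_m) * A_j / ((1 + alpha_m) * A)).min(),
                 ((alpha_p - alpha) * A_j / ((1 + alpha_p) * A)).min())
b_star = 6 + 4 * chi.max()
w0 = lambertw(b_star / np.e).real
h_star = a_star * w0 / b_star               # exponent of the e^{h q} lower bound
x_star = a_star * w0 / ((1 + w0) * b_star)  # optimal limit ratio kp/q

# Cube of Lemma 40 for admissible integers 1 <= p < q with n + 1 = k p.
k, p, q = len(chi), 2, 1000
n1 = k * p
t_star = (a_star / 4) * q / n1 - chi.max()  # admissible radius kappa*q/(n+1) - tau
o = q * np.tile(A_j, p) / (n1 * A)          # cube centre; its entries sum to q
assert abs(o.sum() - q) < 1e-9
print(a_star, b_star, h_star, x_star, t_star)
```

As expected from Remark 38, the computed \(a_{\star}\) stays below \(4\) and \(h_{\star}\) below \(4/\mathrm{e}\).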
Note that \(G_{q}\big{(}\{0,1\}^{n}\big{)}=\binom{n}{q}\), so \(G_{q}\big{(}\{0,1\}^{2q}\big{)}=\binom{2q}{q}\geq 4^{q}/(2q+1)\) grows exponentially fast as \(q\to+\infty\). We want to generalize this idea. Since there is no standard notation for the generalized binomial coefficients that we need --for instance, symbols \(\binom{n,m}{q}\) and \(\binom{n}{q}^{(m)}\) can be found in [48, 43]--, we use our own notation. Set \[[0..m]:=\mathbb{Z}\cap[0,m]=\{0,1,\ldots,m-1,m\}.\] Then \(G_{q}\big{(}[0..m]^{n}\big{)}\) counts the number of _weak compositions_ of \(q\) into \(n\) parts with no part exceeding \(m\). Note that \(G_{q}\big{(}[0..m]^{n}\big{)}=0\) for any \(q\not\in[0..nm]\). It is well known [26, section I.3] that \[\sum_{q=0}^{\infty}G_{q}\big{(}[0..m]^{n}\big{)}x^{q}=(1+x+x^{2}+\cdots+x^{m}) ^{n}.\] Using this polynomial identity, Andrews [2] deduced that, once fixed \(m,n\in\mathbb{N}\), the sequence \(G_{q}\big{(}[0..m]^{n}\big{)}\) is unimodal in \(q\) and reaches its maximum at \(q=\lfloor nm/2\rfloor\). **Lemma 41**.: \(G_{\lfloor nm/2\rfloor}\big{(}[0..m]^{n}\big{)}\geq\frac{(m+1)^{n}}{nm+1}\geq \frac{(m+1)^{n-1}}{n}\) _for all_ \(m,n\in\mathbb{N}\)_._ Proof.: It follows from \(\#[0..nm]=nm+1\), \(\sum_{q=0}^{nm}G_{q}\big{(}[0..m]^{n}\big{)}=\#\left([0..m]^{n}\right)=(m+1)^ {n}\), and inequalities \(G_{q}\big{(}[0..m]^{n}\big{)}\leq G_{\lfloor nm/2\rfloor}\big{(}[0..m]^{n} \big{)}\) for all \(q\in[0..mn]\). Now we are ready to establish the lower bound on \(G_{q}(K)\) that we are looking for. **Lemma 42**.: _Let \(n,q\in\mathbb{N}\) and \(t>0\). If \(K\) is a cube of the form (21) such that \(\sum_{j=1}^{n+1}o_{j}=q\) and \(t\geq 3/2\), then_ \[G_{q}(K)\geq\frac{(2t-3)^{n}}{n+1}. \tag{22}\] Proof.: There exists an integer point \(\mathbf{o}^{\prime}\in\mathbb{Z}^{n+1}\) such that \(|\mathbf{o}-\mathbf{o}^{\prime}|_{\infty}\leq 1\) and \(\sum_{j=1}^{n+1}o_{j}^{\prime}=q\). If \(\mathbf{o}\in\mathbb{Z}^{n+1}\), we take \(\mathbf{o}^{\prime}=\mathbf{o}\). If \(\mathbf{o}\not\in\mathbb{Z}^{n+1}\), we can take, for instance, \[o_{j}^{\prime}=\begin{cases}\lfloor o_{j}\rfloor+1,&\text{for }j\leq i,\\ \lfloor o_{j}\rfloor,&\text{otherwise},\end{cases}\] where \(i=q-\sum_{j=1}^{n+1}\lfloor o_{j}\rfloor\in[1..n]\), so that \(\sum_{j=1}^{n+1}o_{j}^{\prime}=i+\sum_{j=1}^{n+1}\lfloor o_{j}\rfloor=q\). Set \(m=\lfloor t\rfloor-1\in\mathbb{N}\cup\{0\}\) and \(v_{j}=o_{j}^{\prime}-m\). Clearly, \([v_{j},v_{j}+2m]\subset[o_{j}-t,o_{j}+t]\). Hence, given any \(\mathbf{y}\in[0..2m]^{n+1}\) such that \(\sum_{j=1}^{n+1}y_{j}=(n+1)m\), the sum of the components of the vector \(\mathbf{x}=\mathbf{y}+\mathbf{v}\in K\cap\mathbb{Z}^{n+1}\) is equal to \[\sum_{j=1}^{n+1}x_{j}=(n+1)m+\left(\sum_{j=1}^{n+1}o_{j}^{\prime}\right)-(n+1)m=q.\] Besides, the correspondence \([0..2m]^{n+1}\ni\mathbf{y}\mapsto\mathbf{x}=\mathbf{y}+\mathbf{v}\in K\cap\mathbb{Z}^{n+1}\) is injective, which implies that \[G_{q}(K)\geq G_{(n+1)m}\big{(}[0..2m]^{n+1}\big{)}=G_{\lfloor(n+1)2m/2\rfloor} \big{(}[0..2m]^{n+1}\big{)}\geq\frac{(2m+1)^{n}}{n+1}\geq\frac{(2t-3)^{n}}{n+1}.\] We have used Lemma 41 and \(m=\lfloor t\rfloor-1\geq t-2\) in the last two inequalities. To end, we prove the first lower bound stated in Theorem 37. _Proof of the polynomial general lower bound._ This bound follows from bound (15), inclusion \(K\subset P^{(p)}\), bound (22), condition \(0\leq t<t_{\star}:=\kappa_{\star}q/(n+1)-\tau_{\star}\) required in Lemma 40, and identities \(a_{\star}=4\kappa_{\star}\), \(b_{\star}=4\tau_{\star}+6\) and \(n+1=kp\).
Namely, \[\#\Pi(p,q) \geq 2^{n+1}G_{q}(P^{(p)})\geq\sup_{t\in[3/2,t_{\star})}\big{\{}2^{ n+1}G_{q}(K)\big{\}}\geq\sup_{t\in[3/2,t_{\star})}\frac{2(4t-6)^{n}}{n+1}= \frac{2(4t_{\star}-6)^{n}}{n+1}\] \[=\frac{2}{n+1}\left(\frac{4\kappa_{\star}q}{n+1}-4\tau_{\star}-6 \right)^{n}=\frac{2}{n+1}\left(\frac{a_{\star}q}{n+1}-b_{\star}\right)^{n}= \frac{2}{kp}\left(\frac{a_{\star}q}{kp}-b_{\star}\right)^{kp-1}.\] Note that \([3/2,t_{\star})\neq\emptyset\) since \(q>b_{\star}kp/a_{\star}\) implies that \(t_{\star}=\kappa_{\star}q/(n+1)-\tau_{\star}>3/2\). ### Proof of the two asymptotic lower bounds We describe the exponentially fast growth of \(\#\Pi(p,q)\) when \(p=\lfloor xq/k\rfloor\) and \(q\to+\infty\) for some fixed limit ratio \(x>0\). We shall also determine the limit ratio \(x_{\star}>0\) that gives the largest exponent \(h_{\star}\) in the exponential bound. **Lemma 43**.: _Let \(0<a<b\) and \(k\in\mathbb{N}\). If_ * \(M(x)=2(a/x-b)^{-k-1}/x\) _for_ \(0<x<a/b\)_;_ * \(h(x)=x\log(a/x-b)\) _for_ \(0<x<a/b\)_; and_ * \(G(p,q)=2(aq/kp-b)^{kp-1}/kp\) _for_ \(p,q\in\mathbb{N}\) _such that_ \(0<kp/q<a/b\)_,_ _then_ \[G(\lfloor xq/k\rfloor,q)\geq M(x)\mathrm{e}^{h(x)q}/q,\qquad\forall q\geq(1+b )kp/a,\ \forall x\in\big{(}0,a/(b+1)\big{]}.\] _The exponent \(h:(0,a/b)\to\mathbb{R}\) reaches its maximum value \(h_{\star}=h(x_{\star})=aW_{0}(b/\mathrm{e})/b>0\) at the point \(x_{\star}=aW_{0}(b/\mathrm{e})/((1+W_{0}(b/\mathrm{e}))b)\)._ Proof.: If \(q\in\mathbb{N}\), \(x\in(0,a/(b+1)]\) and \(p=\lfloor xq/k\rfloor\), then \(aq/kp-b\geq 1\), \(xq-k<kp\leq xq\) and \[G(p,q)=\frac{2}{kp}\left(\frac{aq}{kp}-b\right)^{kp-1}\geq\frac{2}{kp}\left( \frac{aq}{kp}-b\right)^{xq-k-1}\geq\frac{2}{xq}\left(\frac{a}{x}-b\right)^{xq- k-1}=\frac{1}{q}M(x)\mathrm{e}^{h(x)q}.\] Next, we look for the global maximum of \(h(x)\). After the changes of variable \[(0,+\infty) \ni\hat{x}\leftrightarrow x=\frac{a}{b+\mathrm{e}^{\hat{x}}}\in \big{(}0,a/(b+1)\big{)},\] \[(0,+\infty) \ni\hat{h}\leftrightarrow h=a\hat{h}\in(0,+\infty),\] we get that \(h(x)=x\log(a/x-b)=x\hat{x}=a\hat{x}/(b+\mathrm{e}^{\hat{x}})\), so \(\hat{h}(\hat{x})=\hat{x}/(b+\mathrm{e}^{\hat{x}})\). We have reduced the search for the global maximum point \(x_{\star}\in(0,a/(b+1))\) of \(h(x)\) to the search for the global maximum point \(\hat{x}_{\star}>0\) of \(\hat{h}(\hat{x})\). Since \[\frac{\mathrm{d}\hat{h}}{\mathrm{d}\hat{x}}(\hat{x})=\frac{b+(1-\hat{x}) \mathrm{e}^{\hat{x}}}{(b+\mathrm{e}^{\hat{x}})^{2}}=0\Leftrightarrow(\hat{x}-1 )\mathrm{e}^{\hat{x}-1}=b/\mathrm{e}\Leftrightarrow\hat{x}=\hat{x}_{\star}:=1+ W_{0}(b/\mathrm{e}),\] we deduce that \(\hat{h}(\hat{x})\) reaches its maximum value \[\hat{h}_{\star}=\hat{h}(\hat{x}_{\star})=\frac{\hat{x}_{\star}}{b+\mathrm{e} ^{\hat{x}_{\star}}}=\frac{1}{\mathrm{e}^{\hat{x}_{\star}}}=\frac{1}{\mathrm{e} }\frac{1}{\mathrm{e}^{W_{0}(b/\mathrm{e})}}=\frac{W_{0}(b/\mathrm{e})}{b}\] at the point \(\hat{x}=\hat{x}_{\star}\). In order to compute \(\hat{x}_{\star}\), we have used that \(W_{0}(b/\mathrm{e})\mathrm{e}^{W_{0}(b/\mathrm{e})}=b/\mathrm{e}\). Expressions for \(x_{\star}\) and \(h_{\star}\) are obtained by undoing both changes of variable. We can now prove the second and third lower bounds stated in Theorem 37. _Proof of both asymptotic lower bounds._ The second bound of Theorem 37 follows from the first one by applying Lemma 43 with \(a=a_{\star}\) and \(b=b_{\star}\).
Analogously, the third bound of Theorem 37 follows from the first one by taking \(c_{\star}(p)=2(a_{\star})^{kp-1}/(kp)^{kp}\). ## 9 The length spectrum of circular polygons The purpose of this section is to prove Theorem E, which shows an unusual feature of the length spectrum of billiards in circular polygons. In particular, it shows that the well-known results of Marvizi-Melrose [45] fail to hold for circular polygons. This was expected because there are so many periodic billiard trajectories inside circular polygons --as we have seen in the previous section-- that we can construct sequences of them whose lengths have rather different asymptotic behaviors. Let \(|\Gamma|\) be the length of \(\Gamma\). Let \(\kappa(s)\) be the curvature of \(\Gamma\) as a function of an arc-length parameter \(s\in[0,|\Gamma|)\). If \(z,z^{\prime}\in\Gamma\) are any two consecutive impact points of a billiard trajectory \(g\), then the segment \([z,z^{\prime}]\subset\mathbb{R}^{2}\) is a _link_ of \(g\) and \(\int_{z}^{z^{\prime}}\mathrm{d}s\) is the distance from \(z\) to \(z^{\prime}\) along \(\Gamma\). Note that \(|z^{\prime}-z|<\int_{z}^{z^{\prime}}\mathrm{d}s\) by convexity. If \(g=\{z_{0},\ldots,z_{q-1}\}\subset\Gamma\) is a \(q\)-periodic billiard trajectory, let \(L(g)=|z_{1}-z_{0}|+|z_{2}-z_{1}|+\cdots+|z_{0}-z_{q-1}|\) be its _length_. Let \[\underline{L}_{q}=\inf\{L(g):g\in\Pi(1,q)\},\qquad\overline{L}_{q}=\sup\{L(g ):g\in\Pi(1,q)\}.\] To begin with, let us recall the Marvizi-Melrose results for smooth ovals. A _smooth oval_ is a regular, simple, closed, oriented \(C^{\infty}\) curve with positive curvature everywhere. **Theorem 44** (Marvizi & Melrose [45]).: _Let \(\Gamma\) be any smooth oval._ (a) \(\lim_{q\to+\infty}q^{i}\big{(}\overline{L}_{q}-\underline{L}_{q}\big{)}=0\) _for all_ \(i\in\mathbb{N}\)_._ (b) _There are asymptotic coefficients_ \(c_{i}\in\mathbb{R}\) _such that if_ \(g_{q}\in\Pi(1,q)\)_, then_ \[L(g_{q})\asymp|\Gamma|+\sum_{i=1}^{\infty}\frac{c_{i}}{q^{2i}},\quad\text{as }q\to+\infty.\] (c) \(c_{1}=-\frac{1}{24}\left[\int_{\Gamma}\kappa^{2/3}(s)\,\mathrm{d}s\right]^{3}<0\)_._ (d) _If_ \([z,z^{\prime}]\) _is a link of_ \(g_{q}\in\Pi(1,q)\)_, then_ \[\int_{z}^{z^{\prime}}\mathrm{d}s\asymp\frac{1}{q}|\Gamma|,\quad\text{uniformly as }q\to+\infty.\] The symbol \(\asymp\) means that the RHS is asymptotic to the LHS. The first property implies that the Marvizi-Melrose asymptotic coefficients \(c_{i}\) do not depend on the choice of the sequence of periodic trajectories \((g_{q})_{q}\). All of them can be explicitly written in terms of the curvature. For instance, the formulas for \(c_{1}\), \(c_{2}\), \(c_{3}\), and \(c_{4}\) can be found in [56]. Property (d) means that not only are the lengths of \(g_{q}\) asymptotically well-behaved, but as \(q\to+\infty\), the distribution of the points in \(g_{q}\) is asymptotically well-behaved with respect to any one point. Hence, property (d) is like a weak local version of property (b). There is also a strong local version in [45, Theorem 5.9]. We will check that none of properties (a)-(d) of Theorem 44 hold for circular polygons. From now on, we assume that \(\Gamma\) is a circular polygon with arcs \(\Gamma_{j}\), radii \(r_{j}\), central angles \(\delta_{j}\) and polar parametrisation \(z(\varphi)\) introduced in Definition 3.
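As a point of reference, Theorem 44 can be checked by hand in the simplest smooth case, the circle, whose \((1,q)\)-periodic billiard trajectories are the inscribed regular \(q\)-gons. The short sketch below (the radius \(r=1\) is an arbitrary assumption) verifies parts (b) and (c) numerically before we show that these properties break down for circular polygons.

```python
import numpy as np

# Circle of radius r: |Gamma| = 2*pi*r and kappa = 1/r, so Theorem 44(c) gives
# c_1 = -(1/24) * (2*pi*r**(1/3))**3 = -pi**3 * r / 3, while the (1,q)-periodic
# orbit is the inscribed regular q-gon of length L(g_q) = 2*q*r*sin(pi/q).
r = 1.0
c1 = -(2 * np.pi * r ** (1 / 3)) ** 3 / 24
for q in (10, 100, 1000):
    L = 2 * q * r * np.sin(np.pi / q)
    print(q, (L - 2 * np.pi * r) * q ** 2)  # tends to c1 = -pi^3/3 = -10.3354...
```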
As usual, factors \(\alpha_{j}^{\pm}\), addends \(\beta_{j}^{\pm}=\alpha_{j}^{\pm}+1\) and integers \(\chi_{j}\geq 2\) were introduced in Lemma 17 and Corollary 19, although we are now only interested in \((1,q)\)-periodic trajectories, so \(1\leq j\leq k\) and we no longer need to consider \(j\) modulo \(k\). Recall that \(p=1\) throughout this section. **Remark 45**.: Corollary 33 implies that there are at least \(2^{k}\) generic sliding periodic billiard trajectories \(g_{q}\in\Pi(1,q)\) with _exactly_ \(x_{j}\in\mathbb{N}\) impacts in the arc \(\Gamma_{j}\), \(j=1,\ldots,k\), for any integer point \(\mathbf{x}=(x_{1},\ldots,x_{k})\in P^{(1)}\cap\mathbb{Z}^{k}=P^{(1)}\cap\mathbb{N} ^{k}\) such that \(x_{1}+\cdots+x_{k}=q\). Here, \(P^{(1)}\) is the unbounded convex polytope of \(\mathbb{R}^{k}\) defined in (16) for \(p=1\). We need a couple of technical results before tackling the proof of Theorem E. First, we compute the lengths of generic sliding \((1,q)\)-periodic billiard trajectories. By definition, they impact all arcs but no nodes. Angles similar to \(\varphi_{j}^{\pm}\) and \(\psi_{j}\) below were considered in the proof of part (c) in Lemma 11. **Lemma 46**.: _If \(g_{q}\in\Pi(1,q)\) is a generic sliding periodic billiard trajectory inside \(\Gamma\), then_ \[L(g_{q})=\sum_{j=1}^{k}\big{(}\ell_{j}^{-}+(x_{j}-1)\ell_{j}+\ell_{j}^{+} \big{)},\qquad\ell_{j}=2r_{j}\sin\psi_{j},\qquad\ell_{j}^{\pm}=\frac{r_{j} \sin\varphi_{j}^{\pm}}{\cos(\psi_{j}-\varphi_{j}^{\pm})}, \tag{23}\] _where_
1. \(x_{j}\in\mathbb{N}\) _is the number of impact points in_ \(\Gamma_{j}\)_;_ 2. \(\psi_{j}>0\) _is the constant angle of reflection along the_ \(x_{j}\) _impacts in_ \(\Gamma_{j}\)_; and_ 3. \(\varphi_{j}^{\pm}\in(0,2\psi_{j})\) _are the impact angles such that_ \([z(b_{j}-\varphi_{j}^{+}),z(a_{j+1}+\varphi_{j+1}^{-})]\) _is the transition link connecting_ \(\Gamma_{j}\) _and_ \(\Gamma_{j+1}\)_._ _Besides, \(\varphi_{j}^{-}+2(x_{j}-1)\psi_{j}+\varphi_{j}^{+}=\delta_{j}\) for all \(j=1,\ldots,k\)._ Proof.: If we apply the law of sines to the three triangles \(\Delta POQ\), \(\Delta POR\) and \(\Delta ROQ\) displayed in Figure 6 (see its caption for the definition of each quantity), we get that \[\frac{\ell}{2\sin\psi\cos\psi}=\frac{\ell}{\sin 2\psi}=\frac{r}{\sin\beta}= \frac{r}{\cos\psi},\qquad\frac{\ell^{\pm}}{\sin\varphi^{\pm}}=\frac{r}{\sin( \pi-\beta-\varphi^{\pm})}=\frac{r}{\cos(\psi-\varphi^{\pm})},\] since \(\beta=\pi/2-\psi\), \(\angle ORP=\pi-\beta-\varphi^{+}\) and \(\angle QRO=\pi-\beta-\varphi^{-}\). Therefore, \[\ell=2r\sin\psi,\qquad\ell^{\pm}=\frac{r\sin\varphi^{\pm}}{\cos(\psi-\varphi^ {\pm})}.\] We deduce (23) from those three formulas. If \(g_{q}\) has \(x_{j}\) impacts in \(\Gamma_{j}\) with constant angle of reflection \(\psi_{j}\), then it has \(x_{j}-1\) circular links with a certain constant length \(\ell_{j}\). Each one of these circular links \([P,Q]\) is the base of an isosceles triangle \(\Delta OPQ\) like the one displayed in Figure 6, with \(O=O_{j}\), \(r=r_{j}\) and \(\psi=\psi_{j}\). Hence \(\ell_{j}=2r_{j}\sin\psi_{j}\). Let us consider the transition link \([z(b_{j}-\varphi_{j}^{+}),z(a_{j+1}+\varphi_{j+1}^{-})]\) connecting \(\Gamma_{j}\) and \(\Gamma_{j+1}\) and the isosceles triangle \(\Delta OPQ\) with \[O=O_{j},\qquad P=z(b_{j}-\varphi_{j}^{+})\in\Gamma_{j},\qquad Q=O_{j}+r_{j} \mathrm{e}^{i(b_{j}-\varphi_{j}^{+}+2\psi_{j})}.\] We stress that \(Q\) is an auxiliary point: \(Q\not\in\Gamma\). Let \(R=[P,Q]\cap[O,z(b_{j})]\). Then \(r=r_{j}\), \(\varphi^{+}=\varphi_{j}^{+}\) and \(\ell^{+}=\ell_{j}^{+}\). Therefore, \(\ell_{j}^{+}=r_{j}\sin\varphi_{j}^{+}/\cos(\psi_{j}-\varphi_{j}^{+})\). The formula for \(\ell_{j+1}^{-}\) is deduced in a similar way, but taking \[O=O_{j+1},\qquad P=O_{j+1}+r_{j+1}\mathrm{e}^{i(a_{j+1}+\varphi_{j+1}^{-}-2 \psi_{j+1})},\qquad Q=z(a_{j+1}+\varphi_{j+1}^{-})\in\Gamma_{j+1},\] \(R=[P,Q]\cap[O,z(a_{j+1})]\), \(r=r_{j+1}\), \(\varphi^{-}=\varphi_{j+1}^{-}\) and \(\ell^{-}=\ell_{j+1}^{-}\). (In this case, the auxiliary point is \(P\): \(P\not\in\Gamma\)). By construction, the transition link \([z(b_{j}-\varphi_{j}^{+}),z(a_{j+1}+\varphi_{j+1}^{-})]\) has length \(\ell_{j}^{+}+\ell_{j+1}^{-}\). This proves (23). Finally, relation \(\varphi_{j}^{-}+2(x_{j}-1)\psi_{j}+\varphi_{j}^{+}=\delta_{j}\) is geometrically evident. Next, we need a technical result about the extreme values of the differentiable strictly concave function (25) over a bounded convex polytope \(P_{\infty}^{(1)}\) related to the unbounded convex polytope \(P^{(1)}\) of \(\mathbb{R}^{k}\) defined in (16) for \(p=1\). Recall that \(\Delta_{k-1}=\{\boldsymbol{x}\in\mathbb{R}^{k}:\boldsymbol{x}>0,\;x_{1}+\dots+ x_{k}=1\}\) is the open \((k-1)\)-simplex and \(H_{k}=\{\boldsymbol{x}\in\mathbb{R}^{k}:x_{1}+\dots+x_{k}=1\}\).
**Lemma 47**.: _The bounded convex polytope \(P_{\infty}^{(1)}=\lim_{q\to\infty}\big{(}\{\boldsymbol{x}/q:\boldsymbol{x}\in P ^{(1)}\}\cap H_{k}\big{)}\) is given by_ \[P_{\infty}^{(1)}=\left\{\begin{array}{ll}&\alpha_{j}^{-}x_{j}<x_{j+1}<\alpha_ {j}^{+}x_{j},\quad\forall j=1,\dots,k-1\\ \boldsymbol{x}\in\mathbb{R}^{k}:&\alpha_{k}^{-}x_{k}<x_{1}<\alpha_{k}^{+}x_{ k}\\ &x_{j}>0,\quad\forall j=1,\dots,k\\ &x_{1}+\dots+x_{k}=1\end{array}\right\} \tag{24}\] _and its compact closure \(K_{\infty}^{(1)}\) is contained in the open simplex \(\Delta_{k-1}\). Let_ \[h:\Delta_{k-1}\to(-\infty,0),\qquad h(\boldsymbol{y})=-\frac{1}{24}\sum_{j=1} ^{k}\frac{\delta_{j}^{3}r_{j}}{y_{j}^{2}}. \tag{25}\] _Set \(I_{1}=h\big{(}P_{\infty}^{(1)}\big{)}\), \(c_{1}^{-}=\inf I_{1}\) and \(c_{1}^{+}=\sup I_{1}\). Then \(c_{1}^{+}\in I_{1}\) and_ \[-\infty<c_{1}^{-}\leq-\pi^{2}|\Gamma|/6<c_{1}^{+}=-\frac{1}{24}\left[\int_{ \Gamma}\kappa^{2/3}(s)\,\mathrm{d}s\right]^{3}<0. \tag{26}\] Proof.: Expression (24) is trivial. We check that \(K_{\infty}^{(1)}\subset\Delta_{k-1}\) by a reductio ad absurdum argument. Let us assume that \(\boldsymbol{x}=(x_{1},\dots,x_{k})\in K_{\infty}^{(1)}\) and \(x_{i}=0\) for some \(i\). Then inequalities \(\alpha_{j}^{-}x_{j}\leq x_{j+1}\leq\alpha_{j}^{+}x_{j}\) for \(j=1,\dots,k-1\) and \(\alpha_{k}^{-}x_{k}\leq x_{1}\leq\alpha_{k}^{+}x_{k}\) imply that \[x_{i+1}=\dots=x_{k}=x_{1}=\dots=x_{i-1}=0,\] so identity \(x_{1}+\dots+x_{k}=1\) fails. Contradiction. The image of a compact convex set under a continuous function that only takes negative values is a compact interval of \((-\infty,0)\), so \(\overline{I_{1}}=h\big{(}K_{\infty}^{(1)}\big{)}=[c_{1}^{-},c_{1}^{+}]\) for some numbers \(-\infty<c_{1}^{-}\leq c_{1}^{+}<0\). Let us estimate the minimum value \(c_{1}^{-}\), compute exactly the maximum value \(c_{1}^{+}\), prove that \(c_{1}^{-}<c_{1}^{+}\), and check that \(c_{1}^{+}\in I_{1}\). We claim that function (25) attains its maximum value only at \(\boldsymbol{y}=\boldsymbol{w}(1/3)\), where \[\boldsymbol{w}(\xi)=\frac{1}{S(\xi)}\big{(}s_{1}(\xi),\dots,s_{k}(\xi)\big{)} \in\Delta_{k-1},\qquad s_{j}(\xi)=\delta_{j}r_{j}^{\xi},\qquad S(\xi)=\sum_{j =1}^{k}\delta_{j}r_{j}^{\xi}\] for all \(\xi\in\mathbb{R}\). On the one hand, the gradient of \(\mathbb{R}_{+}^{k}\ni\boldsymbol{y}\mapsto\sum_{j=1}^{k}\delta_{j}^{3}r_{j}/y _{j}^{2}\) is the vector with components \(-2\delta_{j}^{3}r_{j}/y_{j}^{3}\), so \(\boldsymbol{y}\in\Delta_{k-1}\) is a critical point of (25) if and only if \[\big{(}s_{i}(1/3)/y_{i}\big{)}^{3}=\delta_{i}^{3}r_{i}/y_{i}^{3}=\delta_{j}^{3} r_{j}/y_{j}^{3}=\big{(}s_{j}(1/3)/y_{j}\big{)}^{3},\qquad\forall i\neq j.\] This means that \(\boldsymbol{w}(1/3)\) is the only critical point of (25). On the other hand, \(\sum_{j=1}^{k}\delta_{j}^{3}r_{j}/y_{j}^{2}\) is a positively weighted sum of the strictly convex terms \(1/y_{j}^{2}\), so \(-\frac{1}{24}\sum_{j=1}^{k}\delta_{j}^{3}r_{j}/y_{j}^{2}\) is a strictly concave function on \(\mathbb{R}_{+}^{k}\) and (25) is a differentiable strictly concave function. Hence, the local maximum \(\boldsymbol{w}(1/3)\) is a strict global maximum. This proves the claim.
Besides, \[h(\boldsymbol{w}(\xi))=-\frac{1}{24}\sum_{j=1}^{k}\frac{\delta_{j}^{3}r_{j}}{ \big{(}s_{j}(\xi)/S(\xi)\big{)}^{2}}=-\frac{1}{24}S(\xi)^{2}\sum_{j=1}^{k} \delta_{j}r_{j}^{1-2\xi}=-\frac{1}{24}S(\xi)^{2}S(1-2\xi).\] In particular, \(h(\boldsymbol{w}(0))<h(\boldsymbol{w}(1/3))\) and \[h(\boldsymbol{w}(0))=-S(0)^{2}S(1)/24=-(2\pi)^{2}|\Gamma|/24=-\pi^{2}|\Gamma|/6,\] \[h(\boldsymbol{w}(1/3))=-S(1/3)^{3}/24=-\frac{1}{24}\left[\sum_{j=1}^{k}\delta_ {j}r_{j}^{1/3}\right]^{3}=-\frac{1}{24}\left[\int_{\Gamma}\kappa^{2/3}(s)\, \mathrm{d}s\right]^{3}.\] Here we have used that \(|\Gamma_{j}|=\delta_{j}r_{j}\) and \(\int_{\Gamma_{j}}\kappa^{2/3}(s)\,\mathrm{d}s=|\Gamma_{j}|/r_{j}^{2/3}=\delta_{j} r_{j}^{1/3}\) since \(\Gamma_{j}\) is a circular arc of radius \(r_{j}\) and central angle \(\delta_{j}\). Hence, property \(c_{1}^{+}\in I_{1}\) and inequalities (26) hold provided \(\mathbf{w}(0)\in K_{\infty}^{(1)}\) and \(\mathbf{w}(1/3)\in P_{\infty}^{(1)}\). It turns out that \(\mathbf{w}(\xi)\) satisfies the \(3k+1\) conditions listed in (24), so that \(\mathbf{w}(\xi)\in P_{\infty}^{(1)}\), for all \(\xi\in(0,1/2]\). For instance, \(\mathbf{w}(\xi)\) satisfies the first \(2k-2\) inequalities: \[\xi\in(0,1/2] \Rightarrow r_{j}^{\xi}\min\left\{1,\sqrt{r_{j+1}/r_{j}}\right\}<r_{j+1}^{ \xi}<r_{j}^{\xi}\max\left\{1,\sqrt{r_{j+1}/r_{j}}\right\}\] \[\Rightarrow \alpha_{j}^{-}w_{j}(\xi)<w_{j+1}(\xi)<\alpha_{j}^{+}w_{j}(\xi) \text{ for }j=1,\ldots,k-1.\] Inequalities \(\alpha_{k}^{-}w_{k}(\xi)<w_{1}(\xi)<\alpha_{k}^{+}w_{k}(\xi)\) are proved in a similar way. Inequalities \(w_{j}(\xi)>0\) and identity \(w_{1}(\xi)+\cdots+w_{k}(\xi)=1\) are trivial. Finally, \(\mathbf{w}(0)=\lim_{\xi\to 0^{+}}\mathbf{w}(\xi)\in K_{\infty}^{(1)}\). The relation between the left endpoint \(c_{1}^{-}\) of interval \(I_{1}\) and \(-\pi^{2}|\Gamma|/6\) is an open problem. For instance, both quantities coincide for squared pseudo-ellipses. To be precise, let \(\Gamma=E_{\pi/2,r,R}\) be a squared pseudo-ellipse of radii \(r\) and \(R>r\), see Section 2. That is, \(\Gamma=E_{\pi/2,r,R}\) is the circular \(4\)-gon with radii \(r_{1}=r_{3}=r\) and \(r_{2}=r_{4}=R\), and central angles \(\delta_{1}=\delta_{2}=\delta_{3}=\delta_{4}=\pi/2\). A tedious computation that we omit for the sake of brevity shows that \[c_{1}^{-} =-\pi^{2}|E_{\pi/2,r,R}|/6=-\pi^{3}(R+r)/6,\] \[c_{1}^{+} =-\frac{1}{24}\left[\int_{E_{\pi/2,r,R}}\kappa^{2/3}(s)\,\mathrm{ d}s\right]^{3}=-\frac{1}{24}\left[\sum_{j=1}^{k}\delta_{j}\sqrt[3]{r_{j}}\right]^{3 }=-\frac{\pi^{3}}{24}\left(\sqrt[3]{R}+\sqrt[3]{r}\right)^{3}.\] These two expressions above coincide when \(R=r\). In general, \(c_{1}^{+}-c_{1}^{-}\) tends to zero when \(\Gamma\) tends to a circle of finite radius. The main result of this section is nothing more than a reformulation of Theorem E. **Theorem 48**.: _Let \(P_{\infty}^{(1)}\subset\Delta_{k-1}\), \(I_{1}=h\big{(}P_{\infty}^{(1)}\big{)}\subset(-\infty,0)\), \(c_{1}^{-}=\inf I_{1}\) and \(c_{1}^{+}=\max I_{1}\) be the open bounded convex polytope of \(\mathbb{R}^{k}\), the image interval, and the extreme values introduced in Lemma 47, respectively. Extreme values \(c_{1}^{\pm}\) satisfy inequalities (26).
For any fixed \(c\in[c_{1}^{-},c_{1}^{+}]\) there exist a period \(q_{0}\in\mathbb{N}\) and a sequence \((g_{q})_{q\geq q_{0}}\) of generic sliding periodic billiard trajectories \(g_{q}\in\Pi(1,q)\) such that_ \[L(g_{q})=|\Gamma|+c/q^{2}+\mathrm{O}(1/q^{3}),\quad\text{as }q\to+\infty.\] _Consequently, there exists a sequence \((h_{q})_{q}\), with \(h_{q}\in\Pi(1,q)\), such that_ \[c_{1}^{-}=\liminf_{q\to+\infty}\big{(}(L(h_{q})-|\Gamma|)q^{2}\big{)}<\limsup_{q \to+\infty}\big{(}(L(h_{q})-|\Gamma|)q^{2}\big{)}=c_{1}^{+}.\] Proof.: If \(c\in(c_{1}^{-},c_{1}^{+}]\), then \(c=h(\mathbf{y})\) for some \(\mathbf{y}\in P_{\infty}^{(1)}\). If \(q\in\mathbb{N}\) is large enough, then there exists a point \(\mathbf{x}=(x_{1},\ldots,x_{k})\in\mathbb{N}^{k}\) such that \(|q\mathbf{y}-\mathbf{x}|_{\infty}\leq 1\) and \(\mathbf{x}\in P^{(1)}\cap qH_{k}\), where \(P^{(1)}\) is the unbounded convex polytope defined in (16) for \(p=1\). Let us prove this claim. First, we observe that \(y_{j}>0\), so \(qy_{j}\geq 1\) when \(q\gg 1\). If \(q\mathbf{y}\in\mathbb{N}^{k}\), then we take \(\mathbf{x}=q\mathbf{y}\). If \(q\mathbf{y}\not\in\mathbb{N}^{k}\), then we can take, for instance, \[x_{j}=\begin{cases}\lfloor qy_{j}\rfloor+1,&\text{for }j\leq i,\\ \lfloor qy_{j}\rfloor,&\text{otherwise},\end{cases}\] where \(i=q-\sum_{j=1}^{k}\lfloor qy_{j}\rfloor\in\{1,\ldots,k-1\}\), so that \(\sum_{j=1}^{k}x_{j}=i+\sum_{j=1}^{k}\lfloor qy_{j}\rfloor=q\). This means that \(\mathbf{x}\in qH_{k}\). To end the proof of the claim, we deduce that \(\mathbf{x}\in P^{(1)}\) from limits \(\lim_{q\to+\infty}\mathbf{x}/q=\mathbf{y}\in P_{\infty}^{(1)}\) and \(P_{\infty}^{(1)}=\lim_{q\to\infty}\big{(}\big{\{}\mathbf{x}/q:\mathbf{x}\in P^{(1)} \big{\}}\cap H_{k}\big{)}\). Recall that \(P_{\infty}^{(1)}\) is an open set in \(H_{k}\). As we have explained before, see Remark 45, if \(q\gg 1\) then there are at least \(2^{k}\) generic sliding periodic billiard trajectories \(g_{q}\in\Pi(1,q)\) with exactly \(x_{j}\in\mathbb{N}\) impacts on the arc \(\Gamma_{j}\) and length (23). The numbers \(x_{j}\in\mathbb{N}\), the constant angles of reflection \(\psi_{j}>0\) and the impact angles \(\varphi_{j}^{\pm}\in(0,2\psi_{j})\) described in Lemma 46 satisfy identity \(\varphi_{j}^{-}+2(x_{j}-1)\psi_{j}+\varphi_{j}^{+}=\delta_{j}\) and uniform estimates \(x_{j}=qy_{j}+\mathrm{O}(1)\), \(\varphi_{j}^{\pm}=\mathrm{O}(1/q)\) and \(\psi_{j}=\delta_{j}/(2(x_{j}-1))+\mathrm{O}(1/q^{2})=\mathrm{O}(1/q)\) as \(q\to+\infty\).
Therefore, \[\ell_{j}^{\pm}=\frac{r_{j}\sin\varphi_{j}^{\pm}}{\cos(\psi_{j}-\varphi_{j}^{\pm })}=r_{j}\varphi_{j}^{\pm}+\mathrm{O}\left((\varphi_{j}^{\pm})^{3},\varphi_{j} ^{\pm}|\psi_{j}-\varphi_{j}^{\pm}|^{2}\right)=r_{j}\varphi_{j}^{\pm}+\mathrm{O }(1/q^{3})\] and \[(x_{j}-1)\ell_{j} =2r_{j}(x_{j}-1)\sin\psi_{j}=2r_{j}(x_{j}-1)\Big{(}\psi_{j}-\psi_ {j}^{3}/6+\mathrm{O}\left(\psi_{j}^{5}\right)\Big{)}\] \[=2r_{j}(x_{j}-1)\psi_{j}-r_{j}(x_{j}-1)\psi_{j}^{3}/3+\mathrm{O} (1/q^{4})\] \[=2r_{j}(x_{j}-1)\psi_{j}-\frac{\delta_{j}^{3}r_{j}}{24(x_{j}-1)^{ 2}}+\mathrm{O}(1/q^{3})\] \[=2r_{j}(x_{j}-1)\psi_{j}-\frac{\delta_{j}^{3}r_{j}}{24y_{j}^{2}} \frac{1}{q^{2}}+\mathrm{O}(1/q^{3}).\] Finally, we estimate the total length (23) as follows: \[L(g_{q}) =\sum_{j=1}^{k}\left(\ell_{j}^{-}+(x_{j}-1)\ell_{j}+\ell_{j}^{+}\right)\] \[=\sum_{j=1}^{k}r_{j}(\varphi_{j}^{-}+2(x_{j}-1)\psi_{j}+\varphi_{ j}^{+})+h(\mathbf{y})/q^{2}+\mathrm{O}(1/q^{3})\] \[=|\Gamma|+c/q^{2}+\mathrm{O}(1/q^{3}).\] We have used that \(\varphi_{j}^{-}+2(x_{j}-1)\psi_{j}+\varphi_{j}^{+}=\delta_{j}\), \(|\Gamma|=\sum_{j=1}^{k}\delta_{j}r_{j}\) and \(c=h(\mathbf{y})\) in the last line. Function \(h(\mathbf{y})\) was defined in (25). This ends the proof of the case \(c\in(c_{1}^{-},c_{1}^{+}]\). The case \(c=c_{1}^{-}\) can be obtained from the case \(c\in(c_{1}^{-},c_{1}^{+}]\) by using a classical diagonalization argument about sequences of sequences of lengths. Finally, sequence \((h_{q})_{q}\) is constructed by interleaving two sequences of generic sliding \((1,q)\)-periodic billiard trajectories associated with the asymptotic coefficients \(c_{1}^{-}\) and \(c_{1}^{+}\) respectively. We observe that generic sliding \((1,q)\)-periodic billiard trajectories inside circular polygons are _asymptotically shorter_ than the ones inside smooth ovals, since \(c_{1}^{+}\) in (26) has the same formula as the constant \(c_{1}\) in part (c) of Theorem 44. The generic sliding periodic billiard trajectories analyzed in the proof of Theorem 48 do not impact the set of nodes. Next we consider other sliding periodic billiard trajectories with the opposite property. They impact _all nodes_ of the circular polygon in such a way that the angle of reflection remains _constant_ along the whole trajectory. After a moment's thought, the reader will realize that these _nodal_ sliding periodic trajectories can only take place for certain circular polygons, which we call _rational_. **Definition 49**.: We say that a circular polygon \(\Gamma\) is _rational_ when all its central angles are rational multiples of \(\pi\), so \[\delta_{j}=m_{j}\delta,\] for some \(\delta=\gcd(\delta_{1},\dots,\delta_{k})\) and \(m_{j}=\delta_{j}/\delta\in\mathbb{N}\). Set \(M=\sum_{j=1}^{k}m_{j}\). Then \(M\delta=2\pi\). A billiard trajectory inside a rational circular polygon is _nodal_ when it impacts all nodes (interspersed with possibly many other non-nodal impacts) in the counter-clockwise ordering. Squared pseudo-ellipses and Moss's eggs are rational circular polygons, see Section 2. Any nodal orbit in a rational circular polygon has constant angle of reflection, is sliding, and is periodic with a rotation number of the form \(1/q\), \(q\) being the period. It is easy to compute the length of a nodal trajectory. Recall that \(\delta_{j}=b_{j}-a_{j}\), \(b_{j}=a_{j+1}\) and \(f:\mathcal{M}\to\mathcal{M}\) is the billiard map. **Proposition 50**.: _Let \(\Gamma\) be a rational circular \(k\)-gon with arcs \(\Gamma_{j}\), radii \(r_{j}\) and central angles \(\delta_{j}\).
Set \(\delta=\gcd(\delta_{1},\dots,\delta_{k})\). Fix some \(\psi=\delta/(2i)\) with \(i\in\mathbb{N}\). Let \(g_{q}\) be the billiard trajectory generated by_ \[(\varphi_{n},\theta_{n})=f^{n}(a_{1},\psi),\qquad\forall n\in\mathbb{Z}.\] 1. _The billiard trajectory_ \(g_{q}\) _is nodal and_ \(g_{q}\in\Pi(1,q)\) _with period_ \(q=Mi\)_._ 2. \(L(g_{q})=|\Gamma|-\pi^{2}|\Gamma|/(6q^{2})+\mathrm{O}(1/q^{4})\) _as_ \(q=Mi\to+\infty\)_._ 3. _If_ \([z,z^{\prime}]\) _is a circular link of_ \(g_{q}\) _associated to the arc_ \(\Gamma_{j}\)_, then_ \[\int_{z}^{z^{\prime}}\,\mathrm{d}s=\frac{1}{q}2\pi r_{j}\neq\frac{1}{q}|\Gamma|.\] Proof.: 1. Once fixed the index \(i\in\mathbb{N}\), we deduce that \(\varphi_{im_{1}}=a_{1}+2im_{1}\psi=a_{1}+\delta_{1}=b_{1}=a_{2}\). That is, the first \(im_{1}\) links of the billiard trajectory connect both nodes of the first arc. In particular, the angle of reflection does not change when we enter the second arc, so the next \(im_{2}\) links connect both of its nodes, and so on. This means that the orbit is nodal and periodic with rotation number \(1/q\) and period \(q=Mi\). 2. The length of each link in the \(j\)-th arc is equal to \(\ell_{j}=2r_{j}\sin\psi\), so \[L(g_{q}) =\sum_{j=1}^{k}im_{j}\ell_{j}=\sum_{j=1}^{k}2im_{j}r_{j}\left[ \psi-\frac{1}{6}\psi^{3}+\mathrm{O}\left(\psi^{5}\right)\right]\] \[=\sum_{j=1}^{k}m_{j}\delta r_{j}-\frac{\delta^{2}}{24}\left[\sum_{ j=1}^{k}m_{j}\delta r_{j}\right]\,\frac{1}{i^{2}}+\mathrm{O}\left(i^{-4} \right)=|\Gamma|-\frac{\pi^{2}|\Gamma|}{6q^{2}}+\mathrm{O}(1/q^{4}),\] where we have used that \(\delta_{j}=m_{j}\delta\), \(|\Gamma|=\sum_{j=1}^{k}\delta_{j}r_{j}\), \(q=Mi\), and \(M\delta=2\pi\). 3. If two consecutive impact points \(z\) and \(z^{\prime}\) belong to the arc \(\Gamma_{j}\), then \[\int_{z}^{z^{\prime}}\,\mathrm{d}s=2\psi r_{j}=\frac{1}{i}\delta r_{j}=\frac{ 1}{q}M\delta r_{j}=\frac{1}{q}2\pi r_{j}\neq\frac{1}{q}|\Gamma|.\qed\] Nodal billiard trajectories give the simplest examples of sequences of sliding periodic billiard trajectories in circular polygons where properties (c) and (d) of Theorem 44 fail. They can be obtained without the heavy machinery developed in this paper, but they only take place on rational circular polygons. If property \(c_{1}^{-}=-\pi^{2}|\Gamma|/6\) were true, nodal billiard trajectories would be the _asymptotically shortest_ sliding \((1,q)\)-periodic billiard trajectories as \(q\to+\infty\). This is the reason to take them into account. ## Acknowledgments A. C. has received funding for this project from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant Agreement No 757802). R. R.-R. was supported in part by the grant PID-2021-122954NB-100 which was funded by MCIN/AEI/10.13039/501100011033 and "ERDF: A way of making Europe". Thanks to Aida Chaikh and Pau Martin for useful and stimulating conversations. ## Appendix A Proof of Lemma 24 Throughout this proof we shall freely use the natural convention that objects \(\Xi_{k+1}^{i},\zeta_{k+1}^{i},\xi_{k+1}^{i}\) should be identified with \(\Xi_{1}^{i+1},\zeta_{1}^{i+1},\xi_{1}^{i+1}\), respectively. (a) The key observation is that functions \(\zeta_{j}^{i}(n)\) and \(\xi_{j}^{i}(n)\) can be recursively bounded.
To be precise, since \(\zeta_{j+1}^{i}(n)\) is the smallest integer such that \(\zeta_{j+1}^{i}(n)>\alpha_{j}^{-}\zeta_{j}^{i}(n)+\beta_{j}^{-}\), \(\xi_{j+1}^{i}(n)\) is the largest integer such that \(\xi_{j+1}^{i}(n)<\alpha_{j}^{+}\xi_{j}^{i}(n)-\beta_{j}^{+}\) and \(\beta_{j}^{+}=\alpha_{j}^{+}+1>1\), we deduce that \[\left.\begin{array}{l}\zeta_{j+1}^{0}(n)\leq\alpha_{j}^{-}\zeta_{j}^{0}(n)+ \beta_{j}^{-}+1,\qquad\forall j=1,\ldots,k\\ \alpha_{j}^{+}\xi_{j}^{i}(n)-\beta_{j}^{+}-1\leq\xi_{j+1}^{i}(n)\leq\alpha_{j} ^{+}\xi_{j}^{i}(n),\qquad\forall j=1,\ldots,k\;\forall i\geq 0.\end{array}\right\}\] (27) (A comment is in order. The careful reader may notice that, by definition of alphabet \(\mathbf{Q}\), \(\zeta_{j}^{0}(n)\geq\chi_{j}\). Thus, it looks like we should have written bound \[\zeta_{j+1}^{0}(n)\leq\max\big{\{}\chi_{j+1},\alpha_{j}^{-}\zeta_{j}^{0}(n)+ \beta_{j}^{-}+1\big{\}},\qquad\forall j=1,\ldots,k\] instead of the first bound in (27). However, we do not need it when \(\chi_{1}\gg\chi_{2},\ldots,\chi_{k}\). Under this assumption, which is the second part of hypothesis (X), we know that the first \(k-1\) minima \(\zeta_{2}^{0}(n),\ldots,\zeta_{k}^{0}(n)\) are not affected by restrictions \(\zeta_{j}^{0}(n)\geq\chi_{j}\), and, if necessary, we replace \(\zeta_{1}^{1}(n)=\zeta_{k+1}^{0}(n)\) --which is the last minimum that we need to take care of-- by \(\chi_{1}\).) If we apply recursively \(k\) times bounds (27), we get the cyclic bounds \[\begin{array}{l}\zeta_{1}^{1}(n)\leq\max\{\chi_{1},\zeta_{1}^{0}(n)/\alpha+ \gamma_{1}^{-}\}\\ \alpha\xi_{j}^{i}(n)-\gamma_{j}^{+}\leq\xi_{j}^{i+1}(n)\leq\alpha\xi_{j}^{i}( n),\quad\forall j=1,\ldots,k,\forall i\geq 0\end{array}\bigg{\}} \tag{28}\] where \(\alpha=\prod_{j=1}^{k}\alpha_{j}^{+}\), \(1/\alpha=\prod_{j=1}^{k}\alpha_{j}^{-}\) and \[\gamma_{j}^{\pm}=\sum_{m=1}^{k}\left(\prod_{l=1}^{m-1}\alpha_{j-l}^{\pm} \right)\left(\beta_{j-m}^{\pm}+1\right),\qquad\forall j=1,\ldots,k.\] (ai) If we apply recursively \(j-1\) times the bounds for the maxima in the second line of equation (27), we get \[\lambda_{j}n-\gamma_{j}=\lambda_{j}\xi_{1}^{0}(n)-\gamma_{j}\leq\xi_{j}^{0}(n )\leq\lambda_{j}\xi_{1}^{0}(n)=\lambda_{j}n,\qquad\forall j=1,\ldots,k,\] (29) where \(\lambda_{1}=1\), \(\gamma_{1}=0\), and \[\lambda_{j}=\prod_{l=1}^{j-1}\alpha_{l}^{+},\qquad\gamma_{j}=\sum_{m=1}^{j-1 }\left(\prod_{l=1}^{m-1}\alpha_{j-l}^{+}\right)\left(\beta_{j-m}^{+}+1 \right),\qquad\forall j=2,\ldots,k.\] If \(\lambda=\max\{\lambda_{1},\ldots,\lambda_{k}\}\) and we choose any \(\nu\) such that \(0<\nu<\min\{\lambda_{1},\ldots,\lambda_{k}\}\), then (29) implies that \(\nu n\leq\xi_{j}^{0}(n)\leq\lambda n\) for all \(j=1,\ldots,k\) provided that \(\chi_{1}\) is large enough, as is assumed in hypothesis (X). To be precise, if we assume that \(n\geq\chi_{1}\gg 1\), then \(\nu n\leq\lambda_{j}n-\gamma_{j}\) for all \(j=1,\ldots,k\). It suffices to take \[\chi_{1}\geq\max\big{\{}\gamma_{j}/(\lambda_{j}-\nu):j=1,\ldots,k\big{\}}.\] (aii) We assume that \(i\geq 0\). The upper bound \(\xi_{j}^{i}(n)\leq\alpha^{i}\xi_{j}^{0}(n)\) follows directly from (28). The lower bound \(\xi_{j}^{i}(n)\geq\tau\alpha^{i}\xi_{j}^{0}(n)\) for some \(\tau\in(0,1)\) is more tricky. First, we realize that if we choose any \(\kappa\in(1,\alpha)\), then (28) implies the weaker lower bound \(\xi_{j}^{i}(n)\geq\kappa^{i}\xi_{j}^{0}(n)\) provided \(n\geq\chi_{1}\geq\max\big{\{}\gamma_{1}^{+},\ldots,\gamma_{k}^{+}\big{\}}/( \alpha-\kappa)\).
This means that \(\xi_{j}^{i}(n)\) grows geometrically as \(i\to+\infty\). Second, we know that \[\xi_{j}^{i}(n)\geq\alpha\xi_{j}^{i-1}(n)-\gamma_{j}^{+}=\left(1-\frac{\gamma_{ j}^{+}}{\alpha\xi_{j}^{i-1}(n)}\right)\alpha\xi_{j}^{i-1}(n)\geq\cdots \geq\tau_{i,j}\alpha^{i}\xi_{j}^{0}(n),\] where \[0<\prod_{l=0}^{+\infty}\left(1-\frac{\gamma_{j}^{+}}{\alpha\xi_{j}^{l}(n)} \right)=:\tau_{j}<\tau_{i,j}=\prod_{l=0}^{i-1}\left(1-\frac{\gamma_{j}^{+}}{ \alpha\xi_{j}^{l}(n)}\right)<1,\qquad\forall i\geq 0.\] The above infinite product converges to a non-zero value \(\tau_{j}\) because \[\sum_{l=0}^{+\infty}\frac{\gamma_{j}^{+}}{\alpha\xi_{j}^{l}(n)}\leq\frac{ \gamma_{j}^{+}}{\alpha\xi_{j}^{0}(n)}\sum_{l=0}^{+\infty}\kappa^{-l}<+\infty.\] If we set \(\tau=\min\{\tau_{1},\ldots,\tau_{k}\}\), then \(\xi_{j}^{i}(n)\geq\tau\alpha^{i}\xi_{j}^{0}(n)\). This ends the proof for the forward case \(i\geq 0\). The backward case \(i<0\) is proved in a similar way. (aiii) Inequality \(\rho^{i}(n)\leq\rho^{i+1}(n)\) is trivial. Using the already proved parts (ai) and (aii) of this lemma and the formula for geometric sums, we get \[\frac{\rho^{i+1}(n)}{\xi_{j}^{i}(n)} \leq\frac{\sum_{j=1}^{k}\sum_{m=0}^{i}\alpha^{m}\xi_{j}^{0}(n)}{ \tau\alpha^{i}\xi_{j}^{0}(n)}\leq\frac{\sum_{j=1}^{k}\sum_{m=0}^{i}\alpha^{m} \lambda n}{\tau\alpha^{i}\nu n}=\frac{k\lambda(\alpha^{i+1}-1)}{\tau\nu(\alpha- 1)\alpha^{i}}\leq\frac{k\lambda\alpha}{\tau\nu(\alpha-1)}=:\lambda^{\prime},\] \[\frac{\rho^{i}(n)}{\xi_{j}^{i}(n)} \geq\frac{\sum_{j=1}^{k}\sum_{m=0}^{i-1}\tau\alpha^{m}\xi_{j}^{0} (n)}{\alpha^{i}\xi_{j}^{0}(n)}\geq\frac{\sum_{j=1}^{k}\sum_{m=0}^{i-1}\tau \alpha^{m}\nu n}{\alpha^{i}\lambda n}=\frac{k\tau\nu(\alpha^{i}-1)}{\lambda( \alpha-1)\alpha^{i}}\geq\frac{k\tau\nu}{\lambda\alpha}=:\nu^{\prime}.\] (aiv) Inequalities \(n/\alpha+\gamma^{-}\leq n-1<n+1\leq\alpha n-\gamma^{+}\) for all \(n\geq\chi_{1}\) follow from hypotheses **(B)** and **(X)**. It suffices to take \[\chi_{1}\geq\max\{(1+\gamma^{+})/(\alpha-1),(1+\gamma^{-})/(1-1/\alpha)\}.\] Set \(\gamma^{\pm}=\gamma_{1}^{\pm}\). Inequalities \(\zeta_{1}^{1}(n)\leq\max\{\chi_{1},n/\alpha+\gamma^{-}\}\) and \(\alpha n-\gamma^{+}\leq\xi_{1}^{1}(n)\) follow directly by taking \(i=0\) in (28), because \(\zeta_{1}^{0}(n)=n=\xi_{1}^{0}(n)\) by definition. (av) If we take \(n\geq\max\{(\chi_{1}-\gamma^{-})\alpha,(N+\gamma^{+})/(\alpha-1),(N+\gamma^{- })/(1-1/\alpha)\}\), then \(\chi_{1}\leq n/\alpha+\gamma^{-}\leq n-N<n+N\leq\alpha n-\gamma^{+}\). (b) Let us check that sets \(\Xi_{j}^{i}(n)\) have no gaps in \(\mathbb{N}\). That is, we want to check that \([n^{-},n^{+}]\cap\mathbb{N}\subset\Xi_{j}^{i}(n)\) for all \(n^{\pm}\in\Xi_{j}^{i}(n)\) such that \(n^{-}\leq n^{+}\). First, we consider the forward case \(i\geq 0\). We prove it by induction in the ordering \[\Xi_{1}^{0},\ldots,\Xi_{k}^{0},\Xi_{1}^{1}=\Xi_{k+1}^{0},\ldots,\Xi_{k}^{1}, \Xi_{1}^{2}=\Xi_{k+1}^{1},\ldots,\Xi_{k}^{2},\ldots,\Xi_{1}^{i}=\Xi_{k+1}^{i- 1},\ldots,\Xi_{j}^{i},\Xi_{j+1}^{i},\ldots.\] The base case is trivial: \(\Xi_{1}^{0}(n)=\{n\}\). Let us now perform the inductive step. We assume that \(\Xi_{j}^{i}(n)\) has no holes in \(\mathbb{N}\) for some \(i\geq 0\) and \(1\leq j\leq k\). The next set is \[\Xi_{j+1}^{i}(n)=\left\{n^{\prime\prime}\in\mathbb{N}:n^{\prime\prime}\geq\chi _{j+1},\ \exists n^{\prime}\in\Xi_{j}^{i}(n)\text{ s. t.
}\alpha_{j}^{-}n^{\prime}+\beta_{j}^{-}<n^{\prime\prime}< \alpha_{j}^{+}n^{\prime}-\beta_{j}^{+}\right\}.\] If \(\Xi_{j+1}^{i}(n)\) has a hole in \(\mathbb{N}\), there is \(n^{\prime}\geq\chi_{j}\) such that \(\alpha_{j}^{+}n^{\prime}-\beta_{j}^{+}\leq\alpha_{j}^{-}(n^{\prime}+1)+\beta_ {j}^{-}\), which is impossible by hypotheses **(A)** and **(X)**. It suffices to take \[\chi_{j}>(\alpha_{j}^{-}+\beta_{j}^{-}+\beta_{j}^{+})/(\alpha_{j}^{+}-\alpha_{ j}^{-}).\] Property \(\left[\,\max\{\chi_{1},n-|i|\},n+|i|\right]\cap\mathbb{N}\subset\Xi_{1}^{i}(n)\) for all \(i\in\mathbb{Z}\) and \(n\geq\chi_{1}\) follows by induction from part (aiv) of this lemma and the fact that \(\Xi_{1}^{i}(n)\) has no gaps in \(\mathbb{N}\). This ends the proof for the forward case \(i\geq 0\). The backward case \(i<0\) is similar. ## Appendix B Proof of Proposition 39 Fix any \(p\in\mathbb{N}\). We look for the optimal value of \(c_{\star}(p)>0\) such that \[\#\Pi(p,q)\geq 2^{kp}G_{q}\big{(}P^{(p)}\big{)}\geq c_{\star}(p)q^{n}+\mathrm{O}(q^{n- 1})\quad\text{ as }q\to+\infty.\] Therefore, we want to count as many integer points in \(P^{(p)}\subset\mathbb{R}^{n+1}\) whose coordinates sum to \(q\in\mathbb{N}\) as possible. We shall put these points in a 1-to-1 correspondence with the integer points of a \(q\)-dilated bounded convex polytope of \(\mathbb{R}^{n}\) by means of a projection. We shall use a lower bound established by Wills [58]. Let us briefly describe it. If \(t>0\) and \(P\subset\mathbb{R}^{n}\), then \(tP=\{t\boldsymbol{x}:\boldsymbol{x}\in P\}\) and \(P/t=\{\boldsymbol{x}/t:\boldsymbol{x}\in P\}\) are the \(t\)_-dilation_ and \(t\)_-contraction_ of \(P\). The _inradius_ \(\varrho(K)\) of a proper compact convex set \(K\subset\mathbb{R}^{n}\) is the largest number \(\varrho>0\) such that \(K\) contains a ball of radius \(\varrho\). Note that \(0<\varrho(K)<\infty\) for any proper compact \(K\). **Lemma 51**.: _If \(K\) is a proper compact convex subset of \(\mathbb{R}^{n}\), then_ \[\#(tK\cap\mathbb{Z}^{n})\geq\mathrm{V}(K)\big{(}t-\sqrt{n}/(2\varrho(K))\big{)}^{ n},\qquad\forall t\geq\sqrt{n}/(2\varrho(K)).\] Proof.: The case \(t=1\) is proved in [58], assuming that \(\varrho(K)\geq\sqrt{n}/2\). The general case follows directly from this case since \(tK\) is a proper compact convex subset of \(\mathbb{R}^{n}\), \(\operatorname{V}(tK)=t^{n}\operatorname{V}(K)\), and \(\varrho(tK)=t\varrho(K)\geq\sqrt{n}/2\) if \(t\geq\sqrt{n}/(2\varrho(K))\). The convex polytope (16) is not closed, so the convex polytopes \(\tilde{P}_{q}^{(p)}\) defined in Proposition 39 are not closed either. However, they are the projection of some convex polytopes contained in the open simplex \(\Delta_{n}=\{\boldsymbol{x}\in\mathbb{R}^{n+1}:\boldsymbol{x}>0,\;x_{1}+\dots+ x_{n+1}=1\}\), which implies that they are bounded. Hence we need to extend Lemma 51 to proper bounded convex subsets of \(\mathbb{R}^{n}\). **Corollary 52**.: _If \(P\) is a proper bounded convex subset of \(\mathbb{R}^{n}\) and \(K=\bar{P}\), then_ \[\#\big{(}tP\cap\mathbb{Z}^{n}\big{)}\geq\operatorname{V}(K)\big{(}s-\sqrt{n} /(2\varrho(K))\big{)}^{n},\qquad\forall t>s\geq\sqrt{n}/(2\varrho(K)).\] Proof.: The closure \(K=\bar{P}\) is compact. Let \(\bar{B}\) be a closed ball of radius \(\varrho(K)>0\) contained in \(K\). Let \(B=\operatorname{Int}\bar{B}\). Given any point \(-\boldsymbol{x}\in B\), we have that \(s(\boldsymbol{x}+K)\subset t(\boldsymbol{x}+P)\) for all \(t>s>0\).
If \(t>\sqrt{n}/(2\varrho(K))\), then there is a point \(-\boldsymbol{x}_{t}\in B\) such that \(t\boldsymbol{x}_{t}\in\mathbb{Z}^{n}\). Then \[\#\big{(}tP\cap\mathbb{Z}^{n}\big{)} =\#\big{(}(t\boldsymbol{x}_{t}+tP)\cap\mathbb{Z}^{n}\big{)}=\# \big{(}t(\boldsymbol{x}_{t}+P)\cap\mathbb{Z}^{n}\big{)}\geq\#\big{(}s( \boldsymbol{x}_{t}+K)\cap\mathbb{Z}^{n}\big{)}\] \[\geq\operatorname{V}(\boldsymbol{x}_{t}+K)\big{(}s-\sqrt{n}/(2 \varrho(\boldsymbol{x}_{t}+K))\big{)}^{n}=\operatorname{V}(K)\big{(}s-\sqrt{n} /(2\varrho(K))\big{)}^{n},\] for all \(t>s\geq\sqrt{n}/(2\varrho(K))\). Proof of Proposition 39.: 1. Let \(H_{n+1}=\big{\{}\boldsymbol{x}\in\mathbb{R}^{n+1}:x_{1}+\dots+x_{n+1}=1\big{\}}\). The cardinality of a finite set is invariant under \(q\)-dilations, \(q\)-contractions, and 1-to-1 projections. Thus, \[\#\Pi(p,q) \geq 2^{kp}G_{q}\big{(}P^{(p)}\big{)}\] \[=2^{kp}\#\left\{\boldsymbol{x}=(x_{1},\dots,x_{n+1})\in P^{(p)} \cap\mathbb{Z}^{n+1}:x_{1}+\dots+x_{n+1}=q\right\}\] \[=2^{kp}\#\big{(}P^{(p)}\cap\mathbb{Z}^{n+1}\cap qH_{n+1}\big{)}\] \[=2^{kp}\#\big{(}(P^{(p)}/q)\cap(\mathbb{Z}^{n+1}/q)\cap H_{n+1} \big{)}\] \[=2^{kp}\#\big{(}qP_{q}^{(p)}\cap\mathbb{Z}^{n+1}\big{)}\] \[=2^{kp}\#\big{(}q\tilde{P}_{q}^{(p)}\cap\mathbb{Z}^{n}\big{)}\] \[\geq 2^{kp}\operatorname{V}\big{(}\tilde{K}_{q}^{(p)}\big{)} \Big{(}q-1-\sqrt{n}/\big{(}2\varrho\big{(}\tilde{K}_{q}^{(p)}\big{)}\big{)}\Big{)}^{n}\] \[\geq 2^{kp}V\big{(}\tilde{K}_{\infty}^{(p)}\big{)}q^{kp-1}+ \operatorname{O}(q^{kp-2})\quad\text{ as }q\to+\infty,\] where \(\tilde{K}_{q}^{(p)}\) is the closure of \(\tilde{P}_{q}^{(p)}\). We have used Corollary 52 with \(t=q\) and \(s=q-1\) in the second to last inequality. In the last inequality, we have used estimates \[V\big{(}\tilde{K}_{q}^{(p)}\big{)}=V\big{(}\tilde{K}_{\infty}^{(p)}\big{)}+ \operatorname{O}(1/q),\qquad\varrho\big{(}\tilde{K}_{q}^{(p)}\big{)}=\varrho \big{(}\tilde{K}_{\infty}^{(p)}\big{)}+\operatorname{O}(1/q).\] These estimates follow from the fact that each facet of the limit compact polytope \(\tilde{K}_{\infty}^{(p)}\) is at an \(\operatorname{O}(1/q)\)-distance from the corresponding facet of the polytope \(\tilde{K}_{q}^{(p)}\), which can be easily seen by comparing the half-space representation (20) of \(\tilde{K}_{\infty}^{(p)}\) with the half-space representation \[\tilde{K}_{q}^{(p)}=\left\{\begin{array}{rl}&\alpha_{j}^{-}x_{j}+\beta_{j}^ {-}/q\leq x_{j+1}\leq\alpha_{j}^{+}x_{j}-\beta_{j}^{+}/q,\quad\forall j=1, \dots,n-1\\ &\alpha_{n}^{-}x_{n}+\beta_{n}^{-}/q\leq 1-\varsigma(\tilde{\boldsymbol{x}})\leq \alpha_{n}^{+}x_{n}-\beta_{n}^{+}/q\\ \tilde{\boldsymbol{x}}\in\mathbb{R}^{n}:&\alpha_{n+1}^{-}(1-\varsigma(\tilde{ \boldsymbol{x}}))+\beta_{n+1}^{-}/q\leq x_{1}\leq\alpha_{n+1}^{+}(1-\varsigma( \tilde{\boldsymbol{x}}))-\beta_{n+1}^{+}/q\\ &x_{j}\geq\chi_{j}/q,\quad\forall j=1,\dots,n\\ &\varsigma(\tilde{\boldsymbol{x}})\leq 1-\chi_{n+1}/q\end{array}\right\}.\] (30) 2. All convex bodies are Jordan measurable and \[\operatorname{V}(J)=\lim_{q\to+\infty}q^{-n}\#\left(J\cap(\mathbb{Z}^{n}/q) \right)=\lim_{q\to+\infty}q^{-n}\#\left(qJ\cap\mathbb{Z}^{n}\right)\] for any Jordan measurable set \(J\subset\mathbb{R}^{n}\), see [31, section 7.2].
Therefore, \[\lim_{q\to+\infty}q^{-n}G_{q}\big{(}P^{(p)}\big{)} \leq\lim_{q\to+\infty}q^{-n}\#\big{(}q\tilde{K}_{q}^{(p)}\cap \mathbb{Z}^{n}\big{)}\] \[\leq\lim_{q\to+\infty}q^{-n}\#\big{(}q\tilde{K}_{\infty}^{(p)} \cap\mathbb{Z}^{n}\big{)}=\mathrm{V}\left(\tilde{K}_{\infty}^{(p)}\right),\] \[\lim_{q\to+\infty}q^{-n}G_{q}\big{(}P^{(p)}\big{)} \geq\lim_{q\to+\infty}\Big{(}V\big{(}\tilde{K}_{\infty}^{(p)} \big{)}+\mathrm{O}(1/q)\Big{)}=\mathrm{V}\left(\tilde{K}_{\infty}^{(p)} \right).\] We have used that \(\tilde{K}_{q}^{(p)}\subset\tilde{K}_{\infty}^{(p)}\) --compare half-space representations (30) and (20)-- in the first line and the lower bound obtained at the beginning of this proof in the second one. 3. It is a simple computation using that \(x_{n+1}=1-\varsigma(\tilde{\mathbf{x}})\) when \(\mathbf{x}=(\tilde{\mathbf{x}},x_{n+1})\in H_{n+1}\).
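As remarked after Proposition 39, the half-space representation (20) makes \(\mathrm{V}\big{(}\tilde{K}_{\infty}^{(p)}\big{)}\), and hence the optimal constant \(c_{\star}(p)=2^{kp}\mathrm{V}\big{(}\tilde{K}_{\infty}^{(p)}\big{)}\), amenable to standard polytope software. The following minimal Python sketch treats the smallest interesting case \(p=1\), \(k=3\) (so \(n=2\)); the common factors \(\alpha_{j}^{-}=0.8\) and \(\alpha_{j}^{+}=1.25\) are illustrative assumptions satisfying hypothesis (B), and SciPy's halfspace-intersection routine performs the geometric work.

```python
import numpy as np
from scipy.spatial import ConvexHull, HalfspaceIntersection

am, ap = 0.8, 1.25  # assumed alpha_j^- and alpha_j^+ (note sqrt(am * ap) = 1)
# Half-spaces of (20) for n = 2, written as rows [A | b] with A x + b <= 0:
H = np.array([
    [am, -1.0, 0.0],           # am*x1 <= x2
    [-ap, 1.0, 0.0],           # x2 <= ap*x1
    [1.0, am + 1.0, -1.0],     # am*x2 <= 1 - x1 - x2
    [-1.0, -ap - 1.0, 1.0],    # 1 - x1 - x2 <= ap*x2
    [-am - 1.0, -am, am],      # am*(1 - x1 - x2) <= x1
    [ap + 1.0, ap, -ap],       # x1 <= ap*(1 - x1 - x2)
    [-1.0, 0.0, 0.0],          # x1 >= 0
    [0.0, -1.0, 0.0],          # x2 >= 0
    [1.0, 1.0, -1.0],          # x1 + x2 <= 1
])
interior = np.array([1 / 3, 1 / 3])  # strictly feasible for these factors
hs = HalfspaceIntersection(H, interior)
vol = ConvexHull(hs.intersections).volume  # n-dimensional volume (here, area)
print(vol, 2 ** 3 * vol)  # V(K_inf^{(1)}) and the optimal c_star(1) = 2^k V
```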
2309.11005
It's Simplex! Disaggregating Measures to Improve Certified Robustness
Certified robustness circumvents the fragility of defences against adversarial attacks, by endowing model predictions with guarantees of class invariance for attacks up to a calculated size. While there is value in these certifications, the techniques through which we assess their performance do not present a proper accounting of their strengths and weaknesses, as their analysis has eschewed consideration of performance over individual samples in favour of aggregated measures. By considering the potential output space of certified models, this work presents two distinct approaches to improve the analysis of certification mechanisms, that allow for both dataset-independent and dataset-dependent measures of certification performance. Embracing such a perspective uncovers new certification approaches, which have the potential to more than double the achievable radius of certification, relative to current state-of-the-art. Empirical evaluation verifies that our new approach can certify $9\%$ more samples at noise scale $\sigma = 1$, with greater relative improvements observed as the difficulty of the predictive task increases.
Andrew C. Cullen, Paul Montague, Shijie Liu, Sarah M. Erfani, Benjamin I. P. Rubinstein
2023-09-20T02:16:19Z
http://arxiv.org/abs/2309.11005v1
# It's Simplex! Disaggregating Measures to Improve Certified Robustness ###### Abstract Certified robustness circumvents the fragility of defences against adversarial attacks, by endowing model predictions with guarantees of class invariance for attacks up to a calculated size. While there is value in these certifications, the techniques through which we assess their performance do not present a proper accounting of their strengths and weaknesses, as their analysis has eschewed consideration of performance over individual samples in favour of aggregated measures. By considering the potential output space of certified models, this work presents two distinct approaches to improve the analysis of certification mechanisms, that allow for both dataset-independent and dataset-dependent measures of certification performance. Embracing such a perspective uncovers new certification approaches, which have the potential to more than double the achievable radius of certification, relative to current state-of-the-art. Empirical evaluation verifies that our new approach can certify \(9\%\) more samples at noise scale \(\sigma=1\), with greater relative improvements observed as the difficulty of the predictive task increases. certified robustness, adversarial machine learning, adversarial attack, differential privacy ## 1 Introduction Despite their excellent benchmark performance, the black-box nature of deep neural networks makes them prone to unexpected behaviours and instability. Of particular interest are _adversarial examples_, in which model outputs are changed by way of human-imperceptible input perturbations [26, 10, 1, 6]. The spectre of such attacks poses risks to deployed models, in a fashion that can materially impact both the model deployer and their users. While numerous defences against the mechanisms that produce these examples exist, they typically only tackle a single vulnerability and can be circumvented by considering alternate attack mechanisms. This intrinsic limitation motivated the development of _certifiably robust_ models, which provide a pointwise guarantee of resistance to attacks up to a fixed, calculable size. These certifications are typically constructed by exploiting either _convex relaxation_ or _randomised smoothing_ [15], and measure how close the nearest-possible adversarial example could be, independent of the technique employed to identify said attack. Whenever a new certification mechanism is proposed, its utility is demonstrated by considering its average performance over a large number of samples. However, these aggregate measures do not align with the motivations of potential attackers, who may seek to attack individual samples. As such, aggregate measures of certification may disguise how the risk profile of individual samples may change, and more broadly remove the ability to interrogate the factors that drive differential performance of different certification schemes. In contrast to these aggregated measures of performance, this work takes the contrary, disaggregated perspective, and considers how the performance of different certification schemes depends upon where an individual sample exists within the simplex of permissible output scores. This is made possible by way of some simple-yet-powerful observations relating to the analytic nature of certification mechanisms, which allow for analytic, dataset-independent comparisons to be performed.
These techniques are then merged with a dataset-dependent, sample-wise analysis, which considers a dataset's distribution in the context of the output simplex. In taking this approach, we are able to both better understand the relative performance of certification schemes, and the nature of adversarial risk in certified systems more broadly. This is of critical importance for deployed systems, in which samples will likely be considered to hold differing levels of adversarial risk. Moreover, it may well be that the distributional properties of standard tested datasets may not reflect those observed within vulnerable deployed systems. If this is true, then being able to assess certification performance in a fashion that is aware but not beholden to the output distribution has significant ramifications for understanding the generalisation of certification techniques. This disaggregated perspective reveals the potential for two additions to the certification oeuvre, which we document within this work. The first of which involves a revised mechanism for constructing certifications by way of differential privacy, which for a subset of samples can produce a more than two-fold increase in the achievable certification for what we will categorise as multinomial style certifications, and a more than five-fold increase for softmax certifications. Our second contribution involves treating certifications not as the product of any one mechanism, but rather as the best calculated value across a set of different approaches, an approach that is only made possible by considering certification performance through a disaggregated lens. These improvements in both how we consider certifications, and how we construct certifications are supported by Sections 2 and 3, which introduce the current range of extant certification approaches, and how their analytic nature can be used to construct dataset-independent comparisons. From this, Section 4 then considers how these measures can be used to help reveal the potential for our two new certification approaches, both of which have the potential to help improve the sample-wise performance of certifications through consideration of the simplex of permissible output scores. Section 6 then demonstrates how our new approaches uniformly outperform prior techniques when considering expectations over models that output softmax probability distributions, while providing significant advantages in a subset of the certification domain for multinomial classifiers. ## 2 Preliminaries Certification mechanisms use a mixture of computational and analytical techniques to provide guarantees of a model's resistance to _all_ adversarial attacks. While this approach can be applied to training-time processes [17], within this work we are specifically interested in guarding against \(\ell_{2}\)-norm perturbations to images at evaluation-time. 
A learned classifier \(f_{\boldsymbol{\theta}}:\mathbb{R}^{d}\to\mathbb{R}^{K}\) acting upon an input sample \(\mathbf{x}\in\mathbb{R}^{d}\) is considered to be _robust_ to attacks \(\boldsymbol{\gamma}\in\mathbb{R}^{d}\) of bounded size \(\|\boldsymbol{\gamma}\|_{p}\leq L\)--henceforth referred to as \(B_{p}(L)\)--if \[\operatorname*{arg\,max}_{i\in\mathcal{K}}f_{\boldsymbol{\theta},i}(\mathbf{x})=j\quad\text{and}\quad\forall\boldsymbol{\gamma}\in B_{p}(L):\;f_{\boldsymbol{\theta},j}(\mathbf{x}+\boldsymbol{\gamma})>\max_{i:i\neq j}f_{\boldsymbol{\theta},i}(\mathbf{x}+\boldsymbol{\gamma})\enspace,\quad\text{where }\mathcal{K}=\{1,\ldots,K\}\enspace. \tag{1}\] The simplicity of this statement stands in stark contrast to the difficulty of proving it, as exploring the entire feasible space of \(\boldsymbol{\gamma}\) is computationally intractable, especially as \(d\) increases. As such, in order to establish the robustness of a model, certification mechanisms instead construct _provable lower bounds_ on the distance to the nearest adversarial example, making such certifications inherently conservative. In attempting to certify against \(\ell_{2}\)-norm bounded perturbations, two primary frameworks have been considered, which can be broadly categorised as _statistical certifications_, and those that _exploit knowledge of the model's architecture_. Of these, the latter involves constructing bounds on the output of a model by inspecting and tracing bifurcations under norm-bounded perturbations [19, 27, 33, 24]. Framed in general as convex relaxation, these techniques opt to use linear relaxation to construct bounding polytopes of a model's outputs over bounded perturbations [23]. These approaches have been extended by adopting augmented loss functions to promote tight output bounds [28]. However, these approaches require significant amounts of computational resources to construct their certifications, which typically leads to these approaches failing to scale beyond datasets of the size of CIFAR-\(10\). In contrast, statistical methods typically leverage a process known as _randomised smoothing_, in which repeated model draws under noise are employed to produce what is known as a _smoothed classifier_, the properties of which can be exploited to construct guarantees of model robustness. This is made possible by attempting to parameterise worst-case behaviours under attack. While the addition of this noise is not cost-free, it is an embarrassingly parallel process that requires significantly fewer resources to scale to large models and complex datasets than are required with convex relaxation. Moreover, randomised smoothing does not require any modifications to the core model architecture, nor to the training and testing loops, which significantly reduces the level of engineering required to support the deployment of certified guarantees. It is due to these factors that for the remainder of this work we will only consider such statistical certification techniques. ### _Randomised Smoothing_ To construct the smoothed classifier \(g\), the model is exposed to repeated samples under noise \(\mathcal{N}(\mathbf{x},\sigma^{2}\mathbf{I})\). However, rather than producing a stochastic model, by taking the expectation of the model outputs under noise, \(g\) becomes deterministic, a property which can be translated into a certification by attempting to parameterise the worst-case response of the model to perturbations. 
To date, a number of different parameterisation approaches have been proposed, producing certifications of varying tightness. However, these works often leverage different mechanisms to perform their smoothing, the nuance of which has not been fleshed out in prior works. To help formally distinguish between techniques, we will henceforth refer to techniques as drawing upon either the _softmax_ expectation (sometimes referred to as the soft expectation), which represents the expected model output under noise; or the _multinomial_ expectation (sometimes referred to as the hard expectation), which represents the expectation of the \(\operatorname*{arg\,max}\) of a model's outputs, which is equivalent to the expected predicted class under noise. While conceptually similar, these two approaches can mathematically be respectively represented by way of \(\tilde{\mathbf{Y}}\) and \(\tilde{\mathbf{Y}}^{\prime}\), where \[\tilde{\mathbf{X}}\sim\mathcal{N}(\mathbf{x},\sigma^{2}\mathbf{I})\qquad\tilde{\mathbf{Z}}=f_{\boldsymbol{\theta}}(\tilde{\mathbf{X}}) \tag{2}\] \[\tilde{\mathbf{Y}}=\Pi(\tilde{\mathbf{Z}})\qquad\tilde{Y}^{\prime}_{j}=\begin{cases}1,&\tilde{Z}_{j}>\max_{k\in\mathcal{K}\setminus j}\tilde{Z}_{k}\\ 0,&\text{otherwise}\enspace,\end{cases} \tag{3}\] where \(\Pi(\cdot)\) represents the softmax operator. Deterministic expectations over these classes are estimated with high probability by constructing a Monte-Carlo estimate over \(\mathcal{D}\), requiring \(n\) i.i.d. draws of either \((\tilde{\mathbf{x}}_{i},\tilde{\mathbf{z}}_{i},\tilde{\mathbf{y}}_{i})\) or \((\tilde{\mathbf{x}}_{i},\tilde{\mathbf{z}}_{i},\tilde{\mathbf{y}}^{\prime}_{i})\). From this, the output of the smoothed classifier \(g\) corresponds to the expectations over \(\tilde{\mathbf{Y}}\) and \(\tilde{\mathbf{Y}}^{\prime}\), where \[\mathbb{E}_{\mathcal{D}}[\tilde{\mathbf{Y}}]=\frac{1}{n}\sum_{i=1}^{n}\tilde{\mathbf{y}}_{i}\qquad\mathbb{E}_{\mathcal{D}}[\tilde{\mathbf{Y}}^{\prime}]=\frac{1}{n}\sum_{i=1}^{n}\tilde{\mathbf{y}}^{\prime}_{i}\enspace. \tag{4}\] The stability of these expectations at inference time is supported by augmenting each training time sample with noise drawn from \(\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I})\). Given that certification mechanisms seek to guarantee the behaviour of models under potential attack, the introduction of Monte-Carlo estimates of the expectation may appear to be inherently contradictory and unsuitable. However, if we are able to definitively calculate the worst-case expectations for a given Monte-Carlo sampling's output, then any subsequent certification can still be confidently considered as a worst-case, conservative bound upon the existence of any potential adversarial examples. In the case of softmax expectations, the calculated and worst-case expectations are related by the well-known Hoeffding inequality [13], which provides a high-probability tail bound for a confidence level \(\alpha>0\), for which \[P\left(\left|\mathbb{E}_{S}[\mathbf{Y}]-\mathbb{E}[\tilde{\mathbf{Y}}]\right|\leq\sqrt{\frac{\log_{e}(2/\alpha)}{2n}}\right)\geq 1-\alpha\enspace,\] for the Monte-Carlo estimate \(\mathbb{E}[\tilde{\mathbf{Y}}]\) and the true, worst-case softmax expectation \(\mathbb{E}_{S}[\mathbf{Y}]\). For multinomial output distributions, which we will henceforth label as \(\mathbb{E}_{S}[\mathbf{Y}^{\prime}]\), we propose treating the two highest class outputs as distinct and unique outputs of a binomial distribution, and measuring uncertainties as such. 
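To make this estimation pipeline concrete, the following is a minimal Python sketch of the Monte-Carlo expectations of Equations 2-4 and of the associated high-probability bounds (including the Beta-distribution bounds formalised in the next paragraph). It is our own illustration rather than the authors' released code, and it assumes `f` is a PyTorch classifier returning logits.
```
import math
import torch
from scipy.stats import beta

def smoothed_expectations(f, x, sigma, n, batch=1000):
    # Monte-Carlo estimates of the softmax expectation E[Y~] and the
    # multinomial expectation E[Y~'] under draws from N(x, sigma^2 I).
    soft, counts, seen = 0.0, 0.0, 0
    with torch.no_grad():
        while seen < n:
            b = min(batch, n - seen)
            z = f(x.unsqueeze(0) + sigma * torch.randn((b,) + tuple(x.shape)))
            soft = soft + torch.softmax(z, dim=1).sum(0)
            one_hot = torch.zeros_like(z).scatter_(1, z.argmax(1, keepdim=True), 1.0)
            counts = counts + one_hot.sum(0)
            seen += b
    return soft / n, counts / n

def hoeffding_lower(e, n, alpha):
    # Hoeffding tail bound: high-probability lower bound on a softmax expectation.
    return max(0.0, e - math.sqrt(math.log(2.0 / alpha) / (2.0 * n)))

def beta_bounds(k, n, alpha):
    # Clopper-Pearson style bounds on a count k of n via the Beta distribution;
    # the Bonferroni correction across the two class estimates is assumed to be
    # applied by the caller (e.g. by halving alpha).
    lo = beta.ppf(alpha / 2.0, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1.0 - alpha / 2.0, k + 1, n - k) if k < n else 1.0
    return lo, hi
```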
Such bounds can be estimated by way of the Beta distribution, which reliably produces bounds that achieve the nominal coverage [3]. These uncertainties are calculated subject to the Bonferroni correction to \(\alpha\) [8], to account for the two measures being drawn from the same sampling process. Taking this approach is significantly computationally cheaper than other, more comprehensive mechanisms for constructing bounds upon the expectations [25, 11], while still producing guaranteed coverage. For future clarity, we will henceforth refer to the sorted softmax and multinomial class expectations as \(E_{S}\) and \(E_{M}\) respectively, where the first elements of each--\(E_{S,0}\) and \(E_{M,0}\)--employ the calculated lower bounds on the estimated expectations, while the second elements--\(E_{S,1}\) and \(E_{M,1}\)--correspond to the calculated upper bounds. ### _Certification Mechanisms_ While previous works have considered the softmax and multinomial expectations to be broadly interchangeable, it is important to emphasise that the conceptual differences between the value of these outputs mean that they are addressing fundamentally similar-but-distinct problem spaces. As such, we will now summarise key certification mechanisms for \(\ell_{2}\) threat models in a fashion that reflects the applicable expectation framework for the technique at hand. The first randomised smoothing based certifications drew upon differential privacy [9] in order to bound the response of models under noise-based perturbation, leading to what is known as the Lecuyer et al. [15] approach. Under this framework, certified robustness over \(\boldsymbol{\gamma}\in B_{p}(L)\) can be calculated by way of \[L_{\text{Lecuyer}}=\max_{\epsilon\in(0,1]}\frac{\sigma\epsilon}{\triangle\sqrt{2\log\left(1.25(1+e^{\epsilon})/(E_{S,0}-e^{2\epsilon}E_{S,1})\right)}}\enspace. \tag{5}\] Here \(\triangle\) is a variant of the local Lipschitz continuity with respect to input perturbations of the base model \(f(\cdot)\), which for \(\ell_{2}\)-norm-bounded perturbations corresponds to \[\triangle=\max_{\mathbf{x},\mathbf{x}+\boldsymbol{\gamma}}\frac{\|f(\mathbf{x})-f(\mathbf{x}+\boldsymbol{\gamma})\|_{2}}{\|\mathbf{x}-(\mathbf{x}+\boldsymbol{\gamma})\|_{2}}\enspace. \tag{6}\] Figure 1: A representation of the process of certifying a single image. Within this diagram, the blue squares represent repeated independent calculations, over which the ensemble expectations are calculated. In this paper, the ensemble itself is taken over the maximum certifications of the Cohen et al. [5], Li et al. [16], and our approaches. While Equation 5 does rely upon finding a maximum, failing to reach the global maximum does not void the certification, as the established bound is provably true for all \(\epsilon\). In practice, while Lecuyer et al. explicitly framed this approach in terms of the softmax output distribution, it can be applied to systems which only return a multinomial output distribution. While the above certification mechanism was the first to provide guarantees of robustness for data sets as large as ImageNet, the conservative nature of the established bounds has left scope for new techniques to try and extend the size of achievable certifications. This was demonstrated by Li et al. 
[16], who exploited Renyi Divergence to provide an improved guarantee of size \[L_{\text{Li}}=\sup_{\omega>1}\sigma\sqrt{\frac{2}{\omega}\left(-\log\left[1-E_{M,0}-E_{M,1}+2\left(\frac{1}{2}\left(E_{M,0}^{1-\omega}+E_{M,1}^{1-\omega}\right)\right)^{\frac{1}{1-\omega}}\right]\right)}\enspace. \tag{8}\] Unlike the approach of Lecuyer et al., this approach does not apply to outputs employing the softmax expectations. The most popular mechanism in the current literature was developed by Cohen et al. [5, 22], and constructs certifications in terms of the multinomial output by way of the Gaussian quantile function \(\Phi^{-1}\), yielding certifications: \[L_{\text{Cohen}}=\sigma\left(\Phi^{-1}\left(E_{M,0}\right)\right)\enspace. \tag{9}\] While this work was presented alongside a second certification in terms of \(E_{M,0}\) and \(E_{M,1}\) that presents a tighter bound, their experiments exclusively considered the form above, which we will follow. It must also be noted that previous implementations of Equation 9 have used a process based on the binomial distribution, which introduces a low-probability chance of selecting the wrong output class and producing a failed certification--a detail that is further discussed in Section 5. To alleviate these concerns we will consider _Cohen_ et al. to refer to Equation 9 subject to the same multinomial distribution as Li et al. We note that recent works have provided further extensions upon the radii of certification achievable through these mechanisms. Some attempt to improve the mechanisms through which we calculate certifications [7]; while others attempt to induce shifts in the output distribution through training time loss-function modifications that incentivise larger certification radii [22, 32], with MACER being particularly popular [31]. However, deploying all of these approaches introduces significant increases in the requisite computational time, with MACER inducing a \(40\)-fold increase in training time on our system. Moreover, it is crucial to emphasise that all of these systems for modifying training time certified robustness still derive their certifications using the approach of Cohen et al. As such, within this work we will focus our improvements upon these core certification regimes, rather than their extensions, as improvements to these core routines will still yield improvements when the modified mechanisms are deployed. ## 3 Comparing Certification Performance Each of the aforementioned certification mechanisms has demonstrated its utility by showing improvements over the previously established state of the art. In each of these works, the core metric has been the certified accuracy, which corresponds to the proportion of samples that are correctly predicted with a radius greater than \(r\), equivalent to \[c_{A}(r)=\frac{1}{N}\sum_{i=1}^{N}\mathds{1}[\arg\max g(\mathbf{x}_{i})=l(\mathbf{x}_{i})]\mathds{1}[L(g(\mathbf{x}_{i}))>r]\enspace, \tag{10}\] where \(l(\mathbf{x})\) is the correct class label for the sample \(\mathbf{x}\), and \(L(\cdot)\) is the certification stemming from the smoothed classifier \(g\). While such a measure allows for comparisons between techniques to be easily parsed, it also presents a picture that suggests that improvements in certification performance between approaches are uniform across all samples. In doing so, the drivers of certification performance in different techniques are distorted, and more difficult to interrogate. 
This is intrinsically problematic, as it both limits our ability to understand how a technique may generalise to new, semantically different datasets; and the capacity to assess certification performance for datasets where samples have differing adversarial risks. In order to resolve these limitations in how we assess certification schemes, we introduce a simple but powerful observation: _certification mechanisms often decompose such that the only determinants of certification performance are the model output expectations_. This would appear to be immediately contradicted by Equations 5, 8, and 9; however, each of these exhibits linear, multiplicative proportionality to \(\sigma\), and as such, for fixed \(\sigma\) the only determinant of performance is the expectation set. While this is an obvious statement, it is not one that prior works have exploited, and it allows for a dataset-, learner-, and model-agnostic mechanism for comparing certification approaches, while also providing a framework for disaggregated analysis of the performance of any specific combination of dataset, learner, and model. We introduce Definition 3.1 to formalise this statement, and to emphasise that certification mechanisms strictly depend upon the projection of the output \(C_{K}\) space onto the \(C_{3}\)-simplex. Thus any comparison between technique performance can also be considered in terms of this permissible space, yielding what we term as a _dataset independent comparison_. In doing so, the performance of the certifications can be compared in a fashion that is agnostic to the choice of model, dataset, or training infrastructure employed. **Definition 3.1**.: Consider the set of smoothed classifiers \(\mathcal{G}\), from which any smoothed classifier \(g\in\mathcal{G}\) constructs the mapping \(g:\mathbb{R}^{d}\to C_{K}\) from input \(\mathbf{x}\) to a point in the \(K\)-simplex, for \(K\) output classes. A certification mechanism for the model family \(\mathcal{G}\) is a mapping \(h:\mathcal{G}\times\mathbb{R}^{d}\rightarrow\mathbb{R}^{\geq 0}\) from a model and input instance to a certified radius. To explore this concept, Figure 2 considers the relative certification performance between the Cohen et al. and Li et al. approaches, by considering the permissible space of expectations across the \(C_{3}\) simplex as inputs, rather than the output of any specific model. While prior works have considered the Cohen et al. approach to uniformly produce the largest achievable radii of certification, in practice the commonly employed form of Cohen only out-certifies in the neighbourhood of \(E_{0}+E_{1}\approx 1\). Outside of this region--which is likely to be seen in datasets which exhibit significant semantic overlap between classes--Li et al. begins to produce significantly larger certifications. ### Distributionally Aware Comparisons The above framing demonstrates that the performance of certification schemes can be considered strictly in context of the permissible output space in \(C_{3}\). Doing so allows for a direct comparison of the performance of certification schemes without relying upon the model, dataset, or any other parts of the learning infrastructure. However, Definition 3.1 also suggests a second feasible framework for assessing certification performance. Consider the output distribution of \(g(\mathbf{x})\) where the samples \(\mathbf{x}\) are drawn from some data distribution \(\mathcal{P}\). 
Assessing this output distribution in the context of Figure 2, the dataset specific drivers of certification performance can be considered. It is this form of comparison that we will refer to as a _distributionally-aware_ analysis of certification performance. Such a perspective is valuable as it inherently allows us to better understand the factors driving certification performance for a particular trained model. Doing so also allows for inferences to be made about the potential certification performance of new and untested datasets, based upon their semantic complexity. Such an analysis may also allow deployed systems to develop an understanding of the risk of attack for particular samples. ## 4 An Improved Differential Privacy Based Mechanism Extending the dataset-independent analysis of Figure 2 to incorporate Lecuyer et al. reveals that for multinomial outputs it is uniformly outperformed across the entirety of permissible outputs. However, the very notion that differential performance is possible upon a sample-wise basis suggests that improving upon the bounds delivered by Lecuyer et al. may achieve improvements over a subset of the output space. This is especially true as recent works have shown the differential privacy mechanism that underpins Lecuyer et al. underestimates the level of privacy that can be achieved for a given level of added noise [1, 34]. Within the remainder of this section, we will demonstrate how this improved mechanism can be incorporated into a new certification regime that both uniformly outperforms Lecuyer et al. across the \(C_{3}\) simplex; and yields improvements over Li et al. and Cohen et al. for the majority of the permissible output space. In aid of this goal, we begin by introducing some core concepts of differential privacy: differential privacy as a stability condition on output distributions and how it translates to the stability of expected outputs (Lemma 4.1); the post-processing inequality (Lemma 4.2) and how it captures the invariance of differential privacy to data-independent compositions; and the improved analysis of the Gaussian mechanism. Of these, Lemma 4.1 and Lemma 4.2 follow Lecuyer et al. [15]; and the improved analysis of the Gaussian mechanism follows [1]. **Lemma 4.1** (Expected Output Stability Bound).: _If a randomised function \(A:\mathbb{R}^{n}\rightarrow[0,1]\) preserves \((\epsilon,\delta)\)-DP, then it must be that \(\mathbb{E}[A(\mathbf{x})]\leq e^{\epsilon}\mathbb{E}[A(\mathbf{x}+\boldsymbol{\gamma})]+\delta\), where the expectations are taken over the randomness in \(A\)._ The familiar post-processing inequality of differential privacy [9] is critical for certification in that it permits privacy-preserving randomisation to be applied at early network layers. **Lemma 4.2** (**Post-Processing Inequality**).: _Consider any randomised algorithm \(A\) acting on databases, and any (possibly randomised) algorithm \(B\) with domain \(range(A)\). If \(A\) is \((\epsilon,\delta)\)-DP then so too is \(B\circ A\). Moreover, at the level of database pairs \(R_{\epsilon,\delta}(A)\subseteq R_{\epsilon,\delta}(B\circ A)\)._ The \((\epsilon,\delta)\)-DP of a random mechanism \(A(\mathbf{x})\) is captured by the privacy loss random variable \[l_{A,\mathbf{x},\mathbf{x}^{\prime}}=\log\left(\frac{P(A(\mathbf{x})=\mathcal{O})}{P(A(\mathbf{x}^{\prime})=\mathcal{O})}\right)\enspace, \tag{11}\] where \(\mathbf{x}^{\prime}=\mathbf{x}+\boldsymbol{\gamma}\). 
By then introducing \(L_{A,\mathbf{x},\mathbf{x}^{\prime}}=l_{A,\mathbf{x},\mathbf{x}^{\prime}}(Y)\) [1], an equivalent condition for differential privacy is \[P[L_{A,\mathbf{x},\mathbf{x}^{\prime}}\geq\epsilon]-e^{\epsilon}P[L_{A,\mathbf{x},\mathbf{x}^{\prime}}\leq-\epsilon]\leq\delta\enspace. \tag{12}\] To elaborate upon these probabilities, consider a mechanism of the form \(y\sim h(\mathbf{x})+\mathcal{N}(0,\sigma^{2}\mathbf{I})\), where \(h(\mathbf{x})\) is any arbitrary function. Taking such a framing allows the privacy loss random variable to be analytically expressed as \[l_{A,\mathbf{x},\mathbf{x}^{\prime}}=-\frac{1}{2\sigma^{2}}\left(\left\|\mathbf{y}-h(\mathbf{x})\right\|_{2}^{2}-\left\|\mathbf{y}-h(\mathbf{x}^{\prime})\right\|_{2}^{2}\right)=\frac{1}{2\sigma^{2}}\left\|h(\mathbf{x})-h(\mathbf{x}^{\prime})\right\|_{2}^{2}+\frac{1}{\sigma^{2}}\left\langle\mathbf{y}-h(\mathbf{x}),\,h(\mathbf{x})-h(\mathbf{x}^{\prime})\right\rangle\enspace. \tag{13}\] A consequence of the fact that the inner product is distributed as \(\mathcal{N}\left(0,\sigma^{2}\left\|h(\mathbf{x})-h(\mathbf{x}^{\prime})\right\|_{2}^{2}\right)\) is that \[L_{A,\mathbf{x},\mathbf{x}^{\prime}}\sim\mathcal{N}\left(\frac{\left\|h(\mathbf{x})-h(\mathbf{x}^{\prime})\right\|_{2}^{2}}{2\sigma^{2}},\frac{\left\|h(\mathbf{x})-h(\mathbf{x}^{\prime})\right\|_{2}^{2}}{\sigma^{2}}\right)\enspace. \tag{14}\] Based upon this framing, the components of Equation (12) can be constructed as \[\begin{split}&P[L_{A,\mathbf{x},\mathbf{x}^{\prime}}\geq\epsilon]=\Phi\left(\frac{\left\|h(\mathbf{x})-h(\mathbf{x}^{\prime})\right\|_{2}}{2\sigma}-\frac{\epsilon\sigma}{\left\|h(\mathbf{x})-h(\mathbf{x}^{\prime})\right\|_{2}}\right)\enspace,\\&P[L_{A,\mathbf{x},\mathbf{x}^{\prime}}\leq-\epsilon]=\Phi\left(-\frac{\left\|h(\mathbf{x})-h(\mathbf{x}^{\prime})\right\|_{2}}{2\sigma}-\frac{\epsilon\sigma}{\left\|h(\mathbf{x})-h(\mathbf{x}^{\prime})\right\|_{2}}\right)\enspace.\end{split} \tag{15}\] Extending this concept to certified robustness requires the application of the post-processing inequality of Lemma 4.2. If we consider a function \(h(\cdot)\) such that \(h(\mathbf{x})=\mathbf{x}\), then Equations 12 and 15 can be combined to take the form \[\Phi\left(\frac{L}{2\sigma}-\frac{\epsilon\sigma}{L}\right)-e^{\epsilon}\Phi\left(-\frac{L}{2\sigma}-\frac{\epsilon\sigma}{L}\right)\leq\delta\enspace, \tag{16}\] where \(L=\|\mathbf{x}-\mathbf{x}^{\prime}\|_{2}\) is the certified radius. If this holds for any mechanism of the form \(y\sim h(\mathbf{x})+\mathcal{N}(0,\sigma^{2}\mathbf{I})\), then by virtue of the post-processing inequality the equivalent privacy relationship also holds under composition with any function \(f(\cdot)\), which allows us to define our randomised mechanism as \[A(\mathbf{x})=f(h(\mathbf{x})+\mathcal{N}(0,\sigma^{2}\mathbf{I}))=f(\mathbf{x}+\mathcal{N}(0,\sigma^{2}\mathbf{I}))\enspace. \tag{17}\] This definition of \(A(\mathbf{x})\) then becomes equivalent to the \(g(\mathbf{x})\) of Section 2.1, if the expectations were to be taken over a single draw of noise. In a similar fashion to Equation 5, this differential privacy based certification scheme can be framed as a maximisation problem, especially as Equation 16 does not admit an analytic inverse. 
However, rather than strictly considering \(\epsilon\in(0,1]\) as the optimisation criteria, the above criteria can be recast as a constrained optimisation problem over \(\epsilon\geq 0\) and \(\delta\in[0,1]\) in order to construct a certification by way of \[\begin{split}&L=\max_{\epsilon\geq 0,\,\delta\in[0,1]}L^{\prime}\\&\text{s.t. }\Phi\left(\frac{\triangle L^{\prime}}{2\sigma}-\frac{\epsilon\sigma}{\triangle L^{\prime}}\right)-e^{\epsilon}\Phi\left(-\frac{\triangle L^{\prime}}{2\sigma}-\frac{\epsilon\sigma}{\triangle L^{\prime}}\right)\leq\delta\enspace,\\&\phantom{\text{s.t. }}E[A_{i}(\mathbf{x})]-\max_{j\in\mathcal{K}\setminus\{i\}}E[A_{j}(\mathbf{x})]e^{2\epsilon}-(1+e^{\epsilon})\delta\geq 0\enspace,\end{split} \tag{18}\] where \(i\) corresponds to the predicted class, and \(E[A_{i}(\mathbf{x})]\) and \(E[A_{j}(\mathbf{x})]\) are respectively lower and upper bounded as per Section 2.1. While the form of the above equation is complex and nonlinear, the constraint functions exhibit near-monotonic behaviour in \((\epsilon,\delta,L)\), and as such can be quickly solved with any conventional constrained numerical optimisation tools. As this approach is only conditional upon \(A(\mathbf{x})\in[0,1]\), it can be applied to both softmax and multinomial distributions, or indeed any \(A(\mathbf{x})\in[0,c]\), as the latter case is simply a uniform scaling of \([0,1]\). The provably true nature of these guarantees is a direct consequence of Lecuyer et al. [15], as our approach tightens their bounds. Beyond its ability to incorporate the improved privacy mechanism, framing the certification process as an optimisation problem presents an additional advantage over the differential privacy certifications of Lecuyer et al.: we remove the need to arbitrarily set the \((\epsilon,\delta)\)-privacy level prior to certification. This is meaningful as most of the contexts for which certification is useful do not care for a specific, fixed privacy level across all samples; rather, they value producing the largest achievable certification, in order to more accurately gauge adversarial risks. In the context of the dataset-agnostic comparisons across the \(C_{3}\) simplex, Figure 3 reveals that for a softmax output distribution, our approach uniformly outperforms Lecuyer et al. across the entire simplex, exhibiting a maximal relative \(5\)-fold improvement in the calculated certification. For a multinomial output distribution, our new technique yields improved certifications against both Cohen et al. and Li et al. over the majority of the output space. When incorporating our technique into the comparison across the \(C_{3}\) simplex, Cohen et al. produces strong certifications only in the neighbourhood of \(E_{1}=1-E_{0}\); while Li et al. [16] produces the strongest certification near \((E_{0},E_{1})\rightarrow(1,0)\). However, as \(E_{0}\) and \(E_{1}\) both decrease--as seen in semantically complex samples--our comparisons demonstrate that it is possible to increase the certification more than two-fold. Figure 2: Comparing the relative performance of Equations 8 and 9 by considering \((L_{\text{Cohen}}-L_{\text{Li}})\) at \(\sigma=1.0\), by projecting all possible class outputs onto the surface of the \(3\)-simplex. Note that the linear multiplicative proportionality to \(\sigma\) ensures that these relative scores can be rescaled by multiplying by some new \(\sigma^{\prime}\). 
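To make the optimisation concrete, the following is a minimal sketch of one way Equation 18 can be solved, using a coarse grid over \(\epsilon\) and bisection in \(L^{\prime}\) rather than a full constrained solver; it is our own illustration, with `e0` and `e1` standing in for the bounded expectations \(E[A_{i}(\mathbf{x})]\) and \(\max_{j}E[A_{j}(\mathbf{x})]\), and `sens` for \(\triangle\).
```
import numpy as np
from scipy.stats import norm

def radius_ours(e0, e1, sigma, sens=1.0):
    # Sketch of Equation 18: the class-separation constraint fixes the largest
    # admissible delta for each epsilon; bisection (valid here because the
    # constraint is near-monotone in L') then finds the largest feasible L'.
    best = 0.0
    for eps in np.linspace(1e-3, 5.0, 200):
        delta = (e0 - e1 * np.exp(2.0 * eps)) / (1.0 + np.exp(eps))
        if delta <= 0.0:
            continue
        delta = min(delta, 1.0)
        lo, hi = 0.0, 50.0 * sigma
        for _ in range(60):
            t = sens * 0.5 * (lo + hi)
            lhs = norm.cdf(t / (2 * sigma) - eps * sigma / t) \
                - np.exp(eps) * norm.cdf(-t / (2 * sigma) - eps * sigma / t)
            lo, hi = ((lo + hi) / 2, hi) if lhs <= delta else (lo, (lo + hi) / 2)
        best = max(best, lo)
    return best
```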
### Cost-Free Improvements To Certifications Our next key observation builds upon Section 3: _if each base certification mechanism is superior to all other mechanisms under consideration even on only one point in the output simplex, then taking the maximum across an ensemble of mechanisms' radii provably dominates the performance of any single mechanism_. **Corollary 4.3** (**Ensembling Certifications**).: _Consider the set of certification mechanisms \(L_{1},\ldots,L_{n}\), each of which incorporates a mapping \(L_{i}:C_{3}\rightarrow\mathbb{R}^{\geq 0}\). Each of these yields a certification \(k_{i}(g,\cdot)=(L_{i}\circ g)(\cdot)\), where \(k_{i}(g,\cdot):\mathbb{R}^{d}\rightarrow\mathbb{R}^{\geq 0}\). If \(L^{\prime}(\mathbf{s})=\max_{i\in\{1,\ldots,n\}}L_{i}(\mathbf{s})\) and \(k(g,\cdot)=(L^{\prime}\circ g)(\cdot)\), then \(k(g,\mathbf{x})\geq k_{i}(g,\mathbf{x})\) for all \(g\in\mathcal{G},\mathbf{x}\in\mathbb{R}^{d}\). Moreover, if each region of superiority \(H_{i}=\{\mathbf{s}\in C_{3}:L_{i}(\mathbf{s})>L_{j}(\mathbf{s}),\forall j\neq i\}\) is non-empty, then \(k\) strictly dominates each base mechanism \(k_{1},\ldots,k_{n}\)._ Figure 1 diagrammatically represents this ensembling mechanism, while the differential performance of Figure 3 demonstrates both the nature of the elements of \(H_{i}\) and the functional differences that empower the ensembling process. It must be stressed that if certifications in terms of a softmax output distribution are sought, then only our technique and that of Lecuyer et al. can be employed. However, if we certify in terms of the multinomial distribution, all of the randomised smoothing based techniques can be applied, although in practice the Lecuyer approach is uniformly outperformed by all other multinomial approaches. It is important to emphasise that this ensemble certification process is almost cost-free, as the dominant computation in certification is evaluating \(f(\mathbf{x})\) by Monte-Carlo estimation, and as such the incremental cost of ensembling by Corollary 4.3 is minimal, as all the techniques build upon the same expectations. The evaluation of the subsequent \(L_{i}(\cdot)\) involves a handful of arithmetic calculations or simple numerical library calls (for Normal quantiles), which is trivial by comparison to the cost of completely restarting the certification process from scratch. Reusing the expectations across the ensembling process also obviates the need to adjust the confidence intervals, even though multiple experiments are being performed. This stems from the fact that we are calculating expectation ranges with high probability, and the worst-case variant of these is being applied to the certification mechanisms. That these certification mechanisms are deterministic interpretations of said expectations eliminates any considerations regarding potential multiple hypothesis testing. The concept of ensembling can be further extended by incorporating the convex relaxation style certifications as described in Section 2. However, while the randomised smoothing mechanisms share the majority of their computational burden between the techniques, no such opportunities to optimise the aggregate performance exist for convex relaxation methods. Due to this consideration, and the broader limitations of the convex relaxation based techniques, within this work we restrict our focus to those approaches that leverage randomised smoothing. 
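Concretely, the ensembling amounts to a few closed-form evaluations on top of the shared expectations. The sketch below, our own illustration, implements Equations 8 and 9 and takes the pointwise maximum together with the `radius_ours` function from the previous sketch:
```
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def radius_cohen(e0, sigma):
    # Equation 9: certifies only when the lower-bounded top expectation exceeds 1/2.
    return sigma * norm.ppf(e0) if e0 > 0.5 else 0.0

def radius_li(e0, e1, sigma):
    # Equation 8: supremum of the Renyi-divergence bound over omega > 1.
    if e0 <= e1:
        return 0.0
    e0, e1 = np.float64(e0), np.float64(e1)   # avoids overflow exceptions in the powers
    def neg(omega):
        m = (0.5 * (e0 ** (1 - omega) + e1 ** (1 - omega))) ** (1.0 / (1 - omega))
        inner = 1.0 - e0 - e1 + 2.0 * m
        return -sigma * np.sqrt(-(2.0 / omega) * np.log(inner)) if 0 < inner < 1 else 0.0
    res = minimize_scalar(neg, bounds=(1.0 + 1e-6, 200.0), method="bounded")
    return max(0.0, -res.fun)

def radius_ensemble(e0, e1, sigma):
    # Corollary 4.3: the pointwise maximum dominates each base mechanism while
    # reusing the same Monte-Carlo expectations across all of them.
    return max(radius_cohen(e0, sigma), radius_li(e0, e1, sigma),
               radius_ours(e0, e1, sigma))
```
Sweeping \((e_{0},e_{1})\) over the permissible region of the \(C_{3}\) simplex with these functions reproduces dataset-independent comparisons in the style of Figures 2 and 3.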
While contemporaneous work [30] has considered the robustness of ensembling neural network _models_, our work considers ensembles of _certification mechanisms_ acting on a single common model. An ensemble of models requires multiple independent (and often costly) training loops, before requiring independent evaluations for each constituent model. In contrast, an ensemble of certification mechanisms allows for the majority of the computational burden of certification to be recycled between techniques, with the only additional burden being the computational cost associated with solving the analytic certification equations. Figure 3: Assessing the relative performance of our technique by considering the ratio between \(L_{\text{Ours,Softmax}}\) and \(L_{\text{Lecuyer}}\). ## 5 Implementation
```
1: function Multinomial-Certify(\(n,\mathcal{K},\sigma,\alpha,\mathbf{x},f\)) \(\triangleright\) \(n\) samples for class selection and certification; \(\mathcal{K}\) set of output classes; \(\sigma\) s.d. of noise; \(\alpha\) percentage confidence; \(\mathbf{x},f\) sample and function
2:   \(E=\text{Count}(n,\mathcal{K},\sigma,\mathbf{x},f)/n\)
3:   \(j=\arg\max E\)
4:   \(E=\text{Sort}(E)\) \(\triangleright\) Sort in descending order
5:   \(E_{0}=E[0]-\text{Confint}(E[0]\times n,n,\alpha)\) \(\triangleright\) Largest element of \(E\); Confint using the Beta method
6:   \(E_{1}=E[1]+\text{Confint}(E[1]\times n,n,\alpha)\)
7:   if \(E_{0}>0.5\) then \(L_{\text{C}}=\) Equation 9 else \(L_{\text{C}}=0\)
8:   if \(E_{0}>E_{1}\) then
9:     \(L_{\text{Li}}=\) Equation 8
10:    \(L_{\text{O-M}}=\) Equation 18
11:  else
12:    \(L_{\text{Li}}=0,\qquad L_{\text{O-M}}=0\)
13:  end if
14:  \(L_{\text{Ensemble}}=\max\left(L_{\text{C}},L_{\text{Li}},L_{\text{O-M}}\right)\)
15:  return \(j,(L_{\text{Ensemble}},L_{\text{C}},L_{\text{Li}},L_{\text{O-M}})\)
16: end function
17:
18: function Count(\(n,\mathcal{K},\sigma,\mathbf{x},f\))
19:   \(c_{j}=0\quad\forall j\in\mathcal{K}\)
20:   for \(i=1,\ldots,n\) do
21:     \(c_{j}=c_{j}+1\) if \(\arg\max f(\mathbf{x}+\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I}))=j\)
22:   end for
23:   return \((c_{1},\ldots,c_{K})\)
24: end function
```
**Algorithm 1** Implementation of all tested algorithms under the Multinomial framework. Here O-M denotes Our Multinomial approach. To demonstrate how the aforementioned processes can be implemented, Algorithms 1 and 2 cover certifications across all of the tested techniques for the multinomial and softmax approaches respectively. For a given test sample \(x\), the functions _Multinomial-Certify_ and _Softmax-Certify_ return the predicted class and certifications, with both tasks performed through randomised smoothing. Within these algorithms, the function _Confint_ refers to the Beta-distribution approach with Bonferroni correction, as described within Section 2.1. Every effort was made to accurately recreate the implementations of Li et al. and Lecuyer et al., in order to perform a fair comparison. However, as was alluded to in Section 2, the approach of Cohen et al. has the potential to incorrectly classify samples, and in doing so fail to certify. This stems from Algorithm 3's sampling approach, in which classification is based on a small number of samples--as few as \(n_{0}=100\) in Cohen et al., before a second sampling draw over \(n\) samples estimates the expectation based solely upon the count of times the classifier's class is selected. 
**Algorithm 2** Softmax Certification framework. Here \(\Pi(\cdot)\) represents the softmax operator and O-S denotes Our Softmax implementation.
```
1: function Binomial-Certify(\(n_{0},n,\mathcal{K},\sigma,\alpha,\mathbf{x},f\)) \(\triangleright\) \(n_{0}\) and \(n\) (\(n_{0}\ll n\)) are samples for class selection and certification; \(\mathcal{K}\) set of output classes; \(\sigma\) s.d. of noise; \(\alpha\) percentage confidence; \(\mathbf{x},f\) sample and function
2:   \(j=\arg\max\text{Count}(n_{0},\mathcal{K},\sigma,\mathbf{x},f)\)
3:   \(E_{0}=\text{Count}(n,j,\sigma,\mathbf{x},f)/n\) \(\triangleright\) Returns a single value
4:   \(E_{0}=E_{0}-\text{Confint}(E_{0}\times n,n,\alpha)\)
5:   if \(E_{0}>0.5\) then
6:     \(L_{\text{C}}=\sigma\Phi^{-1}(E_{0})\)
7:   else
8:     \(L_{\text{C}}=0\)
9:   end if
10:  return \(j,L_{\text{C}}\)
11: end function
```
**Algorithm 3** Binomial certification, following the sampling approach of Cohen et al. [5]. While selecting a large enough \(n\) produces tight bounds on the uncertainties of the expectation, the uncertainties over the initial \(n_{0}=100\) samples are often large enough to prevent the output class from being accurately identified. Moreover, the very nature of the binomial sampling of Cohen--in which the expectation of the classifier's output is compared to the likelihood of the intersection of _all other classes_--means that if the incorrect class is chosen, the mistake cannot be rectified without completely re-sampling. As such, we instead selected the expectation for Cohen based not upon the binomial sampling process, but the same multinomial sampling process employed by both us and Li et al. ## 6 Results To validate the applicability of our improved Gaussian mechanism and ensembling approach, experiments were performed for \(\ell_{2}\)-norm-bounded certifications utilising both CIFAR-\(10\) [14] and the latest face-blurred variant of ImageNet [29], which respectively exist under an MIT and a BSD \(3\)-Clause license. While this blurring has been shown to introduce a slight degradation of predictive performance, the privacy-preserving protections that are introduced are important for the integrity of vision research. Our results highlight ImageNet, due to the presence of human-indistinguishable adversarial examples within trained models [26]. To support this, training was performed upon NVIDIA P\(100\) GPUs in PyTorch [20] with a cross-entropy loss, and a fixed random seed was employed to ensure reproducibility. Certification employed \(n=100,000\) samples for estimating expectations. Confidence intervals were set at \(\alpha=0.001\) for a \(0.1\%\) chance of any produced certification over-estimating the radius of certification. These parameters mirror those of Cohen et al. [5]. In the case of CIFAR-\(10\), two GPUs and a \(110\)-layer residual network were trained for \(90\) epochs under added noise \(\sigma\in\{0.12,0.25,0.5,1.0\}\), with certification then being applied using the appropriate matched \(\sigma\). This training process used stochastic gradient descent subject to an initial learning rate of \(0.1\), momentum of \(0.9\), weight decay of \(10^{-4}\), and a batch size of \(400\). The learning rate was refined in a step-wise fashion where \(L_{r}=0.1\times 0.1^{\lfloor e/30\rfloor}\), where \(e\) is the current epoch. For ImageNet, training was performed using a mixed precision [18] ResNet-\(50\) model on a ten-node system, where each node also had access to two GPUs. 
In order to understand the influence of noise on the more-complex model, training and certification were performed at levels \(\sigma\in\{0.25,0.5,1,2,3,4,5\}\). Due to the increased complexity of ResNet-\(50\), training was modified to match best practices under available system resources. Both GPU cores on every node were trained with a batch size of \(200\), and in the fashion of [12] the learning rate was set at \(L_{r}=0.1563\), to be equivalent to a learning rate of \(0.1\) for a single node training with a batch size of \(256\). To improve convergence, the first three epochs were performed under \(L_{r}=0.01563\), before reverting to the original rate. This was then scaled by \(0.1\) at the \(30\)th, \(60\)th and \(80\)th epochs. ### _Certified Accuracy_ To assess the level of certified robustness provided, we adopt the now standard concept of _certified accuracy_. This records the proportion of samples correctly predicted by randomised smoothing with a certified \(r\geq R\), and in doing so captures both the accuracy of the model under noise, and the level of certification that can be provided to samples. Reflecting our analytic analysis of the softmax techniques, Figure 4 demonstrates a uniform improvement over Lecuyer et al., with both a larger certified radius as \(r\to 0\), and a slower rate of decay in the certified accuracy as the radius increases. Fig. 6: Certification Proportion (defined as \(r>0.05\)) and Time for ImageNet at \(\sigma=1.0\). For Figure b), all times calculated were averaged over \(100\) samples. Fig. 7: Proportion of multinomial samples for which there is a percentage improvement (of more than the value given in the horizontal-axis) in the ensemble relative to Cohen. These performance increases reflect the analytic improvements demonstrated within Figure 3. While prior works have indicated that Cohen et al. [5] uniformly outperformed all other techniques under a multinomial distribution _by considering aggregate statistics_, our experiments clearly demonstrate that both our approach and Li et al. are able to certify a greater proportion of samples. This is a product of the implicit restriction in Cohen et al. that \(E_{0}\geq 0.5\). Relative to Li et al., our technique yields improvements for samples in which \(r<1\); however, beyond this point the distribution decays due to the tightness of the analytic bound as \(E_{0}\to 1\), relative to the other tested approaches. The ensemble approach clearly improves upon both the number of samples certified, and the radii at which these samples are certified across all \(\sigma\), as per Table 1 and Figures 4 and 8. While the relative performance is maintained across both datasets, for CIFAR-\(10\) there is an across-the-board increase in the overall certified accuracy, due to the decreased prediction difficulty in the \(10\)-class CIFAR relative to the \(1000\)-class ImageNet. Such differences in relative performance align with the multinomial comparison of Figure 3 and underscore the importance of our proposed ensemble certification approach, as it leverages the regions of the simplex of output scores in which each technique produces the largest certification radii. Figure 7 reinforces this, by demonstrating the proportion of samples for which the ensemble produces more than a given percentage-improvement, relative to Cohen et al. 
We reiterate the observation that, within Algorithm 1, the Cohen et al. routine abstains from certifying a greater number of samples than the alternate techniques, and thus the ensemble produces an infinite percentage improvement relative to Cohen et al. for these samples. More broadly, the level of out-performance of the Ensemble relative to Cohen et al. is confirmed by Table 2, which demonstrates the relative performance of the Ensemble against the prior state of the art of Cohen et al. We note that the Wilcoxon test produces highly significant statistics, as it is comparing the prior state of the art in Cohen et al. to an ensemble that also incorporates Cohen et al. Table 2 further demonstrates that as \(\sigma\) increases the Ensemble is able to reliably improve upon a significant proportion of samples, with a notable effect on the mean certification. ### Computational Costs The nature of randomised smoothing--in that it requires repeated sampling--inherently introduces a significant computational cost, even if this process can be trivially parallelised. Intuitively it would appear that the computational cost of any ensembling approach would scale multiplicatively with the number of techniques being employed within the ensemble. However, as the expectations can be reused across all the ensembled techniques, this dominant component of the computational cost only has to be performed once. In practice, on an NVIDIA P\(100\) GPU the time to calculate the radius of certification (after the expectations have been estimated) is less than \(0.1\) seconds per technique, which is less than the cost of performing \(100\) samples under noise, a result that is borne out by Figure 6b. When considering computational cost, it is important to emphasise that mechanistic improvements in how we perform certifications are useful not just for the increases in certified radius, but also to potentially decrease the computational cost. Due to the inherent link between the sample size and uncertainty levels, and of these uncertainty levels with the certified radius (as is seen in Figure 6a), _improved certifications can be considered as an offset to the number of samples required, leading to commensurate decreases in the overall computational time_. ### Simplex Coverage As was established within Figure 3 and Section 2, any experimental comparison of these approaches is an implicit function of the model, dataset, training procedure, and hyperparameters employed in training. This dependency is visible in Figures 5 and 9, in which transitioning from CIFAR-\(10\) to ImageNet induces a _distributional shift_ in the label-space expectations towards the region of the simplex of output scores that favours our certification approach, with the average \(E_{0}\) and \(E_{1}\) both decreasing. That this shift occurs reflects the greater semantic similarities between classes within ImageNet, which results in output expectations that are more evenly spread across classes. This sensitivity to the input parameters is reinforced by Table 1, which demonstrates the inherent coupling between predictive difficulty (as indicated by the Top-1 accuracy) and a translation of the output distribution towards a region that is favourable to our differential privacy based approach. These changes in the output distribution in turn induce a shift in performance, from the initial strong performance of Cohen et al. 
(with exception to the proportion of samples certified with radii greater than \(0.05\)), to metrics that uniformly favour our new approach as \(\sigma>1\). These results also demonstrate that our approach can certify \(9\%\) more samples above \(0.05\) at \(\sigma=1\), and \(49\%\) more as \(\sigma\) is increased to \(5\). Increasing \(\sigma\) above \(1\) also leads to our technique exhibiting a monotonic improvement in the median certified radii, which stands in stark contrast to the approach of Cohen et al., which exhibits a consistent decrease in the median certification radius to \(0\) at \(\sigma=5\). This behaviour in Cohen et al. is a product of the smoothing effect of additive noise, which results in fewer and fewer samples having a highest class expectation above \(0.5\), preventing certification. Both Figure 5 and Table 1 demonstrate that our analytic comparison and ensemble frameworks take advantage of simplex regions of differential performance to _generate ensemble certifications that produce consistent, best-in-class results_. Moreover, we can be confident that the outperformance of this technique will be maintained irrespective of the complexity of the underlying dataset. This work presents both ensembling and disaggregated analysis in the context of an \(\ell_{2}\) bounded threat model, due to the relative maturity of attacks in this space. However, our approaches are equally applicable to analysing certification performance against _any_ threat model, and future works expanding the oeuvre of certified threat models should exploit the techniques described within this work in order to better understand and maximise certification performance. ### Alternative Training Approaches The same independence of our approaches to a specific threat model also holds true when considering alternate certification frameworks--including MACER [31], denoising [4], or Geometrically Informed Certified Robustness [7]--as they each construct certifications in an identical manner. While these techniques shift the distributions seen within Figure 9, our analysis still holds. Although Figure 10 demonstrates that MACER shifts the overall point distribution towards the region where Cohen et al. is favoured, a significant proportion of samples are still located within the region where our new technique yields improved certifications. ## 7 Conclusions By considering certification performance through a disaggregated lens, this work demonstrates that it is possible to better understand the drivers of certification performance, from both a dataset and mechanistic perspective. 
This form of analysis demonstrated the utility of our multiple improvements to the current oeuvre of certification techniques, including an improved differential privacy based Gaussian mechanism that for some samples can produce a more than two-fold increase in the achievable multinomial certification, or up to five-fold in the case of softmax certifications. These improvements are particularly evident in the case where the largest class expectation is diminished. Beyond this, our work also demonstrates that a simple ensemble-of-certifications can reuse the costly components of certifications, in order to improve upon performance relative to any one single technique. This technique, which introduces almost no additional computational burden, is able to certify \(98\%\) of samples above \(r=0.05\) at \(\sigma=1\) for ImageNet, relative to the \(89\%\) achieved by the prior state-of-the-art in Cohen et al. [5]. Our technique's advantage over other certification mechanisms grows with both the semantic complexity of the dataset and \(\sigma\). Through this work's mechanisms, we have demonstrated how minor changes to certification systems can be used to construct larger certifications, which in turn would allow for a greater degree of confidence in the adversarial resistance of systems deployed in contexts where they may be attacked. Moreover, our approach of assessing certification performance within the context of the simplex of output scores has the potential to allow for a more nuanced view of adversarial risk.

| Dataset | \(\sigma\) | \(p\) | Statistic | Proportion | Median (Absolute) | Median (\%) | Mean (Absolute) | Mean (\%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CIFAR-\(10\) | \(0.12\) | \(<10^{-5}\) | \(>10^{4}\) | \(14.4\%\) | \(0\) | - | \(0.0020\) | \(14.9\%\) |
| | \(0.25\) | \(<10^{-5}\) | \(>10^{4}\) | \(26.1\%\) | \(0\) | - | \(0.0079\) | \(52.2\%\) |
| | \(0.5\) | \(<10^{-5}\) | \(>10^{4}\) | \(47.4\%\) | \(0\) | - | \(0.035\) | \(46.1\%\) |
| | \(1.0\) | \(<10^{-5}\) | \(>10^{4}\) | \(70.2\%\) | \(0.079\) | \(32.2\%\) | \(0.12\) | \(307.1\%\) |
| ImageNet | \(0.25\) | \(<10^{-5}\) | \(>10^{4}\) | \(22.9\%\) | \(0\) | - | \(0.0083\) | \(37.1\%\) |
| | \(0.5\) | \(<10^{-5}\) | \(>10^{4}\) | \(36.5\%\) | \(0\) | - | \(0.031\) | \(181.3\%\) |
| | \(1.0\) | \(<10^{-5}\) | \(>10^{4}\) | \(60.2\%\) | \(0.053\) | \(16.6\%\) | \(0.12\) | \(99.2\%\) |

TABLE II: Metrics of performance, comparing the ensemble to the state of the art of Cohen et al. Here the columns \(p\) and _Statistic_ refer to the results of the Wilcoxon signed rank test (with the Pratt signed-rank zero procedure) [21]; _Proportion_ is the percentage of samples which yield an improved certification due to the ensembling process; and the median and mean columns (excluding samples where \(L_{\text{Cohen}}=0\)) cover the improvements, in both absolute and percentage terms. Figure 8: CIFAR-\(10\) Certified Accuracy, similar to Figure 4. Figure a) demonstrates the variable performance of each technique, while b) considers the performance of our technique against the Ensemble (that includes our approach). Figure 9: Distribution of multinomial output expectations when the samples are drawn from CIFAR-10, subject to the parameters \(\sigma=1\) and \(n=100,000\), in a fashion equivalent to Figure 5.
Operationalising this perspective, in the context of our improvements to the achievable radii of certification, has the potential to reduce the need for domain experts to manually verify inputs that have the potential to be adversarially influenced, or to guide a greater understanding of adversarial risk in deployed systems. ## Acknowledgements This research was undertaken using the LIEF HPC-GPGPU Facility hosted at the University of Melbourne. This Facility was established with the assistance of LIEF Grant LE170100200. This work was also supported in part by the Australian Department of Defence Next Generation Technologies Fund, as part of the CSIRO/Data61 CRP AMLC project. Sarah Erfani is in part supported by Australian Research Council (ARC) Discovery Early Career Researcher Award (DECRA) DE220100680. ## Resource Availability The full suite of code required to replicate the experiments contained within this work can be found at [https://github.com/andrew-cullen/ensemble-simplex-certifications](https://github.com/andrew-cullen/ensemble-simplex-certifications) ## Ethics Statement The techniques and processes described within this paper have the potential to decrease the vulnerability of deployed machine learning systems to adversarial examples. However, in doing so there is also the potential to counter beneficial applications of attacks, such as stylometric privacy. Nevertheless, we believe that the value in minimising risks to deployed systems significantly outweighs these concerns.
2305.19954
Matrix Orthogonal Polynomials: A Riemann--Hilbert approach
In this work we show how to take advantage of the Riemann--Hilbert analysis in order to obtain information about the matrix orthogonal polynomials and functions of the second kind associated with a weight matrix. We deduce properties of the recurrence relation coefficients from differential properties of the weight matrix. We take the matrix polynomials of Hermite, Laguerre and Jacobi type as a case study.
Amílcar Branquinho, Ana Foulquié-Moreno, Assil Fradi, Manuel Mañas
2023-05-31T15:38:56Z
http://arxiv.org/abs/2305.19954v1
# Matrix orthogonal polynomials: a Riemann--Hilbert approach ###### Abstract. In this work we show how to take advantage of the Riemann-Hilbert analysis in order to obtain information about the matrix orthogonal polynomials and functions of the second kind associated with a weight matrix. We deduce properties of the recurrence relation coefficients from differential properties of the weight matrix. We take the matrix polynomials of Hermite, Laguerre and Jacobi type as a case study. Key words and phrases: Riemann-Hilbert problems; matrix Pearson equations; discrete integrable systems; non-Abelian discrete Painleve IV equation. Now, we present the structure of the work: In Section 1 we state the basic facts of the theory of matrix orthogonal polynomials that will be used in the text. We begin with the definition of a regular weight matrix and the three term recurrence relation for the matrix orthogonal polynomials. Next we reinterpret the Berezanskii matrix orthogonal polynomials in terms of scalar orthogonal polynomials. We also make a reinterpretation of the matrix orthogonality in terms of the Gauss-Borel factorization of the moment matrix, and at the end of the section we state a general Riemann-Hilbert problem associated with the sequences of monic matrix orthogonal polynomials. In Section 2 we present the general weights that we have considered in the previous works on matrix orthogonal polynomials. We will see that all the properties of these systems come from the analytical properties of fundamental matrices, denoted \(M_{n}^{\mathsf{L}}\) and \(M_{n}^{\mathsf{R}}\), associated with the matrix functions \(Y_{n}^{\mathsf{L}}\) (as well as \(Y_{n}^{\mathsf{R}}\)) and with the weight matrix \(\omega\). As the weight functions considered here satisfy a generalized Pearson matrix equation, we present in Section 2 first order differential equations for \(Y_{n}^{\mathsf{L}}\) and \(Y_{n}^{\mathsf{R}}\). From these first order differential equations we derive second order ones for \(Y_{n}^{\mathsf{L}}\) and \(Y_{n}^{\mathsf{R}}\), respectively. This is the subject of Section 3. There, we will construct second order differential operators that have the sequences of Berezanskii matrix polynomials (respectively, of second kind matrix functions) as eigenfunctions. We end this work, with Section 4, by showing some examples of discrete matrix Painleve equations for the three term recurrence relation coefficients associated with generalized Hermite, Laguerre and Jacobi type weight matrices. ## 1. Matrix biorthogonality Let \[\omega=\begin{bmatrix}\omega^{(1,1)}&\cdots&\omega^{(1,N)}\\ \vdots&\ddots&\vdots\\ \omega^{(N,1)}&\cdots&\omega^{(N,N)}\end{bmatrix},\] be an \(N\times N\) weight matrix with support on the real line, \(\mathbb{R}\), or on a smooth oriented non self-intersecting curve \(\gamma\) in the complex plane \(\mathbb{C}\); i.e. \(\omega\in\mathbb{M}_{N}(\mathbb{C})\) and each entry \(\omega^{(j,k)}\), \(j,k\in\left\{1,\ldots,N\right\}\), is a complex weight with support on \(\gamma\). 
We define the moment of order \(n\) associated with \(\omega\) as \[\omega_{n}=\int_{\gamma}x^{n}\,\omega(x)\,\frac{\mathrm{d}x}{2\pi\,\mathrm{i}},\qquad\qquad n\in\mathbb{N}\coloneqq\left\{0,1,\ldots\right\}.\] We say that \(\omega\) is _regular_ if the moments, \(\omega_{n}\), \(n\in\mathbb{N}\), exist and the \(n\)-th matrix of moments, \[\mathbf{U}_{n}\coloneqq\begin{bmatrix}\omega_{0}&\cdots&\omega_{n}\\ \vdots&\ddots&\vdots\\ \omega_{n}&\cdots&\omega_{2n}\end{bmatrix},\qquad n\in\mathbb{N},\] is such that \[\det\mathbf{U}_{n}\neq 0,\qquad n\in\mathbb{N}. \tag{1.1}\] In this way, we define a _left orthogonal sequence of monic matrix polynomials_, \(\left\{P_{n}^{\mathsf{L}}\right\}_{n\in\mathbb{N}}\), where \(\deg P_{n}^{\mathsf{L}}(z)=n\), \(n\in\mathbb{N}\), and a _right orthogonal_ one, \(\left\{P_{n}^{\mathsf{R}}\right\}_{n\in\mathbb{N}}\), where \(\deg P_{n}^{\mathsf{R}}(z)=n\), \(n\in\mathbb{N}\), with respect to a regular weight matrix \(\omega\), by the conditions, \[\int_{\gamma}P_{n}^{\mathsf{L}}(z)\omega(z)z^{k}\frac{\mathrm{d}z}{2\pi\,\mathrm{i}}=\delta_{n,k}C_{n}^{-1}, \tag{1.2}\] \[\int_{\gamma}z^{k}\omega(z)P_{n}^{\mathsf{R}}(z)\frac{\mathrm{d}z}{2\pi\,\mathrm{i}}=\delta_{n,k}C_{n}^{-1}, \tag{1.3}\] for \(k\in\left\{0,1,\ldots,n\right\}\) and \(n\in\mathbb{N}\), where \(C_{n}\) is a nonsingular matrix. We can see (cf. [4]) that a sequence of monic polynomials \(\left\{P_{n}^{\mathsf{L}}\right\}_{n\in\mathbb{N}}\), respectively \(\left\{P_{n}^{\mathsf{R}}\right\}_{n\in\mathbb{N}}\), is defined by (1.2), respectively (1.3), with respect to a regular weight matrix, \(\omega\). Notice that neither the weight matrix is requested to be Hermitian nor the curve \(\gamma\) to be on the real line, i.e., we are dealing, in principle, with nonstandard orthogonality and, consequently, with biorthogonal matrix polynomials instead of orthogonal matrix polynomials, i.e. the sequences of matrix polynomials \(\left\{P_{n}^{\mathsf{L}}\right\}_{n\in\mathbb{N}}\) and \(\left\{P_{n}^{\mathsf{R}}\right\}_{n\in\mathbb{N}}\) are biorthogonal with respect to the weight matrix function \(\omega\), as from (1.2) and (1.3) \[\int_{\gamma}P_{n}^{\mathsf{L}}(t)\omega(t)P_{m}^{\mathsf{R}}(t)\frac{\mathrm{d}t}{2\pi\,\mathrm{i}}=\delta_{n,m}C_{n}^{-1},\qquad n,m\in\mathbb{N}. \tag{1.4}\] As the polynomials are chosen to be monic, we can write \[P_{n}^{\mathsf{L}}(z)=\mathrm{I}z^{n}+p_{\mathsf{L},n}^{1}z^{n-1}+p_{\mathsf{L},n}^{2}z^{n-2}+\cdots+p_{\mathsf{L},n}^{n},\] \[P_{n}^{\mathsf{R}}(z)=\mathrm{I}z^{n}+p_{\mathsf{R},n}^{1}z^{n-1}+p_{\mathsf{R},n}^{2}z^{n-2}+\cdots+p_{\mathsf{R},n}^{n},\] with matrix coefficients \(p_{\mathsf{L},n}^{k},p_{\mathsf{R},n}^{k}\in\mathbb{C}^{N\times N}\), \(k=0,\ldots,n\) and \(n\in\mathbb{N}\) (imposing that \(p_{\mathsf{L},n}^{0}=p_{\mathsf{R},n}^{0}=\mathrm{I}\), \(n\in\mathbb{N}\)). Here \(\mathrm{I}\in\mathbb{C}^{N\times N}\) denotes the identity matrix. From (1.2) we deduce that the Fourier coefficients of the expansion \[zP_{n}^{\mathsf{L}}(z)=\sum_{k=0}^{n+1}\ell_{\mathsf{L},k}^{n}P_{k}^{\mathsf{L}}(z),\] are given by \(\ell_{\mathsf{L},k}^{n}=\mathbf{0}\), \(k=0,1,\ldots,n-2\) (here we denote the zero matrix by \(\mathbf{0}\)), \(\ell_{\mathsf{L},n-1}^{n}=C_{n}^{-1}C_{n-1}\) (a direct consequence of the orthogonality conditions), \(\ell_{\mathsf{L},n+1}^{n}=\mathrm{I}\) (as the \(P_{n}^{\mathsf{L}}(z)\) are monic polynomials) and \(\ell_{\mathsf{L},n}^{n}=p_{\mathsf{L},n}^{1}-p_{\mathsf{L},n+1}^{1}=:\beta_{n}^{\mathsf{L}}\) (by comparison of the coefficients, assuming \(C_{0}=\mathrm{I}\)). 
Hence, assuming the orthogonality relations (1.2), we conclude that the sequence of monic polynomials \(\left\{P_{n}^{\mathsf{L}}\right\}_{n\in\mathbb{N}}\) is defined by the three term recurrence relation \[zP_{n}^{\mathsf{L}}(z)=P_{n+1}^{\mathsf{L}}(z)+\beta_{n}^{\mathsf{L}}P_{n}^{\mathsf{L}}(z)+\gamma_{n}^{\mathsf{L}}P_{n-1}^{\mathsf{L}}(z),\qquad n\in\mathbb{N}, \tag{1.5}\] with recursion coefficients \[\beta_{n}^{\mathsf{L}}:=p_{\mathsf{L},n}^{1}-p_{\mathsf{L},n+1}^{1},\qquad\gamma_{n+1}^{\mathsf{L}}:=C_{n+1}^{-1}C_{n},\qquad n\in\mathbb{N},\] and initial conditions, \(P_{-1}^{\mathsf{L}}=\mathbf{0}\) and \(P_{0}^{\mathsf{L}}=\mathrm{I}\). Any sequence of monic matrix polynomials, \(\left\{P_{n}^{\mathsf{R}}\right\}_{n\in\mathbb{N}}\), with \(\deg P_{n}^{\mathsf{R}}=n\), biorthogonal with respect to \(\left\{P_{n}^{\mathsf{L}}\right\}_{n\in\mathbb{N}}\) and \(\omega(z)\), i.e. such that (1.4) is fulfilled, also satisfies a three term recurrence relation. To prove this we proceed in the same way as in the left case, arriving at \(P_{-1}^{\mathsf{R}}=\mathbf{0}\), \(P_{0}^{\mathsf{R}}=\mathrm{I}\), \[zP_{n}^{\mathsf{R}}(z)=P_{n+1}^{\mathsf{R}}(z)+P_{n}^{\mathsf{R}}(z)\beta_{n}^{\mathsf{R}}+P_{n-1}^{\mathsf{R}}(z)\gamma_{n}^{\mathsf{R}},\qquad n\in\mathbb{N}, \tag{1.6}\] where \[\beta_{n}^{\mathsf{R}}:=C_{n}\beta_{n}^{\mathsf{L}}C_{n}^{-1},\qquad\gamma_{n}^{\mathsf{R}}:=C_{n}\gamma_{n}^{\mathsf{L}}C_{n}^{-1}=C_{n-1}C_{n}^{-1}, \tag{1.7}\] and the orthogonality conditions (1.3) are satisfied. ### Berezanskii matrix orthogonal polynomials #### 1.1.1. First example The notion of matrix orthogonality can be seen as coming from the scalar one. In fact, following the ideas of Berezanskii in [3], given two sequences of monic polynomials, \(\left\{p_{n}^{1}\right\}_{n\in\mathbb{N}}\) and \(\left\{p_{n}^{2}\right\}_{n\in\mathbb{N}}\), orthogonal with respect to \(\omega^{1}\), \(\omega^{2}\), respectively, with the same support of orthogonality, \(\Omega\subset\mathbb{R}\), i.e. \[\int_{\Omega}p_{n}^{1}(t)\omega^{1}(t)p_{m}^{1}(t)\,\frac{\mathrm{d}t}{2\pi\,\mathrm{i}}=\kappa_{n}^{1}\,\delta_{n,m},\] \[\int_{\Omega}p_{n}^{2}(t)\omega^{2}(t)p_{m}^{2}(t)\,\frac{\mathrm{d}t}{2\pi\,\mathrm{i}}=\kappa_{n}^{2}\,\delta_{n,m},\qquad n,m\in\mathbb{N},\] with \(\kappa_{n}^{1},\kappa_{n}^{2}>0\), \(n\in\mathbb{N}\), we can construct a sequence of monic matrix polynomials \[\mathbb{P}_{n}(x)=\frac{1}{2}\begin{bmatrix}p_{n}^{1}(x)+p_{n}^{2}(x)&p_{n}^{1}(x)-p_{n}^{2}(x)\\ p_{n}^{1}(x)-p_{n}^{2}(x)&p_{n}^{1}(x)+p_{n}^{2}(x)\end{bmatrix},\qquad n\in\mathbb{N}, \tag{1.8}\] orthogonal with respect to the weight matrix on \(\Omega\) \[\mathbb{W}(x)=\frac{1}{2}\begin{bmatrix}\omega^{1}(x)+\omega^{2}(x)&\omega^{1}(x)-\omega^{2}(x)\\ \omega^{1}(x)-\omega^{2}(x)&\omega^{1}(x)+\omega^{2}(x)\end{bmatrix}.\] In fact, \[\int_{\Omega}\mathbb{P}_{n}(t)\mathbb{W}(t)\mathbb{P}_{m}^{\top}(t)\,\frac{\mathrm{d}t}{2\pi\,\mathrm{i}}=\mathbb{K}_{n}\,\delta_{n,m},\qquad n,m\in\mathbb{N}, \tag{1.9}\] where \(\mathbb{K}_{n}\) is an invertible matrix given by \[\mathbb{K}_{n}\coloneqq\frac{1}{2}\begin{bmatrix}\kappa_{n}^{1}+\kappa_{n}^{2}&\kappa_{n}^{1}-\kappa_{n}^{2}\\ \kappa_{n}^{1}-\kappa_{n}^{2}&\kappa_{n}^{1}+\kappa_{n}^{2}\end{bmatrix},\qquad n\in\mathbb{N}.\] This example can be deconstructed in order to recover the scalar orthogonality. 
In fact, we can rewrite \(\mathbb{P}_{n}\) and \(\mathbb{W}\) as \[\mathbb{P}_{n}(x)=\frac{1}{2}\,\boldsymbol{\alpha}\begin{bmatrix}p_{n}^{1}(x)&0\\ 0&p_{n}^{2}(x)\end{bmatrix}\boldsymbol{\alpha}^{\top},\qquad n\in\mathbb{N},\] \[\mathbb{W}(x)=\frac{1}{2}\,\boldsymbol{\alpha}\begin{bmatrix}\omega^{1}(x)&0\\ 0&\omega^{2}(x)\end{bmatrix}\boldsymbol{\alpha}^{\top},\] with \[\boldsymbol{\alpha}=\begin{bmatrix}1&-1\\ 1&1\end{bmatrix},\qquad\boldsymbol{\alpha}^{-1}=\frac{1}{2}\begin{bmatrix}1&-1\\ 1&1\end{bmatrix}^{\top}. \tag{1.10}\] Now, taking (1.10) into account, equation (1.9) takes the form \[\int_{\Omega}\mathbb{P}_{n}(t)\mathbb{W}(t)\mathbb{P}_{m}^{\top}(t)\,\frac{\mathrm{d}t}{2\pi\,\mathrm{i}}=\frac{1}{2}\,\boldsymbol{\alpha}\int_{\Omega}\begin{bmatrix}p_{n}^{1}(t)&0\\ 0&p_{n}^{2}(t)\end{bmatrix}\begin{bmatrix}\omega^{1}(t)&0\\ 0&\omega^{2}(t)\end{bmatrix}\begin{bmatrix}p_{m}^{1}(t)&0\\ 0&p_{m}^{2}(t)\end{bmatrix}^{\top}\,\frac{\mathrm{d}t}{2\pi\,\mathrm{i}}\,\boldsymbol{\alpha}^{\top},\] and so \[\int_{\Omega}\mathbb{P}_{n}(t)\mathbb{W}(t)\mathbb{P}_{m}^{\top}(t)\,\frac{\mathrm{d}t}{2\pi\,\mathrm{i}}=\frac{1}{2}\,\boldsymbol{\alpha}\begin{bmatrix}\kappa_{n}^{1}&0\\ 0&\kappa_{n}^{2}\end{bmatrix}\boldsymbol{\alpha}^{\top}\,\delta_{n,m},\qquad n,m\in\mathbb{N}.\] Now, applying (1.10) once more, we get from the last equation \[\int_{\Omega}\begin{bmatrix}p_{n}^{1}(t)\omega^{1}(t)p_{m}^{1}(t)&0\\ 0&p_{n}^{2}(t)\omega^{2}(t)p_{m}^{2}(t)\end{bmatrix}\,\frac{\mathrm{d}t}{2\pi\,\mathrm{i}}=\begin{bmatrix}\kappa_{n}^{1}&0\\ 0&\kappa_{n}^{2}\end{bmatrix}\delta_{n,m},\qquad n,m\in\mathbb{N}.\] Hence we recover the initial data written in matrix notation. Let us consider, as in [3], the specific Jacobi weights on \([-1,1]\), \[\omega^{1}(x)=\sqrt{\frac{1-x}{1+x}},\qquad\omega^{2}(x)=\sqrt{\frac{1+x}{1-x}},\qquad x\in[-1,1].\] In the notation just given, we have \[\mathbb{P}_{n}(x)=\frac{1}{2^{n}}\begin{bmatrix}U_{n}(x)&-U_{n-1}(x)\\ -U_{n-1}(x)&U_{n}(x)\end{bmatrix},\qquad n\in\mathbb{N},\] \[\mathbb{W}(x)=\begin{bmatrix}\sqrt{1-x^{2}}&x\,\sqrt{1-x^{2}}\\ x\,\sqrt{1-x^{2}}&\sqrt{1-x^{2}}\end{bmatrix},\qquad x\in[-1,1].\] Here the polynomials \(U_{n}(x)\), \(n\in\mathbb{N}\), are the Chebyshev orthogonal polynomials of the second kind, i.e. \[U_{n}(x)=\frac{\sin\big{(}(n+1)t\big{)}}{\sin(t)},\qquad\cos(t)=x,\qquad n\in\mathbb{N}.\]
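The construction (1.8)-(1.9) is easy to test numerically. The sketch below is ours, not from the paper: it builds the monic orthogonal polynomials of \(\omega^{1}\) and \(\omega^{2}\) above by Gram-Schmidt, assembles \(\mathbb{P}_{n}\) and \(\mathbb{W}\) as in (1.8), and checks the matrix orthogonality (1.9). The substitution \(x=\cos\theta\) turns \(\omega^{1}\,\mathrm{d}x\) and \(\omega^{2}\,\mathrm{d}x\) into the smooth densities \((1\mp\cos\theta)\,\mathrm{d}\theta\), which a uniform midpoint rule handles well; the \(1/(2\pi\mathrm{i})\) normalisation is dropped for simplicity.

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Midpoint rule in theta, with x = cos(theta); then
# w1 dx -> (1 - cos t) dt and w2 dx -> (1 + cos t) dt are smooth.
N = 2000
th = (np.arange(N) + 0.5) * np.pi / N
x = np.cos(th)
d1, d2 = 1.0 - x, 1.0 + x

def inner(p, q, d):                 # <p, q> = int p q w dx on [-1, 1]
    return (np.pi / N) * np.sum(P.polyval(x, p) * P.polyval(x, q) * d)

def monic_ops(d, nmax):             # monic OPs for a scalar weight, by Gram-Schmidt
    ops = []
    for n in range(nmax):
        c = np.zeros(n + 1); c[-1] = 1.0
        for q in ops:
            c[:len(q)] -= (inner(c, q, d) / inner(q, q, d)) * q
        ops.append(c)
    return ops

p1, p2 = monic_ops(d1, 5), monic_ops(d2, 5)

def G(n, m):                        # left-hand side of (1.9) for the pair (1.8)
    a1, b1 = P.polyval(x, p1[n]), P.polyval(x, p2[n])
    a2, b2 = P.polyval(x, p1[m]), P.polyval(x, p2[m])
    Pn = 0.5 * np.array([[a1 + b1, a1 - b1], [a1 - b1, a1 + b1]])
    Pm = 0.5 * np.array([[a2 + b2, a2 - b2], [a2 - b2, a2 + b2]])
    Wm = np.array([[np.ones(N), -x], [-x, np.ones(N)]])   # transformed W(x) dx
    return (np.pi / N) * np.einsum('ikn,kjn,ljn->il', Pn, Wm, Pm)

print(np.round(G(2, 3), 8))         # ~ 0: matrix orthogonality off the diagonal
print(np.round(G(3, 3), 8))         # ~ K_3 of (1.9), an invertible matrix
```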
#### 1.1.2. Second example It is important to notice that, if we depart from two pairs of left and right matrix sequences, \(\big{\{}P_{n}^{1,\mathsf{L}}\big{\}}\), \(\big{\{}P_{n}^{1,\mathsf{R}}\big{\}}\), and \(\big{\{}P_{n}^{2,\mathsf{L}}\big{\}}\), \(\big{\{}P_{n}^{2,\mathsf{R}}\big{\}}\), biorthogonal with respect to \(W^{1}\), \(W^{2}\), respectively, with support on the same curve \(\gamma\subset\mathbb{C}\), then the matrix sequences of polynomials \[\mathbb{P}_{n}^{\mathsf{L}}(x)=\frac{1}{2}\begin{bmatrix}P_{n}^{1,\mathsf{L}}(x)+P_{n}^{2,\mathsf{L}}(x)&P_{n}^{1,\mathsf{L}}(x)-P_{n}^{2,\mathsf{L}}(x)\\ P_{n}^{1,\mathsf{L}}(x)-P_{n}^{2,\mathsf{L}}(x)&P_{n}^{1,\mathsf{L}}(x)+P_{n}^{2,\mathsf{L}}(x)\end{bmatrix},\] \[\mathbb{P}_{n}^{\mathsf{R}}(x)=\frac{1}{2}\begin{bmatrix}P_{n}^{1,\mathsf{R}}(x)+P_{n}^{2,\mathsf{R}}(x)&P_{n}^{1,\mathsf{R}}(x)-P_{n}^{2,\mathsf{R}}(x)\\ P_{n}^{1,\mathsf{R}}(x)-P_{n}^{2,\mathsf{R}}(x)&P_{n}^{1,\mathsf{R}}(x)+P_{n}^{2,\mathsf{R}}(x)\end{bmatrix},\qquad n\in\mathbb{N},\] are biorthogonal with respect to \(\mathbb{W}\) defined by \[\mathbb{W}(x)=\frac{1}{2}\begin{bmatrix}W^{1}(x)+W^{2}(x)&W^{1}(x)-W^{2}(x)\\ W^{1}(x)-W^{2}(x)&W^{1}(x)+W^{2}(x)\end{bmatrix}\qquad\text{on}\qquad\gamma,\] i.e. we have \[\int_{\gamma}\mathbb{P}_{n}^{\mathsf{L}}(t)\mathbb{W}(t)\mathbb{P}_{m}^{\mathsf{R}}(t)\,\frac{\mathrm{d}t}{2\pi\,\mathrm{i}}=\mathbb{K}_{n}^{-1}\,\delta_{n,m},\qquad n,m\in\mathbb{N},\] where \[\mathbb{K}_{n}^{-1}=\frac{1}{2}\begin{bmatrix}\mathrm{I}&-\mathrm{I}\\ \mathrm{I}&\mathrm{I}\end{bmatrix}\begin{bmatrix}\big{(}C_{n}^{1}\big{)}^{-1}&\mathbf{0}\\ \mathbf{0}&\big{(}C_{n}^{2}\big{)}^{-1}\end{bmatrix}\begin{bmatrix}\mathrm{I}&-\mathrm{I}\\ \mathrm{I}&\mathrm{I}\end{bmatrix}^{\top},\qquad n\in\mathbb{N},\] and \(C_{n}^{1}\), \(C_{n}^{2}\) are invertible matrices coming from \[\int_{\gamma}P_{n}^{j,\mathsf{L}}(t)W^{j}(t)P_{m}^{j,\mathsf{R}}(t)\,\frac{\mathrm{d}t}{2\pi\,\mathrm{i}}=\left(C_{n}^{j}\right)^{-1}\delta_{n,m},\qquad j=1,2,\qquad n,m\in\mathbb{N}.\] It is important to notice that \[\begin{bmatrix}\mathrm{I}&-\mathrm{I}\\ \mathrm{I}&\mathrm{I}\end{bmatrix}^{-1}=\frac{1}{2}\begin{bmatrix}\mathrm{I}&-\mathrm{I}\\ \mathrm{I}&\mathrm{I}\end{bmatrix}^{\top},\] and so we can apply all the procedure explained before in order to reinterpret the matrix orthogonality in the diagonal matrix setting. ### Gauss-Borel interpretation of the biorthogonality #### 1.2.1. Second kind functions We define the _sequence of second kind matrix functions_ by \[Q_{n}^{\mathsf{L}}(z)\coloneqq\int_{\gamma}\frac{P_{n}^{\mathsf{L}}(t)}{t-z}\omega(t)\frac{\mathrm{d}t}{2\pi\,\mathrm{i}}, \tag{1.11}\] \[Q_{n}^{\mathsf{R}}(z)\coloneqq\int_{\gamma}\omega(t)\frac{P_{n}^{\mathsf{R}}(t)}{t-z}\frac{\mathrm{d}t}{2\pi\,\mathrm{i}},\qquad n\in\mathbb{N}. \tag{1.12}\] From the orthogonality conditions (1.2) and (1.3) we have, for all \(n\in\mathbb{N}\), the following asymptotic expansions near infinity for the sequences of functions of the second kind \[Q_{n}^{\mathsf{L}}(z)=-C_{n}^{-1}\big{(}\mathrm{I}z^{-n-1}+q_{\mathsf{L},n}^{1}z^{-n-2}+\cdots\big{)},\] \[Q_{n}^{\mathsf{R}}(z)=-\big{(}\mathrm{I}z^{-n-1}+q_{\mathsf{R},n}^{1}z^{-n-2}+\cdots\big{)}\,C_{n}^{-1}.\] From now on we assume that the weights \(\omega^{(j,k)}\), \(j,k\in\big{\{}1,\ldots,N\big{\}}\), are Hölder continuous. Hence, using Plemelj's formula, cf. [11],
applied to (1.11) and (1.12), the following fundamental jump identities hold \[\big{(}Q_{n}^{\mathsf{L}}(z)\big{)}_{+}-\big{(}Q_{n}^{\mathsf{L}}(z)\big{)}_{-}=P_{n}^{\mathsf{L}}(z)\omega(z),\] \[\big{(}Q_{n}^{\mathsf{R}}(z)\big{)}_{+}-\big{(}Q_{n}^{\mathsf{R}}(z)\big{)}_{-}=\omega(z)P_{n}^{\mathsf{R}}(z),\qquad n\in\mathbb{N},\] \(z\in\gamma\), where \(\big{(}f(z)\big{)}_{\pm}=\lim_{\epsilon\to 0^{+}}f(z\pm\mathrm{i}\epsilon)\); here \(\pm\) indicates the positive/negative region according to the orientation of the curve \(\gamma\). Now, multiplying equation (1.5) on the right by \(\omega\) and integrating, we get, using the definition (1.11) of \(\big{\{}Q_{n}^{\mathsf{L}}\big{\}}_{n\in\mathbb{N}}\), that \[\int_{\gamma}\frac{t\,P_{n}^{\mathsf{L}}(t)}{t-z}\omega(t)\frac{\mathrm{d}t}{2\pi\,\mathrm{i}}=Q_{n+1}^{\mathsf{L}}(z)+\beta_{n}^{\mathsf{L}}Q_{n}^{\mathsf{L}}(z)+C_{n}^{-1}C_{n-1}Q_{n-1}^{\mathsf{L}}(z).\] As \(\frac{t}{t-z}=1+\frac{z}{t-z}\), from the orthogonality conditions (1.2) we conclude that \[zQ_{n}^{\mathsf{L}}(z)=Q_{n+1}^{\mathsf{L}}(z)+\beta_{n}^{\mathsf{L}}Q_{n}^{\mathsf{L}}(z)+C_{n}^{-1}C_{n-1}Q_{n-1}^{\mathsf{L}}(z),\qquad n\in\mathbb{N}, \tag{1.13}\] as well as, from (1.6), \[zQ_{n}^{\mathsf{R}}(z)=Q_{n+1}^{\mathsf{R}}(z)+Q_{n}^{\mathsf{R}}(z)\,\beta_{n}^{\mathsf{R}}+Q_{n-1}^{\mathsf{R}}(z)\,C_{n-1}C_{n}^{-1},\qquad n\in\mathbb{N}, \tag{1.14}\] with initial conditions \[Q_{-1}^{\mathsf{L}}(z)=Q_{-1}^{\mathsf{R}}(z)=-C_{-1}^{-1},\] \[Q_{0}^{\mathsf{L}}(z)=Q_{0}^{\mathsf{R}}(z)=S_{\omega}(z):=\int_{\gamma}\frac{\omega(t)}{t-z}\frac{\mathrm{d}t}{2\pi\,\mathrm{i}},\] where \(S_{\omega}(z)\) is the Stieltjes-Markov like transformation of the weight matrix \(\omega\). **Theorem 1.1**.: _Let \(a\) and \(b\) be the end points of \(\gamma\). Let \(C\) be a circle, negatively oriented (clockwise), such that \(a\) and \(b\) are in the interior of \(C\). Then the Stieltjes-Markov like transformation \(S_{\omega}\) is a complex measure of biorthogonality for \(\left\{P_{n}^{\mathsf{L}}\right\}_{n\in\mathbb{N}}\) and \(\left\{P_{n}^{\mathsf{R}}\right\}_{n\in\mathbb{N}}\) over \(C\), i.e._ \[\int_{C}P_{n}^{\mathsf{L}}(z)S_{\omega}(z)P_{m}^{\mathsf{R}}(z)\,\frac{\mathrm{d}z}{2\pi\,\mathrm{i}}=C_{n}^{-1}\,\delta_{n,m},\qquad n,m\in\mathbb{N}, \tag{1.15}\] _for some invertible matrices, \(C_{n}\), \(n\in\mathbb{N}\)._ Proof.: We have the following identities \[\int_{C}P_{n}^{\mathsf{L}}(z)S_{\omega}(z)P_{m}^{\mathsf{R}}(z)\,\frac{\mathrm{d}z}{2\pi\,\mathrm{i}}=\int_{C}P_{n}^{\mathsf{L}}(z)\left(\int_{\gamma}\frac{\omega(t)}{t-z}\,\frac{\mathrm{d}t}{2\pi\,\mathrm{i}}\right)P_{m}^{\mathsf{R}}(z)\,\frac{\mathrm{d}z}{2\pi\,\mathrm{i}}\] \[\qquad=\int_{\gamma}\left(\int_{C}\frac{P_{n}^{\mathsf{L}}(z)\omega(t)P_{m}^{\mathsf{R}}(z)}{t-z}\,\frac{\mathrm{d}z}{2\pi\,\mathrm{i}}\right)\,\frac{\mathrm{d}t}{2\pi\,\mathrm{i}}\qquad\text{(Fubini's theorem)}\] \[\qquad=\int_{\gamma}P_{n}^{\mathsf{L}}(t)\omega(t)P_{m}^{\mathsf{R}}(t)\,\frac{\mathrm{d}t}{2\pi\,\mathrm{i}}\qquad\text{(Cauchy's integral theorem)}\] and so from (1.4) we get the desired result. 
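Theorem 1.1 is easy to test numerically in the scalar case. The sketch below (ours, not from the paper) takes \(\gamma=[-1,1]\) with the Chebyshev weight \(w(t)=1/\sqrt{1-t^{2}}\), computes \(S_{w}\) by Gauss-Chebyshev quadrature, and compares the contour integral (1.15) over a clockwise circle of radius 2 with the real-line biorthogonality integral; the two values agree.

```python
import numpy as np
from numpy.polynomial import chebyshev, polynomial as P

def t_monic(n):            # monic Chebyshev T_n, orthogonal for w(t) = 1/sqrt(1 - t^2)
    c = chebyshev.cheb2poly([0] * n + [1])
    return c / c[-1]

Nq = 1000                  # Gauss-Chebyshev nodes absorb the weight exactly
tk = np.cos((2 * np.arange(1, Nq + 1) - 1) * np.pi / (2 * Nq))

Nc = 2000                  # circle C of radius 2, negatively oriented as in Theorem 1.1
zc = 2.0 * np.exp(-2j * np.pi * np.arange(Nc) / Nc)
dz = np.roll(zc, -1) - zc
Sz = (np.pi / Nq) * (1.0 / (tk[None, :] - zc[:, None])).sum(axis=1) / (2j * np.pi)

def contour(n, m):         # left-hand side of (1.15), scalar case
    f = P.polyval(zc, t_monic(n)) * Sz * P.polyval(zc, t_monic(m))
    return np.sum(f * dz) / (2j * np.pi)

def interval(n, m):        # biorthogonality (1.4) on gamma = [-1, 1], same measure
    vals = P.polyval(tk, t_monic(n)) * P.polyval(tk, t_monic(m))
    return (np.pi / Nq) * np.sum(vals) / (2j * np.pi)

print(contour(2, 2), interval(2, 2))            # equal nonzero values
print(abs(contour(1, 3)), abs(interval(1, 3)))  # both ~ 0
```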
#### 1.2.2. Gauss-Borel factorization Let us give a nice interpretation of this biorthogonality. First of all, we can see that the Stieltjes-Markov like transformation of the weight matrix \(\omega\) is a generating function for the moments of \(\omega\). In fact, \[S_{\omega}(z)=\int_{\gamma}\frac{\omega(t)}{t-z}\frac{\mathrm{d}t}{2\pi\,\mathrm{i}}=-\frac{1}{2\pi\,\mathrm{i}}\sum_{n=0}^{\infty}\frac{\int_{\gamma}t^{n}\omega(t)\,\mathrm{d}t}{z^{n+1}}=-\sum_{n=0}^{\infty}\frac{\omega_{n}}{z^{n+1}},\] for \(|z|>r:=\max\left\{|t|,t\in\gamma\right\}\). Hence, \(S_{\omega}\) is an analytic function on compact sets of \(\mathbb{C}\setminus\left\{z\in\mathbb{C}:|z|<r\right\}\). We write down the biorthogonality conditions (1.15) that we have just derived in Theorem 1.1, \[\int_{C}P_{n}^{\mathsf{L}}(z)S_{\omega}(z)P_{m}^{\mathsf{R}}(z)\,\frac{\mathrm{d}z}{2\pi\,\mathrm{i}}=-\sum_{k=0}^{\infty}\int_{C}P_{n}^{\mathsf{L}}(z)\frac{\omega_{k}}{z^{k+1}}P_{m}^{\mathsf{R}}(z)\,\frac{\mathrm{d}z}{2\pi\,\mathrm{i}},\] and by the Cauchy integral formula (recall that \(C\) is negatively oriented, which absorbs the sign) we get \[C_{n}^{-1}\delta_{n,m}=\int_{C}P_{n}^{\mathsf{L}}(z)S_{\omega}(z)P_{m}^{\mathsf{R}}(z)\,\frac{\mathrm{d}z}{2\pi\,\mathrm{i}}=\sum_{k=0}^{\infty}\frac{\big{(}P_{n}^{\mathsf{L}}(z)\omega_{k}P_{m}^{\mathsf{R}}(z)\big{)}^{(k)}\big{|}_{z\to 0}}{k!}. \tag{1.16}\] Now, from the Leibniz rule for derivatives, we know that \[\frac{1}{k!}\big{(}P_{n}^{\mathsf{L}}(z)\omega_{k}P_{m}^{\mathsf{R}}(z)\big{)}^{(k)}\Big{|}_{z\to 0}=\sum_{j=0}^{k}\frac{\big{(}P_{n}^{\mathsf{L}}\big{)}^{(j)}(0)}{j!}\omega_{k}\frac{\big{(}P_{m}^{\mathsf{R}}\big{)}^{(k-j)}(0)}{(k-j)!}.\] With this identity we can reinterpret (1.16) in matrix notation \[\mathcal{P}^{\mathsf{L}}\,\mathsf{U}\,\big{(}\mathcal{P}^{\mathsf{R}}\big{)}^{\top}=\mathrm{diag}\left\{C_{0}^{-1},C_{1}^{-1},\dots\right\},\] where \[\mathsf{U}=\begin{bmatrix}\omega_{0}&\cdots&\omega_{n}&\cdots\\ \vdots&\ddots&\vdots&\\ \omega_{n}&\cdots&\omega_{2n}&\cdots\\ \vdots&&\vdots&\ddots\end{bmatrix}\] and \(\mathcal{P}^{\mathsf{L}}\), \(\mathcal{P}^{\mathsf{R}}\) are the semi-infinite lower triangular matrices of Taylor coefficients, evaluated at \(z=0\), \[\mathcal{P}^{\mathsf{L}}=\begin{bmatrix}P_{0}^{\mathsf{L}}&&&&\\ P_{1}^{\mathsf{L}}&\big{(}P_{1}^{\mathsf{L}}\big{)}^{\prime}&&&\\ P_{2}^{\mathsf{L}}&\big{(}P_{2}^{\mathsf{L}}\big{)}^{\prime}&\frac{1}{2!}\big{(}P_{2}^{\mathsf{L}}\big{)}^{\prime\prime}&&\\ \vdots&\vdots&\vdots&\ddots&\\ P_{n}^{\mathsf{L}}&\big{(}P_{n}^{\mathsf{L}}\big{)}^{\prime}&\frac{1}{2!}\big{(}P_{n}^{\mathsf{L}}\big{)}^{\prime\prime}&\cdots&\frac{1}{n!}\big{(}P_{n}^{\mathsf{L}}\big{)}^{(n)}\\ \vdots&\vdots&\vdots&&\ddots\end{bmatrix}\Bigg{|}_{z\to 0},\qquad\mathcal{P}^{\mathsf{R}}=\begin{bmatrix}P_{0}^{\mathsf{R}}&&&&\\ P_{1}^{\mathsf{R}}&\big{(}P_{1}^{\mathsf{R}}\big{)}^{\prime}&&&\\ P_{2}^{\mathsf{R}}&\big{(}P_{2}^{\mathsf{R}}\big{)}^{\prime}&\frac{1}{2!}\big{(}P_{2}^{\mathsf{R}}\big{)}^{\prime\prime}&&\\ \vdots&\vdots&\vdots&\ddots&\\ P_{n}^{\mathsf{R}}&\big{(}P_{n}^{\mathsf{R}}\big{)}^{\prime}&\frac{1}{2!}\big{(}P_{n}^{\mathsf{R}}\big{)}^{\prime\prime}&\cdots&\frac{1}{n!}\big{(}P_{n}^{\mathsf{R}}\big{)}^{(n)}\\ \vdots&\vdots&\vdots&&\ddots\end{bmatrix}\Bigg{|}_{z\to 0}.\] As a conclusion: we get the Gauss-Borel factorization of the moment matrix \[\mathsf{U}=\big{(}\mathcal{P}^{\mathsf{L}}\big{)}^{-1}\,\mathrm{diag}\left\{C_{0}^{-1},C_{1}^{-1},\dots\right\}\big{(}(\mathcal{P}^{\mathsf{R}})^{\top}\big{)}^{-1}. \tag{1.17}\] We can see from (1.17) that the orthogonality relies on the Gauss-Borel factorization of the moment matrix. Recall that the necessary and sufficient conditions ensuring that this factorization takes place are exactly the ones we assumed at the beginning in order to define the sequences of monic polynomials (left and right ones), i.e. that (1.1) holds. 
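In the scalar case the factorization (1.17) is just the triangular factorization of a Hankel moment matrix, and the rows of the inverse of the unit lower factor are the Taylor coefficients of the monic orthogonal polynomials. A small numerical illustration (ours, not from the paper; \(N=1\), weight \(e^{-x^{2}/2}\) on \(\mathbb{R}\), with the \(1/(2\pi\mathrm{i})\) normalisation dropped):

```python
import numpy as np
from numpy.polynomial import hermite_e

def mom(k):
    # moments of w(x) = e^{-x^2/2}: m_k = (k-1)!! sqrt(2 pi) for even k, else 0
    if k % 2:
        return 0.0
    v = np.sqrt(2.0 * np.pi)
    for j in range(1, k, 2):
        v *= j
    return v

n = 6
U = np.array([[mom(i + j) for j in range(n)] for i in range(n)])  # Hankel moment matrix
C = np.linalg.cholesky(U)      # U = C C^T, C lower triangular (U is positive definite)
L = C / np.diag(C)             # unit lower factor: U = L D L^T, the scalar (1.17)
S = np.linalg.inv(L)           # rows of S = coefficients of the monic orthogonal polys

print(np.round(S[3], 8))                   # -> [0, -3, 0, 1, 0, 0]
print(hermite_e.herme2poly([0, 0, 0, 1]))  # monic He_3(x) = x^3 - 3x, for comparison
```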
### Riemann-Hilbert problem It can be seen that \[P_{n}^{\mathsf{L}}(z)Q_{0}^{\mathsf{L}}(z)=\int_{\gamma}\big{(}P_{n}^{\mathsf{L}}(z)-P_{n}^{\mathsf{L}}(x)\big{)}\frac{\omega(x)}{x-z}\,\frac{\mathrm{d}x}{2\pi\,\mathrm{i}}+\int_{\gamma}P_{n}^{\mathsf{L}}(x)\,\frac{\omega(x)}{x-z}\,\frac{\mathrm{d}x}{2\pi\,\mathrm{i}},\] and so \(P_{n-1}^{\mathsf{L},(1)}\), defined by \[P_{n}^{\mathsf{L}}(z)Q_{0}^{\mathsf{L}}(z)-Q_{n}^{\mathsf{L}}(z)=P_{n-1}^{\mathsf{L},(1)}(z), \tag{1.18}\] is, for each \(n\in\mathbb{Z}_{+}\coloneqq\left\{1,2,\dots\right\}\), a polynomial of degree at most \(n-1\), called the _first kind associated polynomial_ with respect to \(\left\{P_{n}^{\mathsf{L}}\right\}_{n\in\mathbb{N}}\) and \(\omega\). In fact, \[P_{n-1}^{\mathsf{L},(1)}(z)\coloneqq\int_{\gamma}\big{(}P_{n}^{\mathsf{L}}(z)-P_{n}^{\mathsf{L}}(x)\big{)}\frac{\omega(x)}{x-z}\,\frac{\mathrm{d}x}{2\pi\,\mathrm{i}},\qquad n\in\mathbb{Z}_{+}.\] We can summarize the identities (1.5) and (1.13) in matrix form as \[\begin{bmatrix}P_{n+1}^{\mathsf{L}}(z)&Q_{n+1}^{\mathsf{L}}(z)\\ C_{n}\,P_{n}^{\mathsf{L}}(z)&C_{n}\,Q_{n}^{\mathsf{L}}(z)\end{bmatrix}=\begin{bmatrix}z\,\mathrm{I}-\beta_{n}^{\mathsf{L}}&-C_{n}^{-1}\\ C_{n}&\mathbf{0}\end{bmatrix}\begin{bmatrix}P_{n}^{\mathsf{L}}(z)&Q_{n}^{\mathsf{L}}(z)\\ C_{n-1}\,P_{n-1}^{\mathsf{L}}(z)&C_{n-1}\,Q_{n-1}^{\mathsf{L}}(z)\end{bmatrix};\] and by (1.18) we also have that \[\begin{bmatrix}P_{n}^{\mathsf{L},(1)}(z)\\ C_{n}\,P_{n-1}^{\mathsf{L},(1)}(z)\end{bmatrix}=\begin{bmatrix}z\,\mathrm{I}-\beta_{n}^{\mathsf{L}}&-C_{n}^{-1}\\ C_{n}&\mathbf{0}\end{bmatrix}\begin{bmatrix}P_{n-1}^{\mathsf{L},(1)}(z)\\ C_{n-1}\,P_{n-2}^{\mathsf{L},(1)}(z)\end{bmatrix}.\] Taking \[Y_{n}^{\mathsf{L}}(z):=\begin{bmatrix}P_{n}^{\mathsf{L}}(z)&Q_{n}^{\mathsf{L}}(z)\\ C_{n-1}\,P_{n-1}^{\mathsf{L}}(z)&C_{n-1}\,Q_{n-1}^{\mathsf{L}}(z)\end{bmatrix}\quad\text{and}\quad T_{n}^{\mathsf{L}}(z):=\begin{bmatrix}z\,\mathrm{I}-\beta_{n}^{\mathsf{L}}&-C_{n}^{-1}\\ C_{n}&\mathbf{0}\end{bmatrix},\] for all \(n\in\mathbb{N}\), we get \[\det Y_{n}^{\mathsf{L}}(z)=\det Y_{0}^{\mathsf{L}}(z)=1\qquad\text{as}\qquad\det T_{n}^{\mathsf{L}}=1,\qquad n\in\mathbb{N}. \tag{1.19}\] In the same way, we get from (1.6) and (1.14) that \[Y_{n+1}^{\mathsf{R}}(z)=Y_{n}^{\mathsf{R}}(z)\,T_{n}^{\mathsf{R}}(z),\qquad n\in\mathbb{N},\] where \[Y_{n}^{\mathsf{R}}(z):=\begin{bmatrix}P_{n}^{\mathsf{R}}(z)&-P_{n-1}^{\mathsf{R}}(z)C_{n}\\ Q_{n}^{\mathsf{R}}(z)&-Q_{n-1}^{\mathsf{R}}(z)C_{n}\end{bmatrix}\quad\text{and}\quad T_{n}^{\mathsf{R}}(z):=\begin{bmatrix}z\,\mathrm{I}-\beta_{n}^{\mathsf{R}}&-C_{n}\\ C_{n}^{-1}&\mathbf{0}\end{bmatrix},\] where \(\beta_{n}^{\mathsf{R}}\) is defined in (1.7). Here we also have \[\det Y_{n}^{\mathsf{R}}(z)=\det Y_{0}^{\mathsf{R}}(z)=1\qquad\text{as}\qquad\det T_{n}^{\mathsf{R}}=1,\qquad n\in\mathbb{N}. \tag{1.20}\]
As a conclusion: we can establish that the matrix function \(Y_{n}^{\mathsf{L}}\) (respectively \(Y_{n}^{\mathsf{R}}\)) is, for each \(n\in\mathbb{N}\), the unique solution of a Riemann-Hilbert problem, which consists in determining a \(2N\times 2N\) complex matrix function \(G_{n}^{\mathsf{L}}\) (respectively \(G_{n}^{\mathsf{R}}\)) such that: **(RH1):**: \(G_{n}^{\mathsf{L}}\) (respectively \(G_{n}^{\mathsf{R}}\)) is analytic in \(\mathbb{C}\setminus\gamma\); **(RH2):**: it has the following asymptotic behavior near infinity, \[G_{n}^{\mathsf{L}}(z)=\big{(}\mathrm{I}+\mathrm{O}(z^{-1})\big{)}\begin{bmatrix}\mathrm{I}\,z^{n}&\mathbf{0}\\ \mathbf{0}&\mathrm{I}\,z^{-n}\end{bmatrix}\] \[\text{(respectively,}\qquad G_{n}^{\mathsf{R}}(z)=\begin{bmatrix}\mathrm{I}\,z^{n}&\mathbf{0}\\ \mathbf{0}&\mathrm{I}\,z^{-n}\end{bmatrix}\big{(}\mathrm{I}+\mathrm{O}(z^{-1})\big{)});\] **(RH3):**: it satisfies the jump condition \(\big{(}G_{n}^{\mathsf{L}}(z)\big{)}_{+}=\big{(}G_{n}^{\mathsf{L}}(z)\big{)}_{-}\begin{bmatrix}\mathrm{I}&\omega(x)\\ \mathbf{0}&\mathrm{I}\end{bmatrix}\) \[\text{(respectively,}\,\big{(}G_{n}^{\mathsf{R}}(z)\big{)}_{+}=\begin{bmatrix}\mathrm{I}&\mathbf{0}\\ \omega(x)&\mathrm{I}\end{bmatrix}\big{(}G_{n}^{\mathsf{R}}(z)\big{)}_{-}),\,x\in\gamma.\] Now, we will see that these two Riemann-Hilbert problems are related by similarity. **Theorem 1.2**.: _Let \(Y_{n}^{\mathsf{L}}\) and \(Y_{n}^{\mathsf{R}}\) be, for each \(n\in\mathbb{N}\), the unique solutions of the Riemann-Hilbert problems just defined; then_ \[\Big{(}Y_{n}^{\mathsf{L}}(z)\Big{)}^{-1}=\begin{bmatrix}\mathbf{0}&\mathrm{I}\\ -\mathrm{I}&\mathbf{0}\end{bmatrix}Y_{n}^{\mathsf{R}}(z)\begin{bmatrix}\mathbf{0}&-\mathrm{I}\\ \mathrm{I}&\mathbf{0}\end{bmatrix},\qquad n\in\mathbb{N}. \tag{1.21}\] This is an easy consequence of the Christoffel-Darboux identities, valid for all \(n\in\mathbb{N}\), \[(z-t)\sum_{k=0}^{n}P_{k}^{\mathsf{R}}(t)C_{k}P_{k}^{\mathsf{L}}(z)=P_{n}^{\mathsf{R}}(t)C_{n}P_{n+1}^{\mathsf{L}}(z)-P_{n+1}^{\mathsf{R}}(t)C_{n}P_{n}^{\mathsf{L}}(z),\] \[(z-t)\sum_{k=0}^{n}Q_{k}^{\mathsf{R}}(t)C_{k}Q_{k}^{\mathsf{L}}(z)=Q_{n}^{\mathsf{R}}(t)C_{n}Q_{n+1}^{\mathsf{L}}(z)-Q_{n+1}^{\mathsf{R}}(t)C_{n}Q_{n}^{\mathsf{L}}(z)+S_{\omega}(z)-S_{\omega}(t),\] \[(z-t)\sum_{k=0}^{n}Q_{k}^{\mathsf{R}}(t)C_{k}P_{k}^{\mathsf{L}}(z)=Q_{n}^{\mathsf{R}}(t)C_{n}P_{n+1}^{\mathsf{L}}(z)-Q_{n+1}^{\mathsf{R}}(t)C_{n}P_{n}^{\mathsf{L}}(z)+\mathrm{I},\] \[(z-t)\sum_{k=0}^{n}P_{k}^{\mathsf{R}}(t)C_{k}Q_{k}^{\mathsf{L}}(z)=P_{n}^{\mathsf{R}}(t)C_{n}Q_{n+1}^{\mathsf{L}}(z)-P_{n+1}^{\mathsf{R}}(t)C_{n}Q_{n}^{\mathsf{L}}(z)-\mathrm{I}.\] Taking \(z=t\) we arrive at the identities \[P_{n}^{\mathsf{R}}(z)C_{n}P_{n+1}^{\mathsf{L}}(z)-P_{n+1}^{\mathsf{R}}(z)C_{n}P_{n}^{\mathsf{L}}(z)=\mathbf{0},\] \[Q_{n}^{\mathsf{R}}(z)C_{n}Q_{n+1}^{\mathsf{L}}(z)-Q_{n+1}^{\mathsf{R}}(z)C_{n}Q_{n}^{\mathsf{L}}(z)=\mathbf{0},\] \[Q_{n+1}^{\mathsf{R}}(z)C_{n}P_{n}^{\mathsf{L}}(z)-Q_{n}^{\mathsf{R}}(z)C_{n}P_{n+1}^{\mathsf{L}}(z)=\mathrm{I},\] \[P_{n}^{\mathsf{R}}(z)C_{n}Q_{n+1}^{\mathsf{L}}(z)-P_{n+1}^{\mathsf{R}}(z)C_{n}Q_{n}^{\mathsf{L}}(z)=\mathrm{I}.\] All of this becomes \[\begin{bmatrix}-Q_{n-1}^{\mathsf{R}}(z)C_{n-1}&-Q_{n}^{\mathsf{R}}(z)\\ P_{n-1}^{\mathsf{R}}(z)C_{n-1}&P_{n}^{\mathsf{R}}(z)\end{bmatrix}Y_{n}^{\mathsf{L}}(z)=\mathrm{I},\qquad n\in\mathbb{N},\] and as \[\begin{bmatrix}-Q_{n-1}^{\mathsf{R}}(z)C_{n-1}&-Q_{n}^{\mathsf{R}}(z)\\ P_{n-1}^{\mathsf{R}}(z)C_{n-1}&P_{n}^{\mathsf{R}}(z)\end{bmatrix}=\begin{bmatrix}\mathbf{0}&\mathrm{I}\\ -\mathrm{I}&\mathbf{0}\end{bmatrix}Y_{n}^{\mathsf{R}}(z)\begin{bmatrix}\mathbf{0}&-\mathrm{I}\\ \mathrm{I}&\mathbf{0}\end{bmatrix},\qquad n\in\mathbb{N},\]
we get the desired result. ## 2. Fundamental matrices from Riemann-Hilbert problems Here we consider weight matrices, \(\omega\), satisfying a matrix Pearson type equation \[\phi(z)\omega^{\prime}(z)=h^{\mathsf{L}}(z)\omega(z)+\omega(z)h^{\mathsf{R}}(z),\] where \(\phi\) is a scalar polynomial of degree at most 2, and \(h^{\mathsf{L}}\), \(h^{\mathsf{R}}\) are entire matrix functions. If we take a weight function \(\omega^{\mathsf{L}}\) such that \[\phi(z)\big{(}\omega^{\mathsf{L}}\big{)}^{\prime}(z)=h^{\mathsf{L}}(z)\omega^{\mathsf{L}}(z), \tag{2.22}\] then there exists a matrix function \(\omega^{\mathsf{R}}(z)\) such that \(\omega(z)=\omega^{\mathsf{L}}(z)\omega^{\mathsf{R}}(z)\) with \[\phi(z)\big{(}\omega^{\mathsf{R}}\big{)}^{\prime}(z)=\omega^{\mathsf{R}}(z)h^{\mathsf{R}}(z). \tag{2.23}\] The reciprocal is also true. For each factorization \(\omega=\omega^{\mathsf{L}}\omega^{\mathsf{R}}\), we introduce the _constant jump fundamental matrices_, which will be instrumental in what follows, \[Z^{\mathsf{L}}_{n}(z)\coloneqq Y^{\mathsf{L}}_{n}(z)\begin{bmatrix}\omega^{\mathsf{L}}(z)&\mathbf{0}\\ \mathbf{0}&\big{(}\omega^{\mathsf{R}}(z)\big{)}^{-1}\end{bmatrix}, \tag{2.24}\] \[Z^{\mathsf{R}}_{n}(z)\coloneqq\begin{bmatrix}\omega^{\mathsf{R}}(z)&\mathbf{0}\\ \mathbf{0}&\big{(}\omega^{\mathsf{L}}(z)\big{)}^{-1}\end{bmatrix}Y^{\mathsf{R}}_{n}(z),\qquad n\in\mathbb{N}. \tag{2.25}\] Taking the inverse in (2.24) and applying (1.21), we see that \(Z^{\mathsf{R}}_{n}\) given in (2.25) admits the representation \[Z^{\mathsf{R}}_{n}(z)=\begin{bmatrix}\mathbf{0}&-\mathrm{I}\\ \mathrm{I}&\mathbf{0}\end{bmatrix}\big{(}Z^{\mathsf{L}}_{n}(z)\big{)}^{-1}\begin{bmatrix}\mathbf{0}&\mathrm{I}\\ -\mathrm{I}&\mathbf{0}\end{bmatrix},\qquad n\in\mathbb{N}. \tag{2.26}\] In parallel to the matrices \(Z^{\mathsf{L}}_{n}(z)\) and \(Z^{\mathsf{R}}_{n}(z)\), we introduce what we call _structure matrices_, given in terms of the _left_, respectively _right_, logarithmic derivatives by \[M^{\mathsf{L}}_{n}(z)\coloneqq\big{(}Z^{\mathsf{L}}_{n}\big{)}^{\prime}(z)\big{(}Z^{\mathsf{L}}_{n}(z)\big{)}^{-1},\qquad M^{\mathsf{R}}_{n}(z)\coloneqq\big{(}Z^{\mathsf{R}}_{n}(z)\big{)}^{-1}\big{(}Z^{\mathsf{R}}_{n}\big{)}^{\prime}(z). \tag{2.27}\] It is not difficult to see that \[M^{\mathsf{R}}_{n}(z)=-\begin{bmatrix}\mathbf{0}&-\mathrm{I}\\ \mathrm{I}&\mathbf{0}\end{bmatrix}M^{\mathsf{L}}_{n}(z)\begin{bmatrix}\mathbf{0}&\mathrm{I}\\ -\mathrm{I}&\mathbf{0}\end{bmatrix},\qquad n\in\mathbb{N}, \tag{2.28}\] and the following properties hold (cf. [4]): 1. The transfer matrices satisfy \[T^{\mathsf{L}}_{n}(z)Z^{\mathsf{L}}_{n}(z)=Z^{\mathsf{L}}_{n+1}(z),\qquad Z^{\mathsf{R}}_{n}(z)T^{\mathsf{R}}_{n}(z)=Z^{\mathsf{R}}_{n+1}(z),\qquad n\in\mathbb{N}.\] 2. The zero curvature formulas hold, \[\begin{bmatrix}\mathrm{I}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}\end{bmatrix}=M^{\mathsf{L}}_{n+1}(z)T^{\mathsf{L}}_{n}(z)-T^{\mathsf{L}}_{n}(z)M^{\mathsf{L}}_{n}(z),\qquad n\in\mathbb{N},\] \[\begin{bmatrix}\mathrm{I}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}\end{bmatrix}=T^{\mathsf{R}}_{n}(z)M^{\mathsf{R}}_{n+1}(z)-M^{\mathsf{R}}_{n}(z)T^{\mathsf{R}}_{n}(z),\qquad n\in\mathbb{N}.\] We see from (1.21), (2.26) and (2.28) that we only need to consider the left versions of the objects \(Y_{n}\), \(Z_{n}\) and \(M_{n}\). 
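As an aside, the factorization step (2.22)-(2.23) is a linear matrix ODE and can be explored numerically. The sketch below is ours, not from the paper: it integrates the left Pearson equation for an illustrative Hermite-type datum \(h^{\mathsf{L}}(z)=Az+B\) with \(\phi(z)=1\) (the matrices \(A\), \(B\) are our assumptions, chosen so that a closed form is available for comparison).

```python
import numpy as np
from scipy.integrate import solve_ivp

A = -2.0 * np.eye(2)              # scalar part: produces the e^{-z^2} factor
B = np.array([[0.0, 1.0],
              [0.0, 0.0]])        # nilpotent: B @ B = 0

def rhs(z, y):
    # left Pearson equation (2.22) with phi = 1: (w_L)'(z) = (A z + B) w_L(z)
    wL = y.reshape(2, 2)
    return ((A * z + B) @ wL).ravel()

sol = solve_ivp(rhs, [0.0, 1.5], np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
wL_num = sol.y[:, -1].reshape(2, 2)

# Since A = -2I commutes with B and B^2 = 0, the solution is explicit:
# w_L(z) = e^{-z^2} e^{B z} = e^{-z^2} (I + B z).
z = 1.5
wL_exact = np.exp(-z**2) * (np.eye(2) + B * z)
print(np.allclose(wL_num, wL_exact, atol=1e-8))   # True
```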
### Hermite case In this case we consider \(\phi(z)=1\). We underline that for a given regular weight matrix \(\omega(z)\) we will have many possible factorizations \(\omega(z)=\omega^{\mathsf{L}}(z)\omega^{\mathsf{R}}(z)\). Indeed, if we define an equivalence relation \(\left(\omega^{\mathsf{L}},\omega^{\mathsf{R}}\right)\sim\left(\widetilde{\omega}^{\mathsf{L}},\widetilde{\omega}^{\mathsf{R}}\right)\) if, and only if, \(\omega^{\mathsf{L}}\omega^{\mathsf{R}}=\widetilde{\omega}^{\mathsf{L}}\widetilde{\omega}^{\mathsf{R}}\), then each weight matrix \(\omega\) can be thought of as an equivalence class, described by the orbit \[\left\{\left(\omega^{\mathsf{L}}\varphi,\varphi^{-1}\omega^{\mathsf{R}}\right),\ \varphi\ \text{a nonsingular matrix of entire functions}\right\}.\] The constant jump fundamental matrices \(Z_{n}^{\mathsf{L}}(z)\) and \(Z_{n}^{\mathsf{R}}(z)\) are, for each \(n\in\mathbb{N}\), characterized by the following properties: 1. They are holomorphic on \(\mathbb{C}\setminus\gamma\). 2. They have the following asymptotic behaviors \[Z_{n}^{\mathsf{L}}(z)=\big{(}\mathrm{I}+\mathrm{O}(z^{-1})\big{)}\begin{bmatrix}z^{n}\omega^{\mathsf{L}}(z)&\mathbf{0}\\ \mathbf{0}&z^{-n}\big{(}\omega^{\mathsf{R}}(z)\big{)}^{-1}\end{bmatrix},\] \[Z_{n}^{\mathsf{R}}(z)=\begin{bmatrix}z^{n}\omega^{\mathsf{R}}(z)&\mathbf{0}\\ \mathbf{0}&\big{(}\omega^{\mathsf{L}}(z)\big{)}^{-1}z^{-n}\end{bmatrix}\big{(}\mathrm{I}+\mathrm{O}(z^{-1})\big{)},\qquad\text{for }z\to\infty.\] 3. They present the following _constant jump condition_ on \(\gamma\) \[\big{(}Z_{n}^{\mathsf{L}}(z)\big{)}_{+}=\big{(}Z_{n}^{\mathsf{L}}(z)\big{)}_{-}\begin{bmatrix}\mathrm{I}&\mathrm{I}\\ \mathbf{0}&\mathrm{I}\end{bmatrix},\qquad\big{(}Z_{n}^{\mathsf{R}}(z)\big{)}_{+}=\begin{bmatrix}\mathrm{I}&\mathbf{0}\\ \mathrm{I}&\mathrm{I}\end{bmatrix}\big{(}Z_{n}^{\mathsf{R}}(z)\big{)}_{-},\] for all \(z\in\gamma\), the support of the weight matrix. In [4] we proved that the structure matrices \(M_{n}^{\mathsf{L}}(z)\) and \(M_{n}^{\mathsf{R}}(z)\) just defined, cf. (2.27), are, for each \(n\in\mathbb{N}\), matrices of entire functions in the complex plane. ### Laguerre case Here we consider \(\phi(z)=z\), and \(\omega\) a regular Laguerre type weight matrix, i.e. \(\omega=\begin{bmatrix}\omega^{(1,1)}&\cdots&\omega^{(1,N)}\\ \vdots&\ddots&\vdots\\ \omega^{(N,1)}&\cdots&\omega^{(N,N)}\end{bmatrix}\), with \[\omega^{(j,k)}(z)=\sum_{m\in I_{j,k}}A_{m}(z)z^{\alpha_{m}}\log^{p_{m}}z,\qquad z\in(0,+\infty),\] where \(I_{j,k}\) denotes a finite set of indexes, \(\operatorname{Re}\left(\alpha_{m}\right)>-1\), \(p_{m}\in\mathbb{N}\), and \(A_{m}(z)\) is Hölder continuous and bounded. In [10] different examples of Laguerre weights for matrix orthogonal polynomials on the real line are studied. In order to state a Riemann-Hilbert problem for the Laguerre type weights we must add to the one presented in Section 1.3, i.e. to (RH1)-(RH3), the condition **(RH4):**: \(Y_{n}^{\mathsf{L}}(z)=\begin{bmatrix}\mathrm{O}(1)&s_{1}^{\mathsf{L}}(z)\\ \mathrm{O}(1)&s_{2}^{\mathsf{L}}(z)\end{bmatrix}\) and \(Y_{n}^{\mathsf{R}}(z)=\begin{bmatrix}\mathrm{O}(1)&\mathrm{O}(1)\\ s_{1}^{\mathsf{R}}(z)&s_{2}^{\mathsf{R}}(z)\end{bmatrix}\), as \(z\to 0\), with \(\lim_{z\to 0}zs_{j}^{\mathsf{L}}(z)=\mathbf{0}\), \(\lim_{z\to 0}zs_{j}^{\mathsf{R}}(z)=\mathbf{0}\), \(j=1,2\), where the \(\mathrm{O}\) conditions are understood entrywise. 
The solutions of (2.22) and (2.23) are of the type \[\omega^{\mathsf{L}}(z)=H^{\mathsf{L}}(z)z^{A^{\mathsf{L}}}\omega_{0}^{\mathsf{L}},\qquad\omega^{\mathsf{R}}(z)=\omega_{0}^{\mathsf{R}}z^{A^{\mathsf{R}}}H^{\mathsf{R}}(z),\] where \(H^{\mathsf{L}}\), \(H^{\mathsf{R}}\) are entire and nonsingular matrix functions such that \(H^{\mathsf{L}}(0)=H^{\mathsf{R}}(0)=\mathrm{I}\), and \(\omega_{0}^{\mathsf{L}}\), \(\omega_{0}^{\mathsf{R}}\) are constant nonsingular matrices. The constant jump fundamental matrices introduced in (2.24) and (2.25), i.e. \(Z_{n}^{\mathsf{L}}\) and \(Z_{n}^{\mathsf{R}}\), satisfy, for each \(n\in\mathbb{N}\), the following properties: 1. They are holomorphic on \(\mathbb{C}\setminus\gamma\). 2. They present the following constant jump condition on \(\gamma\) \[\big{(}Z_{n}^{\mathsf{L}}(z)\big{)}_{+}=\big{(}Z_{n}^{\mathsf{L}}(z)\big{)}_{-}\begin{bmatrix}(\omega_{0}^{\mathsf{L}})^{-1}e^{-2\pi\mathrm{i}A^{\mathsf{L}}}\omega_{0}^{\mathsf{L}}&(\omega_{0}^{\mathsf{L}})^{-1}e^{-2\pi\mathrm{i}A^{\mathsf{L}}}\omega_{0}^{\mathsf{L}}\\ \mathbf{0}&\omega_{0}^{\mathsf{R}}e^{2\pi\mathrm{i}A^{\mathsf{R}}}(\omega_{0}^{\mathsf{R}})^{-1}\end{bmatrix},\] \[\big{(}Z_{n}^{\mathsf{R}}(z)\big{)}_{+}=\begin{bmatrix}\omega_{0}^{\mathsf{R}}e^{-2\pi\mathrm{i}A^{\mathsf{R}}}(\omega_{0}^{\mathsf{R}})^{-1}&\mathbf{0}\\ \omega_{0}^{\mathsf{R}}e^{-2\pi\mathrm{i}A^{\mathsf{R}}}(\omega_{0}^{\mathsf{R}})^{-1}&(\omega_{0}^{\mathsf{L}})^{-1}e^{2\pi\mathrm{i}A^{\mathsf{L}}}\omega_{0}^{\mathsf{L}}\end{bmatrix}\big{(}Z_{n}^{\mathsf{R}}(z)\big{)}_{-},\] for all \(z\in\gamma\), where \(A^{\mathsf{L}}=h^{\mathsf{L}}(0)\), \(A^{\mathsf{R}}=h^{\mathsf{R}}(0)\). In [5] we discussed the holomorphic properties of the structure matrices introduced in (2.27). We proved that, in the Laguerre case, the structure matrices \(M_{n}^{\mathsf{L}}(z)\) and \(M_{n}^{\mathsf{R}}(z)\) are, for each \(n\in\mathbb{N}\), meromorphic on \(\mathbb{C}\), with a singularity located at \(z=0\), which is either a removable singularity or a simple pole. ### Jacobi case Here we follow [6], considering \(\phi(z)=z(1-z)\) and \(\omega\) a regular Jacobi type weight matrix, i.e. \(\omega=\begin{bmatrix}\omega^{(1,1)}&\cdots&\omega^{(1,N)}\\ \vdots&\ddots&\vdots\\ \omega^{(N,1)}&\cdots&\omega^{(N,N)}\end{bmatrix}\), supported on \(\gamma\), with \[\omega^{(j,k)}(z)=\sum_{m\in I_{j,k}}\varphi_{m}(z)z^{\alpha_{m}}(1-z)^{\beta_{m}}\log^{p_{m}}(z)\log^{q_{m}}(1-z),\qquad z\in\gamma,\] where \(I_{j,k}\) denotes a finite set of indexes, \(\mathrm{Re}(\alpha_{m})\), \(\mathrm{Re}(\beta_{m})>-1\), \(p_{m}\), \(q_{m}\in\mathbb{N}\), and \(\varphi_{m}\) is Hölder continuous, bounded and non-vanishing on \(\gamma\). We assume that the determinations of the logarithms and the powers are taken along \(\gamma\). We will request, in the development of the theory, that the functions \(\varphi_{m}\) have a holomorphic extension to the whole complex plane. This case has been studied in [6] and includes the non-scalar examples of Jacobi type weights given in the literature [1, 7, 8, 9, 15, 18]. In order to state a Riemann-Hilbert problem for the Jacobi type weights we must add to the one presented in Section 1.3, i.e.
to (RH1)-(RH3), the conditions * **(RH4):** \(Y_{n}^{\mathsf{L}}(z)=\begin{bmatrix}\mathrm{O}(1)&s_{1}^{\mathsf{L}}(z)\\ \mathrm{O}(1)&s_{2}^{\mathsf{L}}(z)\end{bmatrix}\), \(Y_{n}^{\mathsf{R}}(z)=\begin{bmatrix}\mathrm{O}(1)&\mathrm{O}(1)\\ s_{1}^{\mathsf{R}}(z)&s_{2}^{\mathsf{R}}(z)\end{bmatrix}\), as \(z\to 0\), with \(\lim_{z\to 0}zs_{j}^{\mathsf{L}}(z)=\mathbf{0}\) and \(\lim_{z\to 0}zs_{j}^{\mathsf{R}}(z)=\mathbf{0}\), \(j=1,2\). * **(RH5):** \(Y_{n}^{\mathsf{L}}(z)=\begin{bmatrix}\mathrm{O}(1)&r_{1}^{\mathsf{L}}(z)\\ \mathrm{O}(1)&r_{2}^{\mathsf{L}}(z)\end{bmatrix}\), \(Y_{n}^{\mathsf{R}}(z)=\begin{bmatrix}\mathrm{O}(1)&\mathrm{O}(1)\\ r_{1}^{\mathsf{R}}(z)&r_{2}^{\mathsf{R}}(z)\end{bmatrix}\), as \(z\to 1\), with \(\lim_{z\to 1}(1-z)r_{j}^{\mathsf{L}}(z)=\mathbf{0}\) and \(\lim_{z\to 1}(1-z)r_{j}^{\mathsf{R}}(z)=\mathbf{0}\), \(j=1,2\). The \(s_{i}^{\mathsf{L}},s_{i}^{\mathsf{R}}\) (respectively, \(r_{i}^{\mathsf{L}}\) and \(r_{i}^{\mathsf{R}}\)) could be replaced by \(\mathrm{o}(1/z)\), as \(z\to 0\) (respectively, \(\mathrm{o}(1/(1-z))\), as \(z\to 1\)). The \(\mathrm{O}\) and \(\mathrm{o}\) conditions are understood entry-wise. The solutions of (2.22) and (2.23) will possibly have branch points at \(0\) and \(1\), cf. [21]. This means that there exist constant matrices, \(C_{j}^{\mathsf{L}}\), \(C_{j}^{\mathsf{R}}\), with \(j=0,1\), such that \[(\omega^{\mathsf{L}}(z))_{-}=(\omega^{\mathsf{L}}(z))_{+}C_{0}^{\mathsf{L}},\qquad(\omega^{\mathsf{R}}(z))_{-}=C_{0}^{\mathsf{R}}(\omega^{\mathsf{R}}(z))_{+},\qquad\text{in }(0,1),\] \[(\omega^{\mathsf{L}}(z))_{-}=(\omega^{\mathsf{L}}(z))_{+}C_{1}^{\mathsf{L}},\qquad(\omega^{\mathsf{R}}(z))_{-}=C_{1}^{\mathsf{R}}(\omega^{\mathsf{R}}(z))_{+},\qquad\text{in }(1,+\infty).\] The constant jump fundamental matrices introduced in (2.24) and (2.25), i.e. \(Z_{n}^{\mathsf{L}}\) and \(Z_{n}^{\mathsf{R}}\), satisfy, for each \(n\in\mathbb{N}\), the following properties: * They are holomorphic on \(\mathbb{C}\setminus[0,+\infty)\). * They present the following _constant jump condition_ on \((0,1)\) \[\big{(}Z_{n}^{\mathsf{L}}(z)\big{)}_{+}=\big{(}Z_{n}^{\mathsf{L}}(z)\big{)}_{-}\begin{bmatrix}C_{0}^{\mathsf{L}}&C_{0}^{\mathsf{L}}\\ \mathbf{0}&\mathrm{I}\end{bmatrix},\qquad\big{(}Z_{n}^{\mathsf{R}}(z)\big{)}_{+}=\begin{bmatrix}\mathrm{I}&\mathbf{0}\\ C_{0}^{\mathsf{R}}&C_{0}^{\mathsf{R}}\end{bmatrix}\big{(}Z_{n}^{\mathsf{R}}(z)\big{)}_{-}.\] * They present the following _constant jump condition_ on \((1,+\infty)\) \[\big{(}Z_{n}^{\mathsf{L}}(z)\big{)}_{+}=\big{(}Z_{n}^{\mathsf{L}}(z)\big{)}_{-}\begin{bmatrix}C_{1}^{\mathsf{L}}&\mathbf{0}\\ \mathbf{0}&C_{1}^{\mathsf{R}}\end{bmatrix},\qquad\big{(}Z_{n}^{\mathsf{R}}(z)\big{)}_{+}=\begin{bmatrix}C_{1}^{\mathsf{R}}&\mathbf{0}\\ \mathbf{0}&C_{1}^{\mathsf{L}}\end{bmatrix}\big{(}Z_{n}^{\mathsf{R}}(z)\big{)}_{-}.\] In [6] we discussed the holomorphic properties of the structure matrices introduced in (2.27). We proved that, in the Jacobi case, the structure matrices \(M_{n}^{\mathsf{L}}(z)\) and \(M_{n}^{\mathsf{R}}(z)\) are, for each \(n\in\mathbb{N}\), meromorphic on \(\mathbb{C}\), with singularities located at \(z=0\) and \(z=1\), which are either removable singularities or simple poles. ### First order equations The analytic properties of the matrices \(M_{n}^{\mathsf{L}}\) just studied open the door to the differential properties of \(\big{\{}Z_{n}^{\mathsf{L}}\big{\}}_{n\in\mathbb{N}}\) defined in (2.24), as well as of \(\left\{Y_{n}^{\mathsf{L}}\right\}_{n\in\mathbb{N}}\) (the same can be done in the right case). In fact, from the analytic properties of \(M_{n}^{\mathsf{L}}\) we get that \[\phi(z)\big{(}Z_{n}^{\mathsf{L}}(z)\big{)}^{\prime}=\widehat{M}_{n}^{\mathsf{L}}(z)\,Z_{n}^{\mathsf{L}}(z),\qquad n\in\mathbb{N}, \tag{2.29}\]
where \(\left\{\widehat{M}_{n}^{\mathsf{L}}\right\}_{n\in\mathbb{N}}\) is a sequence of matrices of entire functions defined by \[\widehat{M}_{n}^{\mathsf{L}}(z)\coloneqq\phi(z)\,M_{n}^{\mathsf{L}}(z),\qquad n\in\mathbb{N}. \tag{2.30}\] From (2.26), and taking into account that \[\Big{(}\big{(}Z_{n}^{\mathsf{L}}(z)\big{)}^{-1}\Big{)}^{\prime}=-\big{(}Z_{n}^{\mathsf{L}}(z)\big{)}^{-1}\big{(}Z_{n}^{\mathsf{L}}(z)\big{)}^{\prime}\big{(}Z_{n}^{\mathsf{L}}(z)\big{)}^{-1},\] we arrive at \[\phi(z)\big{(}Z_{n}^{\mathsf{R}}(z)\big{)}^{\prime}=Z_{n}^{\mathsf{R}}(z)\,\widehat{M}_{n}^{\mathsf{R}}(z),\qquad n\in\mathbb{N}, \tag{2.31}\] where \(\widehat{M}_{n}^{\mathsf{R}}(z)\coloneqq\phi(z)\,M_{n}^{\mathsf{R}}(z)\). Hence, by (2.28), the entries of the matrices \[\widehat{M}_{n}^{\mathsf{L}}(z)=\begin{bmatrix}L_{n}^{1,1}&L_{n}^{1,2}\\ L_{n}^{2,1}&L_{n}^{2,2}\end{bmatrix}\qquad\text{and}\qquad\widehat{M}_{n}^{\mathsf{R}}(z)=\begin{bmatrix}R_{n}^{1,1}&R_{n}^{1,2}\\ R_{n}^{2,1}&R_{n}^{2,2}\end{bmatrix}\] are related by \[R_{n}^{2,2}=-L_{n}^{1,1},\qquad R_{n}^{2,1}=L_{n}^{1,2},\qquad R_{n}^{1,2}=L_{n}^{2,1},\qquad R_{n}^{1,1}=-L_{n}^{2,2},\qquad n\in\mathbb{N}.\] Now, from (2.29) and (2.31) we get the first order structure relations for the sequences \(\left\{P_{n}^{\mathsf{L}}\right\}_{n\in\mathbb{N}}\), \(\left\{Q_{n}^{\mathsf{L}}\right\}_{n\in\mathbb{N}}\), \(\left\{P_{n}^{\mathsf{R}}\right\}_{n\in\mathbb{N}}\), and \(\left\{Q_{n}^{\mathsf{R}}\right\}_{n\in\mathbb{N}}\) \[\phi(z)\big{(}P_{n}^{\mathsf{L}}(z)\big{)}^{\prime}+P_{n}^{\mathsf{L}}(z)h^{\mathsf{L}}(z)=L_{n}^{1,1}(z)P_{n}^{\mathsf{L}}(z)-L_{n}^{1,2}(z)C_{n-1}P_{n-1}^{\mathsf{L}}(z),\] \[\phi(z)\big{(}Q_{n}^{\mathsf{L}}(z)\big{)}^{\prime}-Q_{n}^{\mathsf{L}}(z)h^{\mathsf{R}}(z)=L_{n}^{1,1}(z)Q_{n}^{\mathsf{L}}(z)-L_{n}^{1,2}(z)C_{n-1}Q_{n-1}^{\mathsf{L}}(z),\] \[\phi(z)\big{(}P_{n}^{\mathsf{R}}(z)\big{)}^{\prime}+h^{\mathsf{R}}(z)P_{n}^{\mathsf{R}}(z)=-P_{n}^{\mathsf{R}}(z)L_{n}^{2,2}(z)-P_{n-1}^{\mathsf{R}}(z)C_{n-1}L_{n}^{1,2}(z),\] \[\phi(z)\big{(}Q_{n}^{\mathsf{R}}(z)\big{)}^{\prime}-h^{\mathsf{L}}(z)Q_{n}^{\mathsf{R}}(z)=-Q_{n}^{\mathsf{R}}(z)L_{n}^{2,2}(z)-Q_{n-1}^{\mathsf{R}}(z)C_{n-1}L_{n}^{1,2}(z).\]
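In the scalar Hermite case (\(N=1\), \(\phi(z)=1\), \(h^{\mathsf{L}}(z)=-2z\), \(h^{\mathsf{R}}=0\), so that \(\omega^{\mathsf{L}}=e^{-z^{2}}\), \(\omega^{\mathsf{R}}=1\)), the first of these relations collapses to the classical ladder identities for the monic Hermite polynomials, \(H_{n}^{\prime}=2\gamma_{n}H_{n-1}\) and, combining with the recurrence, \(H_{n}^{\prime}-2zH_{n}=-2H_{n+1}\). A minimal numerical check (ours, not from the paper; \(\gamma_{n}=n/2\), as recalled in Section 3 below):

```python
import numpy as np
from numpy.polynomial import hermite, polynomial as P

def H(n):                         # monic Hermite, orthogonal w.r.t. e^{-x^2} on R
    c = hermite.herm2poly([0] * n + [1])
    return c / c[-1]

n = 6
gamma_n = n / 2

# Ladder hidden in the first structure relation: H_n' = 2 gamma_n H_{n-1}
print(np.allclose(P.polyder(H(n)), 2 * gamma_n * H(n - 1)))       # True

# Equivalent form with h_L(z) = -2z on the left-hand side:
lhs = P.polysub(P.polyder(H(n)), P.polymul([0.0, 2.0], H(n)))     # H_n' - 2z H_n
print(np.allclose(lhs, -2 * H(n + 1)))                            # True
```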
Next we give the representation of the matrix \(M_{n}^{\mathsf{L}}\) in each of the cases we have just studied, with \(h^{\mathsf{L}}(z)=A^{\mathsf{L}}z+B^{\mathsf{L}}\) and \(h^{\mathsf{R}}(z)=A^{\mathsf{R}}z+B^{\mathsf{R}}\). For example, in [4] we get for the matrix \(M_{n}^{\mathsf{L}}\) in the Hermite case the representation \(M_{n}^{\mathsf{L}}(z)=\mathscr{A}^{\mathsf{L}}z+\mathscr{K}_{n}^{\mathsf{L}}\), with \[\mathscr{A}^{\mathsf{L}}=\begin{bmatrix}A^{\mathsf{L}}&0_{N}\\ 0_{N}&-A^{\mathsf{R}}\end{bmatrix},\qquad\mathscr{K}_{n}^{\mathsf{L}}=\begin{bmatrix}B^{\mathsf{L}}+\big{[}p_{\mathsf{L},n}^{1},A^{\mathsf{L}}\big{]}&C_{n}^{-1}A^{\mathsf{R}}+A^{\mathsf{L}}C_{n}^{-1}\\ -C_{n-1}A^{\mathsf{L}}-A^{\mathsf{R}}C_{n-1}&-B^{\mathsf{R}}-\big{[}q_{\mathsf{R},n-1}^{1},A^{\mathsf{R}}\big{]}\end{bmatrix},\qquad n\in\mathbb{N}.\] In the Laguerre case, following [5], the matrix \(\widehat{M}_{n}^{\mathsf{L}}\) defined in (2.30) is given by \[\widehat{M}_{n}^{\mathsf{L}}(z)=\begin{bmatrix}A^{\mathsf{L}}z+[p_{\mathsf{L},n}^{1},A^{\mathsf{L}}]+n\mathrm{I}_{N}+B^{\mathsf{L}}&A^{\mathsf{L}}C_{n}^{-1}+C_{n}^{-1}A^{\mathsf{R}}\\ -C_{n-1}A^{\mathsf{L}}-A^{\mathsf{R}}C_{n-1}&-A^{\mathsf{R}}z+[p_{\mathsf{R},n}^{1},A^{\mathsf{R}}]-n\mathrm{I}_{N}-B^{\mathsf{R}}\end{bmatrix},\qquad n\in\mathbb{N}.\] For the Jacobi case we get in [6], for all \(n\in\mathbb{N}\), that the matrix \(\widehat{M}_{n}^{\mathsf{L}}\) defined by (2.30) is \[\widehat{M}_{n}^{\mathsf{L}}(z)=\begin{bmatrix}\big{(}A^{\mathsf{L}}-n\mathrm{I}_{N}\big{)}z+[p_{\mathsf{L},n}^{1},A^{\mathsf{L}}]+p_{\mathsf{L},n}^{1}+n\mathrm{I}_{N}+B^{\mathsf{L}}&A^{\mathsf{L}}C_{n}^{-1}+C_{n}^{-1}A^{\mathsf{R}}-(2n+1)C_{n-1}^{-1}\\ -C_{n-1}A^{\mathsf{L}}-A^{\mathsf{R}}C_{n-1}+(2n-1)C_{n-1}&\big{(}n\mathrm{I}_{N}-A^{\mathsf{R}}\big{)}z+[p_{\mathsf{R},n}^{1},A^{\mathsf{R}}]-p_{\mathsf{R},n}^{1}-n\mathrm{I}_{N}-B^{\mathsf{R}}\end{bmatrix}.\] ## 3. Second order differential relations Taking the derivative of (2.29) we arrive at \[\phi^{\prime}(z)\big{(}Z_{n}^{\mathsf{L}}(z)\big{)}^{\prime}\big{(}Z_{n}^{\mathsf{L}}(z)\big{)}^{-1}+\phi(z)\big{(}Z_{n}^{\mathsf{L}}(z)\big{)}^{\prime\prime}\big{(}Z_{n}^{\mathsf{L}}(z)\big{)}^{-1}-\phi(z)\big{(}Z_{n}^{\mathsf{L}}(z)\big{)}^{\prime}\big{(}Z_{n}^{\mathsf{L}}(z)\big{)}^{-1}\big{(}Z_{n}^{\mathsf{L}}(z)\big{)}^{\prime}\big{(}Z_{n}^{\mathsf{L}}(z)\big{)}^{-1}=\big{(}\widehat{M}_{n}^{\mathsf{L}}(z)\big{)}^{\prime},\] and using (2.29) again we get \[\phi(z)\big{(}Z_{n}^{\mathsf{L}}(z)\big{)}^{\prime\prime}=\left\{\big{(}\widehat{M}_{n}^{\mathsf{L}}(z)\big{)}^{\prime}-\frac{\phi^{\prime}(z)}{\phi(z)}\widehat{M}_{n}^{\mathsf{L}}(z)+\frac{\big{(}\widehat{M}_{n}^{\mathsf{L}}(z)\big{)}^{2}}{\phi(z)}\right\}Z_{n}^{\mathsf{L}}(z).\] From (2.22) and (2.23) we have \[\phi(z)\big{(}\omega^{\mathsf{L}}(z)\big{)}^{\prime\prime}\big{(}\omega^{\mathsf{L}}(z)\big{)}^{-1}=\big{(}h^{\mathsf{L}}(z)\big{)}^{\prime}-\frac{\phi^{\prime}(z)}{\phi(z)}h^{\mathsf{L}}(z)+\frac{\big{(}h^{\mathsf{L}}(z)\big{)}^{2}}{\phi(z)},\] \[\phi(z)\Big{(}\big{(}\omega^{\mathsf{R}}(z)\big{)}^{-1}\Big{)}^{\prime\prime}\omega^{\mathsf{R}}(z)=\frac{\big{(}h^{\mathsf{R}}(z)\big{)}^{2}}{\phi(z)}+\frac{\phi^{\prime}(z)}{\phi(z)}h^{\mathsf{R}}(z)-\big{(}h^{\mathsf{R}}(z)\big{)}^{\prime}.\] Now, since \[\phi(z)\big{(}Z_{n}^{\mathsf{L}}\big{)}^{\prime\prime}\big{(}Z_{n}^{\mathsf{L}}\big{)}^{-1}=\phi(z)\big{(}Y_{n}^{\mathsf{L}}\big{)}^{\prime\prime}\big{(}Y_{n}^{\mathsf{L}}\big{)}^{-1}+\big{(}Y_{n}^{\mathsf{L}}\big{)}^{\prime}\begin{bmatrix}2h^{\mathsf{L}}&0_{N}\\ 0_{N}&-2h^{\mathsf{R}}\end{bmatrix}\big{(}Y_{n}^{\mathsf{L}}\big{)}^{-1}+Y_{n}^{\mathsf{L}}\begin{bmatrix}\phi(z)\big{(}\omega^{\mathsf{L}}\big{)}^{\prime\prime}\big{(}\omega^{\mathsf{L}}\big{)}^{-1}&\mathbf{0}\\ \mathbf{0}&\phi(z)\big{(}(\omega^{\mathsf{R}})^{-1}\big{)}^{\prime\prime}\omega^{\mathsf{R}}\end{bmatrix}\big{(}Y_{n}^{\mathsf{L}}\big{)}^{-1},\] we finally get \[\phi(z)\big{(}Y_{n}^{\mathsf{L}}\big{)}^{\prime\prime}+\big{(}Y_{n}^{\mathsf{L}}\big{)}^{\prime}\begin{bmatrix}2h^{\mathsf{L}}+\phi^{\prime}(z)\mathrm{I}&\mathbf{0}\\ \mathbf{0}&-2h^{\mathsf{R}}+\phi^{\prime}(z)\mathrm{I}\end{bmatrix}+Y_{n}^{\mathsf{L}}(z)\begin{bmatrix}\mathcal{N}(h^{\mathsf{L}})&\mathbf{0}\\ \mathbf{0}&\mathcal{N}(-h^{\mathsf{R}})\end{bmatrix}=\mathcal{N}(\widehat{M}_{n}^{\mathsf{L}})Y_{n}^{\mathsf{L}}, \tag{3.32}\]
The same procedure leads us to \[\phi(z)\big{(}Y_{n}^{\mathsf{R}}\big{)}^{\prime\prime}+\begin{bmatrix}2h^{\mathsf{R}}+\phi^{\prime}(z)\mathrm{I}&\mathbf{0}\\ \mathbf{0}&-2h^{\mathsf{L}}+\phi^{\prime}(z)\mathrm{I}\end{bmatrix}\big{(}Y_{n}^{\mathsf{R}}\big{)}^{\prime}+\begin{bmatrix}\mathcal{N}(h^{\mathsf{R}})&\mathbf{0}\\ \mathbf{0}&\mathcal{N}(-h^{\mathsf{L}})\end{bmatrix}Y_{n}^{\mathsf{R}}(z)=Y_{n}^{\mathsf{R}}\,\mathcal{N}(\widehat{M}_{n}^{\mathsf{R}}), \tag{3.33}\] where \(\mathcal{N}(F(z))=F^{\prime}(z)+\dfrac{F^{2}(z)}{\phi(z)}\). We can see that (3.32) and (3.33) enclose second order differential relations for \(P_{n}^{\mathsf{L}}\), \(P_{n}^{\mathsf{R}}\), \(Q_{n}^{\mathsf{L}}\) and \(Q_{n}^{\mathsf{R}}\) that we have explicitly determined in [4], [5] and [6] for the Hermite, Laguerre and Jacobi cases, respectively. We can recover the scalar classical cases of Hermite, Laguerre and Jacobi from the study we have presented for the matrix cases. ### Hermite case For example, for the scalar Hermite case we get that \[H_{n}^{\prime\prime}(z)-2zH_{n}^{\prime}(z)=-4\gamma_{n}H_{n}(z),\] \[Q_{n}^{\prime\prime}(z)+2zQ_{n}^{\prime}(z)=-(4\gamma_{n}+2)Q_{n}(z),\qquad n\in\mathbb{N},\] where \(\big{\{}H_{n}\big{\}}_{n\in\mathbb{N}}\) is the sequence of monic polynomials orthogonal with respect to the weight \(w(x)=e^{-x^{2}}\) on \(\mathbb{R}\), satisfying the three term recurrence relation \[xH_{n}(x)=H_{n+1}(x)+\gamma_{n}H_{n-1}(x),\qquad n\in\mathbb{N},\] with \(H_{-1}(x)=0\), \(H_{0}(x)=1\), \(\gamma_{n}=\frac{n}{2}\), and \(\big{\{}Q_{n}\big{\}}_{n\in\mathbb{N}}\) is the sequence of second kind functions associated with \(\big{\{}H_{n}\big{\}}_{n\in\mathbb{N}}\) and \(w\). ### Berezanskii Laguerre case For the scalar monic Laguerre polynomials, \(\big{\{}L_{n}^{\alpha}\big{\}}_{n\in\mathbb{N}}\), orthogonal with respect to the weight \(w^{\alpha}(x)=x^{\alpha}e^{-x}\) on \((0,+\infty)\), with \(\alpha\in(-1,+\infty)\), and the associated second kind functions, \(\big{\{}Q_{n}^{\alpha}\big{\}}_{n\in\mathbb{N}}\), we obtained in [5] that \[z\big{(}L_{n}^{\alpha}\big{)}^{\prime\prime}(z)-(z-\alpha-1)\big{(}L_{n}^{\alpha}\big{)}^{\prime}(z)=-n\,L_{n}^{\alpha}(z),\] \[z\big{(}Q_{n}^{\alpha}\big{)}^{\prime\prime}(z)+(z-\alpha+1)\big{(}Q_{n}^{\alpha}\big{)}^{\prime}(z)=-(n+1)\,Q_{n}^{\alpha}(z),\qquad n\in\mathbb{N}.\] Now, considering the Berezanskii example of monic matrix orthogonal polynomials, \(\big{\{}\mathbb{P}_{n}\big{\}}_{n\in\mathbb{N}}\), given in (1.8), with \(\omega^{1}(x)=w^{\alpha}(x)\) and \(\omega^{2}(x)=w^{\beta}(x)\), we get that \[z\mathbb{P}_{n}^{\prime\prime}(z)-\mathbb{P}_{n}^{\prime}(z)\Psi_{1}(z)=-n\,\mathbb{P}_{n}(z),\] \[z\mathbb{Q}_{n}^{\prime\prime}(z)+\mathbb{Q}_{n}^{\prime}(z)\Psi_{2}(z)=-(n+1)\,\mathbb{Q}_{n}(z),\qquad n\in\mathbb{N},\] where \[\Psi_{1}(z)=\frac{1}{2}\begin{bmatrix}-\alpha-\beta+2(z-1)&\beta-\alpha\\ \beta-\alpha&-\alpha-\beta+2(z-1)\end{bmatrix},\] \[\Psi_{2}(z)=\frac{1}{2}\begin{bmatrix}-\alpha-\beta+2(z+1)&\beta-\alpha\\ \beta-\alpha&-\alpha-\beta+2(z+1)\end{bmatrix}.\] ### Berezanskii Jacobi case Let us consider the weight \(W^{\alpha,\beta}(z)=z^{\alpha}(1-z)^{\beta}\), on \([0,1]\), with \(\alpha\), \(\beta\) scalars in \((-1,\infty)\). 
Then the scalar second order equations for \(\left\{p_{n}\right\}_{n\in\mathbb{N}}\) and \(\left\{q_{n}\right\}_{n\in\mathbb{N}}\) (cf. for example [19]) are given by \[z(1-z)p_{n}^{\prime\prime}(z)+\big{(}1+\alpha-(\alpha+\beta+2)z\big{)}p_{n}^{\prime}(z)+n(\alpha+\beta+n+1)p_{n}(z)=0,\] \[z(1-z)q_{n}^{\prime\prime}(z)+\big{(}1-\alpha+(\alpha+\beta-2)z\big{)}q_{n}^{\prime}(z)+(n+1)(\alpha+\beta+n)q_{n}(z)=0.\] Constructing the Berezanskii monic matrix orthogonal polynomials, \(\left\{\mathbb{P}_{n}\right\}_{n\in\mathbb{N}}\), given in (1.8), with \(\omega^{1}(x)=W^{\alpha,\beta}(x)\) and \(\omega^{2}(x)=W^{\beta,\alpha}(x)\), we get that \[z(1-z)\mathbb{P}_{n}^{\prime\prime}(z)+\mathbb{P}_{n}^{\prime}(z)\Psi_{1}(z)+n(\alpha+\beta+n+1)\mathbb{P}_{n}(z)=\mathbf{0},\] \[z(1-z)\mathbb{Q}_{n}^{\prime\prime}(z)+\mathbb{Q}_{n}^{\prime}(z)\Psi_{2}(z)+(n+1)(\alpha+\beta+n)\mathbb{Q}_{n}(z)=\mathbf{0},\qquad n\in\mathbb{N},\] where \[\Psi_{1}(z)=\frac{1}{2}\begin{bmatrix}2+\alpha+\beta-2(\alpha+\beta+2)z&\alpha-\beta\\ \alpha-\beta&2+\alpha+\beta-2(\alpha+\beta+2)z\end{bmatrix},\] \[\Psi_{2}(z)=\frac{1}{2}\begin{bmatrix}(2z-1)(\alpha+\beta-2)&\beta-\alpha\\ \beta-\alpha&(2z-1)(\alpha+\beta-2)\end{bmatrix}.\] ## 4. Discrete Painlevé type matrix equations We now discuss the case of a weight \(\omega\) satisfying a generalized matrix Pearson equation \[\omega^{\prime}(z)=(\lambda+\mu z+\nu z^{2})\,\omega(z),\qquad\text{with}\qquad\lambda,\,\mu,\,\nu\in\mathbb{C}^{N\times N}.\] The structure matrix, cf. (2.27), is given by \(M_{n}(z)=M_{n}^{0}z^{2}+M_{n}^{1}z+M_{n}^{2}\) with \[M_{n}^{0}=\begin{bmatrix}\nu&\mathbf{0}\\ \mathbf{0}&\mathbf{0}\end{bmatrix},\qquad M_{n}^{1}=\begin{bmatrix}\mu-\big{[}\nu,p_{n}^{1}\big{]}&\nu C_{n}^{-1}\\ -C_{n-1}\nu&\mathbf{0}\end{bmatrix},\] \[M_{n}^{2}=\begin{bmatrix}\lambda-\big{[}\mu,p_{n}^{1}\big{]}-\big{[}\nu,p_{n}^{2}\big{]}+\nu\big{(}p_{n}^{1}\big{)}^{2}-p_{n}^{1}\nu p_{n}^{1}+\nu C_{n}^{-1}C_{n-1}&\big{(}\mu-\big{[}\nu,p_{n}^{1}\big{]}+\nu\beta_{n}\big{)}C_{n}^{-1}\\ -C_{n-1}\big{(}\mu+p_{n-1}^{1}\nu-\nu p_{n}^{1}\big{)}&-C_{n-1}\nu C_{n}^{-1}\end{bmatrix}.\] Then in [4] we prove that the three term recursion coefficients \(\gamma_{n}\) can be expressed directly in terms of the recursion coefficients \(\beta_{n}\), for all \(n\in\mathbb{N}\), \[\gamma_{n+1}=-(n+1)\Big{(}\mu+\Big{[}\nu,\sum_{k=0}^{n-1}\beta_{k}\Big{]}+\nu(\beta_{n}+\beta_{n+1})\Big{)}^{-1}.\] The coefficients \(\beta_{n}\) fulfill, for all \(n\in\mathbb{N}\), the following non-Abelian alt-dPI, \[\lambda+\nu\big{(}\gamma_{n}+\gamma_{n+1}+\beta_{n}^{2}\big{)}-\mu\beta_{n}+\Big{[}\mu,\sum_{k=0}^{n-1}\beta_{k}\Big{]}\big{(}\mathrm{I}_{N}+\beta_{n}\big{)}+\Big{[}\nu,\sum_{m=1}^{n-1}\gamma_{m}-\sum_{0\leq k<m\leq n-1}\beta_{m}\beta_{k}\Big{]}+\Big{[}\nu,\sum_{k=0}^{n-1}\beta_{k}\Big{]}\sum_{k=0}^{n-1}\beta_{k}=\mathbf{0}.\] Now, we consider a Laguerre type weight matrix \(W\) such that \[zW^{\prime}(z)=(h_{0}+h_{1}z+h_{2}z^{2})W(z).\] Then the entries of the structure matrix, \(\widehat{M}_{n}\), cf.
(2.30), are given by \[\widehat{M}_{n}^{11}=(h_{0}+h_{1}z+h_{2}z^{2})+h_{1}q_{\mathsf{R},n-1}^{1}+p_{\mathsf{L},n}^{1}h_{1}+z\big{(}h_{2}q_{\mathsf{R},n-1}^{1}+p_{\mathsf{L},n}^{1}h_{2}\big{)}+h_{2}q_{\mathsf{R},n-1}^{2}+p_{\mathsf{L},n}^{2}h_{2}+p_{\mathsf{L},n}^{1}h_{2}q_{\mathsf{R},n-1}^{1}+n\,\mathrm{I},\] \[\widehat{M}_{n}^{12}=\big{(}h_{1}+h_{2}z+h_{2}q_{\mathsf{R},n}^{1}+p_{\mathsf{L},n}^{1}h_{2}\big{)}C_{n}^{-1},\] \[\widehat{M}_{n}^{21}=-C_{n-1}\big{(}h_{1}+h_{2}z+h_{2}q_{\mathsf{R},n-1}^{1}+p_{\mathsf{L},n-1}^{1}h_{2}\big{)},\] \[\widehat{M}_{n}^{22}=-C_{n-1}h_{2}C_{n}^{-1}-n\,\mathrm{I}.\] From these we proved in [5] that the following system of matrix equations is a noncommutative version of an instance of the discrete Painlevé IV equation: \[(2n+1)\,\mathrm{I}+h_{0}+h_{2}(\gamma_{n+1}+\gamma_{n})+(h_{2}\beta_{n}+h_{1})\beta_{n}=\Big{[}\sum_{k=0}^{n-1}\beta_{k},h_{2}\Big{]}\sum_{k=0}^{n}\beta_{k}-\Big{[}\sum_{i,j=0}^{n-1}\beta_{i}\beta_{j}-\sum_{k=0}^{n-1}\gamma_{k},h_{2}\Big{]}-\Big{[}\sum_{k=0}^{n-1}\beta_{k},h_{1}\Big{]},\] \[\beta_{n}-\gamma_{n}\big{(}h_{2}(\beta_{n}+\beta_{n-1})+h_{1}\big{)}+\big{(}h_{2}(\beta_{n}+\beta_{n+1})+h_{1}\big{)}\gamma_{n+1}=-\gamma_{n}\Big{[}\sum_{k=0}^{n-1}\beta_{k},h_{2}\Big{]}+\Big{[}-\sum_{k=0}^{n-1}\beta_{k},h_{2}\Big{]}\gamma_{n+1}.\] We can also consider the weight matrix \(W(z)\) such that \[z(1-z)W^{\prime}(z)=(h_{0}+h_{1}z+h_{2}z^{2})W(z),\] which is a Jacobi type weight matrix. It was proved in [6] that the entries of the structure matrix, \(\widehat{M}_{n}\), cf. (2.30), are given by \[\widehat{M}_{n}^{11}=(h_{0}+h_{1}z+h_{2}z^{2})+h_{1}q_{\mathsf{R},n-1}^{1}+p_{\mathsf{L},n}^{1}h_{1}+z\big{(}h_{2}q_{\mathsf{R},n-1}^{1}+p_{\mathsf{L},n}^{1}h_{2}\big{)}+h_{2}q_{\mathsf{R},n-1}^{2}+p_{\mathsf{L},n}^{2}h_{2}+p_{\mathsf{L},n}^{1}h_{2}q_{\mathsf{R},n-1}^{1}+n\,\mathrm{I}-nz\,\mathrm{I}+p_{\mathsf{L},n}^{1},\] \[\widehat{M}_{n}^{12}=\big{(}h_{1}+h_{2}z+h_{2}q_{\mathsf{R},n}^{1}+p_{\mathsf{L},n}^{1}h_{2}\big{)}C_{n}^{-1}-(2n+1)C_{n}^{-1},\] \[\widehat{M}_{n}^{21}=-C_{n-1}\big{(}h_{1}+h_{2}z+h_{2}q_{\mathsf{R},n-1}^{1}+p_{\mathsf{L},n-1}^{1}h_{2}\big{)}+(2n-1)C_{n-1},\] \[\widehat{M}_{n}^{22}=-C_{n-1}h_{2}C_{n}^{-1}-n\,\mathrm{I}+nz\,\mathrm{I}-p_{\mathsf{R},n}^{1}.\] We plug this information into the zero curvature equations presented in Section 2 and get \[(2n+1)\,\mathrm{I}+h_{0}+h_{2}(\gamma_{n+1}+\gamma_{n})+\big{(}h_{2}\beta_{n}+h_{1}-(2n+1)\,\mathrm{I}\big{)}\beta_{n}+\sum_{k=0}^{n-1}\beta_{k}+C_{n}^{-1}\sum_{k=0}^{n}\beta_{k}C_{n}=\Big{[}\sum_{k=0}^{n-1}\beta_{k},h_{2}\Big{]}\sum_{k=0}^{n}\beta_{k}-\Big{[}\sum_{i,j=0}^{n-1}\beta_{i}\beta_{j}-\sum_{k=0}^{n-1}\gamma_{k},h_{2}\Big{]}-\Big{[}\sum_{k=0}^{n-1}\beta_{k},h_{1}\Big{]},\] \[\beta_{n}-(\beta_{n})^{2}-\gamma_{n}\big{(}h_{2}(\beta_{n}+\beta_{n-1})+h_{1}-(2n-1)\,\mathrm{I}\big{)}+\big{(}h_{2}(\beta_{n}+\beta_{n+1})+h_{1}-(2n+3)\,\mathrm{I}\big{)}\gamma_{n+1}=\gamma_{n}\Big{[}\sum_{k=0}^{n-2}\beta_{k},h_{2}\Big{]}-\Big{[}\sum_{k=0}^{n-1}\beta_{k},h_{2}\Big{]}\gamma_{n+1}-\Big{[}\sum_{k=0}^{n-1}\beta_{k},\sum_{k=0}^{n}\beta_{k}\Big{]}.\] In [6] it is proven that this system contains a non-commutative version of an instance of the discrete Painlevé IV equation. ## Acknowledgements AB acknowledges Centro de Matemática da Universidade de Coimbra (CMUC) - UID/MAT/00324/2020, funded by the Portuguese Government through FCT/MEC and co-funded by the European Regional Development Fund through the Partnership Agreement PT2020. 
AFM and AF thank CIDMA - Center for Research and Development in Mathematics and Applications (University of Aveiro) and the Portuguese Foundation for Science and Technology (FCT) within projects UIDB/04106/2020 and UIDP/04106/2020. MM was partially supported by the Spanish "Agencia Estatal de Investigación" research project [PGC2018-096504-B-C33], _Ortogonalidad y Aproximación: Teoría y Aplicaciones en Física Matemática_, and research project [PID2021-122154NB-I00], _Ortogonalidad y aproximación con aplicaciones en machine learning y teoría de la probabilidad_.
2310.20393
Parameter estimation of the Bardeen-Kerr black hole in cloud of strings using shadow analysis
We consider the rotating generalization of the Bardeen black hole solution in the presence of a cloud of strings (CoS). The parameter space for which the black hole horizon exists is determined. We also study the static limit surface and the ergo-region in the presence of the CoS parameter. We consider photon orbits and obtain the deformation of black hole shadows due to rotation for various values of the CoS parameter. The shadow deformation is used to determine the black hole spin for different values of the black hole parameters.
Bijendra Kumar Vishvakarma, Dharm Veer Singh, Sanjay Siwach
2023-10-31T12:15:52Z
http://arxiv.org/abs/2310.20393v1
# Parameter estimation of the Bardeen-Kerr black hole in cloud of strings using shadow analysis ###### Abstract We consider the rotating generalization of the Bardeen black hole solution in the presence of a cloud of strings (CoS). The parameter space for which the black hole horizon exists is determined. We also study the static limit surface and the ergo-region in the presence of the CoS parameter. We consider photon orbits and obtain the deformation of black hole shadows due to rotation for various values of the CoS parameter. The shadow deformation is used to determine the black hole spin for different values of the black hole parameters. Introduction Black holes provide an interesting laboratory to test the predictions of the General Theory of Relativity as well as those of theories beyond General Relativity. The recent measurement of shadow size using the Event Horizon Telescope has opened up the possibility of determining black hole parameters, and future observations should be precise enough to distinguish black holes from different theories. A class of theories for which black hole solutions have been obtained in recent years are those with a non-linear electrodynamics source [1]. The recent interest in these solutions lies in the absence of singularities for these solutions [2; 3; 4; 5; 6]. The theories of inflation and quantum gravity indicate the possibility of the existence of primordial black holes formed by self-gravitating magnetic monopoles [7; 8]. These black holes may have survived due to their topological stability and can provide clues about the observables in the early universe. They belong to Bardeen type space-time [9] and its generalizations [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22]. Black holes are also investigated in non-trivial space-times, e.g. a cloud of strings (CoS), in order to mimic the early universe environment [23; 24; 25]. In this context, black hole solutions are investigated in CoS, and their shadows are constructed using numerical methods. These solutions are not asymptotically flat and provide new examples of space-times with this property. The generalization of these black holes to include the effects of Bardeen type non-linear electrodynamics (NED) was achieved recently [26; 27] (see also [28; 29; 30]). The solutions correspond to that of a self-gravitating magnetic monopole and may provide an opportunity to explore the black holes produced in the early universe. The shadows and quasi-normal modes of this solution were also investigated recently [31]. Recently, a method of estimating the distortion of a black hole shadow from circular shape was proposed by Hioki and Maeda [32], and its generalizations to static [33; 34; 35; 36; 37] and rotating black holes [38; 39; 40; 41; 42; 43; 44] have also been considered. The spatial angular resolution of VLBI radio observations is now below the angular size of the horizons of the supermassive black holes Sgr A* and M87 [45; 46; 47; 48; 49; 50]. This has opened up the possibility of determining the parameters of astrophysical black holes using black hole shadows [51; 52; 53; 54; 55]. In this paper, we consider the rotating generalization of the Bardeen black hole in CoS. The rotating generalizations provide a unique opportunity to capture several observable features that are absent for charged black holes, e.g. shape deformation of shadows. We calculate the range of parameters for which the horizon exists. The ergo-region and shadows are also plotted for different sets of parameters. 
The shadows around rotating black holes can be used to determine parameters like the spin and mass of the black holes. We use this to obtain the spin of the black hole for different values of the CoS parameter. The dependence of the shadow radius on the spin and CoS parameters is presented. The shape deformation parameters are obtained as a function of the shadow radius. The paper is organized as follows. In section II we review the Letelier-Bardeen black hole and present the rotating generalization using the Newman-Janis procedure. The horizon exists for a constrained set of black hole parameters only, and we obtain this limit on the parameter space numerically. The static limit surface and ergo-region are also obtained in section II. In section III, we consider the motion of massless particles (photons) around the black hole space-time and obtain the shadows for a permissible set of parameters. The distortion of black hole shadows from circular geometry is used to determine the black hole parameters in section IV. We summarise our results in the concluding section.

## II Letelier-Bardeen-Kerr black hole

Let us consider the action of Einstein's gravity coupled with a NED and cloud of strings source,

\[S=\int d^{4}x\sqrt{-g}\left[R+\mathcal{L}_{NED}+\mathcal{L}_{CS}\right], \tag{1}\]

where \(R\) is the scalar curvature, and \(\mathcal{L}_{NED}\) and \(\mathcal{L}_{CS}\) are the Lagrangian densities of the nonlinear electrodynamics source and the CoS source, respectively. The equations of motion are obtained by varying the action with respect to the metric tensor, \(g_{\mu\nu}\), and the electromagnetic potential, \(A_{\mu}\), and can be written in the form,

\[R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=T_{\mu\nu}^{NED}+T_{\mu\nu}^{CS}, \tag{2}\]

\[\nabla_{\mu}\left(\frac{\partial\mathcal{L}}{\partial F}F^{\mu\nu}\right)=0, \tag{3}\]

where the Lagrangian density of the non-linear electrodynamics is taken as a function of \(F=F_{ab}F^{ab}\); specifically, we consider the Bardeen type source [11; 15]

\[{\cal L}(F)=\frac{3}{2sg^{2}}\left(\frac{\sqrt{2g^{2}F}}{1+\sqrt{2g^{2}F}}\right)^{\frac{5}{2}}, \tag{4}\]

where \(M\) and \(g\) are the parameters to be identified with the mass and the magnetic monopole charge, respectively, and \(s=g/2M\). The energy-momentum tensor can be obtained from equation (4) and is given as

\[T_{ab}^{NED}=2\left[\frac{\partial{\cal L}(F)}{\partial F}F_{ac}F_{b}^{\ c}-g_{ab}{\cal L}(F)\right]. \tag{5}\]

The cloud of strings term in the action is given by the Nambu-Goto action, and the corresponding energy-momentum tensor is given by [25]

\[T^{\mu\nu}=\frac{\rho\Sigma^{\mu\rho}\Sigma_{\rho}^{\ \nu}}{\sqrt{-\gamma}}, \tag{6}\]

where \(\rho\) is the density and \(\gamma\) is the induced metric on the worldsheet. Here \(\Sigma^{\mu\nu}\) is a bivector given by

\[\Sigma^{\mu\nu}=\epsilon^{ab}\frac{\partial x^{\mu}}{\partial\lambda^{a}}\frac{\partial x^{\nu}}{\partial\lambda^{b}}, \tag{7}\]

where \(\epsilon^{ab}\) is the Levi-Civita tensor. Let us consider the ansatz for the static spherically symmetric space-time, given by the line element

\[ds^{2}=-f(r)dt^{2}+\frac{1}{f(r)}dr^{2}+r^{2}d\Omega^{2}, \tag{8}\]

where \(d\Omega^{2}=d\theta^{2}+\sin^{2}\theta d\phi^{2}\). We take the following form of the metric function,

\[f(r)=1-\frac{2m(r)}{r}. \tag{9}\]
For magnetically charged black holes, the non-linear electrodynamics field strength can be taken in the form \(F_{\theta\phi}=2g\sin\theta\), and the non-vanishing components of the energy-momentum tensor (EMT) are given by

\[T_{t}^{t}=T_{r}^{r}=\frac{8Mg^{2}}{(r^{2}+g^{2})^{5/2}}+\frac{b}{r^{2}}, \tag{10}\]

where (\(M\)) is the black hole mass and (\(b\)) is the CoS parameter. The equations of motion give

\[m^{\prime}(r)=\frac{8Mg^{2}}{(r^{2}+g^{2})^{5/2}}+\frac{b}{r^{2}}, \tag{11}\]

which can be integrated to give the black hole solution [26; 27]

\[ds^{2}=-\left(1-\frac{2Mr^{2}}{(r^{2}+g^{2})^{3/2}}-b\right)dt^{2}+\frac{dr^{2}}{\left(1-\frac{2Mr^{2}}{(r^{2}+g^{2})^{3/2}}-b\right)}+r^{2}d\theta^{2}+r^{2}\sin^{2}\theta d\phi^{2}. \tag{12}\]

This is a Letelier-Bardeen-like black hole characterized by its mass (\(M\)), magnetic monopole charge (\(g\)), and a CoS parameter (\(b\)). To obtain the rotating counterpart of the black hole, we employ the Newman-Janis procedure and get the metric of the Letelier-Bardeen-Kerr black hole,

\[ds^{2}=-\left(1-\frac{br^{2}+\frac{2Mr^{4}}{(r^{2}+g^{2})^{3/2}}}{\Sigma}\right)dt^{2}-\frac{2a\sin^{2}\theta}{\Sigma}\left(r^{2}+a^{2}-\Delta\right)dtd\phi+\frac{\Sigma}{\Delta}dr^{2}+\Sigma\,d\theta^{2}+\frac{\sin^{2}\theta}{\Sigma}((r^{2}+a^{2})^{2}-\Delta\ a^{2}\sin^{2}\theta)d\phi^{2}, \tag{13}\]

where

\[\Delta=(1-b)r^{2}+a^{2}-\frac{2Mr^{4}}{(r^{2}+g^{2})^{\frac{3}{2}}}\qquad\text{and}\qquad\Sigma=r^{2}+a^{2}\cos^{2}\theta. \tag{14}\]

Eq. (13) represents the rotating counterpart of the Letelier-Bardeen black hole space-times in the Boyer-Lindquist coordinates. The spin parameter (\(a=J/M\)) is the ratio of the angular momentum (\(J\)) and ADM mass (\(M\)) of the rotating black hole. The solution (13) goes over to the Bardeen-Kerr black hole in the absence of the CoS parameter. Next, we investigate the horizon structure of the black hole solution (13), which corresponds to the space-time points (\(g^{rr}=\Delta=0\)):

\[(1-b)r^{2}+a^{2}-\frac{2Mr^{4}}{(r^{2}+g^{2})^{\frac{3}{2}}}\Big{|}_{r_{+}}=0. \tag{15}\]

Eq. (15) gives the location of the black hole horizons and cannot be solved analytically; it is plotted in Fig. 1 for different values of the CoS parameter (\(b\)) and spin parameter (\(a\)) with a fixed value of the magnetic monopole charge (\(g=0.1\)).

### Static Limit Surface and Ergo-region

The ergo-region is the region between the event horizon and the static limit surface (SLS); the SLS is defined as the surface on which no observer can remain static. We plot the ergo-region in the \(x-z\) plane as depicted in Fig. 2, for different values of the black hole parameters \((a,b,g)\). The static limit is defined by (\(g_{tt}=0\)). The numerical values of the SLS are tabulated in Tab. 1 for given values of the spin parameter and angle \((a,\theta)\) and for different values of the CoS parameter (\(b\)). The SLS has no root when (\(b>b_{s}\)) and has two simple zeros if (\(b<b_{s}\)). Similarly, we can also see the effect of the spin parameter on the SLS with a fixed value of the CoS parameter and magnetic monopole charge. The size of the SLS increases with the spin parameter (\(a\)) and decreases with the CoS parameter (\(b\)); thus the effects of the CoS parameter and the spin parameter on the SLS are opposite to each other.
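Because Eq. (15) has no closed-form solution, the horizon radii (and likewise the SLS radii from \(g_{tt}=0\)) are most conveniently found with a numerical root finder. The following Python sketch is purely illustrative (it is not the authors' code, and the parameter values are example choices consistent with the figures):

```python
import numpy as np
from scipy.optimize import brentq

M, g = 1.0, 0.1  # example values, matching the choices quoted in the text

def Delta(r, a, b):
    # Eq. (15): (1-b) r^2 + a^2 - 2 M r^4 / (r^2 + g^2)^(3/2)
    return (1.0 - b) * r**2 + a**2 - 2.0 * M * r**4 / (r**2 + g**2) ** 1.5

def roots(f, r_min=1e-4, r_max=10.0, n=4000):
    """Bracket sign changes of f on [r_min, r_max] and refine with brentq."""
    r = np.linspace(r_min, r_max, n)
    v = f(r)
    out = []
    for i in range(n - 1):
        if v[i] * v[i + 1] < 0.0:
            out.append(brentq(f, r[i], r[i + 1]))
    return out

a, b = 0.5, 0.2
print("horizon radii:", roots(lambda r: Delta(r, a, b)))  # inner and outer horizons

# Static limit surface: g_tt = 0, i.e. Sigma - b r^2 - 2 M r^4/(r^2+g^2)^(3/2) = 0
theta = np.pi / 2
def gtt_zero(r):
    Sigma = r**2 + a**2 * np.cos(theta) ** 2
    return Sigma - b * r**2 - 2.0 * M * r**4 / (r**2 + g**2) ** 1.5
print("SLS radii (theta=pi/2):", roots(gtt_zero))
```

Scanning \(a\) and \(b\) with this helper reproduces the qualitative behavior discussed above, e.g., that the outer horizon moves outward as \(b\) grows; for parameter choices where no sign change exists the list of roots is simply empty, signaling a horizonless configuration.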
We also investigate the effect of the black hole parameters \((a,b,g)\) on the ergo-region, which is plotted in Fig. 2. We plot the ergo-region for different values of the CoS parameter (\(b\)) for fixed values of \((a,g)\). We notice that when we increase the value of the CoS parameter (\(b\)), the resulting ergo-region also increases.

Figure 1: Metric function \(\Delta(r)\) vs \(r\) for different values of the spin parameter and CoS parameter with a fixed value of the black hole mass (\(M=1\)) and magnetic monopole charge (\(g=0.1\)).

## III Geodesics around the Letelier-Bardeen-Kerr black hole

Let us consider the motion of massless particles (photons) in the space-time (13). We shall be interested in the photon motion in the equatorial plane by restricting \(\theta=\pi/2\). The corresponding equations of motion can be obtained using the Hamilton-Jacobi formalism [28; 35]. The equations of motion are obtained as

\[\Sigma\frac{dr}{d\tau} = \sqrt{\mathcal{R}(r)} \tag{16}\]

\[\Sigma\frac{d\theta}{d\tau} = \sqrt{\Theta(\theta)} \tag{17}\]

where \(\mathcal{R}(r)\) and \(\Theta(\theta)\) are given by

\[\mathcal{R}(r) =\left(E(r^{2}+a^{2})-a\ L\right)^{2}-\Delta\left(\kappa+\left(L-a\ E\right)^{2}\right) \tag{18}\]

\[\Theta(\theta) =\kappa-\cos^{2}\theta\left(\frac{L^{2}}{\sin^{2}\theta}-a^{2}E^{2}\right) \tag{19}\]

where \(E\) and \(L\) are the energy and angular momentum of the particle, respectively.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \(b\) & \(r_{1}\) & \(r_{2}\) & \(\delta\) & \(b\) & \(r_{1}\) & \(r_{2}\) & \(\delta\) \\ \hline & \(a=0.0,\;\;g=0.1\) & & & & \(a=0.1,\;g=0.1\) & \\ \hline 0.0 & 0.056 & 2.210 & 2.154 & 0.0 & 0.965 & 1.752 & 0.787 \\ \hline 0.10 & 0.159 & 2.153 & 1.994 & 0.10 & 0.774 & 2.301 & 1.577 \\ \hline 0.20 & 0.324 & 2.028 & 1.704 & 0.20 & 0.656 & 2.895 & 2.239 \\ \hline 0.30 & 0.607 & 1.788 & 1.181 & 0.30 & 0.564 & 3.560 & 2.996 \\ \hline & \(a=0.3,\;\;g=0.1\) & & & \(a=0.5,\;g=0.1\) & \\ \hline 0.0 & 0.116 & 2.169 & 2.053 & 0.0 & 1.275 & 1.510 & 0.230 \\ \hline 0.10 & 0.264 & 2.109 & 1.845 & 0.10 & 0.940 & 2.212 & 1.272 \\ \hline 0.20 & 0.453 & 1.973 & 1.520 & 0.20 & 0.807 & 2.832 & 2.025 \\ \hline 0.30 & 0.769 & 1.696 & 0.927 & 0.30 & 0.717 & 3.602 & 2.887 \\ \hline & \(a=0.7,\;\;g=0.1\) & & & \(a=0.9,\;g=0.1\) & \\ \hline 0.0 & 0.199 & 2.081 & 1.882 & 0.0 & — & — & — \\ \hline 0.10 & 0.388 & 2.012 & 1.624 & 0.10 & 1.268 & 1.966 & 0.698 \\ \hline 0.20 & 0.631 & 1.848 & 1.217 & 0.20 & 1.039 & 2.688 & 1.647 \\ \hline 0.30 & 1.193 & 1.340 & 0.147 & 0.30 & 0.921 & 3.488 & 2.567 \\ \hline \end{tabular} \end{table} Table 1: The SLS radii of the Letelier-Bardeen-like black hole for different values of the spin parameter (\(a\)) and CoS parameter (\(b\)) with a fixed magnetic monopole charge (\(g=0.1\)), where \(\delta=r_{2}-r_{1}\).

The radial equation can be put in the form \(\dot{r}^{2}+V_{eff}(r)=E^{2}\), where \(V_{eff}\) is the effective potential, given by

\[V_{eff}=\frac{E^{2}r^{4}+\Delta(L-aE)^{2}-[(r^{2}+a^{2})E-aL]^{2}}{r^{4}}. \tag{20}\]

Figure 2: Plot of the ergo-region in the x-z plane for different values of the CoS parameter with a fixed value of the spin parameter and magnetic monopole charge.

The null circular geodesics obey the conditions \(V_{eff}=0,\ \partial V_{eff}/\partial r=0\) and \(\partial^{2}V_{eff}/\partial r^{2}>0\), which gives

\[\left((1-b)-\frac{3Mr_{p}^{4}}{(g^{2}+r_{p}^{2})^{5/2}}\right)^{2}-\frac{4a^{2}M(-2g^{2}+r_{p}^{2})}{(g^{2}+r_{p}^{2})^{5/2}}=0. \tag{21}\]
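Eq. (21) is likewise solved numerically. A minimal sketch (illustrative only; it reuses the `roots` bracketing helper from the horizon sketch above, and the two positive roots are here interpreted as the photon radii \(r_{p1},r_{p2}\) that are tabulated below):

```python
def photon_orbit_eq(r, a, b, g=0.1, M=1.0):
    # Eq. (21): ((1-b) - 3 M r^4/(g^2+r^2)^(5/2))^2
    #           - 4 a^2 M (r^2 - 2 g^2)/(g^2+r^2)^(5/2) = 0
    lhs = ((1.0 - b) - 3.0 * M * r**4 / (g**2 + r**2) ** 2.5) ** 2
    rhs = 4.0 * a**2 * M * (r**2 - 2.0 * g**2) / (g**2 + r**2) ** 2.5
    return lhs - rhs

for a, b in [(0.1, 0.0), (0.5, 0.1), (0.9, 0.1)]:
    rp = roots(lambda r: photon_orbit_eq(r, a, b))
    print(f"a={a}, b={b}: photon radii {rp}")
```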
In Tab. 2 and Tab. 3, we can see that the photon radius (\(r_{p}\)) of the obtained black hole solution increases with the spin parameter and the CoS parameter. The photon radius decreases with increasing magnetic monopole charge (\(g\)). We can say that the effect of the magnetic monopole charge is opposite to that of the spin parameter and the CoS parameter. We obtain the critical values of the impact parameters by solving equation (18) for null geodesics. Using the conditions \(\mathcal{R}(r)=0\) and \(d\mathcal{R}/dr=0\) at \(r=r_{p}\), we get

\[\eta =\frac{r^{2}(16a^{2}\Delta-16\Delta^{2}+8r\Delta\Delta_{r}-r^{2}\Delta_{r}^{2})}{a^{2}\Delta_{r}^{2}},\] \[\xi =\frac{(r^{2}+a^{2})\Delta_{r}-4r\Delta}{a\Delta_{r}} \tag{22}\]

where \(\xi=L/E\) and \(\eta=\kappa/E^{2}\) are the two dimensionless impact parameters and \(\Delta_{r}\) is the first derivative of \(\Delta(r)\) with respect to \(r\). The impact parameters (22) reduce to those of the rotating Bardeen black hole in the absence of the CoS parameter (\(b=0\)), of the rotating Letelier black hole when (\(g=0\)), as well as of the Kerr black hole in the limit (\(g=b=0\)).

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{10}{|c|}{\(r_{p}\)} \\ \hline \multicolumn{3}{|c|}{\(a=0.1\)} & \multicolumn{1}{|c|}{\(a=0.3\)} & \multicolumn{1}{|c|}{\(a=0.5\)} & \multicolumn{1}{|c|}{\(a=0.7\)} & \multicolumn{1}{|c|}{\(a=0.9\)} \\ \hline **g** & \(\mathbf{r_{p1}}\) & \(\mathbf{r_{p2}}\) & \(\mathbf{r_{p1}}\) & \(\mathbf{r_{p2}}\) & \(\mathbf{r_{p1}}\) & \(\mathbf{r_{p2}}\) & \(\mathbf{r_{p1}}\) & \(\mathbf{r_{p2}}\) & \(\mathbf{r_{p1}}\) & \(\mathbf{r_{p2}}\) \\ \hline 0.1 & 2.912 & 3.302 & 2.563 & 3.940 & 2.281 & 4.10 & 2.007 & 4.231 & 1.700 & 4.339 \\ \hline 0.2 & 2.884 & 3.672 & 2.526 & 3.922 & 2.235 & 4.086 & 1.945 & 4.215 & 1.594 & 4.324 \\ \hline 0.3 & 2.834 & 3.640 & 2.462 & 3.893 & 2.151 & 4.058 & 1.822 & 4.188 & 0.490 & 4.298 \\ \hline 0.4 & 2.761 & 3.592 & 2.364 & 3.850 & 2.053 & 4.018 & 1.565 & 4.150 & 0.642 & 4.262 \\ \hline 0.5 & 2.657 & 3.529 & 2.215 & 3.793 & 1.763 & 3.965 & 0.813 & 4.100 & 0.770 & 4.213 \\ \hline 0.6 & 2.512 & 3.447 & 1.969 & 3.721 & 0.966 & 3.898 & 0.907 & 4.036 & 0.888 & 4.152 \\ \hline 0.7 & 2.293 & 3.342 & — & 3.629 & 1.027 & 3.813 & 1.013 & 3.956 & 1.007 & 4.076 \\ \hline 0.8 & 1.790 & 3.206 & — & 3.515 & 1.136 & 3.709 & 1.134 & 3.858 & 1.133 & 3.985 \\ \hline 0.9 & — & 3.062 & — & 3.368 & 1.276 & 3.756 & 1.275 & 3.737 & 1.274 & 3.869 \\ \hline \end{tabular} \end{table} Table 2: The numerical values of the photon radius for different values of the magnetic monopole charge (\(g\)) and spin parameter (\(a\)) with a fixed value of the CoS parameter (\(b\)).

Further, for an observer in the equatorial plane (\(\theta=\pi/2\)), they simplify to

\[x=-\xi,\qquad\text{and}\qquad y=\pm\sqrt{\eta}. \tag{23}\]

The shadow images of the obtained black hole solution (13) for choices of the parameters (\(a,b,g\)) and \(\theta_{0}\) are plotted in Fig. 3. In Fig. 3, we can see that the size of the shadow images increases with the CoS parameter (\(b\)) and the spin parameter (\(a\)). The distortion of the shadow images arises at higher values of the spin parameter (\(a\)).
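The shadow silhouette follows directly from Eqs. (22)-(23): scan the photon radius, evaluate \((\xi,\eta)\), and keep the points with \(\eta\geq 0\). A minimal illustrative sketch (not the authors' code; the scan range is an arbitrary choice, and \(a\neq 0\) is assumed since \(a\) appears in denominators):

```python
import numpy as np

def shadow_boundary(a, b, g=0.1, M=1.0, r_lo=1.0, r_hi=8.0, n=4000):
    """Celestial coordinates of the shadow edge from Eqs. (22)-(23)."""
    xs, ys = [], []
    for r in np.linspace(r_lo, r_hi, n):
        D = (1.0 - b) * r**2 + a**2 - 2.0 * M * r**4 / (r**2 + g**2) ** 1.5
        # dDelta/dr, differentiated analytically from Eq. (14)
        Dr = 2.0 * (1.0 - b) * r - (8.0 * M * r**3 / (r**2 + g**2) ** 1.5
                                    - 6.0 * M * r**5 / (r**2 + g**2) ** 2.5)
        if abs(Dr) < 1e-12:
            continue
        xi = ((r**2 + a**2) * Dr - 4.0 * r * D) / (a * Dr)        # Eq. (22)
        eta = r**2 * (16.0 * a**2 * D - 16.0 * D**2
                      + 8.0 * r * D * Dr - r**2 * Dr**2) / (a * Dr) ** 2
        if eta >= 0.0:
            xs.append(-xi)             # Eq. (23), observer at theta = pi/2
            ys.append(eta ** 0.5)
    return np.array(xs), np.array(ys)

x, y = shadow_boundary(a=0.9, b=0.1)
# the full silhouette is (x, y) together with its mirror image (x, -y)
```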
## IV Summary and Results

We have investigated the ergo-regions and shadows of the Bardeen-Kerr black hole in a cloud of strings in the analysis so far. The geodesic equations of the photons are obtained in this geometry and the shadows are plotted for different values of the parameters \(a\) and \(b\). The shadow deformation can be used to determine the spin \(a\) of the black hole for different values of the CoS parameter \(b\), as explained below.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{10}{|c|}{\(r_{p}\)} \\ \hline \multicolumn{3}{|c|}{\(a=0.1\)} & \multicolumn{3}{|c|}{\(a=0.3\)} & \multicolumn{3}{|c|}{\(a=0.5\)} & \multicolumn{3}{|c|}{\(a=0.7\)} & \multicolumn{3}{|c|}{\(a=0.9\)} \\ \hline **b** & \(\mathbf{r_{p1}}\) & \(\mathbf{r_{p2}}\) & \(\mathbf{r_{p1}}\) & \(\mathbf{r_{p2}}\) & \(\mathbf{r_{p1}}\) & \(\mathbf{r_{p2}}\) & \(\mathbf{r_{p1}}\) & \(\mathbf{r_{p2}}\) & \(\mathbf{r_{p1}}\) & \(\mathbf{r_{p2}}\) \\ \hline 0.1 & 1.286 & 3.062 & 1.278 & 3.368 & 1.276 & 3.578 & 1.257 & 3.737 & 1.274 & 3.869 \\ \hline 0.2 & 2.146 & 3.578 & 1.280 & 3.909 & 1.278 & 4.118 & 1.277 & 4.280 & 1.276 & 4.414 \\ \hline 0.3 & 3.059 & 4.233 & 2.209 & 4.566 & 1.344 & 3.781 & 1.315 & 4.947 & 1.303 & 5.087 \\ \hline 0.4 & 3.927 & 5.061 & 3.358 & 5.407 & 2.802 & 5.632 & 1.494 & 5.808 & 1.388 & 5.956 \\ \hline 0.5 & 5.018 & 6.178 & 4.494 & 6.546 & 4.070 & 6.787 & 3.653 & 6.977 & 3.173 & 7.138 \\ \hline 0.6 & 6.867 & 7.805 & 6.039 & 8.009 & 5.636 & 8.476 & 5.275 & 8.687 & 4.928 & 8.866 \\ \hline 0.7 & 9.073 & 10.45 & 8.504 & 10.920 & 8.087 & 11.78 & 7.721 & 11.471 & 7.388 & 11.67 \\ \hline 0.8 & 14.09 & 15.66 & 13.34 & 16.23 & 12.80 & 16.31 & 12.46 & 16.91 & 12.101 & 17.17 \\ \hline 0.9 & 28.74 & 31.06 & 27.85 & 31.87 & 27.21 & 32.42 & 26.89 & 32.85 & 26.21 & 33.23 \\ \hline \end{tabular} \end{table} Table 3: The numerical values of the photon radius for different values of the CoS parameter (\(b\)) and spin parameter (\(a\)) with a fixed value of the magnetic monopole charge (\(g\)).

The relation between the celestial coordinates \((x,y)\) and the impact parameters \((\eta,\xi)\) is given in Eq. (23). The size and shape of the black hole shadow depend upon the parameters \((a,b,g)\) and the inclination angle \(\theta\). We can use the schematic representation of the rotating black hole shadow depicted in Fig. 4 to determine the size of the black hole shadow using the following relation [36]

\[R_{s}=\frac{(x_{t}-x_{r})^{2}+y_{t}^{2}}{2|x_{t}-x_{r}|}, \tag{24}\]

and the distortion parameter, defined as the ratio of \(D_{s}\) and \(R_{s}\), reads

\[\delta_{s}=\frac{D_{s}}{R_{s}}=\frac{|x_{p}-x_{p}^{\prime}|}{R_{s}}, \tag{25}\]

where the subscripts \(r,l,t\), and \(b\) refer to the right, left, top, and bottom of the shadow boundary, and \((x_{p},0)\) and \((x_{p}^{\prime},0)\) are the points where the reference circle and the shadow cut the horizontal axis on the side opposite to \((x_{r},0)\) (see Fig. 4).

Figure 3: The plot of the shadow for different values of the CoS parameter (\(b\)) with a fixed value of the spin parameter (\(a\)), magnetic monopole charge (\(g\)), and mass of the black hole (\(M=1\)).

The black hole's shadow radius (\(R_{s}\)) and distortion parameter (\(\delta_{s}\)) are plotted in Fig. 5. We can see that the shadow radius (\(R_{s}\)) increases with the CoS parameter (\(b\)) and is approximately constant with the spin parameter (\(a\)). The distortion \(\delta_{s}\) of the shadow image increases with the spin parameter and decreases with the CoS parameter. The measurement of the distortion parameter (\(\delta_{s}\)) can be used to determine the black hole spin by parametric fitting. It would be interesting to compare our results with observational data and see the signatures of black holes created in the early universe, if any.
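From a sampled boundary, the observables in Eqs. (24)-(25) require only the top, right, and left extremal points. A hedged sketch (assuming the `shadow_boundary` helper above; the reference circle center is placed on the horizontal axis, which holds for the up-down symmetric silhouette):

```python
import numpy as np

def shadow_observables(x, y):
    """Return (R_s, delta_s) of Eqs. (24)-(25) from boundary points (x, y)."""
    i_top = np.argmax(y)
    x_t, y_t = x[i_top], y[i_top]          # topmost point of the shadow
    x_r = x.max()                          # right crossing of the horizontal axis
    x_p = x.min()                          # left edge of the (distorted) shadow
    R_s = ((x_t - x_r) ** 2 + y_t**2) / (2.0 * abs(x_t - x_r))    # Eq. (24)
    x_p_circle = x_r - 2.0 * R_s           # left edge of the reference circle
    delta_s = abs(x_p_circle - x_p) / R_s                          # Eq. (25)
    return R_s, delta_s

x, y = shadow_boundary(a=0.9, b=0.1)
R_s, d_s = shadow_observables(x, y)
print(f"R_s = {R_s:.3f}, delta_s = {d_s:.3f}")
```

Sweeping \(a\) at fixed \(b\) and tabulating \(\delta_{s}\) is precisely the parametric fit described in the text for extracting the spin from a measured distortion.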
Figure 4: Schematic representation of the observables for the shadow of rotating black holes.

## Data Availability Statement

Data sharing is not applicable to this article as no experimental data were used or analyzed during the current study.

###### Acknowledgements.

The work of BKV is supported by a UGC fellowship. DVS thanks the DST-SERB project (grant no. EEQ/2022/000824) under the EEQ scheme.
2310.00043
Searching for new physics at $μ\rightarrow e$ facilities with $μ^+$ and $π^+$ decays at rest
We investigate the ability of $\mu\rightarrow e$ facilities, Mu2e and COMET, to probe, or discover, new physics with their detector validation datasets. The validation of the detector response may be performed using a dedicated run with $\mu^+$, collecting data below the Michel edge, $E_e\lesssim 52$ MeV; an alternative strategy using $\pi^+\rightarrow e^+ \nu_e$ may also be considered. We focus primarily on a search for a monoenergetic $e^+$ produced via two-body decays $\mu^+ \rightarrow e^+ X$ or $\pi^+\rightarrow e^+X$, with $X$ a light new physics particle. Mu2e can potentially explore new parameter space beyond present astrophysical and laboratory constraints for a set of well motivated models including: axion like particles with flavor violating couplings ($\mu^+ \rightarrow e^+ a$), massive $Z'$ bosons ($\mu^+ \rightarrow Z' e^+$), and heavy neutral leptons ($\pi^+\rightarrow e^+N$). The projected sensitivities presented herein can be achieved in a matter of days.
Richard J. Hill, Ryan Plestid, Jure Zupan
2023-09-29T18:00:01Z
http://arxiv.org/abs/2310.00043v3
Searching for new physics at \(\mu\to e\) facilities with \(\mu^{+}\) and \(\pi^{+}\) decays at rest ###### Abstract We investigate the ability of \(\mu\to e\) facilities, Mu2e and COMET, to probe, or discover, new physics with their detector validation datasets. The validation of the detector response may be performed using a dedicated run with \(\mu^{+}\), collecting data below the Michel edge, \(E_{e}\lesssim 52\) MeV; an alternative strategy using \(\pi^{+}\to e^{+}\nu_{e}\) may also be considered. We focus primarily on a search for a monoenergetic \(e^{+}\) produced via two-body decays \(\mu^{+}\to e^{+}X\) or \(\pi^{+}\to e^{+}X\), with \(X\) a light new physics particle. Mu2e can potentially explore new parameter space beyond present astrophysical and laboratory constraints for a set of well motivated models including: axion like particles with flavor violating couplings (\(\mu^{+}\to e^{+}a\)), massive \(Z^{\prime}\) bosons (\(\mu^{+}\to Z^{\prime}e^{+}\)), and heavy neutral leptons (\(\pi^{+}\to e^{+}N\)). The projected sensitivities presented herein can be achieved in a matter of days. + Footnote †: preprint: FERMILAB-PUB-23-287-T CALT-TH/2023-017 ## I Introduction Charged lepton flavor violation (CLFV) is a long sought-after target of searches for physics Beyond the Standard Model (BSM) [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16]. The most stringent limits come from searches for \(\mu\to e\gamma\)[17; 18], \(\mu\to 3e\)[19; 20; 21; 22], and \(\mu A\to eA\) transitions [23; 24] (additional constraints arise from bounds on \(\mu^{-}\to e^{+}\) conversion [25], muonium anti-muonium oscillations [26; 27], and CLFV reactions involving \(\tau\) leptons [28; 29; 30; 31; 32; 33; 34; 35]). The \(\mu A\to eA\) channel, often termed \(\mu\to e\) or muon conversion, relies on the target nucleus to absorb recoil momentum, giving a kinematically allowed transition. Furthermore, if new physics mediating the \(\mu\to e\) CLFV transition couples directly to quarks, then the presence of the nucleus itself catalyzes the reaction. Two upcoming facilities, COMET [36; 37; 38] and Mu2e [39; 40; 41], will search for \(\mu\to e\) with unprecedented sensitivity - the single-event sensitivities are expected to be as low as BR(\(\mu\to e\)) \(\sim 10^{-17}\). Both experiments leverage the extreme kinematics in \(\mu\to e\), where almost all of the muon's rest mass is converted into the electron's kinetic energy. The experiments therefore focus on the near endpoint region of maximal electron energy where the Standard Model (SM) backgrounds are highly suppressed. Unfortunately, the same kinematic suppression applies to almost _any_ process other than \(\mu\to e\), making searches for additional BSM decays using the high energy region datasets at Mu2e and COMET extremely challenging [42; 43]. In contrast, signal yields improve dramatically for many BSM scenarios in the regime of electron energy that is kinematically allowed for a free muon decay at rest. In this regime any particle lighter than the muon can be produced and discovered with indirect search techniques. The simplest scenario to test is the two-body decay \(\mu^{+}\to e^{+}X\), with \(X\) the new light particle. The positively charged muon will decay at rest, resulting in a monoenergetic positron signal. Both Mu2e and COMET are capable of collecting substantial \(\mu^{+}\) (and \(\pi^{+}\)) datasets in this energy regime, which may be used for calibrating their detectors [44]. 
These datasets would have extremely high statistics relative to past \(\pi^{+}\) and \(\mu^{+}\) decay at rest searches [45], and are therefore well suited to search for light new physics. The Mu2e detector is designed to be charge symmetric such that both electrons and positrons can be reconstructed with high efficiency [41]. Moreover, the design of the transport solenoid makes it possible to transport either \(\mu^{-}\) or \(\mu^{+}\) to the detector. COMET can also deliver \(\mu^{+}\) on target [46]. The use of \(\mu^{+}\) decays instead of \(\mu^{-}\) decays for calibration has several advantages that also help enable a BSM search. Decays of \(\mu^{-}\) are complicated by non-perturbative bound-state effects [47; 48; 49; 50] and backgrounds from radiative muon capture on the nucleus [51; 52; 53]. In contrast, the Michel spectrum of \(\mu^{+}\to e^{+}\nu\bar{\nu}\) decays is extremely well known, since it can be computed using standard diagrammatic techniques [54; 55; 56; 57; 58; 59; 60]. Furthermore, the above-mentioned nuclear backgrounds are also mitigated due to the absence of muon capture for \(\mu^{+}\). Note that such validation datasets can be used to search for _any_ process that produces electrons or positrons close to the Michel edge. Important examples are the already mentioned two body \(\mu^{+}\to e^{+}X\) decays, which result in monoenergetic positrons, but one could also search for non-standard multibody decays, where \(X\) consists of several on-shell or off-shell new physics particles. A particularly interesting case is when \(X\) is the QCD axion. Our study shows that the Mu2e validation data can probe the region of parameter space in which the QCD axion is a cold dark matter candidate, assuming it has unsuppressed flavor violating couplings to muons and electrons [61]. At both COMET and Mu2e, the transport solenoid necessarily delivers \(\pi^{+}\) along with \(\mu^{+}\) to the target foils [46; 41]. The \(\pi^{+}\) decay much faster than \(\mu^{+}\), and can be separated with timing information and analyzed separately [41]. In addition to non-standard muon decays, the large \(\pi^{+}\) population in the validation dataset also enables a search for non-standard \(\pi^{+}\) decays. A phenomenologically important channel is the \(\pi^{+}\to e^{+}N\) decay, where \(N\) is a heavy neutral lepton (HNL). Motivated by its potential physics impact, we will use "Mu2e-X" as a shorthand for employing the Mu2e validation data for BSM searches. Mu2e is investigating the projected sensitivity of such a dataset internally [62].

The rest of the paper is organized as follows: In Section II we outline new physics models, and regions of parameter space, that predict rates of \(\mu^{+}\to e^{+}X\) to which Mu2e will be sensitive. We translate bounds on branching ratios to constraints on new physics model parameters, emphasizing the competitive reach of a \(\mu^{+}\to e^{+}X\) search relative to astrophysical constraints. In Section III we briefly describe the inputs and procedures underlying our sensitivity estimates for \(\mu^{+}\to e^{+}X\) and \(\pi^{+}\to e^{+}X\) searches. Finally, in Section IV we summarize our findings and comment on possible future applications for the Mu2e validation data.

## II Models of new physics

We begin by discussing the theoretical motivation to search for two body decays \(\mu^{+}\to e^{+}X\) and \(\pi^{+}\to e^{+}X\). These are experimentally convenient because the predicted signature involves a monoenergetic \(e^{+}\).
Models with three body decays are also of interest but their experimental projections require further study; we briefly discuss this case in Section IV. In what follows we consider several benchmark new physics models for which a \(\mu^{+}\) run at Mu2e could lead to a discovery or interesting limits. Axion-like particles (ALPs) can be discovered through two body \(\mu^{+}\to e^{+}X\) decays, while MeV scale DM can be searched for either through two body or three body \(\mu^{+}\to(e^{+}+\text{invisible})\) decays. The rare \(\pi^{+}\to e^{+}X\) decay mode can probe heavy neutral leptons (HNLs), but must overcome a sizable muon decay in flight background.

### Axion-like models

Any spontaneously broken (approximate) global \(U(1)\) symmetry results in a light (pseudo) Nambu-Goldstone boson in the low energy effective theory of the system. A particularly important example is the case of a spontaneously broken Peccei-Quinn (PQ) symmetry giving rise to the QCD axion that can solve the strong CP problem and provide a cold dark matter candidate [63; 64; 65; 66]. Such particles extending the SM are generically referred to as ALPs (axion-like particles) [67]. If the spontaneously broken \(U(1)\) is flavor non-universal, it can lead to sizable \(\mu\to ea\) decays for ALPs \(a\) with masses \(m_{a}<105\) MeV [68; 69; 61; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85]. To understand whether or not a \(\mu^{+}\) validation run could be sensitive to an interesting region of parameter space we explore three ALP benchmarks: i) a general ALP with anarchic couplings to leptons (i.e., all couplings to leptons are of similar size), Fig. 1, ii) a leptophilic ALP that can be a DM candidate, Fig. 2, and iii) the QCD axion with lepton flavor violating couplings, Fig. 3. The three benchmarks, along with other ALP models, were recently discussed in detail in Ref. [61]. Here we focus on the part of the phenomenology most relevant for \(\mu^{+}\to e^{+}X\). The effective Lagrangian describing the ALP couplings to the SM leptons (\(\ell_{i}\)), gluons (\(G_{\mu\nu}\)), and photons (\(F_{\mu\nu}\)) is given by1

Footnote 1: Note that \(C_{ii}^{V}\) couplings do not contribute, as can be seen from equations of motion.

\[\begin{split}\mathcal{L}_{a}=&\,N_{\text{UV}}\frac{\alpha_{s}}{8\pi}\frac{a}{f_{a}}G_{\mu\nu}\tilde{G}^{\mu\nu}+E_{\text{UV}}\frac{\alpha_{\text{em}}}{4\pi}\frac{a}{f_{a}}F_{\mu\nu}\tilde{F}^{\mu\nu}\\ &+\sum_{i,j}\frac{\partial_{\mu}a}{2f_{a}}\overline{\ell}_{i}\gamma^{\mu}\left[C_{ij}^{V}+C_{ij}^{A}\gamma_{5}\right]\ell_{j}\,\end{split} \tag{1}\]

where \(i,j=1,2,3\) are generational indices, color indices are suppressed, and the subscript UV denotes "ultraviolet". Since we are mostly interested in processes involving leptons, the equivalent couplings to quarks are set to zero. The derivative couplings are a hallmark of the pseudo Nambu-Goldstone boson (pNGB) nature of the ALP, i.e., we assume that the shift symmetry is softly broken only by the ALP mass, \(m_{a}\). All the couplings in (1) are of dimension 5 and are suppressed by the ALP decay constant, \(f_{a}\), which can be identified with the scale of spontaneous symmetry breaking.

Figure 1: The 95% C.L. limits on a general ALP with anarchic couplings to all three generations of leptons. The present laboratory exclusions are denoted with solid lines, and future projections with dashed lines, assuming isotropic ALP production with axial couplings, see text for further details.
Astrophysical constraints are shown as a gray region, while the parameter space that could lead to displaced decays inside the detector volume, \(c\tau_{a}<1\,\text{m}\), is shown as a blue region. Adapted from Ref. [61].

For \(i\neq j\), the ALP couplings are flavor violating. In new physics models with no particular flavor structure the generic expectation would be that \(C_{ij}^{V/A}\) are all nonzero. If this is the case, the flavor changing neutral current (FCNC) constraints, either from \(\mu\to ea\), or from \(K\to\pi a\) decays in the case of couplings to quarks, impose very stringent constraints, \(f_{a}\gtrsim 10^{9}\) GeV and \(f_{a}\gtrsim 10^{12}\) GeV when assuming \(\mathcal{O}(1)\) flavor violating couplings to either leptons or quarks, respectively [61, 86]. The sensitivity of \(\mu\to ea\) to such high scales can be traced to the fact that on-shell production of an ALP is induced by dimension 5 operators, and thus \(\text{BR}(\mu\to ea)\propto(m_{\mu}/f_{a})^{2}\). This can be contrasted with the much weaker constraints on such models from \(\mu\to e\) conversion, which requires two insertions of dimension 5 operators (the flavor violating coupling to leptons and the flavor conserving coupling to quarks), giving \(\text{BR}(\mu\to e)\propto(m_{\mu}/f_{a})^{4}\), i.e., a rate that is additionally suppressed by a factor \((m_{\mu}/f_{a})^{2}\sim 10^{-20}\) compared to \(\text{BR}(\mu\to ea)\). For quantitative analysis we first consider three benchmarks from Ref. [61], and then discuss implications for other ALP models:

#### ii.1.1 ALP with anarchic couplings to leptons

In the first benchmark case, the ALP is assumed to couple only to leptons with both flavor violating and flavor conserving couplings of similar size. For concreteness we assume that all axial couplings to leptons are the same and equal to \(C_{ij}^{A}=1\); the vector couplings are assumed to vanish, \(C_{ij}^{V}=0\), as do the direct couplings to photons and gluons, i.e., we set \(E_{\text{UV}}=N_{\text{UV}}=0\). The couplings to photons (gluons) are still generated radiatively at one loop (two loops) from couplings to leptons, but are not relevant for phenomenological studies. The ALP mass, \(m_{a}\), is treated as a free parameter. The projected 95% C.L. constraints on this benchmark are shown in Fig. 1 with a red dashed line.2

Footnote 2: Here we appropriately rescale the 90% CL bounds from Section III to the 95% CL interval, which all the shown bounds use.

The present laboratory constraints are shown with solid green [87] and blue [88] lines. These constraints depend on the chiral structure of the ALP couplings, and are, for instance, significantly relaxed for \(V-A\) couplings in the case of constraints from Ref. [87]. The present (future) constraints from \(\tau\to\ell a\) decays are shown with solid (dashed) purple lines [89], while the astrophysics constraints are shown as gray excluded regions; see Ref. [61] for further details. In Fig. 1 we show with dashed orange and dark red curves the future sensitivities at MEGII-fwd, assuming realistic focusing [61], and the projected sensitivity at Mu3e [90], respectively. A similar reach in \(f_{a}\) could also be achieved by searching for \(\mu\to ea\gamma\) decays at MEG-II after one year of running in an alternative data taking strategy with reduced beam intensity and adjusted triggers [83], shown with a brown dashed line (we show the upper range of the projected sensitivity band in [83]).
We see that Mu2e-X has comparable reach to these other proposals to search for \(\mu\to ea\) transitions.

#### ii.1.2 Leptophilic ALP as a DM candidate

If an ALP is light enough it becomes cosmologically stable and can be a DM candidate. Fig. 2 shows the constraints for such a possibility with anarchic couplings to leptons, \(C_{ij}^{V/A}=1\), and no direct couplings to gluons, \(N_{\text{UV}}=0\). The constraints from extragalactic background light bounds are shown for two cases, \(E_{\text{UV}}=1\) (dashed blue line) and \(E_{\text{UV}}=0\) (light blue region), where the regions to the right are excluded. The ALP DM (QCD-ALP DM) dashed line shows the parameter space for which the initial misalignment of the ALP field in the early universe, \(\theta_{0}=1\), leads to the correct relic DM abundance, assuming no temperature corrections to the ALP mass (i.e., a thermal mass dependence equivalent to that of the QCD axion). The green solid line in Fig. 2 shows the current best bound on the isotropic LFV ALP [87], the brown dashed line denotes the most optimistic projected reach from \(\mu\to ea\gamma\) decays at MEG-II after one year of running, while the red dashed line shows the expected reach of Mu2e, which is comparable to the MEGII-fwd projection including the focusing enhancement and to Mu3e. The expected reach is well above the existing and future bounds that rely on couplings between ALPs and electrons, shown as color shaded regions, and can probe parameter space where the flavor violating ALP is a viable DM candidate.

Figure 2: The 95% C.L. limits on a leptophilic ALP that can be a DM candidate, as well as the reach of a \(\mu^{+}\) run (red dashed line, labeled Mu2e-X), see main text for details. Mu2e-X, MEGII-fwd, and Mu3e have similar projected sensitivities, and we represent all of them with a single line. Adapted from Ref. [61].

The relevant space in Fig. 2 is to the left of the blue region enclosed by the solid blue line, which delineates the parameter space leading to the ALP decaying within the present Hubble time. The region to the right of the dashed blue lines is excluded by the extragalactic diffuse background light measurements for \(E_{\rm UV}=0,1\), as denoted. The dark blue region shows the X-ray constraints for \(E_{\rm UV}=0\) [93; 94]. The gray shaded regions are excluded by the star cooling bounds and the ADMX results [95; 96; 97]. The light green region is excluded by the S2-only analysis of XENON1T [98] and Panda-X [99]. The purple shaded region shows the future reach of the axion-magnon conversion experiment QUAX [100; 101; 102]. The cyan colored region shows the future sensitivity of the SPHEREx experiment, which relies on ALP couplings to photons, assuming the ALP decays exclusively to two photons [103], while the yellow regions show the future sensitivities of resonant microwave cavity searches: ADMX [104], CAPP [105], KLASH [106], and ORGAN [107], as well as the searches using the dielectric haloscope MADMAX [108] or (light blue region) dielectric stacks [109]. The \(\mu^{+}\to e^{+}a\) limit using Mu2e-X is complementary to all these searches.

#### ii.1.3 Lepton flavor violating QCD axion

Mu2e calibration data can also be sensitive to a QCD axion that solves the strong CP problem. The QCD axion will have flavor violating couplings if the PQ symmetry is not flavor universal [110; 68].
The mass of such a flavor violating QCD axion still arises entirely from the QCD anomaly, \(m_{a}=5.691(51)\,\mu\text{eV}\,\big{(}10^{12}\,\text{GeV}/f_{a}\big{)}\) [111], and is thus effectively massless in \(\mu\to ea\) decays. The flavor violating QCD axion is also a viable cold dark matter candidate. If the axion relic abundance is due to the misalignment mechanism, the \(\theta_{0}\sim\mathcal{O}(1)\) misalignment angle leads to the observed DM relic density for axion decay constants in the range \(f_{a}\sim 10^{(11-13)}\) GeV. For smaller decay constants, within the reach of LFV experiments, the axion relic from the standard misalignment contribution is under-abundant unless the relic abundance is due to some non-trivial dynamics. In Fig. 3 we show constraints on a particular DFSZ-like model [112; 113] of the QCD axion with LFV couplings [61] (tilted solid green line). The field content of the theory consists of the SM fermions, two Higgs doublets, \(H_{1,2}\), and a complex scalar \(S\) that is a gauge singlet. The model contains an anomalous global \(U(1)\) PQ symmetry under which all the scalars are charged. It is broken once \(S\) obtains a vacuum expectation value (vev), giving rise to the light pNGB, the QCD axion. The PQ charges of the SM leptons are generation dependent such that \(H_{2}\) couples only to second and third generation leptons, while the \(H_{1}\) lepton Yukawa interactions couple the first generation to the second and third generation leptons. The generation dependent PQ charges then translate into flavor violating axion couplings to leptons. The PQ charges of quarks are universal and thus the axion has flavor diagonal couplings to quarks. The constraints in Fig. 3 are shown for a particular benchmark where the QCD axion couplings to the leptons have a \(V-A\) chiral structure, and where the flavor violating couplings involving \(\tau\)-leptons are assumed to be suppressed (see Ref. [61] for details). We see that the sensitivity obtainable at Mu2e-X will probe parameter space well beyond the present astrophysics bound from white dwarf cooling constraints (solid green line), improving on the present laboratory bounds from searches for \(\mu\to ea\) decays. The \(\mu\to ea\) lines are vertical, since they are insensitive to the axion coupling to photons, \(g_{a\gamma\gamma}=-0.59\times\alpha_{\rm em}/(2\pi f_{a})\).

Figure 3: The 95% C.L. limits on the lepton flavor violating QCD axion for the assumed \(V-A\) form of the couplings. The mass of the QCD axion, \(m_{a}\), is inversely proportional to the coupling constant \(f_{a}\). The vertical axis refers to the axion coupling to photons, \(g_{a\gamma\gamma}\propto\alpha_{\rm em}/(2\pi f_{a})\), where an additional constant coefficient depends on the particular model. The benchmark \(V-A\) LFV QCD axion model is indicated by the tilted solid green line. Also shown are two other QCD axion models not involving LFV, the KSVZ [91; 92] (dark blue) and the DFSZ-II (blue) model, having slightly different couplings to photons. The current excluded ranges of \(g_{a\gamma\gamma}\) as a function of \(m_{a}\) are shown as shaded gray regions, and future projected limits as dashed gray lines. The \(\mu\to ea\) limits, which are independent of \(g_{a\gamma\gamma}\) but assume sizable LFV couplings, exclude values of \(m_{a}\) to the right of the dashed vertical green lines under these assumptions (and thus do not apply to the KSVZ and DFSZ-II models). The solid green vertical line refers to the limit from the white dwarf (WD) cooling constraint, which assumes a sizable axion coupling to electrons. The sensitivity derived from Mu2e calibration data (dotted vertical green line) will probe parameter space beyond this limit. Adapted from Ref. [61].
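The two relations quoted above, the anomaly-induced mass and the photon coupling for this benchmark, fix the phenomenology along the model line in Fig. 3. As a quick numerical illustration (simply evaluating the quoted formulas, not code from the paper):

```python
import math

alpha_em = 1.0 / 137.036

def m_a_eV(f_a_GeV):
    # m_a = 5.691 ueV * (1e12 GeV / f_a), from the QCD anomaly relation above
    return 5.691e-6 * (1e12 / f_a_GeV)

def g_agamma(f_a_GeV):
    # g_agg = -0.59 * alpha_em / (2 pi f_a), in GeV^-1, for this benchmark
    return -0.59 * alpha_em / (2.0 * math.pi * f_a_GeV)

for f_a in (1e9, 1e10, 1e12):  # GeV; the lower values are within LFV reach
    print(f"f_a = {f_a:.0e} GeV: m_a = {m_a_eV(f_a):.2e} eV, "
          f"g_agg = {g_agamma(f_a):.2e} GeV^-1")
```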
#### ii.1.4 Other possible ALP models

The above examples by no means exhaust the set of possible models that could be searched for via \(\mu\to eX\) decays. Importantly, the flavor structures of the flavor violating couplings in the above three examples were fixed externally. In some models the pattern of flavor violating couplings is instead determined by the dynamics of the new physics model itself. An example is the "axiflavon" model, in which the QCD axion \(a\) is responsible both for generating the observed flavor structure of the SM as well as for solving the strong CP problem [68; 69]. The axiflavon is a representative of an entire class of "familon" theories [110; 111; 112; 113; 114; 115; 116] in which the ALP is associated with a spontaneously broken horizontal family symmetry, e.g., of a Froggatt-Nielsen type [117] or from a nonabelian global horizontal group such as SU(2) [72]. In these scenarios a large \(\mu-e\) CLFV coupling is _predicted_ such that a search for \(\mu^{+}\to e^{+}X\) can test these models and offer an avenue to discovery (see recent work on testing such models with heavy familons at Mu2e [118]). Here, we argue that Mu2e-X is in fact capable of probing important parameter space across a wide range of familon masses. The \(\mu\to eX\) transition can also probe dynamical models of neutrino mass generation, where \(X\) is the Majoron, a pNGB of a spontaneously broken lepton number [119; 120; 121; 122]. In the TeV-scale see-saw mechanism the neutrino masses are parametrically suppressed while CLFV couplings are not [123; 124; 125; 126; 127; 128; 129; 130; 131]. The parametric suppression of neutrino masses is technically natural and can emerge from an approximate symmetry of a generalized lepton number \(U_{L^{\prime}}(1)\) under which the CLFV couplings are invariant, while the neutrino masses are not, and must be proportional to a small symmetry breaking parameter. This can then result in potentially observable \(\mu\to eX\) decays. As outlined in [61] and also shown in Figs. 1 and 3, the ability of laboratory experiments to probe branching ratios of \(\text{BR}(\mu\to eX)\lesssim 10^{-5}\) results in constraints on ALP couplings that for generic flavor structures supersede the already stringent bounds from astrophysical sources. This allows experiments such as Mu3e [90], MEGII-fwd [61], and, as we argue here, Mu2e, to provide leading constraints on ALP models.

### Heavy Neutral Leptons

Models with heavy neutral leptons (HNL) [132; 133; 134; 135; 136; 137; 138; 139], i.e., sterile neutrinos with masses in the MeV to few GeV range, have received substantial attention over the past fifteen years in the context of light dark sectors [139; 137; 138; 139; 140; 141; 142; 143; 144; 145; 146; 147; 148; 149; 150; 151; 152]. Couplings between HNLs, \(N\) (with mass \(m_{N}\)), and SM neutrinos, \(\nu\), offer one of three renormalizable "portals" between a dark sector and the SM [153; 154],

\[\mathcal{L}\supset y_{N}(LH)N+\text{h.c.}, \tag{2}\]

where \(L\) is the SM lepton doublet, \(H\) is the Higgs doublet, and we suppress flavor indices.
After electroweak symmetry breaking the Yukawa interaction (2) induces mixing between HNLs and SM neutrinos, through which dark sector degrees of freedom may imprint themselves on experimental data. For \(\pi^{+}\to e^{+}N\) the relevant mixing parameter in the extended PMNS matrix [155, 156] is \(U_{eN}=\langle N|\nu_{e}\rangle\). Searching for HNLs is of interest both for minimal and extended dark sectors [137, 139]. In principle, either \(\pi^{+}\to\ell^{+}N\) or \(\mu^{+}\to e^{+}N\nu\) decays can be used to search for HNLs, provided \(N\) is light enough to be produced in these decays. Mu2e is both a muon and a pion factory, and large populations of both particles are delivered to the stopping target. The challenge in searching for \(\mu^{+}\to e^{+}N\nu\) decays is that the background due to SM muon decays is also three body. Fitting for the spectral distortion from the HNL in the observed Michel spectrum is in principle possible, but made more challenging by the complicated energy dependent acceptances in Mu2e due to the helical tracker. Furthermore, such a search would require a detailed understanding of background spectra and radiative corrections to the Michel spectrum. Constraints on HNL models are conventionally studied in a single-flavor mixing paradigm, with constraints appearing in the \(m_{N}-|U_{\alpha N}|\) plane, with \(\alpha\in\{e,\mu,\tau\}\) labeling the lepton flavor that the HNL couples to. In the case of pion decay to \(e^{+}N\) the relevant parameter is \(U_{eN}\), and the range of HNL masses that can be probed (in principle) is \(m_{N}\in[0,m_{\pi}-m_{e}]\). The branching ratio, for \(m_{e}\ll m_{N}\lesssim m_{\pi}\), is given by

\[\text{BR}(\pi^{+}\to e^{+}N)=|U_{eN}|^{2}\frac{m_{N}^{2}(m_{\pi}^{2}-m_{N}^{2})^{2}}{m_{\mu}^{2}(m_{\pi}^{2}-m_{\mu}^{2})^{2}}\,. \tag{3}\]

The dominant background is due to \(\mu^{+}\to e^{+}\nu\nu\) decays, which can be significantly reduced with timing and geometric cuts [45].

Figure 4: Projections for mass dependent 90%-CL sensitivity to HNL mixing with electron flavor from Mu2e-X in a pion configuration, see Section III for details. Existing limits come from PIENU [157] and their related bump hunt PIE2 [158] (see also [159] for a compilation).

In Fig. 4 we show a compilation of limits from existing experiments and overlay projections for Mu2e-X in a configuration that could be used during a \(\pi^{+}\to e^{+}\nu\) calibration of the detector response. Since the sensitivity to an HNL is highly mass dependent, _cf._ Eq. (3), we focus on the region \(m_{N}/m_{\pi}\lesssim\mathcal{O}(1)\), and leave the very light HNL mass range, \(m_{N}<20\) MeV, for future dedicated studies.
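For intuition, Eq. (3) together with two-body kinematics (cf. Eq. (7) in Section III) fixes both the signal rate and the positron momentum for a given HNL mass. A small illustrative sketch (the example mass and mixing are arbitrary choices, not values from the text):

```python
import numpy as np

m_pi, m_mu, m_e = 139.570, 105.658, 0.511  # MeV

def br_pi_to_eN(m_N, U_eN_sq):
    """Eq. (3), valid for m_e << m_N <~ m_pi."""
    return U_eN_sq * m_N**2 * (m_pi**2 - m_N**2) ** 2 \
        / (m_mu**2 * (m_pi**2 - m_mu**2) ** 2)

def p_positron(m_P, m_X):
    """Monoenergetic e+ momentum in the two-body decay P -> e+ X."""
    return 0.5 * m_P * np.sqrt(
        (1 - m_X**2 / m_P**2 + m_e**2 / m_P**2) ** 2 - 4 * m_e**2 / m_P**2)

m_N = 80.0                                    # example HNL mass, MeV
print(br_pi_to_eN(m_N, 1e-8))                 # BR for |U_eN|^2 = 1e-8
print(p_positron(m_pi, m_N), "MeV")           # where the e+ line would sit
```

The strong \(m_{N}^{2}\) helicity enhancement in Eq. (3) is visible immediately: rerunning with a lighter \(m_{N}\) suppresses the branching ratio quadratically, which is why the projections in Fig. 4 weaken at low mass.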
### Dark \(Z^{\prime}\) models

Another class of models that may be searched for at Mu2e-X consists of BSM models that contain a light flavor violating \(Z^{\prime}\) [160; 161; 162; 163; 164; 165; 166; 167; 168; 169], decaying predominantly to invisible final states, \(Z^{\prime}\to\chi\bar{\chi}\). Here, \(\chi\) can be dark matter or a mediator to the dark sector [170; 171; 172]. The effective Lagrangian, assuming renormalizable interactions, is given by3 [160]

Footnote 3: The \(\mu\to eZ^{\prime}\) decays could in general also occur through dimension 5 dipole operators, see, e.g., Ref. [162].

\[\begin{split}\mathcal{L}\supset& Z^{\prime}_{\mu}g^{\prime}\bar{\chi}\gamma^{\mu}\chi+Z^{\prime}_{\mu}\sum_{f,i,j}\Big{[}g^{\prime}c^{ij}_{f_{L}}\big{(}\bar{f}^{(i)}_{L}\gamma^{\mu}f^{(j)}_{L}\big{)}\\ &\qquad\qquad\qquad+g^{\prime}c^{ij}_{f_{R}}\big{(}\bar{f}^{(i)}_{R}\gamma^{\mu}f^{(j)}_{R}\big{)}\Big{]},\end{split} \tag{4}\]

where we assumed that \(\chi\) is a Dirac fermion, while the sum runs over all the SM fermions, \(f=u,d,\ell,\nu\), and the generation indices \(i,j=1,\ldots,3\). Assuming \(\chi\) is light enough that \(Z^{\prime}\to\chi\bar{\chi}\) decays are kinematically allowed, these will dominate over the \(Z^{\prime}\) decays to SM fermions, as long as the corresponding effective coefficients are small, \(|c^{ij}_{f_{L,R}}|\ll 1\). A concrete realization of such a scenario is a \(Z^{\prime}\) that is the gauge boson of a dark \(U(1)_{X}\) under which the dark sector is charged, while the interactions of the SM fermions are induced through mixings with dark vector-like fermions. In general, this induces both flavor conserving and flavor violating couplings \(c^{ij}_{f_{L}}\), \(c^{ij}_{f_{R}}\). The \(\mu\to eZ^{\prime}\) decay width is given by [160]

\[\begin{split}\Gamma(\mu\to eZ^{\prime})=\Big{[}&(c^{12}_{\ell_{L}})^{2}+(c^{12}_{\ell_{R}})^{2}\Big{]}\\ &\qquad\times\frac{g^{\prime 2}}{32\pi}\frac{m^{3}_{\mu}}{m^{2}_{Z^{\prime}}}\big{(}1-r_{Z^{\prime}}\big{)}^{2}\big{(}1+2r_{Z^{\prime}}\big{)}\,,\end{split} \tag{5}\]

where we neglected terms suppressed by \(m_{e}/m_{\mu}\), and introduced the shorthand \(r_{Z^{\prime}}=m^{2}_{Z^{\prime}}/m^{2}_{\mu}\). The mass of the \(Z^{\prime}\) gauge boson is given by \(m_{Z^{\prime}}=g^{\prime}v^{\prime}\), where \(v^{\prime}\) is the vev that breaks the \(U(1)\) gauge symmetry. We see that \(\Gamma(\mu\to eZ^{\prime})\propto\Big{[}(c^{12}_{\ell_{L}})^{2}+(c^{12}_{\ell_{R}})^{2}\Big{]}/v^{\prime 2}\), and is vanishingly small if either \(c^{12}_{\ell_{L,R}}\to 0\), or if \(v^{\prime}\) is large. Another example of a flavor violating light \(Z^{\prime}\) is the possibility that the \(U(1)_{X}\) is the horizontal symmetry responsible for the hierarchy of the SM fermion masses, such as in the Froggatt-Nielsen model of Ref. [160]. In that case the FCNC bounds from other states in the theory require \(v^{\prime}\gtrsim 10^{7}\) GeV, while the \(Z^{\prime}\) can be light if \(g^{\prime}\ll 1\). Both invisible decays, \(Z^{\prime}\to\nu\bar{\nu}\), and visible decays to SM fermions, \(Z^{\prime}\to f\bar{f}\), need to be considered in the final state since both can have large branching ratios. The values of the \(c^{ij}_{f_{L,R}}\) coefficients depend on the details of the numerical inputs in the model benchmark, but are in general \(\mathcal{O}(1)\) for diagonal and \(10^{-3}\)-\(10^{-1}\) for off-diagonal entries [160]. Let us close this section by mentioning the possibility of neutrino-induced CLFV couplings due to heavy neutral leptons. Models of this type have been studied in the context of neutrino portal dark matter [173; 174], and produce off-diagonal CLFV couplings via triangle diagrams, with flavor mixing from an (extended) PMNS matrix. The result is an off-diagonal flavor coupling given by (_cf._ Eq. (6.4) of [173])

\[c^{ij}_{L,R}\approx U_{iN}U^{*}_{jN}\frac{g^{2}}{4\pi}\frac{m^{2}_{N}}{m^{2}_{W}}\, \tag{6}\]

where \(U_{iN}\) are the PMNS matrix elements between flavor \(i\) and the HNL, \(N\), and we have assumed that \(N\) is very nearly aligned with the mass basis.
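For orientation, Eq. (5) translates directly into a branching ratio once divided by the total muon width. A rough illustrative sketch (the coupling values are arbitrary example inputs, and \(\Gamma_{\mu}=\hbar/\tau_{\mu}\) is hard-coded):

```python
import math

m_mu = 0.105658          # GeV
GAMMA_MU = 3.00e-19      # GeV, total muon width (hbar / 2.197 us)

def gamma_mu_to_eZp(g_p, c12_L, c12_R, m_Zp):
    """Eq. (5), neglecting m_e/m_mu suppressed terms."""
    r = (m_Zp / m_mu) ** 2
    return (c12_L**2 + c12_R**2) * g_p**2 / (32.0 * math.pi) \
        * m_mu**3 / m_Zp**2 * (1.0 - r) ** 2 * (1.0 + 2.0 * r)

# example point: g' = 1e-7, c12_{L,R} = 1e-2, m_Z' = 50 MeV
br = gamma_mu_to_eZp(1e-7, 1e-2, 1e-2, 0.050) / GAMMA_MU
print(f"BR(mu -> e Z') ~ {br:.1e}")
```

Note the \(1/m_{Z^{\prime}}^{2}=1/(g^{\prime}v^{\prime})^{2}\) enhancement for light \(Z^{\prime}\): this is the longitudinal-mode enhancement that makes even tiny gauge couplings testable, as discussed above.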
The search strategy we propose is model independent, relying only on the two-body final state kinematics. Unlike axion models, which are generically expected to be very light due to the approximate global \(U(1)\) symmetry, in many \(Z^{\prime}\) scenarios the dark vector is massive (\(m_{Z^{\prime}}\gtrsim 10\) MeV) to avoid BBN bounds [175]. A massive \(Z^{\prime}\) furnishes a theoretically well motivated candidate with \(m_{X}>20\) MeV that can be searched for in the Mu2e validation data.

Figure 5: Estimated branching ratio limits (90% C.L.) for \(\mu\to eX\) and \(\pi\to eX\) as a function of \(m_{X}\) for Mu2e-X. The shape of the exclusion depends both on the acceptance as a function of energy, and on the background as a function of energy. See Table 1 for details of the inputs used in estimating the projected sensitivity.

## III Projected sensitivities

A search for a monoenergetic positron allows for a data driven background estimate in the signal window. For a parent particle, \(P\), of mass \(m_{P}\), the momentum of the positron in a decay \(P\to e^{+}X\) is given by

\[P_{e^{+}}=\frac{m_{P}}{2}\sqrt{\left(1-\frac{m_{X}^{2}}{m_{P}^{2}}+\frac{m_{e}^{2}}{m_{P}^{2}}\right)^{2}-\frac{4m_{e}^{2}}{m_{P}^{2}}}. \tag{7}\]

In a statistically limited search the 90%-CL sensitivity to the branching ratio is given by

\[\text{BR}_{90}(m_{X})=\left[1.28\times\sqrt{\mu_{\text{bkg}}}\right]\times\frac{1}{N_{P-\text{stop}}}\frac{1}{\epsilon_{P}(P_{e^{+}})}\, \tag{8}\]

where \(\mu_{\text{bkg}}\) is the estimated background in the signal window, and \(N_{P-\text{stop}}\) is the number of stopped parent particles, i.e., pions or muons. For the efficiency/acceptance, \(\epsilon(P_{e^{+}})\), we take two different functional forms motivated by Fig. 4.5 (50% nominal \(B\)-field) and Fig. 6.1 (76% nominal \(B\)-field) in Ref. [45], respectively,

\[\epsilon_{\mu}(P_{e^{+}}) =0.25\Big{(}\tfrac{P_{e^{+}}-38\text{ MeV}}{(55-38)\text{ MeV}}\Big{)}\Theta(P_{e^{+}}-P_{\text{thr}}^{\mu})\, \tag{9}\]

\[\epsilon_{\pi}(P_{e^{+}}) =0.28\Big{(}\tfrac{P_{e^{+}}-55\text{ MeV}}{(70-55)\text{ MeV}}\Big{)}^{1.7}\Theta(P_{e^{+}}-P_{\text{thr}}^{\pi})\, \tag{10}\]

where \(P_{\text{thr}}^{\pi}=55\) MeV and \(P_{\text{thr}}^{\mu}=38\) MeV. The number of background events for the muon decay at rest search is found by taking the tree-level Michel spectrum, \(\text{d}\Gamma/\text{d}x=2x^{2}(3-2x)\), where \(x=2E_{e}/m_{\mu}\), and multiplying by the bin width, taken to be \(\Delta E_{e}=1\) MeV. This gives the background estimate for the \(\mu^{+}\to e^{+}X\) search,

\[\mu_{\text{bkg}}(P_{e^{+}})=\frac{1}{\Gamma}\frac{\text{d}\Gamma}{\text{d}E_{e}}\times\epsilon_{\mu}(P_{e^{+}})\times\Delta E_{e}. \tag{11}\]

For the \(\pi^{+}\to e^{+}X\) search we take \(\mu_{\text{bkg}}=4\times 10^{8}\) [45] in a bin of width \(\Delta E_{e}=1\) MeV. This background is dominantly composed of muons decaying in flight, and we take the spectrum to be flat from 55 MeV to 70 MeV. The resulting projections for the 90%-CL branching ratio limits are shown in Fig. 5, given the inputs in Table 1.
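Eqs. (7)-(11) are simple enough to evaluate directly. The sketch below illustrates the muon-configuration estimate and is not the collaboration's analysis code; in particular, we interpret \(\mu_{\text{bkg}}\) in Eq. (8) as the expected number of background events in the 1 MeV window, i.e., Eq. (11) multiplied by \(N_{P\text{-stop}}\), which is an assumption about normalization on our part:

```python
import numpy as np

m_mu, m_e = 105.658, 0.511  # MeV

def p_e(m_P, m_X):
    """Eq. (7): monoenergetic e+ momentum."""
    return 0.5 * m_P * np.sqrt(
        (1 - m_X**2 / m_P**2 + m_e**2 / m_P**2) ** 2 - 4 * m_e**2 / m_P**2)

def eff_mu(p):
    """Eq. (9): acceptance for the mu+ configuration (threshold 38 MeV)."""
    return 0.25 * (p - 38.0) / (55.0 - 38.0) if p > 38.0 else 0.0

def br90_mu(m_X, N_stop=3e13, dE=1.0):
    """Eq. (8) with the Michel background of Eq. (11)."""
    p = p_e(m_mu, m_X)
    eff = eff_mu(p)
    if eff == 0.0:
        return float("inf")                  # outside tracker acceptance
    x = 2.0 * p / m_mu                       # E_e ~ p for m_e << E_e
    michel = 2.0 * x**2 * (3.0 - 2.0 * x)    # (1/Gamma) dGamma/dx
    mu_bkg = michel * (2.0 / m_mu) * dE * eff * N_stop
    return 1.28 * np.sqrt(mu_bkg) / (N_stop * eff)

for m_X in (1.0, 20.0, 50.0, 80.0):
    print(f"m_X = {m_X:5.1f} MeV: BR_90 ~ {br90_mu(m_X):.1e}")
```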
\begin{table} \begin{tabular}{l c c c c} Configuration & \(N_{P}\) & \(B/B_{0}\) & \(\mu_{\text{bkg}}\) & \(\epsilon_{P}(P_{e^{+}})\) \\ \hline Mu2e-X (\(\mu\)) & \(3\times 10^{13}\) & 0.5 & Eq. (11) & Eq. (9) \\ Mu2e-X (\(\pi\)) & \(2\times 10^{12}\) & 0.76 & \(4\times 10^{8}\) & Eq. (10) \\ \end{tabular} \end{table} Table 1: Parameters used to estimate sensitivities in this work: the number of stopped parents, \(P\in\{\mu,\pi\}\), the assumed operating \(B\)-field relative to nominal, \(B/B_{0}\), the number of background events expected in the signal region, \(\mu_{\text{bkg}}\), and the efficiency/acceptance \(\epsilon_{P}\) as a function of \(P_{e^{+}}\). The expected number of background events for the (\(\mu\))-configuration is estimated using the tree-level Michel spectrum, whereas in the (\(\pi\))-configuration they come from muon decay in flight [45].

## IV Conclusions and outlook

If either Mu2e or COMET uses \(\mu^{+}\) and/or \(\pi^{+}\) decays at rest while validating their detector response they will have access to enormous samples of both species, potentially larger than all existing datasets by orders of magnitude. As we argued in this manuscript, there is strong potential for a rich complementary physics program using this data alone, even as a purely parasitic experiment, i.e., without independent optimizations beyond the needs of the Mu2e detector response validation. We advocate the use of both COMET's and Mu2e's validation data to search for BSM physics, and argue that their potential impact on BSM searches is sufficiently compelling to warrant dedicated analyses; see Ref. [62] for efforts within Mu2e to realize this goal. In particular, we have identified two decay channels that are sensitive to well-motivated BSM physics, and that can be studied using detector response validation data: \(\mu^{+}\to e^{+}X\) and \(\pi^{+}\to e^{+}X\), where \(X\) is a light new physics particle. Both decays result in a monoenergetic positron. Timing information can be used to vary the \(\mu^{+}\) vs \(\pi^{+}\) purity of different samples [41]. When statistically limited sensitivity can be achieved in the \(\mu^{+}\to e^{+}X\) search, Mu2e-X can exceed both existing laboratory experiments and even astrophysical constraints by orders of magnitude. Mu2e-X or a comparable search using COMET could then serve as grounds for the discovery of a number of well motivated UV-completions. In the case of \(\mu\to eX\), \(X\) could be a QCD axion and dark matter candidate, whose lepton flavor violating couplings to muons and electrons offer its most promising detection prospects. This impressive reach suggests that a Mu2e \(\mu^{+}\) run should not be viewed merely as a calibration/validation tool, but will result in a valuable data sample with BSM discovery potential. Leveraging the full power of Mu2e's statistics will ultimately demand a detailed understanding of systematic uncertainties for signal regions close to the Michel edge (necessary for \(m_{X}\lesssim 20\) MeV); the ultimate reach will depend on detailed analyses by both Mu2e and COMET. At larger values of \(m_{X}\) the same search can be recast as a search for a massive \(Z^{\prime}\) with a dominantly invisible decay mode, for example if \(Z^{\prime}\to\chi\bar{\chi}\) dominates, where \(\chi\) is the dark matter. Our discussion is highly specialized to the case of two-body final states which leave a monoenergetic signal positron, since this provides an unambiguous experimental signature of new physics. It may be of interest to study the sensitivity of Mu2e for three body final states, whose positron energy spectra are continuous and which would appear as a distortion of the Michel spectrum. This is similar in spirit to previous searches carried out at PIENU, but may be more difficult at Mu2e. We note that the impressive branching ratio sensitivities that we estimate above for \(\pi^{+}\to e^{+}X\) and \(\mu^{+}\to e^{+}X\) are encouraging. They suggest that for more challenging signals perhaps branching ratios in the \((\text{few})\times 10^{-6}\) regime may be accessible. At this level, rare decay modes such as \(\pi^{+}\to\mu^{+}e^{+}e^{-}\nu\) (current limit of \(\text{BR}<1.6\times 10^{-6}\) [176]), or \(\mu^{+}\to e^{+}\chi\bar{\chi}\), may be attainable. The ability of Mu2e to achieve this level of sensitivity will depend crucially on the control of systematic uncertainties. Even within the limited scope of two body final states, a search for \(\mu^{+}\to e^{+}X\) and/or \(\pi^{+}\to e^{+}X\) represents an extremely cost-effective and impactful BSM physics program with exciting discovery prospects. We note that in the case of pions, the projections presented above suggest that Mu2e offers sensitivity to HNLs that will compete with the dedicated pion experiment PIONEER [177]. We hope that our study initiates further investigations into the untapped physics potential of both the Mu2e and COMET facilities, which will deliver unprecedented statistical samples of both muons and pions. For instance, in the \(\pi^{+}\to e^{+}X\) search, an optimized momentum degrader to suppress the background from muon decays in flight would allow the Mu2e calibration run to push further into the as-yet untouched parameter space for HNL mixing with electron neutrinos. In conclusion, even operating as a purely parasitic search for new physics, Mu2e-X can push into untouched parameter space, and provide impactful limits on theoretically well motivated models of new physics in only a few weeks of data taking. Projected limits from Mu2e are expected in a forthcoming publication [62], and we encourage COMET to similarly study the capabilities of their facility to search for light weakly coupled BSM particles.

**Acknowledgements**: This work began from a series of discussions with Shihua Huang, David Koltick, and Pavel Murat. We acknowledge their contributions at various stages of this work and thank them for helping us understand the challenges that must be overcome at an experiment such as Mu2e. We thank Michael Hedges for his work on the \(\pi\to e\nu\) calibration at Mu2e. We thank Diego Redigolo for useful discussions and feedback regarding massless axion searches. We thank Matheus Hostert and Diego Redigolo for feedback on an early version of this manuscript. We thank Robert Bernstein, Stefano Miscetti, and the broader Mu2e collaboration for detailed feedback on the final version of this work and coordination with Ref. [62]. We thank Yoshi Kuno for communications regarding COMET. This work was supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Award Number DE-SC0019095. This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics. JZ acknowledges support in part by the DOE grants DE-SC0011784, DE-SC1019775, and the NSF grant OAC-2103889. RP acknowledges support from the U.S.
Department of Energy, Office of Science, Office of High Energy Physics, under Award Number DE-SC0011632 and the Neutrino Theory Network Program Grant under Award Number DE-AC02-07CH11359, and by the Walter Burke Institute for Theoretical Physics. Part of this research was performed at the Kavli Institute for Theoretical Physics which is supported in part by the National Science Foundation under Grant No. NSF PHY-1748958 and at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Award Number DE-SC0011632.
2309.05058
Multimodal Fish Feeding Intensity Assessment in Aquaculture
Fish feeding intensity assessment (FFIA) aims to evaluate fish appetite changes during feeding, which is crucial in industrial aquaculture applications. Existing FFIA methods are limited in their robustness to noise and their computational complexity, and by the lack of public datasets for developing the models. To address these issues, we first introduce AV-FFIA, a new dataset containing 27,000 labeled audio and video clips that capture different levels of fish feeding intensity. Then, we introduce multi-modal approaches for FFIA that leverage models pre-trained on the individual modalities, combined using data fusion methods. We perform benchmark studies of these methods on AV-FFIA, and demonstrate the advantages of the multi-modal approach over the single-modality based approach, especially in noisy environments. However, compared to the methods developed for individual modalities, the multimodal approaches may involve higher computational costs due to the need for independent encoders for each modality. To overcome this issue, we further present a novel unified mixed-modality based method for FFIA, termed U-FFIA. U-FFIA is a single model capable of processing audio, visual, or audio-visual modalities, by leveraging modality dropout during training and knowledge distillation using models pre-trained with data from a single modality. We demonstrate that U-FFIA can achieve performance better than or on par with the state-of-the-art modality-specific FFIA models, with significantly lower computational overhead, enabling robust and efficient FFIA for improved aquaculture management.
Meng Cui, Xubo Liu, Haohe Liu, Zhuangzhuang Du, Tao Chen, Guoping Lian, Daoliang Li, Wenwu Wang
2023-09-10T15:52:56Z
http://arxiv.org/abs/2309.05058v2
# Multimodal Fish Feeding Intensity Assessment in Aquaculture ###### Abstract Fish feeding intensity assessment (FFIA) aims to evaluate the intensity change of fish appetite during the feeding process, which is vital in industrial aquaculture applications. The main challenges surrounding FFIA are two-fold. 1) Robustness: existing work has mainly leveraged single-modality (e.g., vision, audio) methods, which have a high sensitivity to input noise. 2) Efficiency: FFIA models are generally expected to be employed on devices, which presents a challenge in terms of computational efficiency. In this work, we first introduce an audio-visual dataset, called _AV-FFIA_. AV-FFIA consists of 27,000 labeled audio and video clips that capture different levels of fish feeding intensity. To our knowledge, AV-FFIA is the first large-scale multimodal dataset for FFIA research. Then, we introduce a multi-modal approach for FFIA by leveraging single-modality pre-trained models and modality-fusion methods, with benchmark studies on AV-FFIA. Our experimental results indicate that the multi-modal approach substantially outperforms the single-modality based approach, especially in noisy environments. While multimodal approaches provide a performance gain for FFIA, they inherently increase the computational cost, as they require independent single-modality based encoders to process the input data from the individual modalities. To overcome this issue, we further present a novel unified mixed-modality based method for FFIA, termed _U-FFIA_. U-FFIA is a single model capable of processing audio, visual, or audio-visual modalities, by leveraging modality dropout during training and knowledge distillation from single-modality pre-trained models. We demonstrate that U-FFIA can achieve performance better than or on par with the state-of-the-art modality-specific FFIA models, with significantly lower computational overhead. Our proposed U-FFIA approach enables a more robust and efficient method for FFIA, with the potential to contribute to improved management practices and sustainability in aquaculture. To encourage further research, we release the _AV-FFIA_ dataset and the pre-trained model at [https://github.com/FishMaster93/U-FFIA](https://github.com/FishMaster93/U-FFIA). Fish feeding intensity assessment (FFIA), computer vision, acoustic technology, audio-visual fusion ## I Introduction Digital aquaculture plays an important role in enhancing operational efficiency, sustainability, and productivity in aquaculture [1, 2]. As the foundation of digital aquaculture technology, Fish Feeding Intensity Assessment (FFIA) is the task of evaluating fish feeding intensity during the feeding process. FFIA offers significant benefits for real-world aquaculture applications by facilitating adjustments to bait dispensing and thereby minimizing feed waste. FFIA serves as a valuable tool for improving productivity in aquaculture [3]. Traditional methods of FFIA primarily rely on the experience of human observers and subjective interpretation [4]. These methods are time-consuming and labour-intensive to operate. With the recent advances in deep learning, data-driven approaches [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16] have become the mainstream methods for FFIA.
Existing approaches mainly use digital cameras to capture images as input and classify the fish feeding behaviour into discrete intensity levels (e.g., _"None"_, "_Weak_", "_Medium_" and "_Strong_" [17, 14]), characterized as a classification problem modelled by Convolutional Neural Networks (CNNs). However, fish-feeding behaviour is a dynamic and continuous process, and single images are insufficient to capture the context of fish feeding intensity [8]. As an alternative, video-based methods have been proposed to exploit spatial and temporal visual information for FFIA, which offers rich context for capturing fish feeding behaviour. Nevertheless, the processing of video data is computationally demanding, requiring significant computational resources and memory, which makes it impractical for the on-device applications that play a crucial role in aquaculture [10]. Furthermore, visual-based methods are highly vulnerable to variations in lighting conditions and noise resulting from water surface reflections [18, 19]. In comparison to visual-based measurements, acoustic measurements present an efficient alternative for FFIA. As acoustic measurements are unaffected by illumination changes and occlusions, they are a reliable solution for round-the-clock monitoring. In addition, acoustic data is typically more compact and cost-effective to process compared to raw video data [20, 21]. Cui et al. [22] introduced an audio dataset _AFFIA3K_ for FFIA consisting of \(3000\) labelled audio clips and demonstrated the practicality of audio-based FFIA. Although audio-based models exhibit greater computational efficiency when compared to visual-based models, their performance tends to be comparatively lower, since audio data cannot capture the behaviour and physical characteristics of fish in the way that visual observation can. Furthermore, recorded audio signals are influenced by various underwater acoustic noises, which introduces challenges to learning FFIA from audio data. The main challenges for FFIA can therefore be categorized into three parts: 1) Robustness: existing works have mainly relied on a single modality, making them sensitive to input noise; 2) Efficiency: FFIA models are generally expected to be employed on feeding machines [10, 14], and how to design an accurate and efficient model for FFIA is an ongoing question; 3) Dataset: the limited availability of large-scale multimodal FFIA datasets poses additional challenges in this area. In this work, we first introduce a large-scale audio-visual dataset for FFIA, called _AV-FFIA_, which comprises \(27\,000\) labelled audio and video clips. To our knowledge, _AV-FFIA_ is the first large-scale multimodal dataset for FFIA research. Compared to the existing _AFFIA3K_ dataset [22], _AV-FFIA_ is approximately \(9\) times larger and includes video clips corresponding to the audio clips. Although there is a substantial body of existing research on visual-based FFIA, most of the associated data remain inaccessible. In contrast, our proposed _AV-FFIA_ dataset is publicly accessible and much larger. Human perception of the world is inherently multimodal, yet the potential of multimodal FFIA is relatively unexplored. In this work, we introduce an audio-visual approach for FFIA by leveraging single-modality pre-trained models and modality-fusion methods. We conducted extensive benchmark studies on AV-FFIA, and experimental results indicate that the multi-modal approach significantly outperforms the single-modal approach, especially in noisy environments.
Although the multimodal approach provides more robust performance for FFIA compared with single-modal approaches, it also has several limitations: it increases the computational cost, as data from multiple modalities must be processed simultaneously, and the absence of any single modality can degrade the performance of the model. To achieve a robust and efficient solution for FFIA, we further present a novel unified mixed-modality based approach for FFIA, termed _U-FFIA_. _U-FFIA_ is a single model capable of processing audio, visual or audio-visual modalities, which is achieved by modality dropout to simulate different combinations of input modalities. To achieve an efficient model, we preprocess the audio and video inputs to reduce redundant information. We first use SimPFs [23] to reduce the redundant information within the mel spectrogram and use pre-trained audio models [24] to extract audio features as a complementary modality that supports video represented with fewer frames. Then, we use the knowledge distillation [25] method, taking a pre-trained model with full audio and video frames as the teacher network, to enhance the performance of the audio and video student network that uses fewer frames. We demonstrate that _U-FFIA_ can achieve performance that is better than or on par with the state-of-the-art modality-specific FFIA models, with significantly lower computational overhead. Furthermore, we conduct extensive experiments on audio, visual, and audio-visual modalities in noisy environments, demonstrating the effectiveness and robustness of our proposed method in various scenarios. This paper extends our previous work presented at the MLSP 2022 conference [22]. Our contributions can be summarized as follows: (1) We introduce a new large-scale public audio-visual _AV-FFIA_ dataset, which is \(9\) times larger than _AFFIA3K_. We further conduct comprehensive benchmark studies on audio-only and visual-only FFIA models; (2) We propose a new audio-visual approach for FFIA, which significantly improves the robustness of FFIA in noisy environments (e.g., audio bubble noise, visual corruption). Extensive experiments of audio-visual FFIA under different noise conditions are performed to show the efficacy of the proposed method; (3) We introduce a novel unified model capable of processing both multimodal and single-modal input. Our model achieves SOTA performance on _AV-FFIA_, demonstrating its efficacy in noisy environments. Our work is a significant step forward in audio-visual based FFIA. This paper is organized as follows: related work on FFIA is described in Section II. Section III introduces the audio and video datasets of FFIA that have been collected. Section IV describes our benchmark studies on audio, visual and audio-visual fusion based deep learning frameworks for FFIA. Our proposed novel U-FFIA method is described in Section V. Experimental details are given in Section VI. Section VII presents the experimental results and provides an in-depth analysis of the factors contributing to the results. Section VIII concludes the paper and discusses future directions. ## II Related work Feeding is one of the most important variable costs in aquaculture [8]. To overcome the limitations of human-based observation and optimize feeding strategies to reduce costs, many aquaculture factories use automatic feeding machines for fish feeding [6, 14].
However, most feeding machines operate based on fixed thresholds or human experience, lacking the ability to automatically adjust to fish feeding intensity [10]. FFIA can evaluate the intensity changes in fish appetite during the feeding process and optimize the control strategies of the feeding machine to avoid inadequate feeding or overfeeding, which reduces the feeding cost in industrial aquaculture. Deep learning has the advantages of scalability, adaptability, and robustness, making it widely used in FFIA [7]. In recent years, various deep-learning methods have been used for FFIA. In this section, we provide a summary of related topics and highlight the differences between our work and existing research in FFIA. ### _Visual-based FFIA methods_ As a non-invasive, cost-effective method, computer vision technology has been a popular choice for FFIA [18]. In early studies, many researchers used traditional methods (such as background segmentation and target tracking) to assess the fish feeding index [17, 26]. The complex process of foreground segmentation leads to a decrease in computational efficiency and is easily affected by water surface fluctuations, reflective areas, etc. [19]. Convolutional neural networks (CNNs) have been widely used in various image recognition tasks, including fish classification [27], counting [28], and tracking [29]. CNN-based methods have also been used in FFIA, e.g., MSIF-MobileNetV3 [7] and EfficientNet-B2 [11], offering excellent performance. Fish feeding behaviour is a dynamic and continuous process that changes rapidly over time, and a single image may not reflect the context of fish feeding behaviour [16]. Videos can capture both spatial and temporal information on fish feeding behaviour, providing rich context [14, 15]. Converting the original RGB video into an optical flow image sequence and then feeding it into a 3D CNN is a common method for video-based FFIA, which outperforms image-based models [12, 14, 30]. However, processing video data is computationally demanding, making it impractical for on-device applications. In addition, uneven illumination and occlusion pose great challenges for video-based FFIA. ### _Audio-based FFIA methods_ Vocalization is an important fish behaviour, with approximately \(1000\) fish species confirmed to produce sounds underwater; these sound signals are usually accompanied by activities such as feeding and reproduction [31, 32]. Fish can make a series of sounds (e.g., splashing sounds, tail-slapping sounds, swallowing and chewing sounds) during the feeding process, such as the feeding sounds produced by rainbow trout (\(0.02\)-\(25\) kHz) [33], Japanese minnow (\(1\)-\(10\) kHz) [34], Atlantic horse mackerel (\(1.6\)-\(4\) kHz) [35], yellowtail (\(4\)-\(6\) kHz) [36], and Micropterus salmoides (\(1\)-\(20\) kHz) [37]. Fish feeding sounds vary with the intensity of food intake; therefore, we can analyze fish feeding behaviour through the frequency spectrum of the feeding sound. Audio-based FFIA was initially proposed in [22], where the audio signal is first transformed into acoustic features (i.e., log mel spectrograms) and then fed into a CNN-based model. Similar work [38, 39] also demonstrates the feasibility of using audio as model input for FFIA.
Compared with vision-based methods, acoustic measurements are more energy-efficient and involve lower computational costs (e.g., energy consumption, data storage cost) [40, 41], which makes them more suitable for on-device applications [42, 23]. Although audio-based models exhibit greater efficiency compared to vision-based models [20], Cui et al. [22] found that their classification performance tends to be lower than that of video-based FFIA. On the one hand, audio cannot capture the full information of visually observed fish behaviour, missing visual cues such as body movements or interactions, thereby limiting a comprehensive understanding of fish behaviour. On the other hand, audio signals are sensitive to environmental noise (e.g., pump noise, water flow), and such noise can adversely impact the detection and analysis of fish vocalizations or other related sounds. ### _Audio-Visual pattern recognition_ Multimodal pattern recognition (PR) technologies have the advantage of capturing diverse features from different modalities [43]. Audio-visual fusion methods have become increasingly popular in PR and have been successfully applied in numerous fields, including audio-visual event localization [44, 45], audio-visual speech recognition [46, 47], audio-visual emotion recognition [48, 49], and animal behaviour analysis [50, 51, 52, 53, 54]. A single-modality model (e.g., audio or visual) can be tailored to suit specific types of data or tasks, requiring fewer computational resources and offering faster processing compared to multimodal models. However, single modalities have limitations, as they provide a restricted perspective on the data, considering only one type of input. Consequently, this approach may result in a lack of context and a partial understanding of the overall information. Audio-visual fusion methods can combine different modalities to provide a more comprehensive understanding of the data. In aquaculture, the complementary use of video and audio can better adapt to the challenges posed by various environments. For example, under poor lighting conditions, audio can be used as the main source of information, with vision as auxiliary information. However, processing multiple modalities simultaneously requires increased computational resources, including memory, processing power, and storage, which can impact system performance and scalability. To address these issues, we focus on an efficient unified model that is capable of processing both multimodal and single-modal input. It is worth mentioning that there is no prior work on audio-visual fusion for FFIA; to our knowledge, we are the first to propose an audio-visual fusion method for this task. The potential of our proposed approach in aquaculture applications is significant, and it opens up new possibilities for advancing our understanding of FFIA. ## III Dataset Although FFIA plays a crucial role in aquaculture, there is currently no publicly available dataset, which hinders further research and progress in this field. In this paper, we present _AV-FFIA_, a new audio and video dataset comprising \(27\,000\) labelled fish feeding sound and video clips for FFIA. We discuss the details in the following sections. ### _Audio-visual dataset_ We used Oplegnathus punctatus (a marine fish) as experimental subjects; the fish were farmed in a recirculating tank with a diameter of \(3\) meters and a depth of \(0.75\) meters, located in Yantai, Shandong Province, China.
Each fish weighs about \(150\) g. A high-definition digital camera (Hikvision DS-2CD2T87E(D)WD-L) with a frame rate of \(25\) fps (\(1920\times 1080\)) and a high-frequency hydrophone (LST-DH01) with a sampling frequency of \(256\) kHz were used to collect the fish feeding video and audio data. The acquisition of video data and audio data was carried out at the same time. The high-definition digital camera was deployed on a tripod with a height of about \(2\) m to capture the video data (as shown in Fig. 1), and a digital hydrophone was used to capture fish feeding audio data underwater. In the process of data collection, we followed the feeding rules of a real aquaculture production environment to ensure that the fish adapted to the laboratory environment as quickly as possible and to reduce the appetite loss caused by environmental changes. We fed the fish twice a day, at \(8\) am and \(5\) pm. The video and audio data collection duration is \(15\) minutes, where the feeding begins in the second minute, and the feeding process lasts about \(3\)-\(10\) minutes. Under the guidance of a fish feeding technician, we annotated the feeding video data as "_Strong_", "_Medium_", "_Weak_" and "_None_". We further divide each one-minute video and audio clip into \(30\) segments of two seconds each. Finally, \(27\,000\) two-second video and audio clips are obtained, namely _AV-FFIA_, covering the four categories of fish feeding intensity. For each class of fish feeding intensity, we create the training, validation and testing sets by randomly choosing the audio clips and corresponding video clips. In total, \(21\,000\) clips are used for training, \(2800\) clips for validation, and \(2800\) clips for testing. ### _Visualization of the AV-FFIA examples_ To enhance the comprehension of diverse fish feeding intensities, we have randomly selected a range of video footage and audio samples to exemplify the varying degrees of fish feeding intensity. Fig. 2 comprises video clips and corresponding audio mel-spectrograms that capture various feeding intensities. The video frames and mel-spectrograms exhibit a well-distributed range of feeding intensities, indicating the high quality of the data. The images demonstrate a direct correlation between fish hunger intensity and the degree of fish aggregation, with increased hunger intensity leading to a higher level of fish aggregation. Furthermore, the continuous video footage reveals that fish swimming speed and feeding activities are intensified during periods of higher hunger intensity. The corresponding audio clips also exhibit a distinct energy spectrum generated by the fish during their feeding and chewing activities. As the feeding process concludes, the feeding activity of the fish gradually decreases, resulting in a return of the fish aggregation degree and energy spectrum to normal levels. ## IV Benchmark Studies on _AV-FFIA_ To facilitate the study of the FFIA task, we introduce the _AV-FFIA_ dataset and conduct several benchmark studies on audio, visual, and audio-visual fusion modalities. This comprehensive dataset and benchmark provide researchers with a valuable resource to evaluate and compare the performance of different FFIA approaches across various modalities.
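To make the data organization concrete, below is a minimal PyTorch sketch of a paired-clip loader. The on-disk layout (one `.wav` and one `.mp4` per clip, grouped by split and label) is an assumption for illustration; the released dataset may be organized differently.

```python
import os
import torch
import torchaudio
from torch.utils.data import Dataset
from torchvision.io import read_video

LABELS = {"None": 0, "Weak": 1, "Medium": 2, "Strong": 3}

class AVFFIADataset(Dataset):
    """Paired two-second audio/video clips; the directory layout is hypothetical."""

    def __init__(self, root, split="train"):
        # Assumed layout: root/<split>/<label>/<clip_id>.wav and <clip_id>.mp4
        self.items = [
            (os.path.join(root, split, label, f[:-4]), idx)
            for label, idx in LABELS.items()
            for f in sorted(os.listdir(os.path.join(root, split, label)))
            if f.endswith(".wav")
        ]

    def __len__(self):
        return len(self.items)

    def __getitem__(self, i):
        stem, label = self.items[i]
        wav, sr = torchaudio.load(stem + ".wav")                  # (channels, samples)
        frames, _, _ = read_video(stem + ".mp4", pts_unit="sec")  # (T, H, W, C)
        return wav, frames.permute(0, 3, 1, 2), torch.tensor(label)
```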
### _Audio-based FFIA_ CNNs have proven to be highly effective in capturing complex spectral and temporal features, making them a widely used approach in audio classification tasks, such as bird species classification and frog sound classification [55, 56, 57]. We aim to provide valuable insights to future researchers working in the field of FFIA by exploring the most common CNN-based models on our _AV-FFIA_ dataset. Recently, large-scale self-supervised pre-trained models have significantly improved the performance of audio classification tasks, giving state-of-the-art performance in this area. For example, PANNs [24] and AST [58] have demonstrated remarkable results on AudioSet [59]. Therefore, to explore the full potential of our _AV-FFIA_ dataset, we leverage the pre-trained models (PANNs, AST) and fine-tune them on our _AV-FFIA_ dataset. However, computational complexity is an important issue when systems are implemented on devices, especially in aquaculture. For this purpose, we adopted the MobileNetV2 model proposed in PANNs [24] as our backbone model. The utilization of MobileNetV2 allows us to strike a balance between model performance and computational efficiency in aquaculture applications. ### _Video-based FFIA_ Fish feeding behaviour is a dynamic and ever-changing process, making it challenging to accurately assess the feeding intensity from a single image, especially when distinguishing between subtle differences, such as between the fish feeding intensity categories "_Medium_" and "_Weak_". To address this challenge, we leverage 3D CNNs as our model backbone for video classification. 3D CNNs have proven their effectiveness in various video tasks, such as action recognition [60] and human behaviour analysis [61], making them well-suited for our FFIA task. Notably, state-of-the-art models like I3D [14] and 3D-ResNet [8] have shown promising performance in FFIA, validating the suitability of 3D CNNs for this domain.
Fig. 1: Experimental systems for data collection. A hydrophone was underwater and the camera was deployed on a tripod with a height of about two meters to capture the video data.
Fig. 2: Video frames and mel spectrogram visualizations of four different fish feeding intensities: "_Strong_", "_Medium_", "_Weak_" and "_None_".
To provide a robust benchmark study for FFIA, we employ the most common 3D CNNs as our classification model backbone and evaluate their performance on the _AV-FFIA_ dataset. Furthermore, Transformer-based video classification models (e.g., ViViT [62] and 3D-ViT [63]) have recently demonstrated state-of-the-art results on benchmark datasets like Kinetics-400 and Kinetics-600. We explore the potential of these Transformer-based models on our _AV-FFIA_ dataset. By doing so, we aim to gain valuable insights into the suitability of different model architectures for FFIA tasks. However, the processing of video data is computationally demanding, making it impractical for on-device applications. In order to trade off video classification performance against model size, we compared the commonly used video classification models and finally chose the Separable 3D CNN (S3D) model [64] as our video classification model. The core idea of S3D is to replace the original I3D convolutions [65] with separable spatial and temporal 3D convolutions. Compared with the I3D model [14], the S3D model greatly reduces the number of model parameters and improves performance on Kinetics (a large-scale human action video dataset). Therefore, we used a pre-trained S3D model on Kinetics and fine-tuned it on the _AV-FFIA_ dataset.
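As a concrete illustration of this step, the snippet below adapts a Kinetics-pre-trained S3D backbone to the four intensity classes. Using torchvision's S3D implementation is our assumption for illustration; the paper does not specify a particular code base.

```python
import torch.nn as nn
from torchvision.models.video import s3d, S3D_Weights

# Load an S3D backbone pre-trained on Kinetics-400 (assumed stand-in for
# the pre-trained model used in the paper).
model = s3d(weights=S3D_Weights.KINETICS400_V1)

# Replace the 400-way Kinetics head with a 4-way FFIA head
# ("None", "Weak", "Medium", "Strong").
model.classifier[1] = nn.Conv3d(1024, 4, kernel_size=1)

# Fine-tune with cross-entropy on batches of clips shaped (B, 3, T, H, W).
```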
### _Audio-visual fusion FFIA_ Although audio and visual-based methods achieve good performance, they still have some limitations, such as sensitivity to noise and a lack of contextual understanding. The audio-visual fusion method offers the advantage of presenting more comprehensive information for FFIA, which could be highly beneficial in the field of aquaculture. However, existing research on FFIA is limited to single-modality based approaches, which may not fully capture the complexity and dynamics of the feeding behaviour. Hence, we propose several audio-visual fusion methods for FFIA, which allow us to simultaneously capture both the visual and audio cues associated with the feeding behaviour. By leveraging these fusion approaches, we conducted extensive benchmark studies on _AV-FFIA_ and gained a deeper understanding of fish feeding behaviour in aquaculture. In this section, we present several audio-visual fusion methods and study the performance of these methods on the _AV-FFIA_ dataset. #### IV-C1 Audio-visual fusion via multi-head self-attention The _multi-head self-attention_ fusion method is a straightforward and powerful approach that leverages the regular transformer [66] for processing multimodal inputs. The method involves independently splitting each frame and mel-spectrogram, following the encoding approach proposed in ViT [67]. By concatenating the audio and video tokens, a new sequence of tokens is formed. These concatenated tokens are then used as the input for the multi-head attention (MHA) module, which can be formulated as follows: \[\mathrm{MHA}(\mathbf{Q},\mathbf{K},\mathbf{V})=\mathrm{Softmax}\left(\frac{ \mathbf{Q}\mathbf{K}^{\top}}{\sqrt{d}}\right)\mathbf{V}, \tag{1}\] where \(\mathbf{Q},\mathbf{K},\mathbf{V}\) are the query, key, and value matrices obtained using learnable projection weights \(\mathbf{W}^{Q},\mathbf{W}^{K},\mathbf{W}^{V}\in\mathbb{R}^{d\times d}\), respectively. #### IV-C2 Audio-visual fusion via multi-head cross-attention Cross-attention allows for the integration of both audio and visual information, leveraging the strengths of each modality. By fusing information from multiple modalities, the performance can be significantly improved compared to using a single modality alone. The operation can be written as: \[\mathbf{O}^{(\ell)}=\mathrm{MHA}\left(\mathbf{V}^{(\ell-1)},\mathbf{A}^{(\ell -1)},\mathbf{A}^{(\ell-1)}\right) \tag{2}\] where \(\mathbf{V}^{(\ell-1)}\) is a visual patch representation at layer \(\ell-1\), and \(\mathbf{A}^{(\ell-1)}\) is the audio patch representation. The matrix \(\mathbf{Q}\) is obtained using learnable projection weights from the video tokens, while \(\mathbf{K}\) and \(\mathbf{V}\) are obtained using learnable projection weights from the audio tokens. We then use multi-head cross-attention to calculate the new audio-visual fused representation \(\mathbf{O}^{(\ell)}\) as a weighted summation of the audio-visual features.
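A minimal PyTorch sketch of the two fusion operators of Eqs. (1) and (2) is given below; the embedding width and head count follow the backbone configuration reported in Section VI, but the class and method names are ours.

```python
import torch
import torch.nn as nn

class TokenFusion(nn.Module):
    """Self-attention over concatenated tokens (Eq. 1) and audio-to-video
    cross-attention (Eq. 2); d and n_heads are illustrative choices."""

    def __init__(self, d=1024, n_heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d, n_heads, batch_first=True)

    def fuse_self(self, audio_tok, video_tok):
        # Concatenate the token sequences and let every token attend to all.
        tokens = torch.cat([audio_tok, video_tok], dim=1)   # (B, Ta+Tv, d)
        fused, _ = self.self_attn(tokens, tokens, tokens)
        return fused

    def fuse_cross(self, audio_tok, video_tok):
        # Video tokens form the queries; audio tokens supply keys and values.
        fused, _ = self.cross_attn(query=video_tok,
                                   key=audio_tok, value=audio_tok)
        return fused                                        # (B, Tv, d)

# Example: a batch of 2 clips, 64 audio tokens and 50 video tokens of width 1024.
f = TokenFusion()
a, v = torch.randn(2, 64, 1024), torch.randn(2, 50, 1024)
print(f.fuse_self(a, v).shape, f.fuse_cross(a, v).shape)
```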
#### IV-C3 Fusion via attention bottlenecks Bottleneck attention [68] is a technique used in multimodal fusion to address the problem of attention bottlenecks, which can occur when the number of modalities or the dimensionality of the feature space is large. It can be represented as follows: \[\mathrm{z}_{\mathrm{fsn}}=\left[z_{\mathrm{fsn}}^{1},z_{\mathrm{fsn}}^{2}, \ldots,z_{\mathrm{fsn}}^{B}\right] \tag{3}\] where \(\mathrm{z}_{\mathrm{fsn}}\) denotes a small set of \(B\) fusion bottleneck tokens. We then restrict all cross-modal attention flow to pass via these bottleneck tokens. More formally, for layer \(l\), we compute the token representations as follows: \[\begin{split}\left[\mathbf{z}_{\mathrm{rgb}}^{l+1}\|\hat{\mathbf{z}}_{\mathrm{fsn,rgb}}^{l+1}\right]&=\mathrm{Transformer}\left(\left[\mathbf{z}_{\mathrm{rgb}}^{l}\|\mathbf{z}_{\mathrm{fsn}}^{l}\right];\theta_{\mathrm{rgb}}\right)\\ \left[\mathbf{z}_{\mathrm{spec}}^{l+1}\|\hat{\mathbf{z}}_{\mathrm{fsn,spec}}^{l+1}\right]&=\mathrm{Transformer}\left(\left[\mathbf{z}_{\mathrm{spec}}^{l}\|\mathbf{z}_{\mathrm{fsn}}^{l}\right];\theta_{\mathrm{spec}}\right)\end{split} \tag{4}\] \[\mathbf{z}_{\mathrm{fsn}}^{l+1}=\mathrm{Avg}_{i}\left(\hat{\mathbf{z}}_{\mathrm{fsn},i}^{l+1}\right) \tag{5}\] where \(i\) indexes each modality, \(\mathrm{Transformer}\) is the Transformer encoder, \(\mathbf{z}_{\mathrm{rgb}}\) are the video tokens, \(\mathbf{z}_{\mathrm{spec}}\) are the audio spectrogram tokens, \(\left[\mathbf{z}_{\mathrm{rgb}}^{l}\|\mathbf{z}_{\mathrm{fsn}}^{l}\right]\) is the concatenation of the modality tokens with the bottleneck tokens, and \(\theta_{\mathrm{rgb}}\) and \(\theta_{\mathrm{spec}}\) are the video and audio parameters of the cross-transformer. In this case, the audio and video tokens can only exchange information via the bottleneck tokens \(\mathrm{z}_{\mathrm{fsn}}\) within a transformer layer. We first create modality-specific temporary bottleneck fusion tokens \(\hat{\mathrm{z}}_{\mathrm{fsn},i}\), which are updated separately and simultaneously with the audio and visual information. The final fused tokens from each cross-modal update are then averaged in Equation (5). ### _Robustness of FFIA to acoustic and visual noise_ Fish behaviour is influenced by various environmental factors, including natural ambient noise and visual disturbances. Incorporating audio-visual noise in fish behaviour analysis helps improve the robustness of the models. For audio input corruption, we injected bubble and pump noise into the entire audio at different Signal-to-Noise Ratio (SNR) levels, from -10 to 20 dB. For visual input corruption, there can be various noise types: additive noise, blur, colour distortion, occlusion, etc. We use occlusion and Gaussian noise, which often occur in actual aquaculture. For audio-visual input corruption, both audio corruption and visual corruption are injected into random chunks of each stream, so that both streams can be corrupted simultaneously or alternately. ## V Unified Mixed-Modality Based FFIA The audio-visual fusion method has demonstrated a notable advantage over the single-modality based approaches, exhibiting robust performance for FFIA even in noisy environments (as shown in our experiments later). However, the need to customize individual models for each modality leads to increased computational costs, and the absence of any single modality negatively impacts the overall performance of the audio-visual fusion model. These factors pose significant challenges for FFIA tasks. To address these issues and design an efficient and robust solution for FFIA, we propose a novel unified mixed model, named _U-FFIA_. It is a single model capable of processing audio, visual or audio-visual modalities, which is achieved by modality dropout to simulate different combinations of input modalities.
Although there are existing audio-visual unified models, such as UAVM [69] for VGGSound classification and U-HUBERT [70] for audio-visual speech, these methods are unsupervised and demand a substantial volume of unlabeled data for pre-training. Additionally, their training process is intricate and computationally expensive, which may limit their practical applicability in the context of aquaculture. In contrast, our proposed model employs a supervised training approach and incorporates several innovative techniques to enhance its portability and performance. For example, we address the computational burden arising from temporally redundant audio features, such as mel-spectrograms, by utilizing SimPFs [23] to reduce redundant information. Furthermore, we leverage pre-trained audio models to extract audio features as a complementary modality that supports video represented with fewer frames. To enhance the performance of the audio and video student networks, we employ knowledge distillation, using a pre-trained model with full audio and video frames as the teacher network. In addition, we conduct extensive experiments on audio, visual, and audio-visual modalities in noisy environments, which demonstrate the effectiveness and robustness of our proposed method in various scenarios. The framework of our proposed model is shown in Fig. 3.
Fig. 3: Overview of the proposed method. We first use the pre-trained video encoder and audio encoder to extract the video and audio features. The audio encoder is a pre-trained PANNs model (a MobileNetV2 model pre-trained on AudioSet). The video encoder is simply a linear projection layer. We cut the features into non-overlapping \(16\times 16\) patches, and use modality dropout to randomly select one modality at each step during model training. The shared transformer encoder has \(6\) layers with \(8\) heads, embedding dimension \(768\), and FFN dimension \(1024\), using the pre-norm residual connection setup.
### _Audio-visual input_ We first use the audio pre-processing method to obtain the mel spectrogram features of the audio signal. However, in the aquaculture environment, various sources of noise, such as bubbles and machine noise, can introduce redundancy in the mel-spectrogram data when used as input to CNNs. To tackle this issue and optimize computational efficiency, we adopt the simple pooling front-ends (SimPFs) [23, 71] method. This technique effectively reduces redundant information present in the input mel-spectrograms, thereby enhancing the overall computational efficiency of the model. The spectral pooling method of SimPFs computes the discrete Fourier transform (DFT) of the audio frames \(\mathbf{F}^{a}\) and then crops the centre of the spectrum with a bounding box of shape \((S,kN^{a})\) to get \(\tilde{\mathbf{F}}_{\text{crop}}^{a}\), where \(S\) refers to the dimension of the spectral feature, \(k\) denotes the compression rate, ranging from 0 to 1, and \(N^{a}\) is the number of audio frames. The output of the inverse discrete Fourier transform (IDFT), \(\hat{\mathbf{F}}^{a}\), is then taken as the compressed audio, as follows: \[\begin{split}\tilde{\mathbf{F}}^{a}&=\mathrm{DFT}\left(\mathbf{F}^{a}\right)\\ \tilde{\mathbf{F}}_{\text{crop}}^{a}&=\tilde{\mathbf{F}}^{a}\left(S,kN^{a}\right)\\ \hat{\mathbf{F}}^{a}&=\mathrm{IDFT}\left(\tilde{\mathbf{F}}_{\text{crop}}^{a}\right)\end{split} \tag{6}\]
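A minimal NumPy sketch of this spectral pooling step is shown below; taking the crop symmetrically around the zero-frequency bin of the centred spectrum is our assumption about the implementation detail.

```python
import numpy as np

def spectral_pool(mel, k):
    """Compress a mel spectrogram of shape (S, N) along time to k*N frames
    by cropping the centred DFT, as in Eq. (6)."""
    S, N = mel.shape
    spec = np.fft.fftshift(np.fft.fft(mel, axis=1), axes=1)  # DFT over frames
    n_keep = max(1, int(round(k * N)))
    start = (N - n_keep) // 2
    crop = spec[:, start:start + n_keep]                     # centre crop
    crop = np.fft.ifftshift(crop, axes=1)
    return np.fft.ifft(crop, axis=1).real                    # compressed audio

mel = np.random.rand(128, 128)        # 128 mel bins x 128 frames (see Sec. VI)
print(spectral_pool(mel, 0.5).shape)  # -> (128, 64)
```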
In order to prevent the model from over-relying on the video stream, we use a linear layer to encode the video input, forcing the video encoder to learn simple features, and then split each frame following the encoding proposed in ViT. We use the pre-trained MobileNetV2 model as the audio encoder to extract the high-dimensional features of the input mel spectrogram. Finally, we obtain the audio input \(h^{a}\in\mathbb{R}^{T_{a}\times d}\) and the video input \(h^{v}\in\mathbb{R}^{T_{v}\times d}\), where \(T_{a}\) is the number of audio features, \(T_{v}\) is the number of video features and \(d\) is the dimension of the features. ### _Fusion via efficient video with audio encoding_ Compared with the audio input, the video stream is three-dimensional (two spatial dimensions and one temporal dimension) and contains a wealth of contextual and temporal information. However, video contains high redundancy across multiple frames, which makes it computationally expensive to process [72]. We propose an efficient audio-visual fusion framework that uses audio cues for long-range video processing, which reduces the computational cost. For the video modality, we randomly downsample the video frames from the whole input video (50 frames) and get the video input \(X^{v}\in\mathbb{R}^{N_{f}\times 3\times H\times W}\), where \(N_{f}\) is the number of frames, \(H\) is the height of the frame, and \(W\) is the width of the frame. Following ViT, we split each frame into \(N\) non-overlapping patches of shape \(P\times P\), and flatten those patches into sequences \(X_{p}\in\mathbb{R}^{N\times 3P^{2}}\). The position embedding is added to the patch embedding to retain positional information. A specialized CLS token \(\mathbf{V}_{cl}^{(0)}\) is prepended to the embedded patches. Finally, the sequences of embedding vectors serve as input to the model as \(\mathbf{h}^{v}\in\mathbb{R}^{T_{v}\times d}\). For the audio modality, we use the pre-trained audio model MobileNetV2 (pre-trained on AudioSet) as the audio encoder to process the mel spectrogram preprocessed by SimPFs, from which we obtain the audio features. Afterwards, an average pooling is applied along the frequency dimension, and then maximum and average operations are used along the time dimension. We sum the maximized and averaged features and then project them into the shared embedding space through a new multi-layer perceptron (MLP) block with two linear layers and a ReLU activation hidden layer. Finally, we obtain the audio embedding \(\mathbf{h}^{a}\in\mathbb{R}^{T_{a}\times d}\). We also prepend the specialized audio class token \(\mathbf{A}_{cl}^{(0)}\in\mathbb{R}^{1\times 768}\) to the audio-embedded patches.
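The frequency/time pooling and MLP projection described above can be sketched as follows. This clip-level variant follows the pooling operations as written; the encoder channel width (1280 for MobileNetV2) is an assumption, and the model additionally retains a sequence of \(T_{a}\) audio tokens not shown here.

```python
import torch
import torch.nn as nn

class AudioProjection(nn.Module):
    """Pool encoder feature maps over frequency and time, then project into
    the shared embedding space (channel width assumed)."""

    def __init__(self, c_in=1280, d=768):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(c_in, d), nn.ReLU(), nn.Linear(d, d))

    def forward(self, feat):
        # feat: (B, C, T, F) feature map from the MobileNetV2 audio encoder
        feat = feat.mean(dim=3)                             # average over frequency
        pooled = feat.max(dim=2).values + feat.mean(dim=2)  # max + average over time
        return self.mlp(pooled)                             # (B, d) audio embedding

emb = AudioProjection()(torch.randn(2, 1280, 16, 4))
print(emb.shape)  # -> torch.Size([2, 768])
```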
To efficiently incorporate temporal audio cues into static video frame representations, and thereby reduce the computational cost of processing video frames, we use an audio-to-video attention algorithm. This method can be written as: \[\mathbf{S}_{t}^{(\ell)}=\mathrm{MHA}\left(\mathbf{V}_{t}^{(\ell-1)},\mathbf{V}_{t}^{(\ell-1)},\mathbf{V}_{t}^{(\ell-1)}\right)+\mathbf{V}_{t}^{(\ell-1)} \tag{7}\] \[\mathbf{O}_{t}^{(\ell)}=\mathrm{MHA}\left(\mathbf{S}_{t}^{(\ell-1)},\mathbf{A}^{(\ell-1)},\mathbf{A}^{(\ell-1)}\right)+\mathbf{S}_{t}^{(\ell-1)} \tag{8}\] where \(\mathbf{V}_{t}^{(\ell-1)}\) is a visual patch representation, \(\mathbf{S}_{t}^{(\ell)}\in\mathbb{R}^{(N+1)\times d}\) is the newly computed video representation, \(\mathbf{A}^{(\ell-1)}\in\mathbb{R}^{t\times d}\) is the audio representation at layer \(\ell-1\), and \(\mathbf{S}_{t}^{(\ell-1)}\in\mathbb{R}^{(N+1)\times d}\) is the video representation at layer \(\ell-1\). To calculate the new audio-visual representation \(\mathbf{O}_{t}^{(\ell)}\), we utilize the multi-head cross-attention mechanism, which performs a weighted summation of the audio-visual features. This allows the model to effectively incorporate long-range audio cues into the visual features. Due to the compact nature of the audio representation, this operation can be implemented efficiently. We perform temporal pooling over the CLS tokens \(\mathbf{AV}_{cl}^{(0)}\) across all video frames, resulting in the final audio-visual representation \(\mathbf{f}\in\mathbb{R}^{d}\). To produce the final output embedding vector, three different learnable CLS tokens are employed. During training, the classification head is implemented using three MLPs with one hidden layer. This decoding strategy enables the model to produce multiple outputs as required, making it suitable for dense tasks like knowledge distillation or contrastive learning. ### _Modality dropout_ In multi-modal learning scenarios, it is common to encounter situations where one or more modalities may be absent or unavailable. For example, in tasks involving multi-modal recognition, certain videos may lack accompanying audio or textual information. To address this challenge, we adopt a method called "modality dropout", motivated by the work in [73]. More specifically, this approach is designed to train the model in such a way that it can effectively handle missing modalities by randomly excluding specific modalities during the training process. With this method, the model can be prevented from over-relying on a particular modality, such as audio, which may lead to the neglect of valuable information from other modalities, like video. Modality dropout encourages the model to produce predictions irrespective of the modalities used as input, thereby enhancing its overall robustness. In this study, we adopt modality dropout to prompt the model to learn how to optimally utilize the available modalities while handling instances where certain modalities are missing. As a result of training with modality dropout, the model becomes proficient in leveraging the available modalities effectively, even when some modalities are not present. This approach improves performance and enhances robustness, as demonstrated in our experiments. Moreover, our approach is capable of making predictions for three distinct input modalities using a single model. This simplifies the hyperparameter selection process and improves computational efficiency, which is highly promising for applications in aquaculture. In this study, we employ modality dropout as a technique to mask the complete features of one modality before fusing the audio and visual inputs into the transformer encoder.
When both audio and video modalities are utilized as inputs, we assign a probability of \(p_{av}\) for their joint selection. Conversely, when only one modality is used, the audio input is selected with a probability of \(p_{a}\), while the video input is selected with a probability of \(1-p_{a}\). The feature fusion process with modality dropout is mathematically represented as follows: \[\mathbf{f}_{t}^{av}=\begin{cases}\mathrm{concat}\left(\mathbf{f}_{t}^{a}, \mathbf{f}_{t}^{v}\right)&\text{with }p_{av}\\ \mathrm{concat}\left(\mathbf{f}_{t}^{a},\mathbf{0}\right)&\text{with }\left(1-p_{av} \right)p_{a}\\ \mathrm{concat}\left(\mathbf{0},\mathbf{f}_{t}^{v}\right)&\text{with }\left(1-p_{av} \right)\left(1-p_{a}\right)\end{cases} \tag{9}\] where concat denotes channel-wise concatenation. We apply modality dropout at each iteration during training. ### _Unified model with knowledge distillation_ Since we employ a limited number of frames for both audio and visual inputs and a single model to train three modalities, there is a possibility that the performance might be comparatively lower than that of a customized single-modality based model. To address this limitation, we leverage the technique of knowledge distillation (KD) to enhance the performance of our audio and visual models. Knowledge distillation, initially proposed by Hinton et al. [25], is a model compression method wherein a large-scale model (teacher) is compressed into a smaller model (student) while preserving similar performance. This compression results in improved inference speed, making it more convenient for practical industrial applications [74]. The essence of knowledge distillation is using the logits output by the teacher model (in this case, the pre-trained audio model, CNN14, and visual model, S3D) as the supervision information to guide the learning of the smaller student model; the teacher model uses all the audio and visual frames. During training, we freeze the teacher model to focus solely on training the student model using the knowledge distillation loss. By employing knowledge distillation, we effectively transfer the knowledge learned by the larger teacher model to the compact student model. This approach empowers the student model to achieve enhanced performance, thereby rendering it suitable for practical applications, particularly in domains such as aquaculture. We use the following loss for the student model training: \[\mathrm{Loss}\ =\lambda\,\mathrm{Loss}_{g}\left(\psi\left(Z_{s}\right),y\right)+ \left(1-\lambda\right)\mathrm{Loss}_{d}\left(\psi\left(Z_{s}\right),\psi\left( Z_{t}/\tau\right)\right) \tag{10}\] where \(\lambda\) is the balancing coefficient, \(y\) is the ground truth of FFIA, \(\mathrm{Loss}_{g}\) and \(\mathrm{Loss}_{d}\) are the ground truth and distillation losses, respectively, \(\psi\) is the activation function, \(Z_{s}\) and \(Z_{t}\) are the logits of the student and teacher model, respectively, and \(\tau\) is the temperature parameter. In this paper, we use the Kullback-Leibler divergence as \(\mathrm{Loss}_{d}\). For cross-model KD, the teacher and student may have different logit distributions; thus, we only apply \(\tau\) on the teacher logits to explicitly control the difference.
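The modality dropout of Eq. (9) and the distillation objective of Eq. (10) can be sketched in a few lines of PyTorch; the probabilities and hyperparameters mirror the values reported in Section VI, while the function names are ours.

```python
import torch
import torch.nn.functional as F

def modality_dropout(f_a, f_v, p_av=0.5, p_a=0.5):
    """Eq. (9): keep both streams with probability p_av; otherwise keep
    audio with probability p_a or video with probability 1 - p_a."""
    if torch.rand(1).item() >= p_av:
        if torch.rand(1).item() < p_a:
            f_v = torch.zeros_like(f_v)   # audio-only step
        else:
            f_a = torch.zeros_like(f_a)   # video-only step
    return torch.cat([f_a, f_v], dim=-1)  # channel-wise concatenation

def distillation_loss(z_s, z_t, y, lam=0.5, tau=2.5):
    """Eq. (10): cross-entropy on the labels plus KL divergence to the
    teacher, with the temperature applied to the teacher logits only."""
    ce = F.cross_entropy(z_s, y)
    kl = F.kl_div(F.log_softmax(z_s, dim=-1),
                  F.softmax(z_t / tau, dim=-1), reduction="batchmean")
    return lam * ce + (1.0 - lam) * kl

# Example with a batch of 8 student/teacher logits over 4 intensity classes.
loss = distillation_loss(torch.randn(8, 4), torch.randn(8, 4),
                         torch.randint(0, 4, (8,)))
```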
## VI Experiments ### _Audio data preparation_ #### VI-A1 Audio data processing We use the mel spectrogram as acoustic features, which has been widely used for audio classification [24]. The original audio sampling rate is \(256\) kHz, which can lead to computational and storage burdens. We down-sample the audio signals to \(64\) kHz to reduce the computational complexity. Then we calculate the Short-Time Fourier Transform (STFT) with a Hanning window of \(2048\) samples and a hop size of \(1024\) samples, and finally we apply mel filter banks with \(128\) bins. Therefore, for a 2-second audio signal, we have a mel spectrogram with the shape of \(128\times 128\). We use the SimPF method, a simple non-parametric pooling operation, to reduce the redundant information within the mel-spectrogram, leading to a mel spectrogram of shape \(64\times 128\). This streamlined mel spectrogram representation enhances the efficacy of audio classification tasks. To evaluate the noise robustness of the model, we add bubble and pump noise to the entire audio at different SNR levels, from -10 to 20 dB. #### VI-A2 Loss and training During training, we use the SpecAugment method proposed in [75] to expand our training samples. With SpecAugment, the spectrogram can be modified by masking blocks of consecutive frequency channels and masking blocks of time steps [75]. Frequency masking is applied such that \(f\) consecutive mel frequency bins \(\left[f_{0},f_{0}+f\right]\) are masked, where \(f\) is drawn from a uniform distribution between \(0\) and a frequency mask parameter \(f^{\prime}\), and \(f_{0}\) is chosen from \(\left[0,\,F-f\right]\), where \(F\) is the number of mel frequency bins [76]. The cross-entropy loss is particularly well-suited for classification problems. This loss function quantifies the dissimilarity between the predicted class probabilities and the ground truth class labels. By choosing the cross-entropy loss, we can effectively train our model to improve its classification performance. ### _Video data processing_ The original video comprises \(50\) frames, each with a resolution of \(2560\times 1440\), resulting in a substantial amount of redundant information across multiple frames. To address this issue and reduce computational complexity, we employ a downsampling technique. Specifically, we randomly select \(4\) frames from the video and resize each frame to a smaller dimension of \(224\times 224\). This resizing step aims to retain essential visual information while making the data more manageable for training purposes. By combining downsampling, random cropping, and colour augmentation techniques during training, we effectively utilize the high-resolution video data while promoting the model's ability to generalize well to diverse inputs. For visual noise corruption modelling, we add darkness and Gaussian noise with a maximum variance of 0.2. ### _Experimental setups_ #### VI-C1 Audio experiment setup We use the MobileNetV2 model pre-trained on AudioSet and fine-tune it on the _AV-FFIA_ dataset. The Adam optimizer [77] with a learning rate of \(0.001\) is used for training the model. The batch size is set to \(200\) and the number of epochs is \(20\). The training and evaluation are performed on an Nvidia RTX-3090Ti 24GB GPU. #### VI-C2 Video experiment setup We use an S3D model [64] pre-trained on Kinetics, and fine-tune it on our AV-FFIA dataset. We use the Adam optimizer with a learning rate of \(0.001\) for fine-tuning the model. The batch size is set to \(20\) and the number of epochs is \(20\). The training and evaluation are performed on an Nvidia RTX-3090Ti 24GB GPU.
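The audio front-end of Sec. VI-A1 and the SNR-controlled noise injection used in our robustness experiments can be sketched as follows; the use of `librosa` is our choice for illustration and is not prescribed by the paper.

```python
import numpy as np
import librosa

def log_mel(path, sr=64000, n_fft=2048, hop=1024, n_mels=128):
    """Down-sample to 64 kHz and compute a 128-bin log-mel spectrogram;
    a 2-second clip yields roughly 128 frames."""
    y, _ = librosa.load(path, sr=sr, duration=2.0)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft,
                                         hop_length=hop, n_mels=n_mels)
    return librosa.power_to_db(mel)

def mix_at_snr(clean, noise, snr_db):
    """Scale a noise waveform so the mixture has the requested SNR in dB."""
    noise = np.resize(noise, clean.shape)
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(p_clean / (p_noise * 10.0 ** (snr_db / 10.0)))
    return clean + scale * noise
```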
#### VI-C3 Audio-visual fusion setup We use the pre-trained MobileNetV2 model and S3D model to extract the audio and video features, respectively. Then, we split the audio and visual features following the encoding proposed in ViT. We use the ViT architecture (\(L=6\), \(N_{H}=8\), \(d=1024\)) as our backbone. We trained the model on the AV-FFIA training data and evaluated it on the AV-FFIA test data. The model is trained on an Nvidia RTX-3090Ti 24GB GPU with a batch size of \(20\), a number of epochs of \(200\), and the Adam optimizer with a learning rate of \(0.0001\). #### VI-C4 Unified mixed-model setup We use the pre-trained MobileNetV2 model to extract audio features and apply an average pooling along the frequency dimension. For the video, we use the linear layer to encode the video input into simple features and then split each frame following the encoding proposed in ViT. We also use the ViT architecture (\(L=6\), \(N_{H}=8\), \(d=1024\)) as our backbone. We use the Adam optimizer with a learning rate of \(0.0001\) for training the model. The batch size is set to \(20\) and the number of epochs is \(400\). For knowledge distillation during the training, we fix the hyperparameters \(\lambda=0.5\) and \(\tau=2.5\), which control the balance between the teacher and student models' contributions and the temperature of the softened probabilities, respectively. We use the cross-entropy (CE) loss as \(\mathrm{Loss}_{g}\) and the softmax activation function during training. The training and evaluation are performed on an Nvidia RTX-3090Ti 24GB GPU. ### _Evaluation metrics_ Accuracy refers to the number of correct predictions divided by the total number of predictions, which provides a straightforward and intuitive measure of the overall performance of a classification model. In most of the literature on the classification of FFIA, accuracy is a popular evaluation metric for classification. In order to compare with the methods in the previous literature, we also use accuracy as the performance metric in our evaluations. ## VII Results ### _The benchmark results of audio-based FFIA_ We compared a range of audio classification models by fine-tuning them on our _AV-FFIA_ dataset. The results indicated that pre-trained models are better than retrained models, and that CNN-based models surpassed transformer models on the _AV-FFIA_ dataset. The transformer models, originally designed for sequential data, such as natural language processing (NLP) tasks, heavily rely on self-attention mechanisms, which excel in modelling long-range dependencies but might struggle to discern the fine-grained local patterns evident in audio spectrograms. Furthermore, we investigated the effectiveness of the SimPF (spectral) method with varying compression coefficient settings on MobileNetV2 models. Encouragingly, even with a \(50\%\) compression setting, we observed only a negligible \(0.01\%\) decrease in classification accuracy, and the number of FLOPs was reduced by half, to \(66.979\) M. Visual analysis in Fig. 4 revealed that despite reducing the spectrogram resolution by half, the fundamental patterns of fish feeding sounds remained intact. As a result, we adopted this method as an effective means to reduce redundant information within the mel-spectrogram. ### _The benchmark results of video-based FFIA_ We compared commonly used video classification models, including the I3D [65], S3D [64], and 3D-ViT [63] models, on our _AV-FFIA_ dataset. We randomly sampled \(20\) frames from the original \(50\) frames to reduce the large amount of redundant information in videos.
Table II presents the performance of different video-based baseline models on the _AV-FFIA_ dataset, with the S3D model achieving the highest accuracy of \(0.898\), outperforming the other models. Compared with the performance of the S3D model, the performance of the 3D-ViT [63] and ViViT [62] models reached approximately \(0.88\) and \(0.74\), respectively. This discrepancy can be attributed to the limitations of ViT-based architectures in capturing temporal information, potential overfitting due to inadequate training data, and differences in model complexity and parameter efficiency. S3D models are specifically designed to capture spatiotemporal information in videos using 3D convolutions. Additionally, they are often pre-trained on large-scale video datasets like Kinetics, which encompass diverse and representative video clips. One notable advantage of S3D models is their ability to strike a balance between computational complexity and model performance. As evident from their fewer parameters (\(7.9\) million) and FLOPs (\(22.826\) G) compared to the I3D [65] and 3D-ResNet [78] models, they are more efficient, making them a promising choice for applications in aquaculture. \begin{table} \begin{tabular}{l c c c c c} \hline \hline Model & -10 dB & -5 dB & 0 dB & 10 dB & 20 dB \\ \hline MobileNetV1 & 0.652 & 0.639 & 0.699 & 0.737 & 0.761 \\ MobileNetV2 & 0.646 & 0.662 & 0.676 & 0.737 & 0.762 \\ ResNet18 & 0.671 & 0.671 & 0.711 & 0.743 & 0.768 \\ ResNet22 & 0.675 & 0.690 & 0.714 & 0.759 & 0.804 \\ CNN10 & 0.691 & 0.701 & 0.713 & 0.761 & 0.807 \\ CNN14 & **0.704** & 0.712 & 0.723 & 0.759 & 0.818 \\ Pre-CNN6 & 0.662 & 0.656 & 0.683 & 0.710 & 0.787 \\ Pre-CNN10 & 0.702 & 0.708 & 0.722 & 0.741 & 0.785 \\ Pre-Mobilevit & 0.623 & 0.623 & 0.695 & 0.727 & 0.776 \\ **MobileNetV2-PANNs** & 0.688 & **0.718** & **0.726** & **0.763** & **0.826** \\ \hline \hline \end{tabular} \end{table} TABLE III: The results of the different audio-based methods under different SNR conditions. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{FLOPs} & \multicolumn{3}{c}{Parameters} \\ \cline{2-7} Model & A (M) & V (G) & AV (G) & A (M) & V (M) & AV (M) \\ \hline Self-attention & 210.346 & 68.193 & 68.404 & 39.781 & 45.710 & 47.695 \\ Cross-attention & n/a & n/a & 60.933 & n/a & n/a & 35.082 \\ MBT & n/a & n/a & 68.690 & n/a & n/a & 47.689 \\ **U-FFIA** & **105.336** & **15.401** & **15.488** & **21.032** & **19.506** & **21.626** \\ \hline \hline \end{tabular} \end{table} TABLE V: The FLOPs and parameters of different A-V fusion methods on the _AV-FFIA_ dataset. ### _The benchmark results of audio-visual based FFIA_ We compared various audio-visual fusion methods, such as cross-attention, self-attention, and MBT, on the _AV-FFIA_ dataset. In Table IV, we present the results of different baseline methods based on audio-visual modalities on _AV-FFIA_. Compared with the performance achieved with individual modalities, we observed that all the audio-visual fusion methods surpassed any single modality, which indicates that the multimodal approach significantly outperforms the single-modality based approaches. In addition, we found that the robustness of the audio-visual approach surpasses that of the single-modality based approaches in noisy environments. Notably, the self-attention method achieved an accuracy of \(0.882\), while the cross-attention method achieved \(0.917\). We believe this is because the cross-attention method enables the model to selectively focus on relevant information in both the audio and visual modalities, whereas the self-attention method treats all information equally. Consider the scenario of fish feeding where the fish quickly consume all the current feed, so that no sound is produced, while the video still shows the fish highly aggregated and in a state of hunger; the cross-attention method allows the information from the audio modality to complement that from the visual modality in such cases. On the other hand, the Multimodal Bottleneck Transformer (MBT) method achieved a lower accuracy of only \(0.724\) compared to the cross-attention method. This could be attributed to attention bottlenecks that attend to each modality sequentially, leading to a failure to capture the complex relationships among modalities. Consequently, there may be information loss and a lack of direct interaction between modalities. ### _The unified model_ Table IV provides an in-depth analysis of our proposed model and several audio-visual fusion methods that are currently in use. Our _U-FFIA_ model achieved an accuracy of \(0.786\) and \(0.836\) for the audio and video single modalities, respectively, and an accuracy of \(0.907\) for audio-visual fusion. It is worth noting that the use of fewer audio and video frames can slightly degrade the performance. However, knowledge distillation can significantly improve the performance of U-FFIA across both the audio and video modes. Through the application of the knowledge distillation approach, we observed marked improvements in both audio and visual performance (\(0.824\) and \(0.857\), respectively).
These enhancements place the individual-modality models on par with other audio-visual fusion methods, including self-attention and cross-attention, in terms of performance. Furthermore, we conducted a thorough analysis of computational complexity and model parameters across the various models (refer to Table V). In comparison to a Transformer model employing all video frames, our proposed model with fewer video frames achieves a remarkable reduction of 57% in parameters, accompanied by a substantial 77% decrease in FLOPs, while incurring a negligible 3% dip in performance. This efficiency is mirrored in the audio domain, where our model shows a 48% decrease in model size in terms of the number of parameters and a 50% reduction in FLOPs, with only a marginal performance variance. Importantly, our proposed unified model exhibits comparable performance on audio-visual fusion while concurrently reducing the number of parameters and FLOPs by 54% and 77%, respectively. This compelling evidence underscores the efficiency of our proposed model. Additionally, our proposed model has better robustness, as shown in Table IV. Our proposed _U-FFIA_ model is a single model capable of processing different modalities, and it achieved good performance for _AV-FFIA_ in both effectiveness and robustness across various scenarios.

### _Ablation studies_

To gain a deeper understanding of the impact of modality dropout configurations on our _AV-FFIA_ dataset during training, we consider four different modality dropout configurations. These configurations are represented by the probabilities of using both the audio and video streams \(P_{av}\), only the audio stream \(P_{a}\), and only the video stream \(P_{v}\), as shown in Table VI. The first configuration, denoted as (\(P_{av}\), \(P_{a}\), \(P_{v}\)) = (1.00, 0.00, 0.00), corresponds to not employing any modality dropout. When training the U-FFIA model, we observed that increasing the value of \(P_{av}\) in models trained with modality dropout resulted in slightly worse performance on the visual-only and audio-only test sets. We attribute this outcome to an imbalance in the training data, leading to overfitting in the audio-visual modality but insufficient training of the single modalities. In contrast, a slight increase in the probability for the video modality resulted in a slight improvement in video performance. This improvement can be attributed to the simple linear transformation undergone by the video modality's input, which captures only shallow visual features. Furthermore, to reduce computational complexity, we randomly down-sampled the video, resulting in a reduced number of video frames. Consequently, achieving optimal performance may necessitate additional training iterations. For the audio modality, the high-dimensional features are extracted from pre-trained models; as a result, satisfactory performance could be achieved with fewer iterations. Nevertheless, all three modality-dropout configurations significantly outperformed models trained without modality dropout, showing the importance of modality dropout in training a single model for three distinct modalities.
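As a quick sanity check on the efficiency figures quoted above, the relative savings can be recomputed directly from the Table V entries. The following minimal sketch is our own illustration (assuming the tabulated units; the variable names are ours):

```python
# Table V entries: FLOPs (A in M, V/AV in G) and parameters (in M)
baseline = {"flops": {"A": 210.346, "V": 68.193, "AV": 68.404},   # self-attention
            "params": {"A": 39.781, "V": 45.710, "AV": 47.695}}
u_ffia   = {"flops": {"A": 105.336, "V": 15.401, "AV": 15.488},
            "params": {"A": 21.032, "V": 19.506, "AV": 21.626}}

for kind in ("flops", "params"):
    for mod in ("A", "V", "AV"):
        saving = 1.0 - u_ffia[kind][mod] / baseline[kind][mod]
        print(f"{mod:>2} {kind}: {saving:.0%} reduction")
# -> roughly 50%/77%/77% FLOPs reductions and 47%/57%/55% parameter
#    reductions for A/V/AV, consistent (up to rounding) with the
#    figures reported above
```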
## VIII Conclusion and Future Work

In this paper, we have presented _U-FFIA_, a novel unified mixed-modality based method for FFIA. _U-FFIA_ is a single model capable of processing audio, visual, or audio-visual modalities for FFIA. We also introduced a large-scale audio-visual dataset for FFIA and conducted extensive benchmarking experiments involving audio, video, and audio-visual fusion techniques on our new dataset. The _U-FFIA_ model achieves performance better than (or on par with) SOTA modality-specific FFIA models, especially in noisy environments. In addition, our proposed model achieves this level of performance while requiring significantly fewer computational resources, highlighting its efficiency and effectiveness for FFIA in aquaculture. In the future, we will expand the _AV-FFIA_ dataset to cover different fish species and design on-device models to suit the needs of aquaculture applications.

## IX Acknowledgment

This work was supported by the Research and Demonstration of Digital Cage Integrated Monitoring System Based on Underwater Robot [China grant 2022YFE0107100], the Digital Fishery Cross-Innovative Talent Training Program of the China Scholarship Council (DF-Project), and a Research Scholarship from the China Scholarship Council. Ethics approval for this study was obtained from the Welfare and Ethical Committee of China Agricultural University (Ref: AW30901202-5-1). For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any author-accepted manuscript version arising.
2309.03429
A Liouville Theorem and Radial Symmetry for dual fractional parabolic equations
In this paper, we first study the dual fractional parabolic equation \begin{equation*} \partial^\alpha_t u(x,t)+(-\Delta)^s u(x,t) = f(u(x,t))\ \ \mbox{in}\ \ B_1(0)\times\R , \end{equation*} subject to the vanishing exterior condition. We show that for each $t\in\R$, the positive bounded solution $u(\cdot,t)$ must be radially symmetric and strictly decreasing about the origin in the unit ball in $\R^n$. To overcome the challenges caused by the dual non-locality of the operator $\partial^\alpha_t+(-\Delta)^s$, some novel techniques were introduced. Then we establish the Liouville theorem for the homogeneous equation in the whole space \begin{equation*}\label{B} \partial^\alpha_t u(x,t)+(-\Delta)^s u(x,t) = 0\ \ \mbox{in}\ \ \R^n\times\R . \end{equation*} We first prove a maximum principle in unbounded domains for anti-symmetric functions to deduce that $u(x,t)$ must be constant with respect to $x.$ Then it suffices for us to establish the Liouville theorem for the Marchaud fractional equation \begin{equation*} \partial^\alpha_t u(t) = 0\ \ \mbox{in}\ \ \R . \end{equation*} To circumvent the difficulties arising from the nonlocal and one-sided nature of the operator $\partial_t^\alpha$, we bring in some new ideas and simpler approaches. Instead of disturbing the anti-symmetric function, we employ a perturbation technique directly on the solution $u(t)$ itself. This method provides a more concise and intuitive route to establish the Liouville theorem for one-sided operators $\partial_t^\alpha$, including even more general Marchaud time derivatives.
Yahong Guo, Lingwei Ma, Zhenqiu Zhang
2023-09-07T01:18:37Z
http://arxiv.org/abs/2309.03429v1
# A Liouville Theorem and Radial Symmetry for dual fractional parabolic equations

###### Abstract

In this paper, we first study the dual fractional parabolic equation \[\partial_{t}^{\alpha}u(x,t)+(-\Delta)^{s}u(x,t)=f(u(x,t))\ \ \mbox{in}\ \ B_{1}(0)\times\mathbb{R},\] subject to the vanishing exterior condition. We show that for each \(t\in\mathbb{R},\) the positive bounded solution \(u(\cdot,t)\) must be radially symmetric and strictly decreasing about the origin in the unit ball in \(\mathbb{R}^{n}\). To overcome the challenges caused by the dual non-locality of the operator \(\partial_{t}^{\alpha}+(-\Delta)^{s},\) some novel techniques are introduced. Then we establish the Liouville theorem for the homogeneous equation in the whole space \[\partial_{t}^{\alpha}u(x,t)+(-\Delta)^{s}u(x,t)=0\ \ \mbox{in}\ \ \mathbb{R}^{n}\times\mathbb{R}.\] We first prove a maximum principle in unbounded domains for anti-symmetric functions to deduce that \(u(x,t)\) must be constant with respect to \(x.\) Then it suffices for us to establish the Liouville theorem for the Marchaud fractional equation \[\partial_{t}^{\alpha}u(t)=0\ \ \mbox{in}\ \ \mathbb{R}.\] To circumvent the difficulties arising from the nonlocal and one-sided nature of the operator \(\partial_{t}^{\alpha},\) we bring in some new ideas and simpler approaches. Instead of disturbing the anti-symmetric function, we employ a perturbation technique directly on the solution \(u(t)\) itself. This method provides a more concise and intuitive route to establish the Liouville theorem for one-sided operators \(\partial_{t}^{\alpha},\) including even more general Marchaud time derivatives.

**Mathematics Subject Classification (2020):** 35R11; 35B06; 47G30; 35B50; 35B53.

**Keywords:** dual fractional parabolic equations; direct method of moving planes; narrow region principle; radial symmetry; monotonicity; Liouville theorem.

## 1 Introduction

The primary objective of this paper is to investigate the qualitative properties of solutions to dual nonlocal parabolic equations associated with the operator \(\partial_{t}^{\alpha}+(-\Delta)^{s}\). More precisely, we first investigate the radial symmetry and monotonicity of solutions of the following equation in the unit ball \[\left\{\begin{array}{ll}\partial_{t}^{\alpha}u(x,t)+(-\Delta)^{s}u(x,t)=f(u(x,t))&\mbox{ in }\ B_{1}(0)\times\mathbb{R},\\ u(x,t)\equiv 0&\mbox{ in }\ B_{1}^{c}(0)\times\mathbb{R}.\end{array}\right. \tag{1.1}\] Then we establish the Liouville theorem for the homogeneous equation in the whole space \[\partial_{t}^{\alpha}u(x,t)+(-\Delta)^{s}u(x,t)=0\ \ \mbox{in}\ \ \mathbb{R}^{n}\times\mathbb{R}. \tag{1.2}\] The one-sided nonlocal time derivative \(\partial_{t}^{\alpha}\) considered here is known as the Marchaud fractional derivative of order \(\alpha\), defined as \[\partial_{t}^{\alpha}u(x,t)=C_{\alpha}\int_{-\infty}^{t}\frac{u(x,t)-u(x,\tau)}{(t-\tau)^{1+\alpha}}d\tau, \tag{1.3}\] with \(0<\alpha<1\), where \(C_{\alpha}=\frac{\alpha}{\Gamma(1-\alpha)}\) and \(\Gamma\) denotes the Gamma function. From the definition, such a fractional time derivative depends on the values of the function from the past; it is sometimes also denoted by \((D_{\rm left})^{\alpha}\). The spatial nonlocal elliptic pseudo-differential operator, the fractional Laplacian \((-\Delta)^{s}\), is defined as \[(-\Delta)^{s}u(x,t)=C_{n,s}P.V.\int_{\mathbb{R}^{n}}\frac{u(x,t)-u(y,t)}{|x-y|^{n+2s}}dy,
\tag{1.4}\] where \(0<s<1\), \(C_{n,s}:=\frac{4^{s}\Gamma\left(\frac{n+2s}{2}\right)}{\pi^{n/2}|\Gamma(-s)|}\) is a positive normalization constant, and \(P.V.\) stands for the Cauchy principal value. In order to guarantee that the singular integrals in (1.3) and (1.4) are well defined, we assume that \[u(x,t)\in\left(\mathcal{L}_{2s}\cap C_{loc}^{1,1}(\mathbb{R}^{n})\right)\times\left(C^{1}(\mathbb{R})\cap\mathcal{L}_{\alpha}^{-}(\mathbb{R})\right).\] Here, the slowly increasing function spaces \(\mathcal{L}_{2s}\) and \(\mathcal{L}_{\alpha}^{-}(\mathbb{R})\) are defined respectively by \[\mathcal{L}_{2s}:=\left\{v\in L_{loc}^{1}(\mathbb{R}^{n})\ |\int_{\mathbb{R}^{n}}\frac{|v(x)|}{1+|x|^{n+2s}}dx<+\infty\right\}\] and \[\mathcal{L}_{\alpha}^{-}(\mathbb{R}):=\left\{v\in L_{loc}^{1}(\mathbb{R})\ |\int_{-\infty}^{t}\frac{|v(\tau)|}{1+|\tau|^{1+\alpha}}d\tau<+\infty\ \mbox{for each}\ t\in\mathbb{R}\right\}.\] A typical application of the equation in (1.1) is in modeling continuous time random walks [30], which generalize Brownian random walks. This fractional kinetic equation introduces nonlocality in time, leading to history dependence due to unusually large waiting times, and nonlocality in space, accounting for unusually large jumps connecting distant regions, such as Lévy flights. In applications within the financial field, it can also be used to model the situation in which the waiting time between transactions is correlated with the ensuing price jump (cf. [35]). Another model is presented in [21] to simulate the transport of tracer particles in plasma, where the function \(u\) is the probability density function for tracer particles, representing the probability of finding a particle at time \(t\) and position \(x\), and the right-hand side \(f\) is a source term. In this case, the nonlocal space operator \((-\Delta)^{s}\) accounts for the avalanche-like transport that can occur, while the Marchaud time derivative \(\partial_{t}^{\alpha}\) accounts for the trapping of the tracer particles in turbulent eddies. It is worth mentioning that the nonlocal operator \(\partial_{t}^{\alpha}+(-\Delta)^{s}\) in problem (1.1) reduces to the local heat operator \(\partial_{t}-\Delta\) as \(\alpha\to 1\) and \(s\to 1\). The method of moving planes, initially introduced by Alexandroff in [24] and simplified by Berestycki and Nirenberg [3], is a widely used technique for studying the monotonicity of solutions to local elliptic and parabolic equations. However, this approach cannot be applied directly to pseudo-differential equations involving the fractional Laplacian due to its nonlocality. To circumvent this difficulty, Caffarelli and Silvestre [5] introduced an extension method that turns a nonlocal equation into a local one in higher dimensions. Thereby the traditional method of moving planes designed for local equations can be applied to the extended problem to establish qualitative properties of solutions, and a series of interesting results have been obtained in [6, 11, 12, 15, 17, 18, 26, 27, 29] and the references therein. However, this method is exclusively applicable to equations involving the fractional Laplacian, and sometimes additional restrictions may need to be imposed on the problems, which would not be necessary when dealing with the fractional equations directly. To remove these restrictions, Chen, Li, and Li [11] introduced a direct method of moving planes nearly ten years later.
This method significantly simplifies the proof process and has been widely applied to establish the symmetry, monotonicity, and non-existence of solutions for various elliptic equations and systems involving the fractional Laplacian, fully nonlinear nonlocal operators, fractional p-Laplacians, as well as higher order fractional operators; we refer to [9, 10, 16, 19, 28, 32] and the references therein. Recently, this method has also gradually been applied to study the geometric behavior of solutions to fractional parabolic equations with the usual local time derivative \(\partial_{t}u(x,t)\) (cf. [8, 14, 25, 40] and the references therein). In particular, the authors of [8] established the symmetry and monotonicity of positive solutions on a unit ball for the classical parabolic problem \[\left\{\begin{array}{ll}\partial_{t}u(x,t)+(-\Delta)^{s}u(x,t)=f(u(x,t))&\mbox{in}\ \ B_{1}(0)\times\mathbb{R},\\ u(x,t)\equiv 0&\mbox{in}\ \ B_{1}^{c}(0)\times\mathbb{R}.\end{array}\right. \tag{1.5}\] However, so far as we know, there is still a lack of research on the geometric properties of solutions to nonlocal parabolic equations (1.1) involving both the Marchaud fractional time derivative \(\partial_{t}^{\alpha}u(x,t)\) and the fractional Laplacian \((-\Delta)^{s}\). Recently, Guo, Ma and Zhang [23] employed a suitable sliding method, first introduced by Berestycki and Nirenberg [3], to demonstrate a generalized version of Gibbons' conjecture in the setting of the dual nonlocal parabolic equation \[\partial_{t}^{\alpha}u(x,t)+\mathcal{L}u(x,t)=f(t,u(x,t))\ \ \mbox{in}\ \ \mathbb{R}^{n}\times\mathbb{R}.\] Here the spatial nonlocal elliptic operator of integro-differential type is defined as \[\mathcal{L}u(x,t)=P.V.\int_{\mathbb{R}^{n}}\left[u(x,t)-u(y,t)\right]\cdot K(x,y)dy. \tag{1.6}\] Chen and Ma [13] carried out a suitable direct method of moving planes to obtain the monotonicity of positive solutions for the following problem \[\left\{\begin{array}{ll}\partial_{t}^{\alpha}u(x,t)+(-\Delta)^{s}u(x,t)=f(u(x,t))&\mbox{in}\ \mathbb{R}^{n}_{+}\times\mathbb{R},\\ u(x,t)\equiv 0&\mbox{in}\ \ (\mathbb{R}^{n}\setminus\mathbb{R}^{n}_{+})\times\mathbb{R}.\end{array}\right.\] Therefore, our first main interest here is to apply a direct method of moving planes to establish the radial symmetry and monotonicity of solutions to problem (1.1) in the unit ball. Our second main objective is to establish the Liouville theorem for equation (1.2). The classical Liouville theorem states that any bounded harmonic function defined in the entire space \(\mathbb{R}^{n}\) must be identically constant. This theorem plays a crucial role in deriving a priori estimates and establishing the qualitative properties of solutions, including their existence, nonexistence, and uniqueness. As a result, it has been extensively studied in the analysis of partial differential equations, and this line of study has been further extended to various types of elliptic and fractional elliptic equations, and even to \(k\)-Hessian equations, using diverse methods including Harnack inequalities, blow-up and compactness arguments, as well as Fourier analysis (cf. [7, 4, 15, 19, 22, 34, 38] and the references therein). In the context of the nonlocal homogeneous parabolic equation (1.2), when the domain of \(t\) is restricted to \((-\infty,0],\) Widder [39] proved that all bounded solutions \(u(x,t)\) must be constant in the case \(\alpha=1,s=1\); while for \(\alpha=1,0<s<1,\) Serra [37] showed that solutions satisfying a certain growth condition are constant.
In recent times, Ma, Guo and Zhang [31] demonstrated that the bounded entire solutions of the homogeneous master equation \[(\partial_{t}-\Delta)^{s}u(x,t)=0\text{ in }\mathbb{R}^{n}\times\mathbb{R}, \tag{1.7}\] must be constant. Here the fully fractional heat operator \((\partial_{t}-\Delta)^{s}\) was first proposed by Riesz [36], and it can be defined pointwise by the following singular integral: \[(\partial_{t}-\Delta)^{s}u(x,t):=C_{n,s}\int_{-\infty}^{t}\int_{\mathbb{R}^{n}}\frac{u(x,t)-u(y,\tau)}{(t-\tau)^{\frac{n}{2}+1+s}}e^{-\frac{|x-y|^{2}}{4(t-\tau)}}dyd\tau,\] where \(0<s<1,\)\(C_{n,s}=\frac{1}{(4\pi)^{n/2}|\Gamma(-s)|}\). It is essential to emphasize that in [31], we first established maximum principles for the operator \((\partial_{t}-\Delta)^{s}\) to conclude that any bounded solution \(u(x,t)\) must be constant with respect to the spatial variable \(x\), i.e. \(u(x,t)=u(t).\) This simplifies equation (1.7) to the one-sided one-dimensional fractional equation \[\partial_{t}^{\alpha}u(t)=0\text{ in }\mathbb{R}. \tag{1.8}\] Then we obtained that the bounded solution \(u(t)\) must be constant with respect to \(t\) by employing the method of Fourier analysis, which is applicable to more general distributions beyond bounded functions. However, this method does not fully capture the one-sided nature of the operator \(\partial_{t}^{\alpha}.\) Taking inspiration from these findings, our second main objective is to develop an alternative and more straightforward method to generalize the Liouville theorem to the dual fractional parabolic operator \(\partial_{t}^{\alpha}+(-\Delta)^{s}\) in the whole space. Now we explain the novelty and challenges of our approach in deriving the radial symmetry of solutions for problem (1.1) in the unit ball and the Liouville theorem for equation (1.2) in the whole space by analysing the characteristics of the one-sided fractional time operator \(\partial_{t}^{\alpha}\) and the (double-sided) fractional Laplacian \((-\Delta)^{s}\). In comparison with [8] for the operator \(\partial_{t}+(-\Delta)^{s}\) and [31] for the operator \((\partial_{t}-\Delta)^{s},\) a notable difference in this paper is that all the perturbations are new: they are constructed from different scalings and shifts of smooth cut-off functions \(\eta_{k}\) so as to match the dual fractional parabolic operator \(\partial_{t}^{\alpha}+(-\Delta)^{s}.\) Then, by applying the **Translation and Rescaling Invariance** \[\mathcal{L}\left[u\left(\frac{x-\bar{x}}{r}\right)\right]=\frac{1}{r^{\beta}}(\mathcal{L}u)\left(\frac{x-\bar{x}}{r}\right), \tag{1.9}\] to the specific operators \(\mathcal{L}=(-\Delta)^{s}\) with \(\beta=2s\) and \(\mathcal{L}=\partial_{t}^{\alpha}\) with \(\beta=\alpha\), we derive \[\mathcal{L}\eta_{k}\lesssim\frac{1}{r^{\beta}},\] which is a key estimate in proving the maximum principle for anti-symmetric functions with respect to \(x\) as well as the Liouville theorem for the Marchaud fractional operator \(\partial_{t}^{\alpha}\). Utilizing these essential tools, we can develop the direct method of moving planes and obtain the Liouville theorem for the dual fractional operator \(\partial_{t}^{\alpha}+(-\Delta)^{s}\). From one aspect, we point out the distinction between the local time derivative \(\partial_{t}\) and the nonlocal operator \(\partial_{t}^{\alpha}\) in the process of establishing the radial symmetry of solutions for equation (1.1) through the direct method of moving planes combined with a limiting argument.
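As an illustration of this scaling estimate, the \(r^{-\alpha}\) decay of the Marchaud derivative of a rescaled cutoff can be checked numerically. The following is a minimal sketch (our own illustration, not part of the paper; the bump profile and quadrature parameters are arbitrary choices):

```python
import numpy as np
from math import gamma

alpha = 0.5
C_alpha = alpha / gamma(1.0 - alpha)

def eta(t):
    # smooth bump supported in (-1, 1) with eta(0) = 1
    t = np.atleast_1d(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    m = np.abs(t) < 1.0
    out[m] = np.exp(1.0 - 1.0 / (1.0 - t[m] ** 2))
    return float(out[0]) if out.size == 1 else out

def marchaud(u, t, L=100.0, n=200001):
    # D^alpha u(t) = C_alpha * int_0^inf (u(t) - u(t-s)) / s^(1+alpha) ds.
    # We truncate at s = L; since u below is compactly supported and t - L
    # lies outside its support, the neglected tail equals u(t) * L^-alpha / alpha
    # exactly and is added back analytically.
    s = L * np.linspace(0.0, 1.0, n)[1:] ** 2          # graded grid, finer near 0
    vals = (u(t) - u(t - s)) / s ** (1.0 + alpha)
    tail = u(t) * L ** (-alpha) / alpha
    return C_alpha * (np.trapz(vals, s) + tail)

base = marchaud(eta, 0.0)
for r in (1.0, 2.0, 4.0, 8.0):
    scaled = marchaud(lambda t: eta(t / r), 0.0)
    # the two columns should agree up to quadrature error, verifying (1.9)
    print(f"r = {r:4.1f}: {scaled:.5f} vs r^-alpha * base = {base / r ** alpha:.5f}")
```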
Differing from the traditional approaches employed for the classical parabolic equation (1.5) (cf. [8]), we repeatedly use the following two key **observations** arising from the nonlocal and one-sided nature of the one-dimensional fractional time operator \(\partial_{t}^{\alpha}\). **Observation A**. If \(u(\bar{t})=\min\limits_{t\in\mathbb{R}}u(t)\) (or \(\max\limits_{t\in\mathbb{R}}u(t)\)), then \(\partial_{t}^{\alpha}u(\bar{t})\leq 0\) (or \(\geq 0\)). **Observation B**. Assume that \(u(\bar{t})=\min\limits_{t\in\mathbb{R}}u(t)\) (or \(\max\limits_{t\in\mathbb{R}}u(t)\)). Then \(\partial_{t}^{\alpha}u(\bar{t})=0\) if and only if \[u(t)\equiv u(\bar{t})\text{ for }t<\bar{t}.\] From another standpoint, we emphasize the different challenges posed by the one-sided operator \(\partial_{t}^{\alpha}\) and the fractional Laplacian \((-\Delta)^{s}\) in the process of deriving the Liouville theorem for the homogeneous equation (1.2). It is well known that the (double-sided) fractional Laplacian \((-\Delta)^{s}\) satisfies the **Reflection Invariance** (or chain rule) \[(-\Delta)^{s}\left[u(x^{\lambda})\right]=\left((-\Delta)^{s}u\right)\left(x^{\lambda}\right), \tag{1.10}\] where \(x^{\lambda}=(2\lambda-x_{1},x^{\prime})\) denotes the reflection of \(x\) with respect to the hyperplane \(x_{1}=\lambda\). However, this is no longer valid for the fractional time derivative \(\mathcal{L}=\partial_{t}^{\alpha}\) due to its one-sided nature. Indeed, if we denote \((D_{\text{left}})^{\alpha}:=\partial_{t}^{\alpha}\) and \(t^{\lambda}=2\lambda-t\), then instead of (1.10) we obtain \[(D_{\text{left}})^{\alpha}\left[u(t^{\lambda})\right]=\left((D_{\text{right}})^{\alpha}u\right)\left(t^{\lambda}\right).\] Here \((D_{\text{right}})^{\alpha}\) is the fractional derivative based on the values of the function in the future, defined as \[(D_{\text{right}})^{\alpha}u(t):=C_{\alpha}\int_{t}^{+\infty}\frac{u(t)-u(\tau)}{(\tau-t)^{1+\alpha}}d\tau.\] The property (1.10) plays a crucial role in establishing the symmetry of solutions with respect to spatial hyperplanes and further deriving the Liouville theorem. Let us compare equation (1.2) with the classical fractional parabolic equation \[\partial_{t}u(x,t)+(-\Delta)^{s}u(x,t)=0\text{ in }\mathbb{R}^{n}\times\mathbb{R}. \tag{1.11}\] By establishing the maximum principle for the anti-symmetric function \(w(x,t)=u(x^{\lambda},t)-u(x,t)\), one concludes that any bounded solution \(u(\cdot,t)\) is symmetric with respect to any hyperplane in \(\mathbb{R}^{n}\) for each fixed \(t\in\mathbb{R}\), i.e., \[u(x,t)=u(t)\text{ in }\mathbb{R}^{n}\times\mathbb{R},\] and hence \(\partial_{t}u(t)=0.\) From this one immediately deduces that \(u\), a bounded solution of equation (1.11), is a constant. However, for the dual fractional parabolic equation (1.2), we still need to further prove the Liouville theorem for the Marchaud fractional equation (1.8). Due to the lack of the reflection invariance (1.10) for the one-sided operator \(\partial_{t}^{\alpha}\), one cannot establish a maximum principle for the antisymmetric function \(w(t)=u(t^{\lambda})-u(t)\) in the same way as for double-sided operators like the fractional Laplacian. To circumvent this difficulty, in this paper we introduce some new ideas and simpler approaches. Inspired by the aforementioned **Observation B**, which reflects the one-sided nature of the operator itself, we work directly with the definition of the operator \(\partial_{t}^{\alpha}\) and employ a perturbation technique on the solution \(u(t)\) itself instead of on the anti-symmetric function \(w(t)\).
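For the reader's convenience, the one-sided reflection identity displayed above can be verified by an elementary change of variables; the following short computation is our addition and is not part of the original argument. Setting \(\sigma=2\lambda-\tau\), so that \(t-\tau=\sigma-t^{\lambda}\) and the orientation of the integration interval is reversed, we find \[(D_{\text{left}})^{\alpha}\left[u(t^{\lambda})\right]=C_{\alpha}\int_{-\infty}^{t}\frac{u(2\lambda-t)-u(2\lambda-\tau)}{(t-\tau)^{1+\alpha}}d\tau=C_{\alpha}\int_{t^{\lambda}}^{+\infty}\frac{u(t^{\lambda})-u(\sigma)}{(\sigma-t^{\lambda})^{1+\alpha}}d\sigma=\left((D_{\text{right}})^{\alpha}u\right)(t^{\lambda}).\]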
This perturbation approach provides a more concise and intuitive method for establishing the Liouville theorem for one-sided operators \(\partial_{t}^{\alpha}\), which is precisely a novel aspect of our work. In contrast to the Fourier analysis method used in our recent work [31], this refined approach highlights more directly the distinctions between one-sided and double-sided operators. Needless to say, focusing on the nonlocal time operator, we work mainly with \((D_{\rm left})^{\alpha}\), while all our results remain equally valid for the right fractional time derivative \((D_{\rm right})^{\alpha}\). In addition, it is worth emphasizing that the proofs presented here for the radial symmetry and monotonicity of solutions, as well as for the Liouville theorem, can be adapted to various nonlocal equations involving the spatial nonlocal elliptic operators \(\mathcal{L}\) defined in (1.6) and general fractional time derivatives (cf. [1, 2]) of the form \[\int_{-\infty}^{t}[u(t)-u(s)]\mathcal{K}(t,s)ds,\] provided that the kernel \(\mathcal{K}\) here and the kernel \(K\) in (1.6) possess suitable radial decreasing properties. Before presenting the main results of this paper, we introduce the notation that will be used throughout the subsequent sections. Let \(x_{1}\) be any given direction in \(\mathbb{R}^{n}\), let \[T_{\lambda}=\{(x_{1},x^{\prime})\in\mathbb{R}^{n}\ |\ x_{1}=\lambda,\lambda\in\mathbb{R}\}\] be a moving plane perpendicular to the \(x_{1}\)-axis, let \[\Sigma_{\lambda}=\{x\in\mathbb{R}^{n}\ |\ x_{1}<\lambda\}\] be the region to the left of the hyperplane \(T_{\lambda}\) in \(\mathbb{R}^{n}\), and let \[\Omega_{\lambda}=\Sigma_{\lambda}\cap B_{1}(0).\] Furthermore, we denote the reflection of \(x\) with respect to the hyperplane \(T_{\lambda}\) by \[x^{\lambda}=(2\lambda-x_{1},x_{2},\ldots,x_{n}).\] Letting \(u_{\lambda}(x,t)=u(x^{\lambda},t)\), we define \[w_{\lambda}(x,t)=u_{\lambda}(x,t)-u(x,t).\] It is evident that \(w_{\lambda}(x,t)\) is an antisymmetric function of \(x\) with respect to the hyperplane \(T_{\lambda}\). We are now ready to state the main results of this paper.

**Theorem 1.1**.: _Let \(u(x,t)\in\left(C^{1,1}(B_{1}(0))\cap C(\overline{B_{1}(0)})\right)\times C^{1}(\mathbb{R})\) be a positive bounded solution of_ \[\left\{\begin{array}{ll}\partial_{t}^{\alpha}u(x,t)+(-\Delta)^{s}u(x,t)=f(u(x,t))&\text{in}\ \ B_{1}(0)\times\mathbb{R},\\ u(x,t)\equiv 0&\text{in}\ \ B_{1}^{c}(0)\times\mathbb{R}.\end{array}\right. \tag{1.12}\] _Suppose that \(f\in C^{1}([0,+\infty))\) satisfies \(f(0)\geq 0\) and \(f^{\prime}(0)\leq 0.\) Then for each \(t\in\mathbb{R},\)\(u(\cdot,t)\) is radially symmetric and strictly decreasing about the origin in \(B_{1}(0)\)._

Theorem 1.1 is proved by using the direct method of moving planes for the dual fractional operator \(\partial_{t}^{\alpha}+(-\Delta)^{s}\), which relies primarily on the following narrow region principle for anti-symmetric functions.

**Theorem 1.2**.: _Let \(\Omega\) be a bounded domain contained in the slab \(\{x\in\Sigma_{\lambda}\ |\ \lambda-l<x_{1}<\lambda\}\). Assume that \(w(x,t)\in\left(\mathcal{L}_{2s}\cap C^{1,1}_{loc}(\Omega)\right)\times\left(C^{1}(\mathbb{R})\cap\mathcal{L}_{\alpha}^{-}(\mathbb{R})\right)\) is bounded from below in \(\overline{\Omega}\times\mathbb{R}\) and that, for each \(t\in\mathbb{R}\), \(w(\cdot,t)\) is lower semi-continuous up to the boundary \(\partial\Omega\)._
_Suppose that_ \[\left\{\begin{array}{ll}\partial_{t}^{\alpha}w(x,t)+(-\Delta)^{s}w(x,t)=c(x,t)w(x,t),&\quad(x,t)\in\Omega\times\mathbb{R},\\ w(x,t)\geq 0,&\quad(x,t)\in(\Sigma_{\lambda}\backslash\Omega)\times\mathbb{R},\\ w(x,t)=-w(x^{\lambda},t),&\quad(x,t)\in\Sigma_{\lambda}\times\mathbb{R},\end{array}\right. \tag{1.13}\] _where the coefficient function \(c(x,t)\) is bounded from above. Then, for sufficiently small \(l\),_ \[w(x,t)\geq 0\ \text{in}\ \Sigma_{\lambda}\times\mathbb{R}. \tag{1.14}\] _Furthermore, if \(w(x,t)\) vanishes at some point \((x^{0},t_{0})\in\Omega\times\mathbb{R}\), then_ \[w(x,t)\equiv 0\ \text{in}\ \mathbb{R}^{n}\times(-\infty,t_{0}]. \tag{1.15}\]

It is worth noting that in Theorem 1.2, \(\Omega\) is a bounded narrow domain within \(\Sigma_{\lambda}\) and \(c(x,t)\) is only assumed to be bounded from above. However, on the whole unbounded region \(\Sigma_{\lambda}\), restricted to the set where \(w>0\), and in particular when \(c(x,t)\) is nonpositive, we also have the following second maximum principle for anti-symmetric functions with respect to \(x\). This serves as a fundamental tool in establishing the Liouville theorem for the dual fractional operator \(\partial_{t}^{\alpha}+(-\Delta)^{s}\).

**Theorem 1.3**.: _Assume that \(w(x,t)\in\left(\mathcal{L}_{2s}\cap C^{1,1}_{loc}(\Sigma_{\lambda})\right)\times\left(C^{1}(\mathbb{R})\cap\mathcal{L}_{\alpha}^{-}(\mathbb{R})\right)\) is bounded from above in \(\Sigma_{\lambda}\times\mathbb{R}\) and satisfies_ \[\left\{\begin{array}{ll}\partial_{t}^{\alpha}w(x,t)+(-\Delta)^{s}w(x,t)\leq 0,&\quad\text{in}\ \ \{(x,t)\in\Sigma_{\lambda}\times\mathbb{R}\ |\ w(x,t)>0\}\,,\\ w(x,t)=-w(x^{\lambda},t),&\quad\text{in}\ \ \Sigma_{\lambda}\times\mathbb{R}.\end{array}\right. \tag{1.16}\] _Then_ \[w(x,t)\leq 0\ \text{in}\ \Sigma_{\lambda}\times\mathbb{R}. \tag{1.17}\]

Since \(w(x,t)=u(x^{\lambda},t)-u(x,t)\) is an anti-symmetric function with respect to \(x\), Theorem 1.3 only yields that a bounded entire solution \(u(x,t)\) of the homogeneous equation associated with the operator \(\partial_{t}^{\alpha}+(-\Delta)^{s}\) in the whole space \(\mathbb{R}^{n}\times\mathbb{R}\) must be constant with respect to the spatial variable \(x\), i.e. \(u(x,t)=u(t)\). To further show that it is also constant with respect to the time variable \(t\), it suffices for us to establish the following Liouville theorem for the one-sided Marchaud fractional time operator \(\partial_{t}^{\alpha}\).

**Theorem 1.4**.: _Let \(u(t)\in C^{1}(\mathbb{R})\) be a bounded solution of_ \[\partial_{t}^{\alpha}u(t)=0\ \ \text{in}\ \ \mathbb{R}. \tag{1.18}\] _Then it must be constant._

As an immediate application of the maximum principle in unbounded domains stated in Theorem 1.3 and the Liouville theorem for the Marchaud operator \(\partial_{t}^{\alpha}\) in \(\mathbb{R}\), Theorem 1.4, we derive the second main result of this paper: the Liouville theorem for the dual fractional operator \(\partial_{t}^{\alpha}+(-\Delta)^{s}\) in the whole space.

**Theorem 1.5**.: _Let \(u(x,t)\in C^{1,1}_{loc}(\mathbb{R}^{n})\times C^{1}(\mathbb{R})\) be a bounded solution of_ \[\partial_{t}^{\alpha}u(x,t)+(-\Delta)^{s}u(x,t)=0\ \ \mbox{in}\ \ \mathbb{R}^{n}\times\mathbb{R}. \tag{1.19}\] _Then it must be constant._

_Remark 1.6_.: The above theorem can be regarded as a generalization of the classical Liouville theorem for elliptic and parabolic equations involving the Laplacian in the whole space; the boundedness condition may not be optimal but is still reasonable.
Relaxing this boundedness condition is the focus of our upcoming work. The remainder of this paper is organized as follows. In Sec. 2, we first demonstrate two maximum principles for the dual fractional operator \(\partial_{t}^{\alpha}+(-\Delta)^{s}\): the narrow region principle (Theorem 1.2) and the maximum principle in unbounded domains (Theorem 1.3). Based on the narrow region principle, in Sec. 3 we then carry out a direct method of moving planes for the nonlocal operator \(\partial_{t}^{\alpha}+(-\Delta)^{s}\) to prove the radial symmetry of solutions announced in Theorem 1.1. Moving on to Sec. 4, we first establish the Liouville theorem for the Marchaud operator \(\partial_{t}^{\alpha}\) (Theorem 1.4), and subsequently, in combination with the maximum principle in unbounded domains developed in Sec. 2, we prove the Liouville theorem for the dual fractional operator \(\partial_{t}^{\alpha}+(-\Delta)^{s}\) stated in Theorem 1.5. Throughout this paper, we use \(C\) to denote a generic constant whose value may vary from line to line.

## 2 Maximum Principles for Antisymmetric functions

In this section, we will demonstrate various maximum principles for antisymmetric functions, including Theorem 1.2 and Theorem 1.3. We will explain in the subsequent parts how these principles play vital roles in carrying out a direct method of moving planes to establish the symmetry and monotonicity of solutions.

### Narrow region principle in bounded domains

Our first key tool is a narrow region principle for antisymmetric functions in bounded domains, which plays a crucial role in deriving the radial symmetry and monotonicity of solutions to the dual fractional equation.

Proof of Theorem 1.2.: First we argue by contradiction to derive (1.14). If (1.14) were false, then, since \(\Omega\) is bounded, \(w\) is bounded from below in \(\Omega\times\mathbb{R}\), and \(w(\cdot,t)\) is lower semi-continuous up to the boundary \(\partial\Omega\) for each fixed \(t\in\mathbb{R}\), there would exist \(x(t)\in\Omega\) and \(m>0\) such that \[\inf_{(x,t)\in\Omega\times\mathbb{R}}w(x,t)=\inf_{t\in\mathbb{R}}w(x(t),t)=-m<0. \tag{2.1}\] Then there exists a minimizing sequence \(\{t_{k}\}\subset\mathbb{R}\) and a sequence \(\{m_{k}\}\nearrow m\) such that \[w(x(t_{k}),t_{k})=-m_{k}\searrow-m\ \text{ as }\ k\to\infty.\] Since the infimum of \(w\) with respect to \(t\) may not be attained, we need to perturb \(w\) with respect to \(t\) so that the infimum \(-m\) is attained by the perturbed function. For this purpose, we introduce the following auxiliary function \[v_{k}(x,t)=w(x,t)-\varepsilon_{k}\eta_{k}(t),\] where \(\varepsilon_{k}=m-m_{k}\) and \(\eta_{k}(t)=\eta(t-t_{k})\) with \(\eta\in C_{0}^{\infty}(-1,1)\), \(0\leq\eta\leq 1\), satisfying \[\eta(t)=\begin{cases}1,&|t|\leq\frac{1}{2},\\ 0,&|t|\geq 1.\end{cases}\] Clearly \(\operatorname{supp}\eta_{k}\subset(-1+t_{k},1+t_{k})\) and \(\eta_{k}(t_{k})=1\).
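For concreteness, such a cutoff \(\eta\) can be built from the standard smooth step function; the following construction is a sketch of one admissible choice (our addition, not taken from the paper):

```python
import numpy as np

def f(x):
    # smooth on R: identically 0 for x <= 0, vanishing to infinite order at 0+
    x = np.asarray(x, dtype=float)
    return np.where(x > 0.0, np.exp(-1.0 / np.maximum(x, 1e-12)), 0.0)

def smooth_step(x):
    # equals 0 for x <= 0, equals 1 for x >= 1, C-infinity and monotone in between
    return f(x) / (f(x) + f(1.0 - x))

def eta(t):
    # eta in C_0^inf(-1,1): eta = 1 on |t| <= 1/2, eta = 0 on |t| >= 1
    return smooth_step(2.0 - 2.0 * np.abs(np.asarray(t, dtype=float)))

ts = np.array([-1.2, -1.0, -0.75, -0.5, 0.0, 0.5, 0.75, 1.0, 1.2])
print(np.round(eta(ts), 4))   # -> [0. 0. 0.5 1. 1. 1. 0.5 0. 0.]
```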
By (2.1) and the exterior condition in (1.13), we have \[v_{k}(x(t_{k}),t_{k}) = -m\,,\] \[v_{k}(x,t)=w(x,t) \geq -m\ \text{in}\ \Omega\times(\mathbb{R}\backslash(-1+t_{k},1+t_{k}))\,,\] \[v_{k}(x,t)\geq-\varepsilon_{k}\eta_{k}(t) > -m\ \text{in}\ (\Sigma_{\lambda}\backslash\Omega)\times\mathbb{R}\,.\] Since \(w\) is lower semi-continuous on \(\overline{\Omega}\times\mathbb{R}\), \(v_{k}\) must attain its minimum value, which is at most \(-m\), in \(\Omega\times(-1+t_{k},1+t_{k})\); that is, \[\exists\ \{(\bar{x}^{k},\bar{t}_{k})\}\subset\Omega\times(-1+t_{k},1+t_{k})\ \ s.t.\ \ -m-\varepsilon_{k}\leq v_{k}(\bar{x}^{k},\bar{t}_{k})=\inf_{\Sigma_{\lambda}\times\mathbb{R}}v_{k}(x,t)\leq-m. \tag{2.2}\] Consequently, \[-m\leq w(\bar{x}^{k},\bar{t}_{k})\leq-m_{k}<0.\] Now applying (2.2), the definition of \(v_{k}\), and the anti-symmetry of \(w\) in \(x\), we derive \[\partial_{t}^{\alpha}v_{k}(\bar{x}^{k},\bar{t}_{k}) = C_{\alpha}\int_{-\infty}^{\bar{t}_{k}}\frac{v_{k}(\bar{x}^{k},\bar{t}_{k})-v_{k}(\bar{x}^{k},\tau)}{(\bar{t}_{k}-\tau)^{1+\alpha}}d\tau\leq 0\,,\] \[(-\Delta)^{s}v_{k}(\bar{x}^{k},\bar{t}_{k}) = C_{n,s}P.V.\int_{\mathbb{R}^{n}}\frac{v_{k}(\bar{x}^{k},\bar{t}_{k})-v_{k}(y,\bar{t}_{k})}{|\bar{x}^{k}-y|^{n+2s}}dy\] \[= C_{n,s}P.V.\int_{\Sigma_{\lambda}}\frac{v_{k}(\bar{x}^{k},\bar{t}_{k})-v_{k}(y,\bar{t}_{k})}{|\bar{x}^{k}-y|^{n+2s}}dy+C_{n,s}\int_{\Sigma_{\lambda}}\frac{v_{k}(\bar{x}^{k},\bar{t}_{k})-v_{k}(y^{\lambda},\bar{t}_{k})}{|\bar{x}^{k}-y^{\lambda}|^{n+2s}}dy\] \[\leq C_{n,s}\int_{\Sigma_{\lambda}}\frac{2v_{k}(\bar{x}^{k},\bar{t}_{k})-v_{k}(y,\bar{t}_{k})-v_{k}(y^{\lambda},\bar{t}_{k})}{|\bar{x}^{k}-y^{\lambda}|^{n+2s}}dy\] \[= 2C_{n,s}w(\bar{x}^{k},\bar{t}_{k})\int_{\Sigma_{\lambda}}\frac{1}{|\bar{x}^{k}-y^{\lambda}|^{n+2s}}dy\] \[\leq -\frac{Cm_{k}}{l^{2s}}.\] It follows that \[\partial_{t}^{\alpha}v_{k}(\bar{x}^{k},\bar{t}_{k})+(-\Delta)^{s}v_{k}(\bar{x}^{k},\bar{t}_{k})\leq-\frac{Cm_{k}}{l^{2s}}. \tag{2.3}\] In addition, substituting \(v_{k}\) into the differential equation in (1.13) and using the assumption \(c(x,t)\leq C_{0}\), we obtain \[\partial_{t}^{\alpha}v_{k}(\bar{x}^{k},\bar{t}_{k})+(-\Delta)^{s}v_{k}(\bar{x}^{k},\bar{t}_{k})=c(\bar{x}^{k},\bar{t}_{k})w(\bar{x}^{k},\bar{t}_{k})-\varepsilon_{k}\partial_{t}^{\alpha}\eta_{k}(\bar{t}_{k})\geq-C_{0}m-C\varepsilon_{k}. \tag{2.4}\] Then a combination of (2.3) and (2.4) yields \[-C_{0}m\leq-\frac{Cm_{k}}{l^{2s}}+C\varepsilon_{k}\to-\frac{Cm}{l^{2s}},\] as \(k\to\infty\), which is a contradiction for sufficiently small \(l\). Hence we complete the proof of (1.14). Next, we show the validity of (1.15). If \(w(x,t)\) vanishes at \((x^{0},t_{0})\in\Omega\times\mathbb{R}\), then by (1.14) we deduce that \[w(x^{0},t_{0})=\min_{\Sigma_{\lambda}\times\mathbb{R}}w(x,t)=0.\] The equation in (1.13) then implies that \[\partial_{t}^{\alpha}w(x^{0},t_{0})+(-\Delta)^{s}w(x^{0},t_{0})=0. \tag{2.5}\] On the other hand, since \(w(x,t)\geq 0\) in \(\Sigma_{\lambda}\times\mathbb{R}\) and \[|x^{0}-y^{\lambda}|>|x^{0}-y|\text{ for }y\in\Sigma_{\lambda},\] we obtain \[(-\Delta)^{s}w(x^{0},t_{0}) = C_{n,s}P.V.\int_{\mathbb{R}^{n}}\frac{-w(y,t_{0})}{|x^{0}-y|^{n+2s}}dy \tag{2.6}\] \[= C_{n,s}P.V.\int_{\Sigma_{\lambda}}w(y,t_{0})\left[\frac{1}{|x^{0}-y^{\lambda}|^{n+2s}}-\frac{1}{|x^{0}-y|^{n+2s}}\right]dy\] \[\leq 0\] and \[\partial_{t}^{\alpha}w(x^{0},t_{0})\ =\ C_{\alpha}\int_{-\infty}^{t_{0}}\frac{-w(x^{0},\tau)}{(t_{0}-\tau)^{1+\alpha}}d\tau\leq 0.
\tag{2.7}\] It then follows from (2.5), (2.6) and (2.7) that both terms on the left-hand side of (2.5) must vanish; in particular, \[0=\partial_{t}^{\alpha}w(x^{0},t_{0})=C_{\alpha}\int_{-\infty}^{t_{0}}\frac{-w(x^{0},\tau)}{(t_{0}-\tau)^{1+\alpha}}d\tau,\] and hence we must have \[w(x^{0},\tau)\equiv 0=\min_{\Sigma_{\lambda}\times\mathbb{R}}w(x,t)\ \text{ for all }\tau\in(-\infty,t_{0}],\] that is, for each \(\tau\in(-\infty,t_{0}]\), \(w(\cdot,\tau)\) attains its minimum value zero at the point \((x^{0},\tau)\in\Omega\times\mathbb{R}\). Now, repeating the previous process, we further obtain \[0=(-\Delta)^{s}w(x^{0},\tau)\ =\ C_{n,s}P.V.\int_{\Sigma_{\lambda}}w(y,\tau)\left[\frac{1}{|x^{0}-y^{\lambda}|^{n+2s}}-\frac{1}{|x^{0}-y|^{n+2s}}\right]dy.\] Together with the anti-symmetry of \(w(y,\tau)\) with respect to \(y\), this yields \[w(y,\tau)\equiv 0\ \text{ for all }y\in\mathbb{R}^{n}.\] Therefore, \[w(y,\tau)\equiv 0\ \text{ in }\ \mathbb{R}^{n}\times(-\infty,t_{0}].\] This completes the proof of Theorem 1.2.

### Maximum principle in unbounded domains

We now prove Theorem 1.3, the maximum principle for antisymmetric functions in unbounded domains. This is also an essential ingredient in proving the Liouville theorem for the dual fractional operator.

Proof of Theorem 1.3.: We argue by contradiction. If (1.17) is not true, then, since \(w(x,t)\) is bounded from above in \(\Sigma_{\lambda}\times\mathbb{R}\), there exists a constant \(A>0\) such that \[\sup_{(x,t)\in\Sigma_{\lambda}\times\mathbb{R}}w(x,t)=:A>0. \tag{2.8}\] Since the domain \(\Sigma_{\lambda}\times\mathbb{R}\) is unbounded, the supremum of \(w(x,t)\) may not be attained in \(\Sigma_{\lambda}\times\mathbb{R}\); however, by (2.8), there exists a maximizing sequence \(\{(x^{k},t_{k})\}\subset\Sigma_{\lambda}\times\mathbb{R}\) such that \[w(x^{k},t_{k})\to A\ \mbox{ as }\ k\to\infty.\] More precisely, there exists a sequence \(\{\varepsilon_{k}\}\searrow 0\) such that \[w(x^{k},t_{k})=A-\varepsilon_{k}>0.
\tag{2.9}\] Now we introduce a perturbation of \(w\) near \((x^{k},t_{k})\) as follows: \[v_{k}(x,t)=w(x,t)+\varepsilon_{k}\eta_{k}(x,t)\ \mbox{ in }\ \mathbb{R}^{n}\times\mathbb{R}, \tag{2.10}\] where \[\eta_{k}(x,t)=\eta\left(\frac{x-x^{k}}{r_{k}/2},\frac{t-t_{k}}{(r_{k}/2)^{2s/\alpha}}\right),\] with \(r_{k}=\operatorname{dist}(x^{k},T_{\lambda})>0\), and \(\eta\in C_{0}^{\infty}(\mathbb{R}^{n}\times\mathbb{R})\) is a smooth cut-off function satisfying \[\left\{\begin{array}{r@{\quad}l}0\leq\eta\leq 1\ \ \mbox{in}\ \ \ &\mathbb{R}^{n}\times\mathbb{R}\,,\\ \quad\eta=1\ \ \mbox{in}\ \ \ &B_{1/2}(0)\times[-\frac{1}{2},\frac{1}{2}]\,,\\ \quad\eta=0\ \ \mbox{in}\ \ \ &(\mathbb{R}^{n}\times\mathbb{R})\setminus(B_{1}(0)\times[-1,1])\.\end{array}\right.\] Denote \[Q_{k}(x^{k},t_{k}):=B_{r_{k}/2}(x^{k})\times\left[t_{k}-\left(\frac{r_{k}}{2}\right)^{2s/\alpha},t_{k}+\left(\frac{r_{k}}{2}\right)^{2s/\alpha}\right]\subset\Sigma_{\lambda}\times\mathbb{R}.\] By (2.8), (2.9) and (2.10), we have \[v_{k}(x^{k},t_{k}) = A\,,\] \[v_{k}(x,t)=w(x,t) \leq A\ \mbox{ in }\left(\Sigma_{\lambda}\times\mathbb{R}\right)\setminus Q_{k}(x^{k},t_{k})\,,\] \[v_{k}(x,t)=\varepsilon_{k}\eta_{k}(x,t) < A\ \mbox{ on }\ T_{\lambda}\times\mathbb{R}\,.\] Since \(w\) is upper semi-continuous on \(\overline{\Sigma}_{\lambda}\times\mathbb{R}\), \(v_{k}\) must attain its maximum value, which is at least \(A\), on \(\overline{Q_{k}(x^{k},t_{k})}\subset\Sigma_{\lambda}\times\mathbb{R}\); that is, \[\exists\ \{(\bar{x}^{k},\bar{t}_{k})\}\subset\overline{Q_{k}(x^{k},t_{k})}\quad s.t.\quad A+\varepsilon_{k}\geq v_{k}(\bar{x}^{k},\bar{t}_{k})=\sup_{\Sigma_{\lambda}\times\mathbb{R}}v_{k}(x,t)\geq A, \tag{2.11}\] where we have used (2.8) and (2.10). Now, applying (2.11), we derive \[w(\bar{x}^{k},\bar{t}_{k}) \geq A-\varepsilon_{k}>0\,,\] \[\partial_{t}^{\alpha}v_{k}(\bar{x}^{k},\bar{t}_{k}) = C_{\alpha}\int_{-\infty}^{\bar{t}_{k}}\frac{v_{k}(\bar{x}^{k},\bar{t}_{k})-v_{k}(\bar{x}^{k},\tau)}{(\bar{t}_{k}-\tau)^{1+\alpha}}d\tau \geq 0\,.\] Next, we derive a contradiction by estimating the value of \((-\Delta)^{s}v_{k}\) at the maximum point \((\bar{x}^{k},\bar{t}_{k})\) of \(v_{k}\) in \(\Sigma_{\lambda}\times\mathbb{R}\). On one hand, taking into account the differential inequality in (1.16), (2.10), and the translation and rescaling invariance of the operator \(\partial_{t}^{\alpha}+(-\Delta)^{s}\) (see (1.9)), we obtain \[(-\Delta)^{s}v_{k}(\bar{x}^{k},\bar{t}_{k}) = (-\Delta)^{s}w(\bar{x}^{k},\bar{t}_{k})+\varepsilon_{k}(-\Delta)^{s}\eta_{k}(\bar{x}^{k},\bar{t}_{k}) \tag{2.12}\] \[\leq -\partial_{t}^{\alpha}w(\bar{x}^{k},\bar{t}_{k})+\varepsilon_{k}(-\Delta)^{s}\eta_{k}(\bar{x}^{k},\bar{t}_{k})\] \[\leq \varepsilon_{k}\left[\partial_{t}^{\alpha}\eta_{k}(\bar{x}^{k},\bar{t}_{k})+(-\Delta)^{s}\eta_{k}(\bar{x}^{k},\bar{t}_{k})\right]\] \[\leq C\frac{\varepsilon_{k}}{r_{k}^{2s}}.\] On the other hand, starting from the definition of the operator \((-\Delta)^{s}\) and utilizing the antisymmetry of \(w\) in \(x\), the fact that \(|\bar{x}^{k}-y^{\lambda}|>|\bar{x}^{k}-y|\), and (2.11), we compute \[\begin{array}{lll}(-\Delta)^{s}v_{k}(\bar{x}^{k},\bar{t}_{k})&=&C_{n,s}P.V.
\int_{\mathbb{R}^{n}}\frac{v_{k}(\bar{x}^{k},\bar{t}_{k})-v_{k}(y,\bar{t}_{k})}{|\bar{x}^{k}-y|^{n+2s}}dy\\ &=&C_{n,s}P.V.\int_{\Sigma_{\lambda}}\frac{v_{k}(\bar{x}^{k},\bar{t}_{k})-v_{k}(y,\bar{t}_{k})}{|\bar{x}^{k}-y|^{n+2s}}dy+C_{n,s}\int_{\Sigma_{\lambda}}\frac{v_{k}(\bar{x}^{k},\bar{t}_{k})-v_{k}(y^{\lambda},\bar{t}_{k})}{|\bar{x}^{k}-y^{\lambda}|^{n+2s}}dy\\ &\geq&C_{n,s}\int_{\Sigma_{\lambda}}\frac{2v_{k}(\bar{x}^{k},\bar{t}_{k})-v_{k}(y,\bar{t}_{k})-v_{k}(y^{\lambda},\bar{t}_{k})}{|\bar{x}^{k}-y^{\lambda}|^{n+2s}}dy\\ &\geq&2C_{n,s}\left(v_{k}(\bar{x}^{k},\bar{t}_{k})-\varepsilon_{k}\right)\int_{\Sigma_{\lambda}}\frac{1}{|\bar{x}^{k}-y^{\lambda}|^{n+2s}}dy\\ &\geq&\frac{C(A-\varepsilon_{k})}{r_{k}^{2s}}.\end{array} \tag{2.13}\] Finally, a combination of (2.12) and (2.13) yields \[A-\varepsilon_{k}\leq C\varepsilon_{k},\] which leads to a contradiction for sufficiently large \(k\). Hence we conclude that (1.17) is valid.

## 3 Radial symmetry of solutions

In this section, we employ the narrow region principle (Theorem 1.2) as a fundamental tool to initiate the direct method of moving planes. Then, by combining perturbation techniques and limiting arguments, we show that, under suitable assumptions on the nonlinear term \(f\), the solution \(u(\cdot,t)\) of the dual fractional equation \[\partial_{t}^{\alpha}u(x,t)+(-\Delta)^{s}u(x,t)=f(u(x,t))\ \ \mbox{in}\ \ B_{1}(0)\times\mathbb{R},\] subject to the vanishing exterior condition, is radially symmetric and strictly decreasing with respect to the origin in the unit ball.

Proof of Theorem 1.1.: Let \(x_{1}\) be any direction, and for any \(\lambda\in\mathbb{R}\) define \(T_{\lambda},\ \Sigma_{\lambda},\ \Omega_{\lambda},\ x^{\lambda},\ w_{\lambda}\) as described in Section 1. Substituting the definition of \(w_{\lambda}\) into equation (1.12), we have \[\left\{\begin{array}{ll}\partial_{t}^{\alpha}w_{\lambda}(x,t)+(-\Delta)^{s}w_{\lambda}(x,t)=c_{\lambda}(x,t)w_{\lambda}(x,t),&(x,t)\in\Omega_{\lambda}\times\mathbb{R},\\ w_{\lambda}(x,t)\geq 0,&(x,t)\in(\Sigma_{\lambda}\backslash\Omega_{\lambda})\times\mathbb{R},\\ w_{\lambda}(x,t)=-w_{\lambda}(x^{\lambda},t),&(x,t)\in\Sigma_{\lambda}\times\mathbb{R},\end{array}\right. \tag{3.1}\] where the weight function \[c_{\lambda}(x,t)=\frac{f(u_{\lambda}(x,t))-f(u(x,t))}{u_{\lambda}(x,t)-u(x,t)}\] is bounded in \(\Omega_{\lambda}\times\mathbb{R}\) since \(f\in C^{1}\left([0,+\infty)\right).\) Now we carry out the direct method of moving planes, which is divided into the two steps outlined below. **Step 1**. Start moving the plane \(T_{\lambda}\) from \(x_{1}=-1\) to the right along the \(x_{1}\)-axis. When \(\lambda\) is sufficiently close to \(-1\), \(\Omega_{\lambda}\) is a narrow region. Then, by applying the narrow region principle, Theorem 1.2, to problem (3.1), we deduce that \[w_{\lambda}(x,t)\geq 0\ \mbox{in}\ \Sigma_{\lambda}\times\mathbb{R}. \tag{3.2}\] This provides a starting point for moving the plane \(T_{\lambda}\). **Step 2**. Continue to move the plane \(T_{\lambda}\) to the right along the \(x_{1}\)-axis as long as inequality (3.2) holds, until it reaches its limiting position. Denote \[\lambda_{0}:=\sup\{\lambda<0\mid w_{\mu}(x,t)\geq 0,(x,t)\in\Sigma_{\mu}\times\mathbb{R}\text{ for any }\mu\leq\lambda\}.\] We are going to employ a contradiction argument to verify that \[\lambda_{0}=0.
\tag{3.3}\] Suppose otherwise that \(\lambda_{0}<0\). According to the definition of \(\lambda_{0}\), there exist a sequence of negative numbers \(\{\lambda_{k}\}\searrow\lambda_{0}\) and a sequence of positive numbers \(\{m_{k}\}\searrow 0\) such that \[\inf_{\Omega_{\lambda_{k}}\times\mathbb{R}}w_{\lambda_{k}}(x,t)=\inf_{\Sigma_{\lambda_{k}}\times\mathbb{R}}w_{\lambda_{k}}(x,t)=-m_{k}.\] This implies that for each fixed \(k>0\), there exists a point \((x^{k},t_{k})\in\Omega_{\lambda_{k}}\times\mathbb{R}\) such that \[-m_{k}\leq w_{\lambda_{k}}(x^{k},t_{k})=-m_{k}+m_{k}^{2}<0.\] Since \(\mathbb{R}\) is an unbounded interval, the infimum of \(w_{\lambda_{k}}\) with respect to \(t\) may not be attained. In order to estimate \(\partial_{t}^{\alpha}w_{\lambda_{k}}\), we need to introduce a perturbation of \(w_{\lambda_{k}}\) near \(t_{k}\) as follows: \[v_{k}(x,t)=w_{\lambda_{k}}(x,t)-m_{k}^{2}\eta_{k}(t)\ \mbox{ in }\ \Sigma_{\lambda_{k}}\times\mathbb{R}, \tag{3.4}\] where \(\eta_{k}(t)=\eta(t-t_{k})\) and \(\eta\in C_{0}^{\infty}(-1,1)\) is a cut-off function as in the proof of Theorem 1.2. Based on the above analysis and the exterior condition in (3.1) satisfied by \(w_{\lambda_{k}}\), we have \[\left\{\begin{array}{rcl}v_{k}(x^{k},t_{k})&=&-m_{k}\,,\\ v_{k}(x,t)=w_{\lambda_{k}}(x,t)&\geq&-m_{k}\ \mbox{in}\ \Omega_{\lambda_{k}}\times(\mathbb{R}\backslash(-1+t_{k},1+t_{k}))\,,\\ v_{k}(x,t)\geq-m_{k}^{2}\eta_{k}(t)&>&-m_{k}\ \mbox{in}\ (\Sigma_{\lambda_{k}}\backslash\Omega_{\lambda_{k}})\times\mathbb{R}\,.\end{array}\right.\] Since \(u\) is continuous on \(\overline{\Omega}_{\lambda_{k}}\times\mathbb{R}\), \(v_{k}\) must attain its minimum value, which is at most \(-m_{k}\), in \(\Omega_{\lambda_{k}}\times(-1+t_{k},1+t_{k})\); that is, \[\exists\ \{(\bar{x}^{k},\bar{t}_{k})\}\subset\Omega_{\lambda_{k}}\times(-1+t_{k},1+t_{k})\ \ s.t.\ \ -m_{k}-m_{k}^{2}\leq v_{k}(\bar{x}^{k},\bar{t}_{k})=\inf_{\Sigma_{\lambda_{k}}\times\mathbb{R}}v_{k}(x,t)\leq-m_{k},\] which implies that \[-m_{k}\leq w_{\lambda_{k}}(\bar{x}^{k},\bar{t}_{k})\leq-m_{k}+m_{k}^{2}<0. \tag{3.5}\] Similarly to the proof of Theorem 1.2, we have \[\begin{array}{rcl}\partial_{t}^{\alpha}v_{k}(\bar{x}^{k},\bar{t}_{k})+(-\Delta)^{s}v_{k}(\bar{x}^{k},\bar{t}_{k})&\leq 2C_{n,s}w_{\lambda_{k}}(\bar{x}^{k},\bar{t}_{k})\int_{\Sigma_{\lambda_{k}}}\frac{1}{|\bar{x}^{k}-y^{\lambda_{k}}|^{n+2s}}dy\\ &\leq-\frac{C(m_{k}-m_{k}^{2})}{\operatorname{dist}(\bar{x}^{k},T_{\lambda_{k}})^{2s}}.\end{array} \tag{3.6}\] Furthermore, it follows from the differential equation in (3.1) and (3.5) that \[\begin{array}{rcl}\partial_{t}^{\alpha}v_{k}(\bar{x}^{k},\bar{t}_{k})+(-\Delta)^{s}v_{k}(\bar{x}^{k},\bar{t}_{k})&=c_{\lambda_{k}}(\bar{x}^{k},\bar{t}_{k})w_{\lambda_{k}}(\bar{x}^{k},\bar{t}_{k})-m_{k}^{2}\partial_{t}^{\alpha}\eta_{k}(\bar{t}_{k})\\ &\geq-c_{\lambda_{k}}(\bar{x}^{k},\bar{t}_{k})m_{k}-Cm_{k}^{2}.\end{array}\] Here we may assume \(c_{\lambda_{k}}(\bar{x}^{k},\bar{t}_{k})\geq 0\) without loss of generality; otherwise, a contradiction can be derived directly from (3.6).
Consequently, \[-c_{\lambda_{k}}(\bar{x}^{k},\bar{t}_{k})-Cm_{k}\leq-\frac{C(1-m_{k})}{\operatorname{dist}(\bar{x}^{k},T_{\lambda_{k}})^{2s}}\leq-\frac{C(1-m_{k})}{2^{2s}}. \tag{3.7}\] By virtue of \(m_{k}\to 0\) as \(k\rightarrow\infty\), we derive that for sufficiently large \(k\), \[c_{\lambda_{k}}(\bar{x}^{k},\bar{t}_{k})\geq C_{0}>0.\] This implies that there exists some \(\xi_{k}\in\big{(}u_{\lambda_{k}}(\bar{x}^{k},\bar{t}_{k}),u(\bar{x}^{k},\bar{t}_{k})\big{)}\) such that \[f^{\prime}(\xi_{k})\geq C_{0}.\] Thus, owing to (3.5) and the assumption \(f^{\prime}(0)\leq 0\), after extracting a subsequence, we obtain \[u(\bar{x}^{k},\bar{t}_{k})\geq C_{1}>0, \tag{3.8}\] for sufficiently large \(k\). In order to simplify the notation, we denote \[\tilde{w}_{k}(x,t)=w_{\lambda_{k}}(x,t+\bar{t}_{k})\ \text{ and }\ \tilde{c}_{k}(x,t)=c_{\lambda_{k}}(x,t+\bar{t}_{k}).\] It follows from the Arzelà-Ascoli theorem that there exist two continuous functions \(\tilde{w}\) and \(\tilde{c}\) such that, after passing to a subsequence, \[\lim_{k\rightarrow\infty}\tilde{w}_{k}(x,t)=\tilde{w}(x,t)\] and \[\lim_{k\rightarrow\infty}\tilde{c}_{k}(x,t)=\tilde{c}(x,t)\] uniformly in \(B_{1}(0)\times\mathbb{R}\). Moreover, taking into account the equation \[\partial_{t}^{\alpha}\tilde{w}_{k}(x,t)+(-\Delta)^{s}\tilde{w}_{k}(x,t)=\tilde{c}_{k}(x,t)\tilde{w}_{k}(x,t),\quad\text{in }\Omega_{\lambda_{k}}\times\mathbb{R},\] we conclude that the limit function \(\tilde{w}\) satisfies \[\partial_{t}^{\alpha}\tilde{w}(x,t)+(-\Delta)^{s}\tilde{w}(x,t)=\tilde{c}(x,t)\tilde{w}(x,t),\quad\text{in }\Omega_{\lambda_{0}}\times\mathbb{R}. \tag{3.9}\] As mentioned in (3.7), combining the uniform boundedness of \(c_{\lambda_{k}}(\bar{x}^{k},\bar{t}_{k})\) with \(\Omega_{\lambda_{k}}\subset B_{1}(0)\) and \(\lambda_{k}\rightarrow\lambda_{0}\), we may assume that \(\bar{x}^{k}\to x^{0}\in\Sigma_{\lambda_{0}}\cap\overline{B}_{1}(0).\) Then, applying (3.5) and the continuity of \(u\), we obtain \[\tilde{w}(x^{0},0)=0=\inf_{\Sigma_{\lambda_{0}}\times\mathbb{R}}w_{\lambda_{0}}(x,t)=\inf_{\Sigma_{\lambda_{0}}\times\mathbb{R}}\tilde{w}(x,t). \tag{3.10}\] Substituting this into the limit equation (3.9) yields \[0 =\partial_{t}^{\alpha}\tilde{w}(x^{0},0)+(-\Delta)^{s}\tilde{w}(x^{0},0)\] \[=C_{\alpha}\int_{-\infty}^{0}\frac{-\tilde{w}(x^{0},\tau)}{(-\tau)^{1+\alpha}}d\tau+C_{n,s}P.V.\int_{\Sigma_{\lambda_{0}}}\tilde{w}(y,0)\left[\frac{1}{|x^{0}-y^{\lambda_{0}}|^{n+2s}}-\frac{1}{|x^{0}-y|^{n+2s}}\right]dy.\] As a result of (3.10), the antisymmetry of \(\tilde{w}(x,t)\) with respect to \(x\), and the fact that \(|x^{0}-y^{\lambda_{0}}|>|x^{0}-y|\), we conclude \[\tilde{w}(x,t)\equiv 0,\ \ (x,t)\in\mathbb{R}^{n}\times(-\infty,0]. \tag{3.11}\] Correspondingly, we define \[u_{k}(x,t)=u(x,t+\bar{t}_{k}).\] Similarly to the previous discussion regarding \(\tilde{w}_{k}\), we also have \[\lim_{k\to\infty}u_{k}(x,t)=\tilde{u}(x,t),\] and \[\partial_{t}^{\alpha}\tilde{u}(x,t)+(-\Delta)^{s}\tilde{u}(x,t)=f\left(\tilde{u}(x,t)\right)\ \ \ \mbox{in}\ B_{1}(0)\times\mathbb{R}. \tag{3.12}\] In addition, by using (3.8), we infer that \[\tilde{u}(x^{0},0)=\lim_{k\to\infty}u(\bar{x}^{k},\bar{t}_{k})\geq C_{1}>0. \tag{3.13}\] Next, we will show that \[\tilde{u}(x,0)>0\ \mbox{in}\ B_{1}(0).
\tag{3.14}\] If this were not true, then according to the exterior condition and the interior positivity of \(u\), there would exist a point \(\bar{x}\in B_{1}(0)\) such that \[\tilde{u}(\bar{x},0)=\inf_{\mathbb{R}^{n}\times\mathbb{R}}\tilde{u}(x,t)=0,\] which, together with the limit equation (3.12) and the assumption \(f(0)\geq 0\), leads to \[0=(-\Delta)^{s}\tilde{u}(\bar{x},0)=C_{n,s}P.V.\int_{\mathbb{R}^{n}}\frac{-\tilde{u}(y,0)}{|\bar{x}-y|^{n+2s}}dy.\] Thus \(\tilde{u}(x,0)\equiv 0\) in \(\mathbb{R}^{n}\), since \(\tilde{u}\geq 0.\) This contradicts (3.13) and thus verifies the assertion (3.14). Due to the condition \(\tilde{u}(x,0)\equiv 0\) in \(B_{1}^{c}(0)\), (3.14), and \(\lambda_{0}<0\), we further conclude that there must exist a point \(\tilde{x}\in B_{1}^{c}(0)\) such that \(\tilde{x}^{\lambda_{0}}\in B_{1}(0)\) and \[\tilde{w}(\tilde{x},0)=\tilde{u}(\tilde{x}^{\lambda_{0}},0)-\tilde{u}(\tilde{x},0)=\tilde{u}(\tilde{x}^{\lambda_{0}},0)>0.\] However, this contradicts (3.11). Hence, we have established that the limiting position must be \(T_{0}\). Since the direction \(x_{1}\) was chosen arbitrarily, it follows from the definition of \(\lambda_{0}\) that \(u(\cdot,t)\) must be radially symmetric and monotone nonincreasing about the origin in the unit ball \(B_{1}(0)\). We are now ready to demonstrate the strict monotonicity; more specifically, it suffices to prove that \[w_{\lambda}(x,t)>0\ \mbox{in}\ \Omega_{\lambda}\times\mathbb{R}\ \mbox{for all}\ \lambda\in(-1,0). \tag{3.15}\] If not, then there exist some \(\lambda_{0}\in(-1,0)\) and a point \((x^{0},t_{0})\in\Omega_{\lambda_{0}}\times\mathbb{R}\) such that \[w_{\lambda_{0}}(x^{0},t_{0})=\min_{\Sigma_{\lambda_{0}}\times\mathbb{R}}w_{\lambda_{0}}=0.\] Combining the differential equation in (3.1) with the definition of the dual fractional operator \(\partial_{t}^{\alpha}+(-\Delta)^{s}\), and arguing as before, we must have \[w_{\lambda_{0}}(x,t)\equiv 0\ \mbox{in}\ \Sigma_{\lambda_{0}}\times(-\infty,t_{0}].\] This is a contradiction, since \(u(\cdot,t)>0\) in \(B_{1}(0)\) and \(u(\cdot,t)\equiv 0\) in \(B_{1}^{c}(0)\) for each fixed \(t\in\mathbb{R}.\) Hence, we verify the assertion (3.15), which completes the proof of Theorem 1.1.

## 4 Liouville Theorem

In this section, we begin by employing perturbation techniques and exploiting the nonlocal one-sided nature of the one-dimensional operator \(\partial_{t}^{\alpha}\) to establish the Liouville theorem for the Marchaud fractional time operator \(\partial_{t}^{\alpha}\), Theorem 1.4. Building on this, by incorporating the maximum principle in unbounded domains stated in Theorem 1.3, we then derive our second main result, Theorem 1.5.

### Liouville Theorem for the Marchaud fractional time operator \(\partial_{t}^{\alpha}\)

Let us begin by recalling the definition of the Marchaud derivative \[\partial_{t}^{\alpha}u(t)=C_{\alpha}\int_{-\infty}^{t}\frac{u(t)-u(\tau)}{(t-\tau)^{1+\alpha}}d\tau. \tag{4.1}\] Now we show that a bounded solution of the equation \(\partial_{t}^{\alpha}u(t)=0\) in \(\mathbb{R}\) must be constant.

Proof of Theorem 1.4.: The proof goes by contradiction. Since \(u(t)\) is bounded in \(\mathbb{R}\), suppose that \[M:=\sup_{t\in\mathbb{R}}u(t)>\inf_{t\in\mathbb{R}}u(t)=:m. \tag{4.2}\] We divide the proof into three cases, according to whether the maximum and minimum values are attained, and derive a contradiction in each case. **Case 1**: The extrema (maximum and minimum) of \(u\) are both attained in \(\mathbb{R}\).
Suppose that \(u\) attains its maximum at \(\bar{t}\) and its minimum at \(\underline{t}\), and consider first the case \(\underline{t}<\bar{t}.\) Owing to equation (1.18) and the nonlocal one-sided nature of \(\partial_{t}^{\alpha}\) (see (4.1) and **Observation B**), we have \[u(t)\equiv u(\underline{t})=m\ \text{ for }t<\underline{t}\] and \[u(t)\equiv u(\bar{t})=M\ \text{ for }t<\bar{t}.\] In particular, for \(t<\underline{t}\) both identities hold, forcing \(m=M\), which contradicts (4.2). A similar contradiction can be derived in the case \(\bar{t}<\underline{t}\). **Case 2**: Only one of the extrema (maximum or minimum) of \(u\) is attained in \(\mathbb{R}\). Without loss of generality, we may assume that \(u\) attains its maximum at \(t_{0}\) and that there exists a minimizing sequence \(\{t_{k}\}\searrow-\infty\) such that \[\lim_{k\to\infty}u(t_{k})=m. \tag{4.3}\] Then, applying equation (1.18) and the definition (4.1) of \(\partial_{t}^{\alpha}\), we have \[u(t)\equiv u(t_{0})=M\ \text{ for }t<t_{0},\] which contradicts (4.3), since \(u(t_{k})=M\) for all large \(k\) while \(m<M\). **Case 3**: The extrema (maximum and minimum) of \(u\) are both unattainable. We assume without loss of generality that there exist a minimizing sequence \(\{\underline{t}_{k}\}\searrow-\infty\), a maximizing sequence \(\{\bar{t}_{k}\}\searrow-\infty\), and a sequence \(\{\varepsilon_{k}\}\searrow 0\) such that \[u(\overline{t}_{k})=M-\varepsilon_{k}\] and \[u(\underline{t}_{k})=m+\varepsilon_{k}.\] By extracting subsequences, we may assume \(\overline{t}_{k}-\underline{t}_{k}>1\). Now we introduce a perturbation of \(u\) near \(\underline{t}_{k}\) and \(\overline{t}_{k}\) as follows: \[v_{k}(t)=u(t)+\varepsilon_{k}\eta_{k}(t)\ \text{in}\ \mathbb{R},\] where \[\eta_{k}(t)=\eta\left(\frac{t-\overline{t}_{k}}{r_{k}}\right)-\eta\left(\frac{t-\underline{t}_{k}}{r_{k}}\right),\] with \(r_{k}=\frac{1}{4}(\overline{t}_{k}-\underline{t}_{k})>0\) and \(\eta\in C_{0}^{\infty}(\mathbb{R})\) a smooth cut-off function as described in the proof of Theorem 1.2.
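To make the shape of this two-bump perturbation concrete, the following self-contained sketch (our illustration; the specific smooth profile is merely one admissible choice, and the sample points are arbitrary) checks numerically that \(\eta_{k}(\overline{t}_{k})=1\), \(\eta_{k}(\underline{t}_{k})=-1\), and that \(\eta_{k}\) vanishes away from the two bumps:

```python
import numpy as np

def eta(t):
    # admissible cutoff: 1 on |t| <= 1/2, 0 on |t| >= 1, smooth in between
    t = np.asarray(t, dtype=float)
    g = lambda x: np.where(x > 0.0, np.exp(-1.0 / np.maximum(x, 1e-12)), 0.0)
    x = 2.0 - 2.0 * np.abs(t)
    return g(x) / (g(x) + g(1.0 - x))

t_bar, t_under = -3.0, -8.0          # sample points with t_bar - t_under > 1
r = 0.25 * (t_bar - t_under)         # r_k = (t_bar_k - t_under_k) / 4
eta_k = lambda t: eta((t - t_bar) / r) - eta((t - t_under) / r)

print(eta_k(t_bar), eta_k(t_under))                 # -> 1.0 and -1.0
print(eta_k(t_bar - 2.0 * r), eta_k(np.array([-20.0, 0.0])))  # -> 0 away from both bumps
```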
Clearly, \(\mathrm{supp}\,\eta_{k}\subset(-r_{k}+\underline{t}_{k},r_{k}+\underline{t}_{k})\cup(-r_{k}+\overline{t}_{k},r_{k}+\overline{t}_{k})\) and there holds \[\eta_{k}(\overline{t}_{k})=1,\ \eta_{k}(\underline{t}_{k})=-1,\] \[\eta_{k}(t)=-\eta\left(\frac{t-\underline{t}_{k}}{r_{k}}\right)\leq 0\ \text{in}\ \mathbb{R}\backslash(-r_{k}+\overline{t}_{k},r_{k}+\overline{t}_{k})\] and \[\eta_{k}(t)=\eta\left(\frac{t-\overline{t}_{k}}{r_{k}}\right)\geq 0\ \text{in}\ \mathbb{R}\backslash(-r_{k}+\underline{t}_{k},r_{k}+\underline{t}_{k}).\] Then we have \[\left\{\begin{array}{rcl}v_{k}(\overline{t}_{k})&=&M,\quad v_{k}(\underline{t}_{k})=m\,,\\ v_{k}(t)&\leq&M\ \text{in}\ \mathbb{R}\backslash(-r_{k}+\overline{t}_{k},r_{k}+\overline{t}_{k})\,,\\ v_{k}(t)&\geq&m\ \text{in}\ \mathbb{R}\backslash(-r_{k}+\underline{t}_{k},r_{k}+\underline{t}_{k})\,.\end{array}\right.\] Subsequently, \(v_{k}\) must attain its maximum value, which is at least \(M\), on \([-r_{k}+\overline{t}_{k},r_{k}+\overline{t}_{k}]\) and its minimum value, which is at most \(m\), on \([-r_{k}+\underline{t}_{k},r_{k}+\underline{t}_{k}]\); more specifically, \[\exists\ \{\bar{s}_{k}\}\subset[-r_{k}+\overline{t}_{k},r_{k}+\overline{t}_{k}]\quad s.t.\quad M+\varepsilon_{k}\geq v_{k}(\bar{s}_{k})=\sup_{t\in\mathbb{R}}v_{k}(t)\geq M\] and \[\exists\ \{\underline{s}_{k}\}\subset[-r_{k}+\underline{t}_{k},r_{k}+\underline{t}_{k}]\quad s.t.\quad m-\varepsilon_{k}\leq v_{k}(\underline{s}_{k})=\inf_{t\in\mathbb{R}}v_{k}(t)\leq m.\] Consequently, \[\begin{split}\partial_{t}^{\alpha}v_{k}(\bar{s}_{k})&=C_{\alpha}\int_{-\infty}^{\bar{s}_{k}}\frac{v_{k}(\bar{s}_{k})-v_{k}(\tau)}{(\bar{s}_{k}-\tau)^{1+\alpha}}d\tau\\ &\geq C_{\alpha}\int_{-\infty}^{\underline{s}_{k}}\frac{v_{k}(\bar{s}_{k})-v_{k}(\tau)}{(\bar{s}_{k}-\tau)^{1+\alpha}}d\tau\\ &=C_{\alpha}\left\{\int_{-\infty}^{\underline{s}_{k}}\frac{v_{k}(\bar{s}_{k})-v_{k}(\underline{s}_{k})}{(\bar{s}_{k}-\tau)^{1+\alpha}}d\tau+\int_{-\infty}^{\underline{s}_{k}}\frac{v_{k}(\underline{s}_{k})-v_{k}(\tau)}{(\bar{s}_{k}-\tau)^{1+\alpha}}d\tau\right\}\\ &\geq C_{\alpha}\left\{(M-m)\int_{\underline{s}_{k}-r_{k}}^{\underline{s}_{k}}\frac{1}{(\overline{s}_{k}-\tau)^{1+\alpha}}d\tau+\int_{-\infty}^{\underline{s}_{k}}\frac{v_{k}(\underline{s}_{k})-v_{k}(\tau)}{(\underline{s}_{k}-\tau)^{1+\alpha}}d\tau\right\}\\ &\geq\frac{C_{0}}{r_{k}^{\,\alpha}}+\partial_{t}^{\alpha}v_{k}(\underline{s}_{k}).\end{split}\tag{4.4}\] In addition, owing to the equation in (1.18) and the rescaling and translation behaviour of \(\partial_{t}^{\alpha}\eta\) (see (1.9) and the computation above), it is easily derived that \[\partial_{t}^{\alpha}v_{k}(\bar{s}_{k}),\ \partial_{t}^{\alpha}v_{k}(\underline{s}_{k})\sim\frac{\varepsilon_{k}}{{r_{k}}^{\alpha}}. \tag{4.5}\] It follows from (4.4) and (4.5) that \[C\varepsilon_{k}\geq C_{0}-C\varepsilon_{k},\] which leads to a contradiction for sufficiently large \(k\). In conclusion, assumption (4.2) cannot hold, so \(u\) must be constant; this completes the proof of Theorem 1.4.

### Liouville Theorem for the dual fractional operator \(\partial_{t}^{\alpha}+(-\Delta)^{s}\)

In the rest of this section, we employ the maximum principle for antisymmetric functions in unbounded domains (Theorem 1.3), along with the Liouville theorem for the Marchaud fractional time operator \(\partial_{t}^{\alpha}\) just established in Section 4.1 (Theorem 1.4), to complete the proof of the Liouville theorem (Theorem 1.5) for the dual fractional operator \(\partial_{t}^{\alpha}+(-\Delta)^{s}\).
Proof of Theorem 1.5.: For each fixed \(t\in\mathbb{R}\), we first claim that \(u(\cdot,t)\) is symmetric with respect to any hyperplane in \(\mathbb{R}^{n}\). Let \(x_{1}\) be any given direction in \(\mathbb{R}^{n}\), and keep the notation \(T_{\lambda},\Sigma_{\lambda},w_{\lambda}(x,t)\), \(u_{\lambda}(x,t)\), \(x^{\lambda}\) defined in Section 1. For any \(\lambda\in\mathbb{R}\), on account of equation (1.19), we derive \[\left\{\begin{array}{ll}\partial_{t}^{\alpha}w_{\lambda}(x,t)+(-\Delta)^{s}w_{\lambda}(x,t)=0,&\mbox{ in }\ \Sigma_{\lambda}\times\mathbb{R},\\ \\ w_{\lambda}(x,t)=-w_{\lambda}(x^{\lambda},t),&\mbox{ in }\ \Sigma_{\lambda}\times\mathbb{R}.\end{array}\right.\] It follows from Theorem 1.3 that \[w_{\lambda}(x,t)\equiv 0\mbox{ in }\Sigma_{\lambda}\times\mathbb{R}.\] As a result, the arbitrariness of \(\lambda\) indicates that \(u(\cdot,t)\) is symmetric with respect to any hyperplane perpendicular to the \(x_{1}\)-axis. Moreover, since the choice of the \(x_{1}\) direction is arbitrary, we conclude that \(u(\cdot,t)\) is symmetric with respect to any hyperplane in \(\mathbb{R}^{n}\) for each fixed \(t\in\mathbb{R}\). Thus, we deduce that \(u(x,t)\) depends only on \(t\), i.e., \[u(x,t)=u(t)\mbox{ in }\mathbb{R}^{n}\times\mathbb{R}.\] Now equation (1.19) reduces to the following one-dimensional one-sided fractional equation \[\partial_{t}^{\alpha}u(t)=0\ \mbox{ in }\ \mathbb{R}.\] Then Theorem 1.4 yields that \(u(t)\) must be constant. Thus, we have confirmed that any bounded solution of equation (1.19) must be constant. This completes the proof of Theorem 1.5.

**Acknowledgement** The work of the second author is partially supported by the National Natural Science Foundation of China (NSFC Grant No.12101452) and the work of the third author is partially supported by the National Natural Science Foundation of China (NSFC Grant No.12071229).
2309.16742
Supervised Learning Models for Early Detection of Albuminuria Risk in Type-2 Diabetes Mellitus Patients
Diabetes, especially T2DM, continues to be a significant health problem. One of the major concerns associated with diabetes is the development of its complications. Diabetic nephropathy, one of the chronic complications of diabetes, adversely affects the kidneys, leading to kidney damage. Diagnosing diabetic nephropathy involves considering various criteria, one of which is the presence of a pathologically significant quantity of albumin in urine, known as albuminuria. Thus, early prediction of albuminuria in diabetic patients holds the potential for timely preventive measures. This study aimed to develop a supervised learning model to predict the risk of developing albuminuria in T2DM patients. The selected supervised learning algorithms included Naïve Bayes, Support Vector Machine (SVM), decision tree, random forest, AdaBoost, XGBoost, and Multi-Layer Perceptron (MLP). Our private dataset, comprising 184 entries of diabetes complications risk factors, was used to train the algorithms. It consisted of 10 attributes as features and 1 attribute as the target (albuminuria). Upon conducting the experiments, the MLP demonstrated superior performance compared to the other algorithms. It achieved accuracy and f1-score values as high as 0.74 and 0.75, respectively, making it suitable for screening purposes in predicting albuminuria in T2DM. Nonetheless, further studies are warranted to enhance the model's performance.
Arief Purnama Muharram, Dicky Levenus Tahapary, Yeni Dwi Lestari, Randy Sarayar, Valerie Josephine Dirjayanto
2023-09-28T08:41:12Z
http://arxiv.org/abs/2309.16742v4
Supervised Learning Models for Early Detection of Albuminuria Risk in Type-2 Diabetes Mellitus Patients

###### Abstract

Diabetes, especially T2DM, continues to be a significant health problem. One of the major concerns associated with diabetes is the development of its complications. Diabetic nephropathy, one of the chronic complications of diabetes, adversely affects the kidneys, leading to kidney damage. Diagnosing diabetic nephropathy involves considering various criteria, one of which is the presence of a pathologically significant quantity of albumin in urine, known as albuminuria. Thus, early prediction of albuminuria in diabetic patients holds the potential for timely preventive measures. This study aimed to develop a supervised learning model to predict the risk of developing albuminuria in T2DM patients. The selected supervised learning algorithms included Naive Bayes, Support Vector Machine (SVM), decision tree, random forest, AdaBoost, XGBoost, and Multi-Layer Perceptron (MLP). Our private dataset, comprising 184 entries of diabetes complications risk factors, was used to train the algorithms. It consisted of 10 attributes as features and 1 attribute as the target (albuminuria). Upon conducting the experiments, the MLP demonstrated superior performance compared to the other algorithms. It achieved accuracy and f1-score values as high as 0.74 and 0.75, respectively, making it suitable for screening purposes in predicting albuminuria in T2DM. Nonetheless, further studies are warranted to enhance the model's performance.

diabetes, albuminuria, supervised learning, machine learning, deep learning

## I Introduction

Diabetes continues to be one of the most challenging non-communicable diseases worldwide. It is a chronic metabolic disorder characterized by high blood sugar levels caused by problems with insulin production, with the cells' response to insulin, or both [1]. There are four types of diabetes, namely type-1, type-2, gestational, and other types. However, type-2 diabetes (T2DM) dominates all other diabetes types [2], accounting for more than 90% of all diabetes cases. The high prevalence of T2DM is strongly associated with the modern unhealthy lifestyle, including unhealthy eating habits, smoking, obesity, and a lack of physical activity, as well as internal predisposition factors such as race and family history [3]. The predominant challenge associated with diabetes stems from the array of complications that can arise when diabetes is not adequately controlled. Among these unwanted complications, one particularly notable issue is kidney complications, which fall under the category of microvascular complications, affecting the smaller blood vessels [4, 5]. This specific complication is commonly referred to as diabetic nephropathy and accounts for approximately 14.0% of diabetes-related complications [4]. Diabetic nephropathy is considered a type of Chronic Kidney Disease (CKD). According to the Kidney Disease Improving Global Outcomes (KDIGO) 2012 guidelines, CKD is established when there are markers of kidney damage and/or a Glomerular Filtration Rate (GFR) \(<\) 60 mL/min/1.73m\({}^{2}\) that persist for \(\geq\) 3 months. The kidney damage markers for CKD include the presence of pathologically high quantities of urinary albumin excretion (albuminuria), the presence of urine sediment abnormalities, structural abnormalities detected by imaging, and a history of kidney transplantation [6].
As mentioned in the preceding paragraph, the presence of albuminuria can be indicative of a kidney problem. Albumin in urine can signal an issue with the kidney's filtration function. Albuminuria can be divided into two categories: microalbuminuria and macroalbuminuria. Microalbuminuria is diagnosed when the albumin-creatinine ratio is \(>\) 30 mg/24h and \(<\) 300 mg/24h, while macroalbuminuria is diagnosed when the albumin excretion is \(>\) 300 mg/24h in a 24-hour urine collection sample [7]. As albuminuria can serve as a signal of kidney problems, it becomes essential for diabetes patients to be aware of their risk of developing this condition. Therefore, the primary objective of this study is to develop a supervised learning model capable of predicting the risk of albuminuria development in diabetes patients, particularly those with T2DM. The primary contributions of this paper can be summarized as follows:

* Development of a supervised model capable of predicting early albuminuria in patients with type 2 diabetes mellitus (T2DM).
* Identification of the optimal supervised algorithm for early albuminuria detection in T2DM patients.

## II Related Work

Recently, there has been a growing interest among researchers in using machine learning approaches to predict albuminuria. This interest arises from the urgency of developing early risk prediction tools for the disease, as it can lead to increased "costs" if left undetected. To our knowledge, two studies, conducted by Khitan et al. [8] and Lin et al. [9], have used machine learning approaches for predicting albuminuria. Khitan et al. [8] used machine learning approaches to predict the risk of albuminuria in persons with diabetes. Their study incorporated 13 predictive factors, including measures such as subtotal lean mass, subtotal fat mass, diabetes duration, age, HbA1c levels, creatinine levels, triglyceride levels, total cholesterol levels, HDL cholesterol levels, maximum exercise capacity, systolic and diastolic blood pressure, and ankle brachial index. They conducted their study on 1330 subjects and used a variety of machine learning algorithms, including random forest, gradient boost, logistic regression, support vector machines, multilayer perceptron, and a stacking classifier. The results showed that the multilayer perceptron (MLP) exhibited the highest performance, with an AUC (Area Under the Curve) value of 0.67. Furthermore, the model demonstrated a precision of 0.61, recall of 0.67, and an accuracy of 0.62, as determined from the confusion matrix presented in the paper. In another study within this domain, Lin et al. [9] aimed to predict microalbuminuria in the Chinese population using machine learning approaches. Their study involved 3,294 subjects ranging in age from 16 to 93 years. They used the "glm" package in the R software to construct their machine learning model. Their model achieved a specificity of 0.9 and an accuracy of 0.63, although the sensitivity was relatively low at 0.2. Despite these outcomes, the study's conclusions highlighted systolic and diastolic blood pressure, fasting blood glucose levels, triglyceride levels, gender, age, and smoking as potential predictors of microalbuminuria among the patient population.

## III Methodology

### _Dataset_

In this study, we used our private dataset consisting of data on the risk of diabetes complications, which was collected from a primary healthcare facility in DKI Jakarta, Indonesia.
The dataset comprises 184 records, each consisting of 10 features and 1 target variable (Table I) (Figure 1). All records are sourced from patients with T2DM. The features are all numerical, whereas the target variable is categorical. Prior to analysis, all data were carefully examined and cleaned to remove any missing values or measurement errors. To ensure the security and privacy of the medical data, all information was anonymized.

Fig. 1: Dataset distribution

* **durasi_dm:** This attribute refers to the length of time since the patient's initial diabetes diagnosis. The duration is measured in years.
* **bmi:** This attribute refers to the patient's current Body Mass Index (BMI), which is measured in kg/m\({}^{2}\).
* **hdl:** This attribute refers to the current level of High-Density Lipoprotein (HDL) in the bloodstream, measured in mg/dL using standard laboratory methods.
* **ldl:** This attribute refers to the current level of Low-Density Lipoprotein (LDL) in the bloodstream, measured in mg/dL using standard laboratory methods.
* **tg:** This attribute refers to the current level of triglyceride in the bloodstream, measured in mg/dL using standard laboratory methods.
* **kol_tot:** This attribute refers to the current level of total cholesterol in the bloodstream, measured in mg/dL using standard laboratory methods.
* **gdp:** This attribute refers to the current level of fasting plasma glucose in the bloodstream, measured in mg/dL using standard laboratory methods.
* **TDS:** This attribute refers to the current systolic blood pressure, measured in mmHg using an ambulatory blood pressure device.
* **a1c:** This attribute refers to the current level of HbA1c, measured using standard laboratory methods.
* **cr:** This attribute refers to the current level of creatinine, measured in mg/dL using standard laboratory methods.
* **kid_group:** This attribute serves as the target label and describes the grouping of kidney disease. It is a categorical attribute comprising two categories: normal and albuminuria. The determination of the albuminuria label was based on the KDIGO 2012 criteria. However, instead of treating microalbuminuria and macroalbuminuria as separate categories, we classified them both under the umbrella term "albuminuria".

The use of the aforementioned features was rationalized based on the complex nature of their interaction with kidney damage, as shown in Figure 2 [10, 11, 12, 13]. Figure 2 illustrates the simplified mechanism of diabetic nephropathy, with obesity playing a central role. An elevated BMI increases the likelihood of obesity, which subsequently acts as a risk factor for developing diabetes and hypertension through a complex pathway. The intricate sequence involves the increase of plasma glucose and HbA1c in diabetes, leading to microvascular damage and subsequent kidney damage. The duration of diabetes increases the risk of such damage. On the other hand, chronic high blood pressure resulting from obesity can lead to hypertension, causing microvascular damage and putting the individual at risk of kidney damage. Additionally, kidney damage can, in turn, induce hypertension, creating an inner loop-like mechanism that worsens the condition. Furthermore, obesity also serves as a risk factor for lipid profile issues, such as an increase in LDL, TG, and cholesterol and a decrease in HDL, posing a risk of dyslipidemia. Dyslipidemia, in turn, indirectly contributes to kidney damage.
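To make the dataset description concrete, the following minimal sketch (ours; the file name is hypothetical, since the dataset is private, while the column names follow the attribute list above) assembles the feature matrix and collapses micro- and macroalbuminuria into the single target class:

```python
import pandas as pd

# Hypothetical file name: the paper's dataset is private and not distributed.
df = pd.read_csv("diabetes_complications.csv")

# The 10 numerical features and the categorical target listed above.
FEATURES = ["durasi_dm", "bmi", "hdl", "ldl", "tg",
            "kol_tot", "gdp", "TDS", "a1c", "cr"]
TARGET = "kid_group"  # "normal" vs. "albuminuria" (KDIGO 2012 criteria)

X = df[FEATURES].astype(float)
y = (df[TARGET] == "albuminuria").astype(int)  # 1 = albuminuria, 0 = normal
```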
### _Design of Experiment_

This study aimed to evaluate the performance of supervised learning algorithms in predicting the risk of developing albuminuria in patients with T2DM. We evaluated several supervised learning algorithms: 6 machine learning algorithms and 1 deep learning algorithm. The machine learning algorithms used were Naive Bayes, Support Vector Machine (SVM), decision tree, random forest, AdaBoost, and XGBoost. Among these, random forest, AdaBoost, and XGBoost are ensemble algorithms. The deep learning algorithm employed in this study is the Multi-Layer Perceptron (MLP). The use of deep learning in this experimental design, as compared to the machine learning algorithms, was intended to evaluate its potential performance given the limited size of the dataset. Table II presents the complete experimental design used in this study. We used the scikit-learn library [14], version 1.0.2, as our primary machine learning and deep learning toolkit. Additionally, we employed the xgboost library [15], version 1.6.2, specifically designed for implementing the XGBoost algorithm.

Fig. 2: Simplified diabetic nephropathy mechanism

The dataset is split into training and test datasets using the train_test_split function provided by scikit-learn. Since the dataset size is relatively small, we opted for a train-test ratio of 0.75:0.25.

### _Evaluation Strategy_

We used precision (1), recall (2), accuracy (3), and f1-score (4) as the evaluation metrics for our study. A competent model is expected to exhibit high values for precision, recall, accuracy, and f1-score. \[precision=\frac{TP}{TP+FP} \tag{1}\] \[recall=\frac{TP}{TP+FN} \tag{2}\] \[accuracy=\frac{TP+TN}{TP+FP+FN+TN} \tag{3}\] \[F1\text{-}score=\frac{2\times precision\times recall}{precision+recall} \tag{4}\]

### _Ethics Approval_

This study has been ethically approved by the Health Ethics Committee of Cipto Mangunkusumo Hospital, Faculty of Medicine Universitas Indonesia, Jakarta, Indonesia, number KET-246/UN2.F1/ETIK/PPM.00.02/2022.

## IV Results

Figure 3 shows the distribution of the train and test datasets used in our study. The test ratio is set at 0.25, resulting in 138 records for the train dataset and 46 records for the test dataset. As depicted in Figure 3, the dataset exhibits a relatively uniform distribution. Although several data points appear to be potential outliers, we deliberately retained them to introduce bias to the learning algorithms and thus reduce the risk of overfitting.

Fig. 3: Distribution of the train-test dataset

We train each supervised learning algorithm with its defined parameters, as shown in Table II, using the training dataset, which results in trained models. These models are then tested using the test dataset, producing predicted labels. The performance of the models is evaluated by comparing these predicted labels with their corresponding true labels, which serve as the ground truth. The model evaluation results are shown in Table III. As depicted in the table, the machine learning algorithms did not yield satisfactory results. Among the various machine learning algorithms experimented with, Naive Bayes outperformed the others, demonstrating an accuracy of up to 0.65 when predicting the test dataset. In contrast, the remaining machine learning algorithms exhibited prediction accuracies of no more than 0.5. Even the ensemble models failed to surpass Naive Bayes in predicting the test dataset. A minimal sketch of the training and evaluation pipeline follows.
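The sketch below (ours) mirrors the train/evaluate protocol just described; only the 0.75:0.25 split and the algorithm choices come from the text, while `random_state` and any hyperparameters beyond scikit-learn defaults are assumptions, since Table II is not reproduced here. It assumes `X` and `y` from the previous snippet:

```python
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import (precision_score, recall_score,
                             accuracy_score, f1_score)

# 0.75:0.25 train-test split, as stated in the paper.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# Two of the seven evaluated algorithms, with default hyperparameters.
models = {
    "Naive Bayes": GaussianNB(),
    "MLP": MLPClassifier(max_iter=2000, random_state=42),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    print(f"{name}: "
          f"precision={precision_score(y_test, y_pred):.2f} "
          f"recall={recall_score(y_test, y_pred):.2f} "
          f"accuracy={accuracy_score(y_test, y_pred):.2f} "
          f"f1={f1_score(y_test, y_pred):.2f}")
```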
It is worth noting, however, that none of these machine learning algorithms achieved an f1-score higher than 0.5, suggesting that such models are insufficient for predicting the risk of albuminuria in T2DM patients. The superior results were obtained from the deep learning algorithm, specifically the Multi-Layer Perceptron (MLP), which achieved an accuracy and f1-score of 0.74 and 0.71, respectively. This algorithm outperformed the machine learning algorithms, which only achieved accuracy and f1-scores of up to 0.65 and 0.55, respectively. Additionally, the MLP algorithm exhibited the highest precision and recall scores compared to the other algorithms, scoring 0.68 and 0.75, respectively. These results outperform those reported by Khitan et al. [8] and Lin et al. [9], indicating that the algorithm might be acceptable for predicting the risk of albuminuria among T2DM patients. However, further improvements are needed, particularly in terms of the dataset size and variety, to achieve better results. To gain a better understanding of the model evaluation results, we conducted a visual error analysis on the prediction outcomes of the MLP model. To facilitate visualization, we used the Principal Component Analysis (PCA) method to reduce the features from 10 to 2 dimensions. Subsequently, we used square and triangle markers to represent the normal and albuminuria labels, respectively, while using red and green colors to indicate false and true predictions, respectively. The visualization of the model evaluation results can be observed in Figure 4.

Fig. 4: Error analysis

As shown in Figure 4, the false predictions could have either a normal or an albuminuria label. However, the interesting point revealed by the visualization is that the falsely predicted labels are spread out but relatively close to the adjacent cluster that forms the true predictions. This indicates that the data characteristics of 'normal' and 'albuminuria' at some points differ very little, which might cause the algorithm to have difficulty in creating separate boundaries between the labels, leading to false predictions and resulting in lower accuracy. This phenomenon may be explained by the nature of the patient data. For several patients, the risk of developing albuminuria might not be strongly correlated with the features in the dataset. For example, there could be a patient with uncontrolled diabetes, indicated by high blood glucose and a high lipid profile, who does not develop albuminuria, while another patient with normal glucose levels and a normal lipid profile develops albuminuria. This complexity arises because the human body is complex, and the risk of developing a disease may be influenced by multiple risk factors that are not apparent in the dataset. Therefore, one possible solution to improve the model's accuracy is to increase the dataset size and variety, allowing the learning algorithms to better understand the complex patterns present in such data. Despite the complex nature of the patient data, the outperforming results of the MLP algorithm might be attributed to its architecture. The MLP may consist of several to even thousands of hidden layers. The MLP uses the backpropagation algorithm to update its weights based on the learned data [16]. This enables the MLP to solve complex problems relatively easily compared to other traditional machine learning algorithms when dealing with such complex data.
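A minimal version of this error-analysis plot (ours; it reuses the fitted MLP and the test split from the previous sketch and additionally assumes matplotlib is available):

```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.decomposition import PCA

mlp = models["MLP"]
y_pred = mlp.predict(X_test)

# Reduce the 10 features to 2 principal components for plotting.
coords = PCA(n_components=2).fit_transform(X_test)

labels = np.asarray(y_test)            # 0 = normal, 1 = albuminuria
correct = np.asarray(y_pred == labels)

# Squares: normal; triangles: albuminuria. Green: true; red: false prediction.
for lab, marker in [(0, "s"), (1, "^")]:
    for ok, color in [(True, "green"), (False, "red")]:
        mask = (labels == lab) & (correct == ok)
        plt.scatter(coords[mask, 0], coords[mask, 1], marker=marker, c=color)
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.title("MLP error analysis (PCA projection)")
plt.show()
```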
## V Conclusion

We have developed a supervised learning model to predict the risk of developing albuminuria in patients with T2DM. Among the various supervised learning models examined in this study, the MLP algorithm demonstrated superior performance in terms of precision, recall, accuracy, and f1-score. Specifically, the algorithm achieved values of 0.68, 0.75, 0.74, and 0.71 for precision, recall, accuracy, and f1-score, respectively. To further enhance the model's performance, we recommend augmenting the dataset with additional data to increase its size and diversity. Additionally, we propose conducting further research into the utilization of deep learning algorithms like MLP to effectively handle the complexities inherent in patient data.

## Acknowledgment

This study was conducted as part of the ID-CALDERA (Indonesia's Cardiovascular Risk Stratification in Diabetes Using Smartphone-Based Retinal Imaging) project, which aims to develop AI-based models and technologies for predicting diabetic complications at an earlier stage. The vision and mission of ID-CALDERA are centered on reducing the impact and burden of diabetic complications. We extend our gratitude to everyone who has provided support for our work, whether individually or institutionally.
2302.14541
Generalized exponentially bounded integrated semigroups
The main subject of this paper is the analysis of sequences of exponentially bounded integrated semigroups which are related to Cauchy problems \begin{equation}\label{jed} \frac{\partial}{\partial t}u(t,x)-a(D)u(t,x)=f(t,x), \quad u(0,x)=u_0(x), \quad t\geq 0, \ x\in \mathbb R^d, \end{equation} with distributional initial data $u_0$ and a distributional right-hand side $f$, through a sequence of equations with regularized $u_0$ and $f$ and a sequence of (pseudo) differential operators $a_n(D)$ instead of $a(D)$. The comparison of sequences of infinitesimal generators and the determination of the corresponding sequences of integrated semigroups are the main subjects of the paper. For this purpose, we introduce association, a relation of equivalence for infinitesimal generators on one side and the corresponding relations of equivalence of integrated semigroups on the other side. The order of the involved assumptions on generators essentially characterizes the mutual dependence of sequences of infinitesimal generators and the corresponding sequences of integrated semigroups.
Marko Kostic, Stevan Pilipovic, Milica Zigic
2023-02-28T13:05:43Z
http://arxiv.org/abs/2302.14541v2
# Generalized exponentially bounded integrated semigroups

###### Abstract

The main subject of this paper is the analysis of sequences of exponentially bounded integrated semigroups which are related to Cauchy problems \[\frac{\partial}{\partial t}u(t,x)-a(D)u(t,x)=f(t,x),\quad u(0,x)=u_{0}(x),\quad t\geq 0,\ x\in\mathbb{R}^{d}, \tag{1}\] with distributional initial data \(u_{0}\) and a distributional right-hand side \(f\), through a sequence of equations with regularized \(u_{0}\) and \(f\) and a sequence of (pseudo) differential operators \(a_{n}(D)\) instead of \(a(D)\). The comparison of sequences of infinitesimal generators and the determination of the corresponding sequences of integrated semigroups are the main subjects of the paper. For this purpose, we introduce association, a relation of equivalence for infinitesimal generators on one side and the corresponding relations of equivalence of integrated semigroups on the other side. The order of the involved assumptions on generators essentially characterizes the mutual dependence of sequences of infinitesimal generators and the corresponding sequences of integrated semigroups.

## 1 Introduction

This paper aims to provide an approach to sequences of infinitesimal generators and the corresponding sequences of integrated semigroups, usually obtained through a process of regularization, as a framework for solving singular Cauchy problems within spaces of generalized functions. The general theory of integrated semigroups, introduced by Arendt [1], has already been presented in a large number of monographs. We refer to the fundamental monograph [2] and references therein for the historical background. For applications, especially in population biology and population persistence, we refer to [17] and [13]. The authors of the quoted monographs are leading researchers in the field, with many strong papers that can be found in the bibliographies of these monographs. Contributions to the theory of integrated semigroups have been made in many excellent papers, which can easily be found; we do not list them here, since any such selection would very likely omit important works. We mention that one of the coauthors has written several papers and monographs related to various kinds of semigroups; we refer to [10], [11] and references therein. Concerning generalized \(C_{0}\)-semigroups obtained through regularization, we refer to [14], where, within the framework of the Colombeau theory [3] of generalized function algebras, relations between nets of infinitesimal generators and the corresponding nets of \(C_{0}\)-semigroups were discussed, with applications to a certain class of nonlinear wave equations. In relation to [14], our approach in this paper is different: instead of nets, we simplify the exposition by using sequences as the main tool, and instead of the technically more involved definitions of Colombeau theory, we directly introduce sequences of solutions of regularized Cauchy problems with strong singularities. Roughly speaking, our results concern approximations of infinitesimal generators and the corresponding integrated semigroups through sequences of such operators and integrated semigroups. We only deal with one time integrated semigroups, although it is clear that the paper can be extended to \(k\)-times integrated semigroups. In this way, we avoid distributional semigroups, for which it is known that every one of them is a \(k\)-times integrated semigroup for a certain \(k\) (cf. [12], [19]).
Our approach is motivated by revisiting well-known results for one time integrated semigroups which correspond to infinitesimal generators given as Fourier multipliers with symbols of the class \(S^{m}_{1,0}\), \(m\in\mathbb{N}\), on the Lebesgue space \(L^{p}(\mathbb{R}^{d})\), \(p\geq 1\), and those which correspond to the symbols \(i|\xi|^{m},m\in\mathbb{N},\) given in Section 8.3 of [2]; see also [7], [8]. In the main part of the paper we analyse the relations between sequences of infinitesimal generators and the corresponding sequences of integrated semigroups, in the sense that a certain perturbation of a sequence of infinitesimal generators results in a perturbation of the corresponding sequence of integrated semigroups, which we estimate and classify. This is done by the introduction of associated sequences, as in the algebraic theory of generalized functions, cf. [3], [6]. The paper is organized as follows. The notation is the standard one for the real numbers, as well as for the Lebesgue \(L^{p}\)-spaces and the Schwartz test function and distribution spaces. In the Introduction, Subsection 1.1, we introduce sequence spaces over the pivot Banach space \(X\) as a framework for the further investigations. The moderate growth of the involved sequences is the essential assumption in all sequence spaces considered in the paper. Section 2 is devoted to sequences of closed linear operators defined on \(X\) (Subsection 2.1), since under suitable conditions they are infinitesimal generators of sequences of exponentially bounded integrated semigroups (Subsection 2.2). For this purpose we impose additional conditions on the operators and call them sequences of infinitesimal generators. In Section 3, we revisit some examples presented in Section 8.3 of the monograph [2], related to a class of pseudo-differential operators. This section illustrates just a few possibilities for applications in finding a solution to (1), in the form of a sequence, although the singular equation from which we started does not have a solution in the classical analysis setting. If problem (1) has a classical solution, the obtained sequence of solutions, also called a very weak solution (cf. [5], [16]), must converge to this solution in the same setting. A question of essential interest is whether a very weak solution has a subsequence which converges in the sense of distributions, that is, in the weak sense; usually, this limit is a weak solution to (1). Our approach is presented in Section 4, where the relations between sequences of infinitesimal generators and the corresponding sequences of exponentially bounded integrated semigroups are discussed. This is achieved by analysing associated sequences, the ones for which the norm of their difference tends to zero. In this way, we introduce relations of equivalence in the corresponding spaces of moderate sequences given in Section 2 and in Section 4. A classical perturbation result from [9] fits well into this approach.
### Notation

Let \((X,\|\cdot\|_{X})\) be a Banach space and \((\mathcal{L}(X),\|\cdot\|_{\mathcal{L}(X)})\) be the space of linear continuous mappings on \(X\) with values in \(X.\) For a sequence \((x_{n})_{n}\in X^{\mathbb{N}},\) where \(\mathbb{N}\) is the set of natural numbers, we say that it is moderate, and write \((x_{n})_{n}\in\mathcal{E}_{X}^{M},\) if there exists \(a\in\mathbb{R}\) such that \(\|x_{n}\|_{X}=\mathcal{O}(n^{a}),\) which means \(\|x_{n}\|_{X}\leq Cn^{a},\)\(n>n_{0},\) for some \(C>0.\) With \(\mathcal{L}(X)\) instead of \(X,\) we define \(\mathcal{E}_{\mathcal{L}(X)}^{M}\). Note that \(\mathcal{L}(X)\) is considered as a Banach algebra with respect to the operation of composition. Directly from the definition, one can deduce the following: \((R_{n})_{n}\in\mathcal{E}_{\mathcal{L}(X)}^{M}\) if and only if there exists \((M_{n})_{n}\in\mathcal{E}_{\mathbb{R}}^{M}\) such that \(\|R_{n}x\|_{X}\leq M_{n}\|x\|_{X},\)\(x\in X.\) Denote by \(\mathcal{C}^{M}([0,\infty);X)\) the space of vector valued sequences of continuous mappings \(F_{n}:[0,\infty)\ni t\mapsto F_{n}(t)\in X,\)\(n\in\mathbb{N},\) with the property \[(\exists a\in\mathbb{R})\quad\sup_{t\geq 0}\|F_{n}(t)\|_{X}=\mathcal{O}(n^{a}),\quad n\to\infty. \tag{2}\] We will consider the case \(X=L^{p}(\mathbb{R}^{d}),\)\(p\in(1,\infty),\) but our special interest is the case with \(\mathcal{L}(X)\) in place of \(X\) above. Then one obtains the space of sequences \((S_{n})_{n}\) of strongly continuous mappings \(S_{n}:[0,\infty)\to(\mathcal{L}(X),\|\cdot\|_{\mathcal{L}(X)}),\)\(n\in\mathbb{N},\) denoted by \(\mathcal{C}^{M}([0,\infty);\mathcal{L}(X)).\) The introduced sequences will also be denoted by \((S_{n}(t))_{n},\)\(t\geq 0,\) to emphasize the role of \(t.\) Clearly, \(\mathcal{C}^{M}([0,\infty);\mathcal{L}(X))\) is an algebra under composition. The space of sequences of continuous mappings \(F_{n}:[0,\infty)\to(X,\|\cdot\|_{X})\) with the property \(\sup_{t\geq 0}\|e^{-\omega t}F_{n}(t)\|_{X}=\mathcal{O}(n^{a}),\)\(n\to\infty,\) for some \(\omega>0\) and some \(a\in\mathbb{R},\) is denoted by \(\mathcal{C}^{M}_{\exp}([0,\infty);X).\) It is also an algebra. Again, we emphasize the case when \(\mathcal{L}(X)\) replaces \(X,\) and write \((S_{n})_{n}\in\mathcal{C}^{M}_{\exp}([0,\infty);\mathcal{L}(X))\) if \[\sup_{t\geq 0}\|e^{-\omega t}S_{n}(t)\|_{\mathcal{L}(X)}=\mathcal{O}(n^{a}),\ n\to\infty,\ \text{for some}\ \omega>0\ \text{and some}\ a\in\mathbb{R}. \tag{3}\] Note that for every \((S_{n})_{n}\in\mathcal{C}^{M}_{\exp}([0,\infty);\mathcal{L}(X))\) and every \(t_{0}\in[0,\infty)\) we have \((S_{n}(t_{0}))_{n}\in\mathcal{E}_{\mathcal{L}(X)}^{M}.\) Let us note that for all sequences under consideration we have to assume that their properties hold for \(n>n_{0},\) since only the behaviour as \(n\to\infty\) is important. In the sequel, we will not explicitly point out this fact and will simply assume that a given property holds for every \(n\in\mathbb{N}.\) This is not a restriction, since we can always replace the first \(n_{0}\) elements by the \((n_{0}+1)\)-th element.

## 2 Generalized exponentially bounded integrated semigroups

### Sequences of generators

Let \((A_{n})_{n}\) be a sequence of closed linear operators acting on \(X\) and let \(D_{A_{n}}\) be the domain of \(A_{n},\)\(n\in\mathbb{N}.\) Let \((R(\lambda,A_{n}))_{n}\) be the sequence of resolvents corresponding to \((A_{n})_{n}\) and let \(\rho(A_{n}),\)\(n\in\mathbb{N},\) be the resolvent sets.
Assume:

* (G1) There exists \(D\subset X,\)\(D\neq\emptyset,\) such that \(D_{A_{n}}=D,\)\(n\in\mathbb{N}.\)
* (G2) There exists \(\omega>0\) such that \((\omega,\infty)\subset\rho(A_{n}),\)\(n\in\mathbb{N}.\)
* (G3) \((R(\lambda,A_{n}))_{n}\in\mathcal{E}_{\mathcal{L}(X)}^{M},\)\(\lambda\in(\omega,\infty).\)

Denote by \(\mathcal{A}_{3}^{M}\) the set of sequences which satisfy (G1) - (G3). We call \((A_{n})_{n}\) a sequence of generators. We define the domain \(\mathbf{D}_{A}\) of \((A_{n})_{n}\in\mathcal{A}_{3}^{M},\) that is, \((A_{n})_{n}:\mathbf{D}_{A}\subset\mathcal{E}_{X}^{M}\rightarrow\mathcal{E}_{X}^{M},\) as \[\mathbf{D}_{A}=\left\{(x_{n})_{n}\in\mathcal{E}_{X}^{M}\,:\,x_{n}\in D,\,n\in\mathbb{N}\ \ \wedge\ (A_{n}x_{n})_{n}\in\mathcal{E}_{X}^{M}\right\}.\]

**Proposition 2.1**.: _Let \((A_{n})_{n}\) be a sequence of generators and \((y_{n})_{n}\in\mathcal{E}_{X}^{M}.\) Then \((R(\lambda,A_{n})y_{n})_{n}=(x_{n})_{n}\in\mathbf{D}_{A},\)\(\lambda\in(\omega,\infty).\) Conversely, if \((x_{n})_{n}\in\mathbf{D}_{A}\), then there exists \((y_{n})_{n}\in\mathcal{E}_{X}^{M}\) so that \(R(\lambda,A_{n})y_{n}=x_{n},\)\(n\in\mathbb{N},\) for \(\lambda\in(\omega,\infty).\)_

Proof.: Let \(\lambda\in(\omega,\infty).\) It is clear that \(x_{n}=R(\lambda,A_{n})y_{n}\in D\) for every \(n\in\mathbb{N}\) and that \((x_{n})_{n}=(R(\lambda,A_{n})y_{n})_{n}\in\mathcal{E}_{X}^{M},\)\(\lambda\in(\omega,\infty).\) Finally, since \((x_{n})_{n},(y_{n})_{n}\in\mathcal{E}_{X}^{M},\) we have \((A_{n}x_{n})_{n}=(\lambda x_{n}-y_{n})_{n}\in\mathcal{E}_{X}^{M}.\) For the converse assertion, just note that if \(y_{n}=\lambda x_{n}-A_{n}x_{n},\)\(n\in\mathbb{N},\) then \((y_{n})_{n}\in\mathcal{E}_{X}^{M}\) and \(R(\lambda,A_{n})y_{n}=x_{n},\)\(n\in\mathbb{N}.\)

The range \(\mathbf{R}_{\lambda,A},\)\(\lambda\in(\omega,\infty),\) of the sequence of resolvents \((R(\lambda,A_{n}))_{n}\) that corresponds to the sequence of generators \((A_{n})_{n}\) is defined as \[\mathbf{R}_{\lambda,A}=\left\{(x_{n})_{n}\in\mathcal{E}_{X}^{M}:\text{there exists $(y_{n})_{n}\in\mathcal{E}_{X}^{M}$ so that $(R(\lambda,A_{n})y_{n})_{n}=(x_{n})_{n}$}\right\}.\] Since for every \(\lambda,\lambda^{\prime}\in(\omega,\infty)\) one obtains \(\mathbf{R}_{\lambda,A}=\mathbf{R}_{\lambda^{\prime},A},\) we will use the notation \(\mathbf{R}_{A}=\mathbf{R}_{\lambda,A},\)\(\lambda\in(\omega,\infty).\) Now we state a direct consequence of Proposition 2.1.

**Corollary 2.2**.: \(\mathbf{R}_{A}=\mathbf{D}_{A}.\)

### Generalized exponentially bounded integrated semigroups and sequences of strong infinitesimal generators

**Definition 2.1**.: _Let \((A_{n})_{n}\in\mathcal{A}_{3}^{M}.\) It is called a sequence of infinitesimal generators if there exists a sequence \((S_{n})_{n}\in\mathcal{C}_{\exp}^{M}([0,\infty);\mathcal{L}(X)),\) that is, (3) holds, and_ \[R(\lambda,A_{n})=\lambda\int_{0}^{\infty}e^{-\lambda t}S_{n}(t)\,dt,\quad\lambda\in(\omega,\infty),\quad n\in\mathbb{N}; \tag{4}\] \((S_{n})_{n}\) _is called a sequence of exponentially bounded integrated semigroups (in short, g.e.i.s., or in plural g.e.i.s.'s) generated by \((A_{n})_{n}.\)_

**Remark 2.3**.: If (3) holds, then by Theorem 3.1 of Arendt [1], the necessary and sufficient condition for \((S_{n})_{n}\) to be a g.e.i.s.
is that \(R(\lambda,A_{n}),\,\lambda\in(\omega,\infty),\) is a pseudoresolvent, for every \(n\in\mathbb{N}.\) A direct application of Theorem 2.5.1 in [2] gives, in Theorem 2.4 below, the existence of a sequence of exponentially bounded integrated semigroups \(S_{n},\,n\in\mathbb{N},\) with the generators \(A_{n},\,n\in\mathbb{N},\) for which the conditions (G1) - (G3) hold, as well as the next one, \[\sup_{\mathrm{Re}\,\lambda>\omega}\|\lambda^{b}R(\lambda,A_{n})\|_{\mathcal{L}(X)}\leq M_{n},\ n\in\mathbb{N},\ \ \text{for some}\ \ b>0\ \text{and}\ \ (M_{n})_{n}\in\mathcal{E}_{\mathbb{R}}^{M}. \tag{5}\]

**Theorem 2.4**.: _Let \((A_{n})_{n}\in\mathcal{A}_{3}^{M}\) be such that condition (5) is satisfied. Then there exists a sequence \((S_{n})_{n}\) of exponentially bounded integrated semigroups such that_ \[R(\lambda,A_{n})=\lambda\int_{0}^{\infty}e^{-\lambda t}S_{n}(t)\,dt,\quad\lambda\in(\omega,\infty),\quad n\in\mathbb{N},\] _and \((S_{n})_{n}\in\mathcal{C}_{\exp}^{M}([0,\infty);\mathcal{L}(X)).\) More precisely, the growth condition for \((S_{n})_{n}\) is given by_ \[\sup_{t>0}\|e^{-\omega t}t^{-b}S_{n}(t)x\|_{X}\leq M_{n}^{\prime},\ n\in\mathbb{N},\ \ \text{for some}\ (M_{n}^{\prime})_{n}\in\mathcal{E}_{\mathbb{R}}^{M}.\]

Proof.: Let \(x\in X\). Then assumption (5) implies that, for every \(n\in\mathbb{N},\) \[S_{n}(t)x=\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{(\alpha+ir)t}\frac{R(\alpha+ir,A_{n})x}{\alpha+ir}\,dr,\quad t\geq 0,\] where \(\alpha>\omega\) and \(S_{n}(\cdot)x\in C([0,\infty),X)\) (the space of continuous functions \([0,\infty)\to X\)). Moreover, \[\sup_{t>0}\|e^{-\omega t}t^{-b}S_{n}(t)x\|_{X}\leq M_{n}^{\prime},\quad n\in\mathbb{N},\] for some \((M_{n}^{\prime})_{n}\in\mathcal{E}_{\mathbb{R}}^{M}.\) This is a direct consequence of Theorems 3.2.8 and 2.5.1 in [2], where, with \(n\) fixed, we have to put \(q(\lambda)=\lambda^{b-1}R(\lambda,A_{n})x\) and use these theorems for \(f(\cdot)=S_{n}(\cdot)x\). Namely, as in [2], at the very end of the proof of Theorem 2.5.1, with \(R>0,\) one has \[\|S_{n}(t)\|_{\mathcal{L}(X)}\leq\frac{M_{n}e^{\alpha t}}{\pi bR^{b}}+\frac{M_{n}e^{\alpha t}}{\pi R^{b}}\int_{0}^{\pi/2}e^{Rt\cos\theta}\,d\theta,\quad t>0,\] where \[M_{n}=\sup_{\operatorname{Re}\lambda>\omega}\|\lambda^{b}R(\lambda,A_{n})\|_{\mathcal{L}(X)},\quad n\in\mathbb{N}.\] Now, taking \(R=1/t,\) one obtains \(\|e^{-\omega t}t^{-b}S_{n}(t)\|_{\mathcal{L}(X)}\leq CM_{n}=M_{n}^{\prime},\)\(n\in\mathbb{N},\)\(t>0.\) Clearly, \(\sup_{t>0}\|e^{-\omega t}t^{-b}S_{n}(t)\|_{\mathcal{L}(X)}\leq M_{n}^{\prime},\) which, for \(\omega_{1}\geq\omega+b,\) implies \[\sup_{t\geq 0}\|e^{-\omega_{1}t}S_{n}(t)\|_{\mathcal{L}(X)}\leq M_{n}^{\prime},\quad n\in\mathbb{N};\] so \((S_{n})_{n}\in\mathcal{C}_{\exp}^{M}([0,\infty);\mathcal{L}(X)).\) The obtained growth condition for \((S_{n})_{n}\) is stronger than the one characterizing the growth in \(\mathcal{C}_{\exp}^{M}([0,\infty);\mathcal{L}(X)),\) because it also describes the behaviour of the sequence \((S_{n})_{n}\) as \(t\to 0.\)

## 3 Revisiting of known examples

All one time integrated semigroups in this section are well known (for fixed \(n\)). They are used to explain our approach to sequences of such semigroups. Our main references are the results for integrated semigroups given in [2], Section 8.3.
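Before introducing the notation, it is worth recording the elementary scalar computation that underlies formula (4) for the Fourier multipliers appearing in this section (a routine verification included here for convenience; it is ours and is not taken from [2]). For a fixed symbol value \(a=a_{n}(\xi)\) with \(\operatorname{Re}a<\omega<\lambda\) and \(a\neq 0\), the one time integrated semigroup acts pointwise in \(\xi\) by \(\int_{0}^{t}e^{as}\,ds\), and \[\lambda\int_{0}^{\infty}e^{-\lambda t}\int_{0}^{t}e^{as}\,ds\,dt=\frac{\lambda}{a}\int_{0}^{\infty}\left(e^{(a-\lambda)t}-e^{-\lambda t}\right)dt=\frac{\lambda}{a}\left(\frac{1}{\lambda-a}-\frac{1}{\lambda}\right)=\frac{1}{\lambda-a},\] while for \(a=0\) the inner integral equals \(t\) and \(\lambda\int_{0}^{\infty}e^{-\lambda t}t\,dt=\frac{1}{\lambda}\). In both cases one obtains the resolvent multiplier \(1/(\lambda-a_{n}(\xi))\), which is the pointwise content of (4) for semigroups of the form (11) below.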
Concerning notation, if \(t\mapsto f(t,x)\) is a continuous function on \([0,\infty)\) with values in the Schwartz space of distributions \(\mathcal{D}^{\prime}(\mathbb{R}^{d}),\) we write \(f(t,x)\in C([0,\infty),\mathcal{D}^{\prime}(\mathbb{R}^{d})).\) Additionally, if the above function is continuously differentiable, we write \(f(t,x)\in C^{1}([0,\infty),\mathcal{D}^{\prime}(\mathbb{R}^{d}))\). These functions are elements of \(\mathcal{D}^{\prime}((0,\infty)\times\mathbb{R}^{d})\) through the dual pairing \(\langle f(t,x),\psi(t,x)\rangle,\)\(\psi\in\mathcal{D}((0,\infty)\times\mathbb{R}^{d}),\) where \(\mathcal{D}((0,\infty)\times\mathbb{R}^{d})\) is the space of smooth functions \(\psi\) supported by a compact set in \((0,\infty)\times\mathbb{R}^{d},\) with the usual convergence structure. Recall from [2] that a smooth function \(a\) on \(\mathbb{R}^{d}\) is called a symbol belonging to \(S_{1,0}^{m},\)\(m\in\mathbb{N},\) if \(|D_{\xi}^{\alpha}a(\xi)|\leq C\langle\xi\rangle^{m-|\alpha|},\)\(\xi\in\mathbb{R}^{d},\) for some \(C>0\) and all \(\alpha\in(\mathbb{N}\cup\{0\})^{d},\) where \(\langle\xi\rangle=(1+|\xi|^{2})^{1/2}.\) Then \((a_{n})_{n}\in(S_{1,0}^{m})^{\mathbb{N}}\) is a moderate sequence of symbols if there exists \((C_{n})_{n}\in\mathcal{E}_{\mathbb{R}}^{M}\) so that \[|D_{\xi}^{\alpha}a_{n}(\xi)|\leq C_{n}\langle\xi\rangle^{m-|\alpha|},\quad\xi\in\mathbb{R}^{d},\ n\in\mathbb{N}. \tag{6}\] With the notation \(D=(D_{1},...,D_{d})\), \(D_{j}=\partial/(i\partial x_{j})\), \(j=1,...,d,\) and \(\mathcal{F}\) and \(\mathcal{F}^{-1}\) for the Fourier and inverse Fourier transform, we consider the pseudo-differential operator formally defined by \(a(D)f=(\text{Op }a)f=\mathcal{F}^{-1}(a\mathcal{F}f),\) where \(a\in S_{1,0}^{m}\) and \(f\) belongs to an appropriate space of functions or distributions. (Here, the notation \(D\) for the differential operator should not be confused with \(D=D_{A}\subset X,\) which is the domain of the corresponding operator \(A.\)) Usually, a sequence of such operators \((a_{n}(D))_{n}\) can be considered either as a stationary one, \(a_{n}=a,\)\(n\in\mathbb{N},\) or as a sequence of approximations of \(a.\)

**Remark 3.1**.: The regularization of a Cauchy problem (1) with \(u_{0}\in\mathcal{D}^{\prime}(\mathbb{R}^{d})\) and \(f\in C([0,\infty),\mathcal{D}^{\prime}(\mathbb{R}^{d}))\) leads to a family of Cauchy problems with \(u_{0,n}\) and \(f_{n},\)\(n\in\mathbb{N},\) belonging to appropriate function spaces, \[\frac{\partial}{\partial t}w_{n}(t,x)-a_{n}(D)w_{n}(t,x)=f_{n}(t,x),\quad w_{n}(0,x)=u_{0,n}(x),\quad n\in\mathbb{N}, \tag{7}\] as follows. Let \(\theta\in\mathcal{D}(\mathbb{R}^{d})\) be non-negative with \(\int_{\mathbb{R}^{d}}\theta(x)dx=1,\) and let \(\theta_{n}(x)=n^{d}\theta(nx),\)\(x\in\mathbb{R}^{d};\) this is a delta sequence. In the case when \(u_{0}(x)\) is a distribution and \(f(t,x)\in C([0,\infty),\mathcal{D}^{\prime}(\mathbb{R}^{d})),\) we make the regularization by the use of convolution: \[u_{0,n}(x)=u_{0}(x)*\theta_{n}(x),\quad f_{n}(t,x)=f(t,x)*_{x}\theta_{n}(x),\quad n\in\mathbb{N},\ t\geq 0,\ x\in\mathbb{R}^{d}.\] In order to show that the regularizations of Remark 3.1 determine elements of the domain \(\mathbf{D}_{A}\) (cf.
Subsection 2.1) related to the pseudo-differential operators \(A_{n}=a_{n}(D)=(\text{Op }a_{n}),\)\(n\in\mathbb{N},\) we recall that \(g\in\mathcal{D}^{\prime}_{L^{p}}(\mathbb{R}^{d}),\)\(p\in(1,\infty],\) if and only if it is of the form \(g=\sum_{|\alpha|\leq k}g_{\alpha}^{(\alpha)},\) where \(g_{\alpha}\in L^{p}(\mathbb{R}^{d}),\)\(k\in\mathbb{N}\cup\{0\}.\) Recall also that \(\mathcal{D}^{\prime}_{L^{p}}(\mathbb{R}^{d})\) is the strong dual of the space \(\mathcal{D}_{L^{q}}(\mathbb{R}^{d}),\)\(q=p/(p-1)\) (\(q=1\) for \(p=\infty\)), consisting of smooth functions \(\phi\) for which all the norms \(\|\phi^{(\alpha)}\|_{L^{q}(\mathbb{R}^{d})},\)\(\alpha\in(\mathbb{N}\cup\{0\})^{d},\) are finite. Using the Hölder inequality \(\|g_{\alpha}*\theta_{n}^{(\alpha)}\|_{L^{p}}\leq\|g_{\alpha}\|_{L^{p}}\|\theta_{n}^{(\alpha)}\|_{L^{q}},\) we have \[g_{n}=\sum_{|\alpha|\leq k}g_{\alpha}*\theta_{n}^{(\alpha)}\in L^{p}(\mathbb{R}^{d}),\ \ \text{since}\ \ g_{\alpha}*\theta_{n}^{(\alpha)}\in L^{p}(\mathbb{R}^{d}),\ n\in\mathbb{N}. \tag{8}\] Finally, \((\theta_{n}^{(\alpha)}(x))_{n}=(n^{d+|\alpha|}\theta^{(\alpha)}(nx))_{n},\)\(x\in\mathbb{R}^{d},\) implies \((\theta_{n}^{(\alpha)})_{n}\in\mathcal{E}^{M}_{L^{q}(\mathbb{R}^{d})}.\) So, \((g_{n})_{n}\in\mathcal{E}^{M}_{L^{p}(\mathbb{R}^{d})}.\)

**Remark 3.2**.: The case \(p=1\) should be treated in another way. We exclude this case in order to simplify our exposition.

Recall, if \(a\in S_{1,0}^{m},\) then it determines a pseudo-differential operator on \(L^{p}(\mathbb{R}^{d}),\) with the domain \[D_{\text{Op}\,a}=\{g\in L^{p}(\mathbb{R}^{d}):\mathcal{F}^{-1}(a(\xi)\mathcal{F}g(\xi))\in L^{p}(\mathbb{R}^{d})\}.\] Since \(\mathcal{S}(\mathbb{R}^{d})\subset D_{\text{Op}\,a},\) these operators are densely defined. Moreover, we have the next lemma.

**Lemma 3.3**.: _Let \(p\in[1,\infty),\) and let \(g_{n},\)\(n\in\mathbb{N},\) be of the form (8). Let \((a_{n})_{n}\in(S_{1,0}^{m})^{\mathbb{N}}\) be such that (6) holds. Then \(((\text{Op }a_{n})g_{n})_{n}\) belongs to \(\mathcal{E}^{M}_{L^{p}(\mathbb{R}^{d})}.\)_

Proof.: Take \(s\in\mathbb{N}\) such that the function \(h_{n},\) defined by \(x\mapsto h_{n}(x)=\int_{\mathbb{R}^{d}}e^{2\pi i\xi x}a_{n}(\xi)(1+2\pi|\xi|^{2})^{-s}d\xi,\) belongs to \(L^{q}(\mathbb{R}^{d})\). Then \[(\text{Op }a_{n})g_{n}(x)=\int_{\mathbb{R}^{d}}(1-\Delta_{x})^{s}e^{2\pi i\xi x}\frac{a_{n}(\xi)}{(1+2\pi|\xi|^{2})^{s}}\ d\xi*g_{n}(x)=h_{n}(x)*(1-\Delta_{x})^{s}g_{n}(x)\qquad(\Delta_{x}\text{ is the Laplacian}). \tag{9}\] By (6), one can find \(C>0\) and \(a\in\mathbb{R}\) so that \(\|h_{n}\|_{L^{q}}\leq Cn^{a},\)\(n\in\mathbb{N}.\) We use (8) and, in \((1-\Delta_{x})^{s}g_{n}(x)\) on the right-hand side of (8), we differentiate only the part \(\theta_{n}^{(\alpha)}\). Clearly, \((\theta_{n}^{(\alpha)})_{n}\in\mathcal{E}^{M}_{L^{q}(\mathbb{R}^{d})}.\) So, using the Hölder inequality again, one obtains that there exists a sequence \((C_{n})_{n}\in\mathcal{E}^{M}_{\mathbb{R}}\) so that \[\|(\text{Op }a_{n})g_{n}\|_{L^{p}(\mathbb{R}^{d})}\leq C_{n},\quad n\in\mathbb{N}.\] This completes the proof.

We continue with the assumptions (cf.
[2] Subsections 8.2, 8.3):

(A1): \[\exists\,r>0,\ \exists\,L>0,\ \exists\,C_{n}>0,\ n\in\mathbb{N},\ \exists\,c_{0}>0,\] \[|a_{n}(\xi)|\geq C_{n}|\xi|^{r},\ |\xi|>L\ \text{and}\ 1/C_{n}\leq c_{0};\]

(A2): \(\rho(\text{Op}\ a_{n})\neq\emptyset,\,n\in\mathbb{N};\)

(A3): \(\sup_{\xi\in\mathbb{R}^{d}}\text{Re}\ a_{n}(\xi)\leq m,\,n\in\mathbb{N},\) for some \(m\in\mathbb{R}.\)

**Proposition 3.4**.: _Assume that a sequence of symbols \((a_{n})_{n}\) satisfies (6) and (A1) - (A3), as well as that all \(\text{Op}\ a_{n}\) have the same domain \(D=D_{\text{Op}\,a_{n}},\,n\in\mathbb{N}.\) Assume that \(p\) satisfies_ \[\left|\frac{1}{2}-\frac{1}{p}\right|<\frac{r}{md}. \tag{10}\] _Then_ \[S_{n}(t)u=\mathcal{F}^{-1}\left(\int_{0}^{t}e^{sa_{n}(\cdot)}ds\ \mathcal{F}u(\cdot)\right),\quad u\in L^{p}(\mathbb{R}^{d}),\quad n\in\mathbb{N}, \tag{11}\] _is a g.e.i.s. generated by \((\text{Op}\ a_{n})_{n}.\) Moreover, \((\text{Op}\ a_{n})_{n}\in\mathcal{A}_{3}^{M}.\) In particular, (G3) holds with \(\sup_{n\in\mathbb{N}}\|R(\lambda,\text{Op}\ a_{n})\|_{\mathcal{L}(L^{p})}<\infty,\,\lambda\in(\omega,\infty).\)_

Proof.: Essentially, assumption (10) implies that \(a_{n}\) determines a one time integrated semigroup (cf. [2]). Moreover, by the implication (i) \(\Rightarrow\) (ii) of Theorem 8.3.6 in [2], we have directly that \(a_{n}\) determines an exponentially bounded integrated semigroup \(S_{n}\) of the form (11), for every \(n.\) By the uniform bound on \(1/C_{n}\) in (A1) and the uniform bound in (A3), we obtain that \((S_{n})_{n}\) is a g.e.i.s. Since \(R(\lambda,\text{Op}\ a_{n})\) is defined by \(S_{n}\) through (4), with the uniform exponential bound of all \(S_{n},\)\(n\in\mathbb{N},\) we have that (G2) holds with \(\omega>|m|,\) as well as that (G3) holds with the uniform bound \(\sup_{n\in\mathbb{N},\lambda>\omega}\|R(\lambda,\text{Op}\ a_{n})\|_{\mathcal{L}(L^{p})}<\infty.\)

**Proposition 3.5**.: _Concerning \((a_{n})_{n},\) assume that all the assumptions of Proposition 3.4 hold. Let \((u_{0,n})_{n}\in\mathcal{E}_{L^{p}(\mathbb{R}^{d})}^{M}\) and \((f_{n})_{n},(\frac{d}{dt}\,f_{n})_{n}\in\mathcal{C}_{exp}^{M}([0,\infty);L^{p}(\mathbb{R}^{d})),\,p\in[1,\infty).\) Then the sequence of equations_ \[w_{n}(t,x)=u_{0,n}(x)+a_{n}(D)\int_{0}^{t}w_{n}(r,x)\ dr+\int_{0}^{t}f_{n}(s,x)\ ds,\quad n\in\mathbb{N}, \tag{12}\] _has a sequence of solutions \((w_{n})_{n}\in\mathcal{C}_{\exp}^{M}([0,\infty);L^{p}(\mathbb{R}^{d}))\) (mild solutions, for every \(n\)) given by \(w_{n}=\frac{d}{dt}v_{n}\), where_ \[v_{n}(t,x)=S_{n}(t)u_{0,n}(x)+\int_{0}^{t}S_{n}(t-r)f_{n}(r,x)\ dr,\quad t\geq 0,\quad x\in\mathbb{R}^{d},\quad n\in\mathbb{N}. \tag{13}\]

Proof.: We have by Proposition 3.4 that \((S_{n})_{n}\) is a g.e.i.s., so the mappings \([0,\infty)\to L^{p}(\mathbb{R}^{d})\) given by \(t\mapsto S_{n}(t)u_{0,n}(x)\) and \(t\mapsto\int_{0}^{t}S_{n}(t-r)f_{n}(r,x)\ dr\) are continuous, as well as their derivatives with respect to \(t,\) for every fixed \(n\). Thus, by the assumptions of the proposition, \(a_{n}(D)u_{0,n}+\frac{d}{dt}f_{n}(0)\in L^{p}(\mathbb{R}^{d}),\ n\in\mathbb{N}.\) This implies that the assumptions of Corollary 3.2.11 in [2] are satisfied. By part c) of this corollary, there exists a unique mild solution to (12) (with fixed \(n\)).
The fact that \((v_{n})_{n}\) and \((w_{n})_{n}\) have a moderate growth with respect to \(n\) follows from the moderate growth of \((S_{n})_{n},\)\((u_{0,n})_{n}\) and \((f_{n})_{n}.\) The sequence of mild solutions \((w_{n})_{n}\) is a very weak solution to (1), in the sense of [5] and [16], because for every fixed \(n,\)\(w_{n}(\cdot,\cdot)\) is the distributional solution to (7), \[\langle\frac{\partial}{\partial t}w_{n}(t,x)-a_{n}(D)w_{n}(t,x)-f_{n}(t,x),\psi(t,x)\rangle=0,\quad\psi(t,x)\in\mathcal{D}([0,\infty)\times\mathbb{R}^{d}), \tag{14}\] \(w_{n}(0,x)=u_{0,n}(x),\ n\in\mathbb{N},\) and \((w_{n})_{n}\) has a moderate growth with respect to \(n\). Moderate growth means that \[\forall\psi\in\mathcal{D}((0,\infty)\times\mathbb{R}^{d})\ \ \exists m=m_{\psi}\in\mathbb{R}, \tag{15}\] \[|\langle w_{n}(t,x),\psi(t,x)\rangle|=O(n^{m}),\ n\to\infty.\] This is a consequence of the fact that the mapping \([0,\infty)\times L^{p}(\mathbb{R}^{d})\ni(t,f)\mapsto S_{n}(t)f\in L^{p}(\mathbb{R}^{d})\) is continuous, because \(t\mapsto S_{n}(t,\cdot)\in L^{p}(\mathbb{R}^{d})\) is continuous, and determines a distribution \(w_{n}(t,x)\in\mathcal{D}^{\prime}((0,\infty)\times\mathbb{R}^{d})\). Note that \(C([0,\infty),L^{p}(\mathbb{R}^{d}))=C([0,\infty)\times L^{q}(\mathbb{R}^{d})),\) with \(f(t,\varphi)=\int f(t,x)\varphi(x)\,dx,\)\(\varphi\in L^{q}(\mathbb{R}^{d}).\) The next corollary serves as a motivation for our approach.

**Corollary 3.6**.: _Let \(a(D)\), \(a\in S_{1,0}^{m}\), be a pseudo-differential operator on \(L^{p}(\mathbb{R}^{d})\) such that (A1) - (A3) and (10) hold. Let \((u_{0,n}(x))_{n}\) and \((f_{n}(t,x))_{n}\) be sequences in \(L^{p}(\mathbb{R}^{d})\) and \(C^{1}([0,\infty),L^{p}(\mathbb{R}^{d}))\), respectively, obtained as regularizations of \(u_{0}\in\mathcal{D}^{\prime}_{L^{p}}(\mathbb{R}^{d})\) and \(f\in C^{1}([0,\infty),\mathcal{D}^{\prime}_{L^{p}}(\mathbb{R}^{d}))\) (as in Remark 3.1). Then the sequence \((v_{n})_{n}\) of the form (13) determines \(w_{n}=\frac{d}{dt}v_{n},\ n\in\mathbb{N}\), a sequence of mild solutions to (12); \((w_{n})_{n}\) has a subsequence \((w_{k_{n}})_{n}\) with elements in \(C([0,\infty),L^{p}(\mathbb{R}^{d}))\) which converges to \(w(t,x)\in\mathcal{D}^{\prime}([0,\infty)\times\mathbb{R}^{d})\). Moreover, \(w\) is a weak solution to (1); it satisfies_ \[\langle\frac{\partial}{\partial t}w(t,x)-a(D)w(t,x)-f(t,x),\psi(t,x)\rangle=0,\quad\psi(t,x)\in\mathcal{D}([0,\infty)\times\mathbb{R}^{d}), \tag{16}\] \[\langle w(0,x),\psi(x)\rangle=\langle u_{0}(x),\psi(x)\rangle,\ \psi\in\mathcal{D}(\mathbb{R}^{d}).\]

Proof.: We fix \(t>0.\) With \(S\) as in (11) (without the subindex), we have that \((S(t)u_{0,n}(\cdot))_{n}\) is a bounded sequence in \(\mathcal{D}^{\prime}_{L^{p}}(\mathbb{R}^{d})\). The same holds for \((\int_{0}^{t}S(t-r)f_{n}(r,\cdot)\ dr)_{n}.\) This implies that there exists a subsequence \(v_{k_{n}}(t,x)=S(t)u_{0,k_{n}}(x)+\int_{0}^{t}S(t-r)f_{k_{n}}(r,x)\ dr\) which converges weakly, as \(n\to\infty,\) to \(v(t,\cdot)\in\mathcal{D}^{\prime}_{L^{p}}(\mathbb{R}^{d})\).
If we consider the set of rational points \(Q_{+}\subset[0,\infty),\)\(Q_{+}=\{q_{1},q_{2},...\},\) and form a convergent subsequence of the already convergent subsequence, by diagonalization we can construct a subsequence (again denoted by) \((v_{k_{n}})_{n}\) so that for every \(t\in Q_{+},\)\(v_{k_{n}}(t,\cdot)\to v(t,\cdot)\in\mathcal{D}^{\prime}_{L^{p}}(\mathbb{R}^{d}),\)\(n\to\infty.\) Since all the elements of this subsequence are continuous with respect to \(t,\) we obtain that \(v_{k_{n}}(t,\cdot)\to v(t,\cdot),\)\(t\in[0,\infty),\)\(n\to\infty,\) where \(v(t,\cdot)\in C^{1}([0,\infty),\mathcal{D}^{\prime}_{L^{p}})\subset\mathcal{D}^{\prime}([0,\infty)\times\mathbb{R}^{d}).\) This is a consequence of the fact that \(t\mapsto\langle v_{k_{n}}(t,x),\psi(x)\rangle\) and \(t\mapsto\langle\frac{d}{dt}v_{k_{n}}(t,x),\psi(x)\rangle,\)\(n\in\mathbb{N},\)\(\psi\in\mathcal{D}_{L^{q}}(\mathbb{R}^{d}),\) are uniformly continuous sequences of functions on any bounded interval \([0,T],\)\(T>0.\) Thus, by the convergence in this space of distributions, \(w=\frac{d}{dt}v\) is a weak solution to (1), i.e. (16) holds.

Assume that \(u_{0}\in L^{p}(\mathbb{R}^{d}),\)\(f\in C^{1}([0,\infty),L^{p}(\mathbb{R}^{d})),\) and \(|D^{\alpha}_{\xi}a_{n}(\xi)|\leq C\langle\xi\rangle^{m-|\alpha|},\)\(n\in\mathbb{N},\)\(\xi\in\mathbb{R}^{d},\) for some \(C>0,\) as well as that (10) and (A1) - (A3) hold. Then the sequence of equations (12) (with \(u_{0}\) and \(f\) instead of \(u_{0,n}\) and \(f_{n}\)) has a sequence of solutions \((w_{n})_{n}\) of the form (13), where \(S_{n}(t)\) is given by (11). Moreover, there exists a subsequence of solutions \((w_{k_{n}})_{n}\) such that \(w_{k_{n}}\to w,\)\(n\to\infty,\) weakly in \(\mathcal{D}^{\prime}([0,\infty)\times\mathbb{R}^{d}).\) This can be proved by arguments similar to those in the proof of the previous corollary.

We apply the above considerations to a special case of equation (12), with \(d=1,\) in order to discuss its dependence on the sequences of coefficients: \[\frac{\partial}{\partial t}w_{n}-P_{n}\left(\frac{\partial}{\partial x}\right)w_{n}=f_{n},\quad n\in\mathbb{N}, \tag{17}\] where \((f_{n})_{n}\) is a moderate sequence in \(C^{1}([0,\infty),L^{p}(\mathbb{R}))\) and \(P_{n}\) is a linear differential operator with constant coefficients belonging to \(\mathcal{E}^{M}_{\mathbb{C}},\) of the form \[P_{n}(\partial/\partial x)=\alpha_{0,n}+i\beta_{0,n}+(\alpha_{1,n}+i\beta_{1,n})\partial/\partial x+(\alpha_{2,n}+i\beta_{2,n})(\partial/\partial x)^{2},\quad n\in\mathbb{N}.\] We note that (G1) holds, since all the domains \(D_{P_{n}},\)\(n\in\mathbb{N},\) are equal to the Sobolev space \(W^{2,p}(\mathbb{R})\). Since we are considering the one-dimensional case, the operators \(P_{n}\) are elliptic, \(n\in\mathbb{N},\) so (A1) and (A2) are fulfilled. A sufficient condition for the application of Proposition 3.5 and Corollary 3.6, originating from (A3), reads: \[\alpha_{2,n}\geq 0\text{ and }\omega_{n}=\max\left\{0,\frac{4\alpha_{2,n}\alpha_{0,n}+\beta_{1,n}^{2}}{4\alpha_{2,n}}\right\}\leq\omega,\ n\in\mathbb{N},\text{ for some }\omega\in\mathbb{R}.\] It shows that whenever \(\beta_{1,n}=O(\sqrt{\alpha_{2,n}}),\)\(\alpha_{2,n}>0\) and \(\alpha_{0,n}=O(1),\) condition (A3) is satisfied. Then, directly using Theorem 4.1 of [9], one has that the g.e.i.s.
\((S_{n})_{n}\) is defined by \(S_{n}(t)u(t,\cdot)=(2\pi)^{-1/2}(\mathcal{F}^{-1}\phi_{t,n})\ast u(t,\cdot),\,n \in\mathbb{N}\) where \(\phi_{t,n}(\xi):=\int_{0}^{t}e^{p_{n}(i\xi)s}ds,\,\xi\in\mathbb{R},\,t>0,\) see (11). Again one has the existence of a very weak solution for (17). Note that by [9], instead of \(L^{p}(\mathbb{R})\) one can consider in (17) spaces: \(C_{0}(\mathbb{R}),\,C_{b}(\mathbb{R}),\,UC_{b}(\mathbb{R}).\) More generally, if one defines the space \(\mathcal{D}_{E},\) where \(E\) is one of quoted spaces, then by the adaptation of Proposition 3.5 and Corollary 3.6, one obtains the corresponding g.e.i.s's. We refer to [4] for the translation invariant spaces \(\mathcal{D}_{E}\) and their duals. **Remark 3.7**.: We can consider equation (7) with \(a_{n}(\xi)=ic_{n}|\xi|^{m},\,\xi\in\mathbb{R}^{d},\,m\in\mathbb{R},\,c_{n}\in \mathbb{R},\)\(|c_{n}|\leq c,\,n\in\mathbb{N},\) with the similar definitions: (Op \(a_{n})(u)=\mathcal{F}^{-1}(a_{n}\mathcal{F}u)\) and their domains \(D_{n}=D\subset L^{p}(\mathbb{R}^{d}),\,n\in\mathbb{N}.\) Now as in [2], Example 8.2.5, let \(m\) and \(p\) satisfy conditions of Theorem 8.3.9 of [2] (with \(k=1\)). Then the sequence \((a_{n})_{n}\in\mathcal{A}^{M}_{3}\) and it determines a g.e.i.s. \((S_{n})_{n}\). We have a similar assertions as in Proposition 3.5 and Corollary 3.6, adapted to \((ic_{n}|\xi|^{m})_{n},\) which will not be repeated. Associated sequences In this section we classify infinitesimal generators and corresponding g.e.i.s.'s and we analyse the relations of generalized infinitesimal generators and g.e.i.s.'s. Moreover, we introduce sequences associated to zero within algebras of Subsection 1.1. The notion of association between sequences is well understood in the literature related to the algebraic theory of generalized function [3], [6]. A sequence \((x_{n})_{n}\in\mathcal{E}_{X}^{M}\) is associated to zero if \(||x_{n}||_{X}\to 0\) as \(n\to\infty\). Denote by \(\mathcal{I}_{X}\) the space of elements of \(\mathcal{E}_{X}^{M}\) which are associated to zero. Such elements make a subspace of \(\mathcal{E}_{X}^{M}\). Similarly, we define \(\mathcal{I}_{\mathcal{L}(X)}\) as the space of sequences \((N_{n})_{n}\in\mathcal{E}_{\mathcal{L}(X)}^{M}\) which converges to zero in \(\mathcal{L}(X)\), as \(n\to\infty\); \(\mathcal{I}_{\mathcal{L}(X)}\) is a subalgebra of \(\mathcal{E}_{\mathcal{L}(X)}^{M}\) under the operation of composition. A subspace of \(\mathcal{C}_{\exp}^{M}([0,\infty);X)\), consisting of elements \((N_{n}(t))_{n}\), \(t\geq 0\), with the property \[\sup_{t\geq 0}\|e^{-\omega t}N_{n}(t)\|_{X}\to 0,\quad n\to\infty,\text{ for some }\omega>0,\] is denoted by \(\mathcal{I}_{\exp}([0,\infty);X).\) Analogously, we define a subspace \(\mathcal{I}_{\exp}([0,\infty);\mathcal{L}(X))\) of the space \(\mathcal{C}_{\exp}^{M}([0,\infty);\mathcal{L}(X))\) containing elements \((N_{n}(t))_{n}\), \(t\geq 0\), such that for some \(\omega>0\), \(\sup_{t\geq 0}\|e^{-\omega t}N_{n}(t)\|_{\mathcal{L}(X)}\to 0\), \(n\to\infty\). 
Two sequences in \(\mathcal{E}_{X}^{M}\) or \(\mathcal{E}_{\mathcal{L}(X)}^{M}\) or \(\mathcal{C}_{\exp}^{M}([0,\infty);X)\) or \(\mathcal{C}_{\exp}^{M}([0,\infty);\mathcal{L}(X))\) are associated in these spaces, respectively, if their difference converges to zero, that is, belongs to the corresponding space \(\mathcal{I}_{X}\) or \(\mathcal{I}_{\mathcal{L}(X)}\) or \(\mathcal{I}_{\exp}([0,\infty);X)\) or \(\mathcal{I}_{\exp}([0,\infty);\mathcal{L}(X)).\) In any of these spaces the association is the relation of equivalence. We will use in the sequel the symbol "\(\sim\)" if the difference of two elements is associated. **Remark 4.1**.: One can also define weak associative sequences of quoted algebras when one involve test functions in the definitions (cf. [3]), as is it implicitly suggested in Section 3, where we considered sequences with values in distribution spaces. Concerning generators, we add to conditions (G1), (G2) and (G3) the following one: * For every \(\lambda>\omega\) there exist \(0<c_{1}(\lambda)<c_{2}(\lambda)\) such that \[\|R(\lambda,A_{n})\|_{\mathcal{L}(X)}\in(c_{1}(\lambda),c_{2}(\lambda)),\quad n \in\mathbb{N}.\] Note that (G4) implies \[R(\lambda,A_{n})y_{n}\to 0\ \text{ if and only if }\ y_{n}\to 0,\ n\to\infty\text{ for all }\lambda\in(\omega,\infty).\] We denote by \(\mathcal{A}_{4}^{M}\), the set of sequences which satisfy (G1), (G2), (G3), (G4). **Lemma 4.2**.: _Let \(A\in\mathcal{A}_{4}^{M}.\) If \((x_{n})_{n}\in\mathbf{D}_{A}\) and \((x_{n})_{n}\in\mathcal{I}_{X}\) then \((A_{n}x_{n})_{n}\in\mathcal{I}_{X}\)._ Proof.: Denote by \(y_{n}=\lambda x_{n}-A_{n}x_{n}\), \(n\in\mathbb{N}\), \(\lambda>\omega.\) Then \((x_{n})_{n}\in\mathbf{D}_{A}\) imply \((y_{n})_{n}\in\mathcal{E}_{X}^{M}\). Since \((R(\lambda,A_{n})y_{n})_{n}=(x_{n})_{n}\in\mathcal{I}_{X}\), according to (G4), one obtains \((y_{n})_{n}\in\mathcal{I}_{X}.\) Finally, \(A_{n}x_{n}=\lambda x_{n}-y_{n}\), \(\lambda>\omega\), \(n\in\mathbb{N}\), implies \((A_{n}x_{n})_{n}\in\mathcal{I}_{X}\) We introduce the relation of equivalence in \(\mathcal{A}_{4}^{M}\): \((A_{n})_{n}\simeq(\tilde{A}_{n})_{n}\) if * \(D=\tilde{D}\), where \(D=D_{A_{n}}\) and \(\tilde{D}=D_{\tilde{A}_{n}}\), \(n\in\mathbb{N}\); * \(\mathbf{D}_{A}=\mathbf{D}_{\tilde{A}}\); * \(((A_{n}-\tilde{A}_{n})x_{n})_{n}\to 0\) in \(X\), as \(n\to\infty\), for every \((x_{n})_{n}\in\mathbf{D}_{A}\). Note, if \((A_{n})_{n},(\tilde{A}_{n})_{n}\) in \(\mathcal{A}_{4}^{M}\) then there always exists \(\omega>0\) such that \((\omega,\infty)\subset\rho(A_{n})\cap\rho(\tilde{A}_{n})\), \(n\in\mathbb{N}\). We say that sequences of resolvents \((R(\lambda,A_{n}))_{n}\) and \((R(\lambda,\tilde{A}_{n}))_{n}\), \(\lambda>\omega\), are associated, and write \[(R(\lambda,A_{n}))_{n}\simeq(R(\lambda,\tilde{A}_{n}))_{n}\] if for every \(\lambda\in(\omega,\infty)\), * \(R=\operatorname{range}R(\lambda,A_{n})=\operatorname{range}R(\lambda,\tilde{A }_{n}),\,n\in\mathbb{N}\); * \(\mathbf{R}_{A}=\mathbf{R}_{\tilde{A}}\); * \(((R(\lambda,A_{n})-R(\lambda,\tilde{A}_{n}))y_{n})_{n}\to 0\) in \(X\), as \(n\to\infty\), for every \((y_{n})_{n}\in\mathcal{E}_{X}^{M}\). 
**Theorem 4.3**.: _Let \((A_{n})_{n},(\tilde{A}_{n})_{n}\in\mathcal{A}_{4}^{M}\) and \((R(\lambda,A_{n}))_{n},(R(\lambda,\tilde{A}_{n}))_{n}\in\mathcal{E}_{\mathcal{ L}(X)}^{M},\,\,\lambda>\omega,\) be corresponding resolvents._ _Then \((A_{n})_{n}\simeq(\tilde{A}_{n})_{n}\) if and only if \((R(\lambda,A_{n}))_{n}\simeq(R(\lambda,\tilde{A}_{n}))_{n},\,\lambda\in(\omega,\infty)\)._ _Proof._ (\(\Rightarrow\)) Let us first assume that \((A_{n})_{n}\simeq(\tilde{A}_{n})_{n}.\) Since \(D_{A_{n}}=D_{\tilde{A}_{n}}=D,\,\,n\in\mathbb{N},\) one directly obtains that \(\operatorname{range}R(\lambda,A_{n})=\operatorname{range}R(\lambda,\tilde{A }_{n})=D\) for any \(\lambda\in(\omega,\infty)\) and \(n\in\mathbb{N}.\) Also, Corollary 2.2 shows that \(\mathbf{D}_{A}=\mathbf{D}_{\tilde{A}}\) implies \(\mathbf{R}_{A}=\mathbf{R}_{\tilde{A}}\). Let \((y_{n})_{n}\in\mathcal{E}_{X}^{M}\) arbitrary and \(\lambda>\omega.\) Denote by \[(R(\lambda,A_{n})y_{n})_{n}=(x_{n})_{n},\,\,(R(\lambda,\tilde{A}_{n})y_{n})_{ n}=(\tilde{x}_{n})_{n}\in\mathbf{R}_{A}=\mathbf{D}_{A}.\] This implies that \[y_{n}=(\lambda I-A_{n})x_{n}=(\lambda I-\tilde{A}_{n})\tilde{x}_{n},\quad n \in\mathbb{N}.\] Now we infer \[\lambda(x_{n}-\tilde{x}_{n}) =A_{n}x_{n}-\tilde{A}_{n}\tilde{x}_{n}=A_{n}x_{n}-\tilde{A}_{n} \tilde{x}_{n}+\tilde{A}_{n}x_{n}-\tilde{A}_{n}x_{n}\] \[=(A_{n}-\tilde{A}_{n})x_{n}+\tilde{A}_{n}(x_{n}-\tilde{x}_{n})\] and \[(\lambda I-\tilde{A}_{n})(x_{n}-\tilde{x}_{n})=(A_{n}-\tilde{A}_{n})x_{n}.\] Since \(((A_{n}-\tilde{A}_{n})x_{n})_{n}\in\mathcal{I}_{X}\) one obtains \(((\lambda I-\tilde{A}_{n})(x_{n}-\tilde{x}_{n}))_{n}\in\mathcal{I}_{X}.\) Applying \((R(\lambda,\tilde{A}_{n}))_{n}\) and using (G4) one obtains \[(x_{n})_{n}\sim(\tilde{x}_{n})_{n}\quad\Leftrightarrow\quad(R(\lambda,A_{n}) y_{n})_{n}\sim(R(\lambda,\tilde{A}_{n})y_{n})_{n}.\] So one obtains \((R(\lambda,A_{n}))_{n}\simeq(R(\lambda,\tilde{A}_{n}))_{n}\), \(\lambda\in(\omega,\infty)\). * Now, let \((R(\lambda,A_{n}))_{n}\simeq(R(\lambda,\tilde{A}_{n}))_{n}\), \(\lambda\in(\omega,\infty).\) Clearly, \(D_{A_{n}}=D_{\tilde{A}_{n}}=D,\)\(n\in\mathbb{N},\) since range \(R(\lambda,A_{n})=\) range \(R(\lambda,\tilde{A}_{n})=D,\)\(n\in\mathbb{N}.\) Corollary 2.2 implies \(\mathbf{D}_{A}=\mathbf{D}_{\tilde{A}}.\) Finally, let us show that (GE3) holds. 
Let \((x_{n})_{n}\in\mathbf{D}_{A},\) be given and denote by \((y_{n})_{n}=(A_{n}x_{n})_{n},\;(\tilde{y}_{n})_{n}=(\tilde{A}_{n}x_{n})_{n}\in \mathcal{E}_{X}^{M}.\) Then, for \(\lambda>\omega,\) \[(\lambda I-A_{n})x_{n} =\lambda x_{n}-y_{n}\quad\Rightarrow x_{n}=\lambda R(\lambda,A_{n})x_{n}-R(\lambda,A_{n})y_{n}\] \[(\lambda I-\tilde{A}_{n})x_{n} =\lambda x_{n}-\tilde{y}_{n}\quad\Rightarrow x_{n}=\lambda R(\lambda,\tilde{A}_{n})x_{n}-R(\lambda, \tilde{A}_{n})\tilde{y}_{n}.\] Next, for \(\lambda>\omega,\) \[R(\lambda,\tilde{A}_{n})\tilde{y}_{n}-R(\lambda,A_{n})y_{n} =\lambda R(\lambda,\tilde{A}_{n})x_{n}-\lambda R(\lambda,A_{n})x_ {n},\] \[=\left(R(\lambda,\tilde{A}_{n})-R(\lambda,A_{n})\right)(\lambda x _{n}).\] This relation and the assumption \((R(\lambda,A_{n}))_{n}\simeq(R(\lambda,\tilde{A}_{n}))_{n}\), \(\lambda>\omega\) imply that \((R(\lambda,\tilde{A}_{n})\tilde{y}_{n})_{n}\sim(R(\lambda,A_{n})y_{n})_{n}.\) On the other hand, since \[R(\lambda,\tilde{A}_{n})\tilde{y}_{n}-R(\lambda,A_{n})y_{n} =R(\lambda,\tilde{A}_{n})\tilde{y}_{n}-R(\lambda,A_{n})y_{n}\pm R (\lambda,\tilde{A}_{n})y_{n}\] \[=R(\lambda,\tilde{A}_{n})(\tilde{y}_{n}-y_{n})+(R(\lambda,\tilde{A }_{n})-R(\lambda,A_{n}))y_{n},\] we conclude that \[(R(\lambda,\tilde{A}_{n})\tilde{y}_{n})_{n}\sim(R(\lambda,\tilde{A}_{n})y_{n })_{n},\quad\lambda\in(\omega,\infty).\] Thus, Lemma 4.2 implies \[((\lambda I-\tilde{A}_{n})R(\lambda,\tilde{A}_{n})\tilde{y}_{n})_{n}\sim(( \lambda I-\tilde{A}_{n})R(\lambda,\tilde{A}_{n})y_{n})_{n}\quad\Leftrightarrow \quad(\tilde{y}_{n})_{n}\sim(y_{n})_{n}.\] This means \((A_{n})_{n}\simeq(\tilde{A}_{n})_{n}.\) ### Relations between generators and g.e.i.s.'s We define the relation of equivalence for g.e.i.s.'s in the sense of association: **Definition 4.1**.: _Let \((S_{n})_{n}\) and \((\tilde{S}_{n})_{n}\) be g.e.i.s.'s determined by \((A_{n})_{n},(\tilde{A}_{n})_{n}\in\mathcal{A}_{4}^{M}.\) Then \((S_{n})_{n}\simeq(\tilde{S}_{n})_{n}\) if \(((S_{n}-\tilde{S}_{n})x_{n})_{n}\in\mathcal{I}_{\exp}([0,\infty);X)\) for any \((x_{n})_{\in}\mathcal{E}_{X}^{M}\) and the sequences of resolvents \((R(\lambda,A_{n}))_{n}\) and \((R(\lambda,\tilde{A}_{n}))_{n}\) satisfy (RE1) and (RE2)._ **Theorem 4.4**.: _Assume \((S_{n})_{n}\simeq(\tilde{S}_{n})_{n}.\) Then their infinitesimal generators satisfy \((A_{n})_{n}\simeq(\tilde{A}_{n})_{n}.\)_ Proof.: Let us prove that \(((R(\lambda,A_{n})-R(\lambda,\tilde{A}_{n}))x_{n})_{n}\in\mathcal{I}_{X}\) for any \((x_{n})_{n}\in\mathcal{E}_{X}^{M}.\) By Proposition 4.3, this implies that \((A_{n})_{n}\simeq(\tilde{A}_{n})_{n}.\) Let \(\lambda>\omega\) and \((x_{n})_{n}\in\mathcal{E}_{X}^{M}\) be fixed. Then, \[\left\|(R(\lambda,A_{n})-R(\lambda,\tilde{A}_{n}))x_{n}\right\|_{X} =\left\|\lambda\int_{0}^{\infty}e^{-\lambda t}\left(S_{n}(t)-\tilde {S}_{n}(t)\right)x_{n}\,dt\right\|_{X}\] \[\leq\sup_{t\geq 0}\|e^{-\omega t}(S_{n}(t)-\tilde{S}_{n}(t))x_{n} \|_{X}\left|\lambda\int_{0}^{\infty}e^{-(\lambda-\omega)t}dt\right|\] \[\leq C\sup_{t\geq 0}\|e^{-\omega t}(S_{n}(t)-\tilde{S}_{n}(t))x_{n} \|_{X}\to 0,\quad n\to\infty.\] This completes the proof. Next, we introduce sets of sequences of strong infinitesimal generators denoted by \(\mathcal{A}_{5}^{M}.\) Denote by (G5) assumption (5), i.e. 
* \(\sup_{\operatorname{Re}\lambda>\omega}\|\lambda^{b}R(\lambda,A_{n})\|_{ \mathcal{L}(X)}\leq M_{n},\)\(n\in\mathbb{N},\) for some \(b>0\) and \((M_{n})_{n}\in\mathcal{E}_{\mathbb{R}}^{M}.\) We say that a sequence \((A_{n})_{n}\) is a sequence of strong infinitesimal generators, that is \((A_{n})_{n}\in\mathcal{A}_{5}^{M},\) if \((A_{n})_{n}\in\mathcal{A}_{4}^{M}\) and (G5) holds. Note that if \((A_{n})_{n}\) satisfies (G5) then it also satisfies condition (G3). Let us introduce a relation of equivalence in \(\mathcal{A}_{5}^{M}\) as follows: \((A_{n})_{n}\simeq_{0}(\tilde{A}_{n})_{n}\) if \((A_{n})_{n}\simeq(\tilde{A}_{n})_{n}\) and * there exists \(b>0\) such that \(\sup_{\operatorname{Re}\lambda>\omega}\left\|\lambda^{b}\left(R(\lambda,A_{n} )-R(\lambda,\tilde{A}_{n})\right)x_{n}\right\|_{X}\to 0,\)\(n\to\infty,\) for all \((x_{n})_{n}\in\mathcal{E}_{X}^{M}.\) **Theorem 4.5**.: _If \((A_{n})_{n}\simeq_{0}(\tilde{A}_{n})_{n}\) in \(\mathcal{A}_{5}^{M},\) then \((S_{n})_{n}\simeq(\tilde{S}_{n})_{n}.\)_ Proof.: Assumption (G5) and Theorem 2.4 imply that \((A_{n})_{n}\) and \((\tilde{A}_{n})_{n}\) generate g.e.i.s.'s \((S_{n})_{n}\) and \((\tilde{S}_{n})_{n},\) respectively. Next, as in the proof of Theorem 2.5.1 in [2], \[\|(S_{n}(t)-\tilde{S}_{n}(t))x_{n}\|_{X}\leq\frac{M_{n}e^{\alpha t}}{\pi}\int_ {R}^{\infty}\frac{dr}{r^{1+b}}+\frac{M_{n}e^{\alpha t}}{\pi R^{b}}\int_{0}^{ \pi/2}e^{Rt\cos\theta}\,d\theta,\;n\in\mathbb{N},\;t>0,\] where \[M_{n}=\sup_{\operatorname{Re}\lambda>\omega}\|\lambda^{b}(R(\lambda,A_{n})-R( \lambda,\tilde{A}_{n}))x_{n}\|_{X}.\] Now assumption (GE4) above gives that \(M_{n}\to 0,\)\(n\to\infty.\) This finishes the proof. We recall a one more known assumption for integrated semigroups (called theoretically important, in [15], p.128). Let a sequence \((A_{n})_{n}\in\mathcal{A}_{4}^{M}\) be such that the common domain \(D\) of operators \(A_{n},\)\(n\in\mathbb{N},\) given in (G1), is dense in \(X\) (\(\overline{D}=X\)). Assume \[\sup_{k\in\mathbb{N}_{0}}\sup_{\lambda>\omega}\|(\lambda-\omega)^{k+1}(R( \lambda,A_{n})/\lambda)^{(k)}/k!\|_{\mathcal{L}(X)}\leq M_{n},\quad n\in \mathbb{N}, \tag{18}\] for some \((M_{n})_{n}\in\mathcal{E}_{\mathbb{R}}^{M}.\) Then, by Theorem 3.3.2 in [2], the sequence \((A_{n})_{n}\) generate a sequence of exponentially bounded integrated semigroups \((S_{n})_{n}\) such that \(\|S_{n}(t)\|_{\mathcal{L}(X)}\leq M_{n}e^{\omega t};\) so \((S_{n})_{n}\in\mathcal{C}_{\exp}^{M}([0,\infty);\mathcal{L}(X)).\) This classical result implies the following assertion. **Proposition 4.6**.: _Let \((A_{n})_{n},(\tilde{A}_{n})_{n}\in\mathcal{A}_{4}^{M}\) satisfy (18), so that the common domain of \(A_{n},\)\(\tilde{A}_{n},\)\(n\in\mathbb{N},\)\(D\) is dense in \(X.\) Assume that for all \((x_{n})_{n}\in\mathcal{E}_{X}^{M}\)_ \[\sup_{k\in\mathbb{N}_{0}}\sup_{\lambda>\omega}\|(\lambda-\omega)^{k+1}(((R( \lambda,A_{n})-R(\lambda,\tilde{A}_{n}))/\lambda)^{(k)}x_{n})/k!\|_{X}\to 0,\ n\to\infty. 
\tag{19}\] _Then \((S_{n})_{n}\simeq(\tilde{S}_{n})_{n}.\)_ Proof.: We know, by Theorem 3.3.2 in [2], that \((A_{n})_{n}\) and \((\tilde{A}_{n})_{n}\) generate g.e.i.s.'s \((S_{n})_{n}\) and \((\tilde{S}_{n})_{n}.\) Let \((x_{n})_{n}\in\mathcal{E}_{X}^{M}\) and let \(m_{n},\)\(n\in\mathbb{N}\) denote, in (19), \[\sup_{k\in\mathbb{N}_{0}}\sup_{\lambda>\omega}\|(\lambda-\omega)^{k+1}(((R( \lambda,A_{n})-R(\lambda,\tilde{A}_{n}))/\lambda)^{(k)}x_{n})/k!\|_{X}\leq m_{ n}\to 0,\quad n\to\infty.\] Now, Theorem 2.4.2 in [2] implies that there exists \((g_{n})_{n}\in(L^{1}_{\rm loc}([0,\infty),X))^{\mathbb{N}}\) so that \[\frac{(R(\lambda,A_{n})-R(\lambda,\tilde{A}_{n}))x_{n}}{\lambda}=\int_{0}^{ \infty}e^{-\lambda t}g_{n}(t)\ dt,\quad\lambda>\omega,\] and \(\|g_{n}(t)\|_{X}\leq m_{n}e^{\omega t},\)\(t\geq 0,\)\(n\in\mathbb{N}.\) Also, we know that there exist \((H_{n})_{n}\) and \((\tilde{H}_{n})_{n}\) in \((L^{1}_{\rm loc}([0,\infty),X))^{\mathbb{N}}\) determined by \(R(\lambda,A_{n})\) and \(R(\lambda,\tilde{A}_{n})\) so that \[\frac{R(\lambda,A_{n})x_{n}}{\lambda}-\frac{R(\lambda,\tilde{A}_{n})x_{n}}{ \lambda}=\int_{0}^{\infty}e^{-\lambda t}(H_{n}(t)-\tilde{H}_{n}(t))\ dt=\int_{0}^{ \infty}e^{-\lambda t}g_{n}(t)\ dt.\] Thus, by the uniqueness of the Laplace transform, we have \(\|H_{n}(t)-\tilde{H}_{n}(t)\|_{X}\leq m_{n}e^{\omega t},\)\(n\in\mathbb{N}.\) With \(H_{n}(t)=S_{n}(t)x_{n}\) and \(\tilde{H}_{n}(t)=\tilde{S}_{n}(t)x_{n},\)\(n\in\mathbb{N},\) we complete the assertion. ### Perturbations We finish the paper with the result concerning the perturbations. It directly follows from the corresponding one in [9], Section 3. The proof is omitted. Let \((A_{n})_{n}\in\mathcal{A}_{5}^{M},\) be a sequence of infinitesimal generators of g.e.i.s \((S_{n})_{n}\in\mathcal{C}_{\rm exp}^{M}([0,\infty);\mathcal{L}(X)).\) Let \((B_{n})_{n}\in\mathcal{E}_{\mathcal{L}(\bar{D})}^{M}\) so that \(||B_{n}||_{\mathcal{L}(\bar{D})}\leq C,\)\(n\in\mathbb{N}\) for some \(C>0\) (\(\bar{D}\) is the closure of \(D=D_{A_{n}},\)\(n\in\mathbb{N}\)). Assume that there exists \(\lambda_{0}\) such that \[B_{n}R(\lambda,A_{n})=R(\lambda,A_{n})B_{n},\quad\lambda>\lambda_{0},\quad n \in\mathbb{N}.\] Let (as in [9]) \[S_{n}^{B_{n}}(t)=e^{tB_{n}}S_{n}(t)-B_{n}\int_{0}^{t}e^{sB_{n}}S_{n}(s)\ ds, \quad t>0,\ \ n\in\mathbb{N}. \tag{20}\] Then we have the next adaptation of Proposition 3.1 in [9]. **Proposition 4.7**.: _Let \((A_{n})_{n}\in\mathcal{A}_{5}^{M}\) and \((B_{n})_{n}\in\mathcal{E}_{\mathcal{L}(\bar{D})}^{M}\) satisfy all assumptions given above._ 1. \((A_{n}+B_{n})_{n}\in\mathcal{A}_{5}^{M}.\) _It is a sequence of infinitesimal generators of_ \((S_{n}^{B_{n}})_{n}\) _given by (_20_) and_ \((S_{n}^{B_{n}})_{n}\) _is g.e.i.s._ 2. _Let_ \((C_{n})_{n}\in\mathcal{I}_{\mathcal{L}(\bar{D})}\) _such that_ \(C_{n}R(\lambda,A_{n})=R(\lambda,A_{n})C_{n},\)__\(\lambda>\lambda_{0},\)__\(n\in\mathbb{N},\) _and_ \(\tilde{B}_{n}=B_{n}+C_{n},\)__\(n\in\mathbb{N}.\) _Then_ \((S_{n}^{B_{n}})_{n}\simeq(S_{n}^{\tilde{B}_{n}})_{n}\) _and_ \((A_{n}+B_{n})_{n}\simeq_{0}\left(A_{n}+\tilde{B}_{n}\right)_{n}\)_._ 3. _Let_ \((A_{n})_{n},\left(\tilde{A}_{n}\right)_{n}\in\mathcal{A}_{5}^{M}.\) _Then_ \[(A_{n})_{n}\simeq_{0}(\tilde{A}_{n})_{n}\quad\Rightarrow\quad(S_{n}^{B_{n}})_{n }\simeq(\tilde{S}_{n}^{B_{n}})_{n}.\] With this proposition we can construct associated infinitesimal generators which produce associated g.e.i.s.'s. 
For example, continuing with the example concerning the differential operator given in the last part of Section 3, let \(X=L^{p}(\mathbb{R}),\)\(p\in(1,\infty),\)\(A=P(D)=a_{0}+a_{1}D+a_{2}D^{2},\)\(p(i\xi)=\sum_{j=0}^{2}a_{j}(i\xi)^{j}\) and make perturbation so that \(A_{n}=P_{n}(D)=a_{0}+1/n+a_{1}D+(a_{2}+1/n)D^{2}\) and \(p_{n}(i\xi)=(a_{0}+1/n)+a_{1}(i\xi)+(a_{2}+1/n)(i\xi)^{2}.\) We have that \(D(A)=D(A_{n})=W^{2,2}(\mathbb{R})\) and \(A-A_{n}=(1+D^{2})/n,\)\(n\in\mathbb{N}.\) So these sequences are associated and \[S(t)f=\mathcal{F}^{-1}\left(\int_{0}^{t}e^{p(i\xi)s}ds\right)*f,\quad S_{n}(t )f=\mathcal{F}^{-1}\left(\int_{0}^{t}e^{p_{n}(i\xi)s}ds\right)*f,\] \(f\in X,\ n\in\mathbb{N},\) and we have that for given \(f\in X\) \[\sup_{t>0}\|S_{n}(t)f-S(t)f\|_{X}\to 0,\quad n\to\infty.\] ## Acknowledgment The paper is supported by the following projects and grants: project F10 of the Serbian Academy of Sciences and Arts, project 142-451-2384 of the Provincial Secretariat for Higher Education and Scientific Research and projects 451-03-47/2023-01/200125 and 337-00-577/2021-09/46 of the Ministry of Education, Science and Technological Development of the Republic of Serbia.
2309.11993
Neural Stochastic Screened Poisson Reconstruction
Reconstructing a surface from a point cloud is an underdetermined problem. We use a neural network to study and quantify this reconstruction uncertainty under a Poisson smoothness prior. Our algorithm addresses the main limitations of existing work and can be fully integrated into the 3D scanning pipeline, from obtaining an initial reconstruction to deciding on the next best sensor position and updating the reconstruction upon capturing more data.
Silvia Sellán, Alec Jacobson
2023-09-21T12:04:15Z
http://arxiv.org/abs/2309.11993v1
# Neural Stochastic Screened Poisson Reconstruction ###### Abstract. Reconstructing a surface from a point cloud is an underdetermined problem. We use a neural network to study and quantify this reconstruction uncertainty under a Poisson smoothness prior. Our algorithm addresses the main limitations of existing work and can be fully integrated into the 3D scanning pipeline, from obtaining an initial reconstruction to deciding on the next best sensor position and updating the reconstruction upon capturing more data. neural surface reconstruction, uncertainty quantification 2023 ## 1. Introduction _Surface reconstruction_ is the process of transforming a discrete set of points in space (a common format for captured 3D geometry) into a complete two-dimensional manifold for use in downstream scientific applications. Given the fundamentally underdetermined nature of the problem, algorithms must rely on priors to decide on an output surface. Absent task-specific knowledge, the predominant geometry processing algorithm for surface reconstruction is _Poisson Surface Reconstruction_ (PSR) [11]. PSR encourages smoothness in the reconstruction through a Partial Differential Equation (PDE) whose solution can be computed efficiently and robustly. Drawing inspiration from it, Dai and Niessner [20] recently introduced a neural approximation of PSR, which sidesteps the PDE perspective on the problem, achieving some performance gains at the cost of losing theoretical guarantees (see Figure 4), overfitting (see Figure 3) and additional requirements (e.g., sensor positioning). Figure 1. We use a neural network to quantify the reconstruction uncertainty in Poisson Surface Reconstruction (center left), allowing us to efficiently select next sensor positions (center right) and update the reconstruction upon capturing data (right). Figure 2. We use neural networks to parametrize the stochastic implicit function describing the reconstructed surface. The mean is a simple five-layered MLP while the covariance includes a SoftPlus (SP) pass and an averaging step to enforce positiveness and symmetry, respectively. Statistically, PSR generates the most probable reconstruction based on the selected prior. This choice inherently defines an entire _posterior_ distribution in the space of possible reconstructions. While _Stochastic_ PSR (Sellan and Jacobson, 2022) computes this distribution for the first time in the context of PSR, it demands a complex discretization scheme and relies on multiple approximations to achieve computational tractability. We build on the work by Sellan and Jacobson (2022) and introduce a neural formulation of Stochastic PSR that provides a full statistical formalism of the reconstruction process while avoiding overfitting and requiring no additional sensor information. Unlike Sellan and Jacobson (2022), we parametrize the mean and covariance of the implicit field describing the reconstructed surface using a neural network (see Figure 2), which we optimize using gradient-based optimization on losses derived from the variational version of the Poisson equation. Our neural formulation also allows us to extend this stochastic perspective beyond the original PSR and into _Screened_ PSR (Kazhdan and Hoppe, 2013). We showcase the power of our algorithm by showing its performance in a breadth of applications made possible by our novel neural perspective. 
In particular, we show how one can fully integrate our algorithm in the 3D scanning pipeline, from obtaining an initial reconstruction to defining a differential camera score that can guide the choice of the next best scanning position and efficiently updating the previous reconstruction (see Figure 1) by fine-tuning our network with additional data. We also explore promising avenues for future work, like latent space generalization over scanning positions for a given object or over a space of objects. ## 2. Related Work ### Surface reconstruction Three-dimensional geometry is often captured by recording the distance from a sensor or _depth camera_ to a real-world object (Ozyesil et al., 2017; Raj et al., 2020). Combining the information from many sensors allows us to represent the raw captured geometry as a discrete set of points in space or _point cloud_. It is often possible to use properties about the sensor positioning or heuristics based on global or local cloud attributes (Hoppe et al., 1992; Konig and Gumhold, 2009; Metzer et al., 2021; Schertler et al., 2017) to equip every point with a normal direction, allow for the slightly more complete representation of an _oriented point cloud_. Despite their ubiquitousness, (oriented) point clouds are a fundamentally underdetermined surface representation: by specifying only a discrete set of points in space through which a surface passes, it describes a theoretically infinite number of possible surfaces. _Surface Reconstruction_ algorithms (see (Berger et al., 2017) for a survey) use a _prior_ to decide between them and output a fully determined surface, usually in a format appropriate for specific downstream tasks like a mesh or an implicit function. These priors range from simple geometric primitives (Schnabel et al., 2009) to global properties like symmetry (Pauly et al., 2008) or self-similarity (Williams et al., 2019), user-specified ones (Sharf et al., 2007) and, especially in recent years, data-driven (Groueix et al., 2018; Remil et al., 2017). Absent task-specific knowledge, a commonly used prior is smoothness. This can be enforced explicitly by considering only surfaces parametrized by a smooth family of functions; for example, spatially-varying polynomials (Alexa et al., 2003; Levin, 2004; Ohtake et al., 2005) and linear combinations of radial basis functions (Carr et al., 2001). Smoothness can also be enforced variationally: _Poisson Surface Reconstruction_(PSR) (Kazhdan and Hoppe, 2013; Kazhdan and Hoppe, 2013) encodes volumetric smoothness away from the input point cloud by minimizing the integrated gradient of the surface's implicit representation and remains one of the best performing general surface reconstruction algorithms in terms of robustness and efficiency (see Table 1 in (Berger et al., 2017)). While the authors solve this optimization problem using the Finite Element Method on a hierarchical grid, Dai and Niessner (2022) have recently proposed using a neural network for a similar task, albeit they suggest forgoing the volumetric integration and instead minimizing the gradient only at the point cloud points (see Figs. 3 and 4). We cover PSR and its variants in more detail in Section 3.1. ### Stochastic Surface Reconstruction From a statistical perspective, the vast majority of surface reconstruction works limit themselves to outputting the likeliest surface given the point cloud observations and their assumed prior. 
Relatively fewer works take this stochastic perspective one step further and compute a _posterior_ distribution of all possible surfaces conditioned on the observations. For example, Pauly et al. (2004) quantify the likelihood of any spatial point belonging to the reconstructed surface by measuring its alignment with the point cloud. Figure 3. Unlike the Poisson-inspired model by Dai and Niessner (2022), we propose using a neural network to solve the Poisson equation in Poisson Surface Reconstruction, avoiding overfitting in sparsely sampled point clouds. Additionally, we provide a full statistical formalism, including variances (bottom right). More recently, _Stochastic Poisson Surface Reconstruction_(Sellan and Jacobson, 2022) reinterprets the classic algorithm as a Gaussian Process, enabling the computation of statistical queries crucial to the reconstruction process and applications such as ray casting, point cloud repair, and collision detection. The authors utilize a Finite Element discretization to compute the mean and covariance functions of the posterior multivariate Gaussian distribution, which represents the likelihood of all possible reconstructions (see Section 3.2), resorting to a several approximations and parameter choices for computational tractability (see Figure 5). In contrast, our work proposes the parametrization of these functions using neural networks, optimizing them through gradient-based methods for a more efficient and flexible approach while still computing the same statistical quantities (see Figure 7). ### Neural PDE solvers We propose solving the Poisson equation in Stochastic PSR using a neural network. As such, our algorithm is one more application in the growing field of neural partial differential equation solvers. A broad class of these are _Physics-Informed Neural Networks_ (PINNs) (see (Cuomo et al., 2022) for a literature review), which effectively soften a PDE and its boundary conditions into integral loss terms that are minimized with, e.g., stochastic gradient descent. If a given PDE accepts a variational formulation, the above process can be done in a more principled way, as shown by Yu et al. (Yu et al., 2018). This is the case for the Poisson equation, which can be equivalently described as a variational Dirichlet energy minimization. This is noted by Sitzmann et al. (Sitzmann et al., 2020), who show the impressive performance of sinusoidal activation functions when applied to Dirichlet-type problems. We borrow from their observations and propose a network architecture with sine activations. ### Next-Best-View planning A key benefit of proposed approach is its integration in the 3D scanning process. Specifically, it allows us to compute a _score_ function that quantifies how useful a proposed next sensor position would be for the reconstruction task. This is a common first step in the _active vision_ or _next-best-view planning_ pipeline, which has been a subject of study for decades (see, e.g., (Chen et al., 2011; Scott et al., 2003) for surveys). In it, prospective sensor placements may be scored by accounting for one or several factors like coverage (Bircher et al., 2016; Connolly, 1985; Yamauchi, 1997), navigation distance, expected reconstruction error (Vasquez-Gomez et al., 2014), scene segmentation entropy (Xu et al., 2015) and redundancy of multiple views (Lauri et al., 2020). 
Orthogonally, works may need to rely on coarse shape priors for the reconstruction (Zhang et al., 2021; Zhou et al., 2020) or balance improving reconstruction in sampled areas with exploring new unsampled ones. More recently, volumetric methods like those by Isler et al. (Isler et al., 2016), and Daudelin and Campbell (Daudelin and Campbell, 2017) use simple heuristics (e.g., distance to the point cloud combined with visibility) to quantify the marginal likelihood of a given point in space being contained in the reconstructed object. This quantity is discretized onto a voxel grid and used to quantify the expected information gain from a given sensor position. Building on these works, our proposed utility function requires no heuristics, coming instead directly from the statistically formalized reconstruction process and, unlike (Daudelin and Campbell, 2017), accounts for the possible spatial interdependencies along a single ray (see Figure 13). Further, since our reconstruction is parametrized by a neural network, this score is differentiable with respect to the sensor parameters, allowing for the efficient discovery of locally optimal camera placements (see Figure 6). While we introduce said novel, differentiable utility function, the development of a comprehensive next-best-view planning pipeline, which would encompass global searches, travel times, collision avoidance, and robot constraints, falls outside the scope of this paper. Finally, outside of the point cloud reconstruction realm, the recent popularity of Neural Radiance Fields (Mildenhall et al., 2021) has also given rise to uncertainty-driven approaches for next-best-view planning in RGB multi-view representations (see, e.g., (Jin et al., 2023; Kong et al., 2023; Smith et al., 2022; Sucar et al., 2021)). ## 3. Background Given an oriented point cloud \(\mathcal{P}\) with points \(p_{1},\ldots,p_{n}\) and corresponding (outward-facing) normal observations \(\vec{n}_{1},\ldots,\vec{n}_{n}\), we consider the implicit reconstruction task of finding a function \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}\) such that \[f(p_{i})=0,\qquad\nabla f(p_{i})=\vec{n}_{i},\qquad\forall i\in\{1,\ldots,n\}. \tag{1}\] The zero levelset \(\mathcal{S}=f^{-1}(\{0\})\) is the reconstructed surface, whose interior is \(\Omega=\{x\in\mathbb{R}^{d}\,:\,f(x)\leq 0\}\). We will be consistent with this convention that places _negative_ implicit function values _inside_ the reconstruction, and _positive_ ones _outside_. Figure 4. Even before overfitting, the result by Dai and Niessner (2022) does not replicate the PSR output, with non-zero gradients away from the data. Figure 5. Sellan and Jacobson (2022) couple the reconstruction lengthscale with their discretization grid spacing, and require a subspace approximation. Our neural network discretizations avoids both issues. ### Poisson Surface Reconstruction _Poisson Surface Reconstruction_ (PSR) (Kazhdan et al., 2006) builds \(f\) in two steps. First, a smear kernel \(F\) is used to interpolate \(\vec{n}_{i}\) into a vector field \(\vec{v}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) defined in a box \(B\) containing \(\mathcal{P}\): \[\vec{v}(x)=\sum_{i=1}^{n}F(x,x_{i})\vec{n}_{i}\,. \tag{2}\] Then, \(f\) is defined as the function whose gradient best matches \(\vec{v}\): \[f=\operatorname*{argmin}_{g}\int_{B}\|\vec{v}(x)-\nabla g(x)\|^{2}\,\,\mathrm{ dx}\,\,. 
\tag{3}\] This variational problem is equivalent to the Poisson equation \[\Delta f=\nabla\cdot\vec{v}(x)\,, \tag{4}\] which the authors discretize using the Finite Element Method on an octree and solve using a purpose-built multigrid algorithm. Since Eq. (4) alone does not uniquely determine \(f\), a valid \(f\) is computed and its values shifted to best satisfy \(f(p_{i})=0\). In _Screened Poisson Surface Reconstruction_, Kazhdan and Hoppe (2013) circumvent this by adding a _screening_ term to Eq. (3) \[f=\operatorname*{argmin}_{g}\int_{B}\|\vec{v}(x)-\nabla g(x)\|^{2}\,\,\mathrm{ dx}+\lambda\sum_{i=1}^{n}g(p_{i})^{2} \tag{5}\] which translates into a Screened Poisson equation \[(\Delta-\lambda\lambda)f=\nabla\cdot\vec{v}(x)\,, \tag{6}\] for a specific masking operator \(I\). ### Stochastic Poisson Surface Reconstruction Screened or not, the output of Poisson reconstruction is a single function \(f\). However, the reconstruction task is fundamentally uncertain: Eq. (1) alone is underdetermined and satisfied by an infinite number of possible functions \(f\). When subject to appropriate boundary conditions, Poisson reconstruction selects one particular solution, which can be understood as the most likely solution under a given prior. Sellan and Jacobson (2022) formalize this statistical intuition by interpreting \((p_{i},\vec{n}_{i})\) as observations of a Gaussian Process and computing the posterior distribution \[\vec{v}\mid(p_{1},\vec{n}_{1}),\ldots,(p_{1},\vec{n}_{n})\sim\mathcal{N}( \vec{v}(x),\Sigma(x,x^{\prime}))\,. \tag{7}\] Eq. (3) is then enforced in the space of distributions, obtaining a posterior for \(f\), \[f\mid(p_{1},\vec{n}_{1}),\ldots,(p_{1},\vec{n}_{n})\sim\mathcal{N}(m(x),k(x,x ^{\prime}))\,, \tag{8}\] whose mean and covariance functions \(m,k\) are solutions to the variational problem \[m =\operatorname*{argmin}_{g}\int_{B}\|\vec{n}(x)-\nabla g(x)\|^{2} \,\,\mathrm{dx}\,, \tag{9}\] \[k =\operatorname*{argmin}_{c}\iint_{B}\|\Sigma(x_{1},x_{2})-\mathrm{De}(x_{1},x_{2})\|_{F}^{2}\,\,\mathrm{dx}_{1}\,\mathrm{dx}_{2}\,, \tag{10}\] where \(\mathrm{De}(x_{1},x_{2})\) is the \(d\times d\) matrix whose \(i,j\) entries are \[\frac{\partial^{2}}{\partial a_{i}\partial b_{j}}c(a,b)\bigg{|}_{a=x_{1},b=x_{2}} \tag{11}\] In the same way of Eq. (3), Eqs. (9) and (10) can be written as Poisson-style PDEs that are solved using the Finite Element Method on a uniform or hierarchical grid. Like the original work by Kazhdan et al. (2006), Sellan and Jacobson (2022) shift the values of \(m\) and \(k\) after the fact to satisfy \(m(p_{i})=k(p_{i},p_{i})=0\) on average. ## 4. Method We propose discretizing \(g\) and \(c\) in Eqs. (9) and (10) using neural networks parametrized by weights \(\theta\) and \(\phi\) and solving them directly using gradient-based optimization. ### Loss Given \(s\) samples \(x_{1},\ldots,x_{s}\in\mathbb{R}^{d}\) drawn from a uniform distribution of \(B\) (see Figure 8), let us define the _Dirichlet mean loss_ as \[\mathcal{L}_{D}^{m}(\theta)=\frac{|B|}{s}\sum_{i=1}^{s}\|\vec{n}(x_{i})-\nabla g _{\theta}(x_{i})\|^{2} \tag{12}\] Figure 6. We provide a differentiable utility function that we can optimize to explore local next-best-views. Figure 7. Like Sellán and Jacobson (2022), our algorithm can respond to statistical queries related to the reconstruction. 
and its covariance counterpart (13) By Monte Carlo integration, we have (14) and (15) Thus, the functions \(g_{\mathbf{\sigma}^{\star}}\) and \(c_{\phi^{\star}}\) parametrized by the minimizers \[\{\theta^{\star},\phi^{\star}\}=\operatorname*{argmin}_{\theta,\phi}\mathcal{L} _{D}^{m}(\theta)+\mathcal{L}_{D}^{k}(\phi) \tag{16}\] are solutions to the variational problem in Eqs. (9) and (10) when restricted to the space of neural-network-parametrized functions. Thus, they are also Poisson solutions. It should be noted that, if one substitutes the samples \(x_{i}\) with the points in the input point cloud \(p_{i}\) in Eq. (12), \(\mathcal{L}_{D}^{m}(\theta)\) is identical to the loss proposed by Dai and Niessner (2022). However, our decoupling of the sampling from the point cloud is critical. Importantly, it is only by sampling from the volumetric bounding box in Eq. (12) that we can claim to be approximating the volumetric integral in Eq. (14) and thus solving a Poisson equation. Theoretically, this choice has the effect of making our algorithm into a strict generalization of PSR (see Figure 4); in practice, it imposes a volumetric smoothness prior that avoids overfitting (see Figure 3). An immediate benefit of this neural perspective is the possibility to extend the statistical formalism of Sellan and Jacobson (2022) from the original Poisson Surface Reconstruction (Kazhdan et al., 2006) to its improved, screened version (Kazhdan and Hoppe, 2013). We can do so merely by adding mean and covariance _screen losses_ \[\mathcal{L}_{S}^{m}(\theta)=\frac{1}{n}\sum_{i=1}^{n}\|g_{\theta}(p_{i})\|^{2},\quad\mathcal{L}_{S}^{k}(\phi)=\frac{1}{n}\sum_{i=1}^{n}\|c_{\phi}(p_{i},p_{i })\|^{2}\,, \tag{17}\] which we combine with the Dirichlet losses to reach our total loss \[\mathcal{L}(\theta,\phi)=\mathcal{L}_{D}^{m}(\theta)+\mathcal{L}_{D}^{k}(\phi) +\lambda_{S}\mathcal{L}_{S}^{m}(\theta)+\lambda_{S}\mathcal{L}_{S}^{k}(\phi) \tag{18}\] Inspired by the choice made by Dai and Niessner (2022), which we validate experimentally (see Figure 9), we fix \(\lambda_{S}=100\). ### Data generation To evaluate \(\mathcal{L}(\theta,\phi)\), we first choose \(B\) to be a loose box around the input point cloud and uniformly sample \(x_{1},\ldots,x_{s}\in B\). Then, as described by Sellan and Jacobson (2022), we compute the matrices \[\mathbf{K}_{1}=(F(x_{i},x_{j}))_{i,j}\in\mathbb{R}^{s\times s},\quad\mathbf{ K}_{2}=(F(x_{i},p_{j}))_{i,j}\in\mathbb{R}^{s\times n}\,, \tag{19}\] as well as the lumped sample covariance matrix \[\mathbf{D}\approx\mathbf{K}_{3}=(F(p_{i},p_{j}))_{i,j}\in\mathbb{R}^{n\times n }\,. \tag{20}\] We employ the same approximated Gaussian kernel suggested by the authors and make use of its compact support to efficiently evaluate the above matrices with a KD tree. Using these matrices, we compute the Gaussian Process posterior mean \[\mathbf{\mu}=\mathbf{K}_{2}\mathbf{D}^{-1}\mathbf{N}\,, \tag{21}\] where \(\mathbf{N}\in\mathbb{R}^{n\times d}\) concatenates \(\vec{n}_{1},\ldots,\vec{n}_{n}\), and the covariance \[\mathbf{\Sigma}=\mathbf{K}_{1}-\mathbf{K}_{2}\mathbf{D}^{-1}\mathbf{K}_{2}^{ \top}\,. \tag{22}\] The row entries in \(\mathbf{\mu}\) then correspond to \(\vec{\mu}(x_{i})\), while each scalar entry in \(\mathbf{\Sigma}\) determines the \(d\times d\) matrix \(\Sigma\) through \(\Sigma(x_{i},x_{j})=\mathbf{\Sigma}_{ij}\mathbf{I}\). As we validate experimentally in Figure 8, sampling \(B\) uniformly during training is necessary to maintain the theoretical guarantees in Eqs. 14 and 15. 
More elaborate strategies beyond this work's scope (e.g., Metropolis-Hastings integration) that would result in weights being added in Eqs. 14 and 15 may yield performance improvements. ### Architecture & Training We model \(g_{\theta}\) and \(c_{\phi}\) using two five-layered MLPs (see Figure 11) with 512 internal hidden units and sine activation functions (Sitzmann et al., 2020). Our covariance network \(c_{\phi}\) also includes a SoftPlus layer to enforce positivity, followed by an averaging \((c_{\phi}(x_{1},x_{2})+c_{\phi}(x_{2},x_{1}))/2\) (see Figure 2). Combined with Schwarz's theorem, this forces \(\operatorname{De}(x_{1},x_{2})\) in Eq. (15) to be symmetric by construction. We Figure 8. We choose to draw samples uniformly from a bounding box around the point cloud after observing overfitting when using different strategies. Figure 10. For our choice of hyperparameters, the Poisson losses regularly dominate over the screening terms. Figure 9. Our screen weight balances smoothness with input fidelity, but can complicate convergence at high values. experimented with residual connection layers as suggested by Yu et al. (2018), but found no significant performance improvement. At each epoch, we generate 100,000 covariance and 100,000 mean Poisson samples \(x_{t}\) together with an equal number of screening samples selected from the point cloud \(p_{t}\) (with repetition if necessary) as detailed in Section 4.2. This sampling results in four datasets (covariance, mean, covariance screening and mean screening). We cycle through all four with repetition until they are all exhausted with a 512 batch size, evaluating our losses and backpropagating through them to compute the gradient of \(\mathcal{L}(\theta,\phi)\) with respect to \((\theta,\phi)\). We then use the Adam (Kingma and Ba, 2014) optimizer with learning rate \(10^{-4}\) and weight decay \(10^{-5}\). We repeat this process for a number of epochs between 50 and 200 (see Figure 10). Implementation detailsWe implement our algorithm in Python, using PyTorch to build and train our model and Gyytoolbox(Sellan et al., 2023) for common geometry processing subroutes. In our 3.0Ghz 18-core Linux machine with a 48 GB NVIDIA RTX A6000 graphics card and 528 GB RAM, our unoptimized implementation lasts around 30 seconds to train each epoch, the main bottleneck being the backpropagation through the \(\mathbf{D}\) operator in Eq. (11). For Figures 3 and 4, we implemented the algorithm by Dai and Niessner (2022) following their instructions in the absence of author-provided code. We rendered our 3D results using Blender. ## 5. Results & Applications ### 3D Scanning integration Once a point cloud has been captured, our method can be used to compute all kinds of statistical queries useful to the reconstruction (Figure 7) in the same way as described by Sellan and Jacobson (2022). However, our novel neural perspective goes one qualitative step further and allows for a full integration into the scanning process, providing feedback over where to scan next and efficiently updating a given reconstruction upon capturing more data. #### 5.1.1. Ray casting Given a captured scan and a proposed sensor position \(\mathbf{r}\) and direction \(\mathbf{d}\), a crucial question is where a ray travelling from the sensor would intersect the surface. In traditional volumetric rendering terms, this amounts to computing the _opacity_ along the ray, or the likelihood that a ray emanating from the sensor reaches a given distance without terminating. 
Sellan and Jacobson (2022) suggest computing the marginal probabilities along the ray \[p(t)=P(f(\mathbf{r}+t\mathbf{d})\leq 0) \tag{23}\] and interpreting these as densities \[\rho(t)=\frac{p(t)}{1-p(t)} \tag{24}\] that they propose integrating to compute the opacity \[o(t)=1-e^{-\int_{0}^{t}\rho(\tau)\,\mathrm{d}\tau}. \tag{25}\] However, we note that this expression for the opacity is usually employed in the context of gases, for which the effects of inter-particle interactions are negligible and one can assume that the likelihood of encountering a gas particle at time \(\tau\) is independent of encountering one at time \(\tau+\mathrm{d}\tau\), giving validity to the integral in Eq. (25). This independence assumption does not hold for the case of uncertain solids, as evidenced by Figure 13: while the marginal likelihood is \(p(t)=0.5\) for all \(t\) between \(t_{1}\) and \(t_{2}\), there is no configuration of the shape for which a ray terminates at \(t\). Statistically, this is because the point at time \(t\) is fully correlated with the point at time \(t_{1}\). While Figure 13 is an extreme example, this difference appears in general reconstruction examples (see Figure 14) Accounting for these correlations is simple. Instead of Eq. (25), one can compute the opacity as the joint probability that \(f\) was positive at every point in the ray prior to \(\mathbf{r}+t\mathbf{d}t\): \[o(t)=P(f(\mathbf{r}+\mathbf{r}\mathbf{d})>0,\forall\tau<t). \tag{26}\] We uniformly discretize the interval \([0,t]\) such that it amounts to querying a cumulative multivariate Gaussian. Fortunately, as shown by Marmin et al. (2015, Sec. 6), this expression can be differentiated with respect to the entries in the covariance matrix with the aid of Plackett's formula (Berman, 1987). We use the PyTorch implementation of this formula by Marmin (2023) for this task. Figure 11. Too few hidden layers can limit the geometric detailed captured un our reconstructions. At the same time, we observe diminishing returns and difficulty with covariance convergence for higher layer numbers. Figure 12. Our local next-best-view search is best combined with a global search, where the scores of different local optima are compared. #### 5.1.2. Next view planning As seen above, the time travelled by a ray from a given camera position before colliding with the surface can be interpreted as a random variable, whose cumulative distribution function is the opacity in Eq. (26). Crucially, by Foubini's theorem, this means one can compute the expected collision time as \[\langle t(\mathbf{r},\mathbf{d})\rangle=\int_{0}^{\infty}\left(1-o(\tau) \right)\mathrm{d}\tau\,, \tag{27}\] leading to the expected collision point \[\mathbf{p}^{\star}(\mathbf{r},\mathbf{d})=\mathbf{r}+\langle t(\mathbf{r}, \mathbf{d})\rangle\,\mathbf{d}\,. \tag{28}\] The optimal sensor position will be one that generates a new point cloud point in an area of high variance. Therefore, it makes sense to define the _score_ of a camera as \[u(\mathbf{r},\mathbf{d})=\sigma(\mathbf{p}^{\star}(\mathbf{r},\mathbf{d}))=c _{\phi\star}(\mathbf{p}^{\star}(\mathbf{r},\mathbf{d}),\mathbf{p}^{\star}( \mathbf{r},\mathbf{d}))\,. \tag{29}\] While Sellan and Jacobson (2022) propose a camera scoring criteria, our novel neural perspective allows us to backpropagate through \(c_{\phi}\), meaning that we can compute the gradient of the score with respect to camera parameters \((\mathbf{r},\mathbf{d})\), and find an optimal camera position with gradient descent. 
We show the potential of this contribution in Figure 6, inspired by Fig. 26 by Sellan and Jacobson (2022). This gradient-based next view angle optimization will often converge to suboptimal local minima. Indeed, as we show in Figure 12, it is better combined with a global search by sampling several initial sensor positions, backpropagating to find an optimum near them, and then choosing the converged camera with the best global score. In Figure 15, we quantify the quality of our subsequent chosen views of a mechanical object by showing they improve on randomly sampled ones. Only in this simplified setup in which views are sampled from a sphere around the object and the directions are constrained to aim to the same spatial point, we are able to compare also to other heuristics like furthest-point sampling, which we show our more generally applicable method matches or outperforms. #### 5.1.3. Fine-tuning Once a new sensor position is chosen and a new scan is taken, points are added to the cloud. Traditional algorithms like PSR would then require investing in an updated discretization and entirely new Poisson solve to obtain an updated reconstruction. Fortunately, our neural perspective allows us to take advantage of an earlier reconstruction to update it more efficiently. Indeed, as shown in Figure 16, we may consider our model's training on the initial point cloud as a _pretraining_ of our model, which is _fine-tuned_ for only a few epochs every time new points are captured. Our model can thus be integrated in an end-to-end scanning pipeline, as once an updated mean and variance is obtained, the best next view angle optimization can start again (see Figure 1). Our algorithm can even provide a stopping criterion, in the form of the integrated uncertainty proposed by Sellan and Jacobson (2022). ### Generalization Another major advantage of our neural formalism over a traditional one is the possibility to train our network on many given reconstructions and trust it to generalize to similar-yet-unseen data. This can circumvent expensive optimizations in cases where one has access to a large training set of point clouds and must quickly make inference on a newly observed set of points. We show a prototypical example of what such a process could look like in Figure 17, where a training set of point clouds is captured by scanning a shape from several different angles. Our model is then expanded to accept a latent encoding \(z\), the values of which are trained simultaneously with the model parameters in the "autocoder" style proposed by Park et al. (2019). When a new scan \(\mathcal{S}\) of the object is captured, test-time optimization (with the model parameters frozen) produces an optimal latent encoding for the new point cloud. This reconstruction can be used as-is or fine-tuned for a very limited number of epochs for a final reconstruction. We believe this generalization capability can prove useful in industrial applications, where one may be able to produce a number of partial training scans of an object. Then, objects on an assembly line can be quickly scanned and projected into the learned latent space of partial scans. As we show in Figure 18, our model's statistical formalism can then be used (in the form of the point cloud's average log likelihood) to identify foreign objects or defective pieces. Figure 14. Accounting for correlations leads to significant differences in the ray termination distribution for general point cloud reconstruction examples. Figure 13. 
Sellan and Jacobson (2022) consider only marginal likelihoods to compute the termination probability along a ray. This leads to inaccuracies in cases with high correlations among spatial points (see text). One can also use our model to generalize over a space of different-yet-similar shapes, as we show in Figure 19, where a latent space of scans is learned over 20 diverse human scans generated using STAR (Osman et al., 2020). Upon capturing a new scan, test-time latent code optimization can efficiently provide a novel reconstruction. ## 6. Limitations & Conclusion As we have shown, a key advantage of our neural formulation is the possibility to iteratively fine-tune reconstructions upon capturing more data. To fully take advantage of our method's efficiency, one may need to optimize its runtime, which we did not do beyond asymptotics. We believe the clearest avenues for speedups are exploring non-uniform distributions for data generation and task-specific weight initializations. We introduce a method for formalizing reconstruction uncertainty using a neural network. However, it should be noted that this uncertainty is encoded by the Gaussian Process used to generate data, while the network is merely solving a PDE. A promising avenue for future work is circumventing the GP altogether, using Machine Learning uncertainty quantification techniques to obtain a posterior distribution directly from the input point cloud. While this may mean deviating from Poisson Surface Reconstruction, it could present a major improvement in accuracy (removing the need for covariance matrix lumping) and applicability (allowing for sensor-specific non-Gaussian noise patterns). All our generalization results (Figures 17, 18 and 19) use identical (virtual) scanning devices, and every input point cloud is re-scaled to the unit cube; as such, we do not expect our results to generalize beyond these choices. Future work could mitigate this; for example, by learning a latent space of device parameters and positions as suggested by Martin-Brualla et al. (2021) in the context of NeRF. While uncertainty quantification has become a common consideration in neighboring fields like Computer Vision and Robotics (Kendall and Gal, 2017), it remains rare for Computer Graphics works to expose their algorithmic uncertainties. It is our hope that as our tool set grows and our field's application realm diversifies, our work can serve as a first step in the right direction.
2301.00064
The orbital kinematics of eta Carinae over three periastra with a possible detection of the elusive secondary's motion
The binary eta Carinae is the closest example of a very massive star, which may have formed through a merger during its Great Eruption in the mid-nineteenth century. We aimed to confirm and improve the kinematics using a spectroscopic data set taken with the CTIO 1.5 m telescope over the time period of 2008-2020, covering three periastron passages of the highly eccentric orbit. We measure line variability of H-alpha and H-beta, where the radial velocity and orbital kinematics of the primary star were measured from the H-beta emission line using a bisector method. At phases away from periastron, we observed the He II 4686 emission moving opposite the primary star, consistent with a possible Wolf-Rayet companion, although with a seemingly narrow emission line. This could represent the first detection of emission from the companion.
Emily Strawn, Noel D. Richardson, Anthony F. J. Moffat, Nour Ibrahim, Alexis Lane, Connor Pickett, André-Nicolas Chené, Michael F. Corcoran, Augusto Damineli, Theodore R. Gull, D. John Hillier, Patrick Morris, Herbert Pablo, Joshua D. Thomas, Ian R. Stevens, Mairan Teodoro, Gerd Weigelt
2022-12-30T22:12:31Z
http://arxiv.org/abs/2301.00064v1
The orbital kinematics of \(\eta\) Carinae over three periastra with a possible detection of the elusive secondary's motion ###### Abstract The binary \(\eta\) Carinae is the closest example of a very massive star, which may have formed through a merger during its Great Eruption in the mid-nineteenth century. We aimed to confirm and improve the kinematics using a spectroscopic data set taken with the CTIO 1.5 m telescope over the time period of 2008-2020, covering three periastron passages of the highly eccentric orbit. We measure line variability of H\(\alpha\) and H\(\beta\), where the radial velocity and orbital kinematics of the primary star were measured from the H\(\beta\) emission line using a bisector method. At phases away from periastron, we observed the He ii 4686 emission moving opposite the primary star, consistent with a possible Wolf-Rayet companion, although with a seemingly narrow emission line. This could represent the first detection of emission from the companion. keywords: techniques: spectroscopic -- stars: massive -- stars: variables: S Doradus -- stars: winds, outflows -- binaries: spectroscopic -- stars: individual: \(\eta\) Carinae ## 1 Introduction The binary star system \(\eta\) Carinae is known for being one of the most massive and luminous binaries in our local galaxy (Davidson and Humphreys, 2012). The two stars are locked in a highly eccentric orbit (Damineli, 1996; Damineli et al., 1997). Employing these stars is the Homunculus nebula which was formed by a large eruption in the mid-nineteenth century (e.g., Currie et al., 1996). The Great Eruption that formed the Homunculus nebula was recently modeled to be the product of a binary merger in a triple system leading to the current orbit (Portegies Zwart and van den Heuvel, 2016; Hirai et al., 2021), supported by light echo observations (e.g., Smith et al., 2018) and an extended central high-mass torus-like structure surrounding the central binary (Morris et al., 2017). In this scenario, the luminous blue variable primary star is currently orbited by a secondary star that is a classical Wolf-Rayet star, as discussed by Smith et al. (2018). The system began as a hierarchical triple, and mass transfer led to the initial primary becoming a hydrogen-deficient Wolf-Rayet star. Mass transfer causes the orbits to become unstable, which leads to the merger and leaves behind the highly eccentric binary system we see today. An alternate model for the eruption relies on the fact that \(\eta\) Car is a binary in a highly eccentric orbit, and proposes that the periastron events triggered large mass transfer events that caused the eruptions (Kashi and Soker, 2010). A similar model was used to explain the much less massive eruption that was seen from the SMC system HD 5980 during its LBV-like outburst (e.g., Koenigsberger et al., 2021). While the binary nature of the system was inferred by Damineli (1996b) and Damineli et al. (1997), the orbit of the system has mostly eluded observers since the discovery of the spectroscopic events by Damineli (1996a). Davidson (1997) criticized the first orbit published by Damineli et al. (1997) and published a higher eccentricity model using the same data as Damineli et al. (1997). Since these first attempts to derive the orbital motion of the system, very few observationally derived models have appeared in the literature, with most references to the orbit being inferred for modeling purposes. Recently, Grant et al. 
(2020) used archival moderate-resolution Gemini/GMOS spectra from 2009 to fit the hydrogen lines using multiple, weighted Gaussians to measure radial velocities corrected to account for motion from strong stellar winds. They derived a single-lined spectroscopic orbit based on the upper Balmer lines to be \(T_{0}=2454848\) (HJD), \(e=0.91\), \(K_{1}=69\) km s\({}^{-1}\), and \(\omega_{\rm pri}=241^{\circ}\), with the period of 2022.7 d that has been widely adopted based on multi-wavelength observations (e.g., Teodoro et al., 2016). These are broadly consistent with the smoothed-particle hydrodynamical (SPH) models used to describe variability across the electromagnetic spectrum (e.g., Madura et al., 2013) including the X-ray light curves (e.g., Okazaki et al., 2008), optical He i absorption variability (Richardson et al., 2016), and the near-UV emission observed with the _Hubble Space Telescope_ (Madura and Groh, 2012). While the results of Grant et al. (2020) establish the orbital parameters with the greatest precision to date, there are potential issues with the determination of orbital elements from hydrogen lines in \(\eta\) Car's spectrum, as the strong wind of the primary causes the effective photospheric radius to be further out from the central star for lower energy transitions. Indeed, Grant et al. (2020) found better results with higher-order Balmer lines than with the optically thick H\(\alpha\) or H\(\beta\). This is a known effect for evolved Wolf-Rayet stars, where the observed semi-amplitude can change with the ionization potential of the line measured because lower-energy emission lines tend to form further out in the wind, where they are more likely to be perturbed by the companion star, as seen in \(\gamma^{2}\) Vel (Richardson et al., 2017). This effect causes differences from the true orbital motion for lower energy transitions, making it difficult to determine accurate orbits (Grant et al., 2020). Grant and Blundell (2022) confirmed that their methods used for emission-line stars worked for the WR binaries WR 133 and WR 140, which have combined spectroscopic and interferometric orbits (Richardson et al., 2021; Thomas et al., 2021). The primary star in the \(\eta\) Car system is a luminous blue variable star, with the largest measured mass-loss rate for a massive star, \(\dot{M}=8.5\times 10^{-4}M_{\odot}\) yr\({}^{-1}\), and a terminal wind speed of \(v_{\infty}=420\) km s\({}^{-1}\) (Davidson and Humphreys, 1997; Groh et al., 2012). Prior to the recent kinematic studies of Grant et al. (2020) and Grant and Blundell (2022), the best constraints on the companion star parameters, while indirect, came from the X-ray variability analyses from _RXTE_, _Swift_, and _NICER_ observations of the system (Corcoran et al., 2001, 2017; Espinoza-Galeas et al., 2022). These analyses point to a secondary star with a mass-loss rate on the order of \(\dot{M}\sim 10^{-5}M_{\odot}\) yr\({}^{-1}\) and a terminal velocity of \(v_{\infty}\sim 3000\) km s\({}^{-1}\) (Pittard and Corcoran, 2002). These values are broadly in agreement with the suggestion based on the merger models and mass-loss parameters that the remaining secondary would be a Wolf-Rayet star. Despite recent work with long-baseline near-infrared interferometry by Weigelt et al. (2021), no direct detection of the companion star has been made to date. From the interferometric data, a minimum primary-secondary flux ratio of \(\sim\)50 was derived in the \(K\)-band (Weigelt et al., 2007).
Given the extreme luminosity of the LBV primary, this is consistent with any O or WR star in the Galaxy. The evolution of the secondary star may well have been significantly modified by interactions and mass exchange during formation of the present-day binary, but if the current secondary star is a classical H-free Wolf-Rayet star as suggested by Smith et al. (2018) and Hirai et al. (2021), or a hydrogen-rich WNh star, possibly the best line to detect it in the optical would be the He ii \(\lambda\)4686 line, which is the dominant optical line for the nitrogen-rich WR stars as well as the hydrogen-rich WNh stars. Most of the observations of He ii were made near periastron, where the He ii excess can be explained by ionization of He i in the colliding winds in a highly eccentric binary. Teodoro et al. (2016) showed that the variability could be explained with the smoothed-particle hydrodynamics models of Madura et al. (2013). Away from periastron (\(0.04<\phi<0.96\)), the He ii line is typically not observed with moderate resolving power and a nominal S/N of \(\sim\)100. In this paper, we present our analysis of the spectroscopy collected with the CTIO 1.5 m telescope and the CHIRON spectrograph, as well as the data collected with the previous spectrograph on that telescope, with the aim of better constraining the kinematics of the system. These observations are described in Section 2. In Section 3, we review the variability in the two Balmer lines we can easily measure (H\(\alpha\) and H\(\beta\)). Section 4 describes our techniques of measuring the radial velocity of the H\(\beta\) line, and presents observations of He ii away from periastron in the hope of determining the orbit of the companion star. We discuss our findings in Section 5, and conclude this study in Section 6. ## 2 Observations We collected high resolution spectra of \(\eta\) Carinae during the periastron passages of 2009, 2014, and 2020. Many additional spectra were taken in the intermediate phases of the binary orbit as well. These were collected with the 1.5 m telescope at Cerro Tololo Inter-American Observatory (CTIO 1.5 m), using both the current CHIRON and the former fiber-fed echelle spectrograph (FECH). The data from the 2009 spectroscopic event spanned from 2008 October 16 to 2010 March 28, with approximately one spectrum taken every night between 2008 December 18 and 2009 February 19; these were previously used by Richardson et al. (2010, 2015) and cover the spectral range \(\sim\)4700–7200 Å. These spectra with the fiber echelle1 were collected in late 2009 and 2010, and often had a signal-to-noise ratio around 80-100 per resolution element with \(R\sim 40,000\). In total, we analyzed 406 spectra of the system. Figure 1: A comparison of an example Gemini-GMOS spectrum used by Grant et al. (2020) with the CTIO data from the fiber echelle (FECH) in 2009 and with more recent CHIRON data at the same phase (phases given in the legend). Note that the pixel sizes are indicated for the spectra, which is most obvious for the GMOS spectrum. The spectra are offset by orbital cycle, which highlights the complexities in the echelle spectra compared to the GMOS data.
Footnote 1: [http://www.ctio.noao.edu/noao/content/CHIRON](http://www.ctio.noao.edu/noao/content/CHIRON) The 2014-2020 data were collected with the new CHIRON spectrograph (Tokovinin et al., 2013), and spanned the time between 2012 March 2 and 2020 March 16, with high-cadence time-series spanning the 2014 and 2020 periastron passages from 2013 December 29 through 2015 April 21, as well as from 2020 January 3 to 2020 March 16, when the telescope shut down for the COVID-19 pandemic. The CHIRON spectra cover the spectral range of \(\sim\)4500–8000 Å, with some spectral gaps between orders in the red portion of the spectrum. The data covering the 2014 periastron passage were previously used by both Richardson et al. (2016) and Teodoro et al. (2016). These data have a spectral resolution of \(R\sim 80,000\) and typically have a signal-to-noise of 150-200 in the continuum and were all reduced with the CHIRON pipeline, which is most recently described by Paredes et al. (2021). In addition to the pipeline reductions, we perform a blaze correction using fits from an A0V star, as done by Richardson et al. (2016), allowing orders to be merged if needed. This process resulted in a flat continuum in regions that were line-free. These observations were all fiber-fed with the fiber spanning 2.7'' on the sky, meaning that the data include the nebular emission from the Homunculus nebula formed from the eruption of \(\eta\) Car in the mid-nineteenth century, as well as the Weigelt knots (Weigelt & Ebersberger, 1986) that are thought to have originated from the second eruption in the 1890s. The CHIRON spectra are normalized through a comparison with a measured blaze function from the star HR 4468 (B9.5V), as was done in the analysis of Richardson et al. (2016). Example spectra are shown in Figure 1, with a comparison to a spectrum used by Grant et al. (2020) and Grant & Blundell (2022). ## 3 Measured variability in the Balmer lines, H\(\alpha\) and H\(\beta\) Our observations are unique in providing both the spectral resolution and signal-to-noise to measure the line strength (equivalent width) and profile morphology of the emitting gas for the H\(\alpha\) and H\(\beta\) lines of \(\eta\) Carinae. Here, we detail the observations of the variability of the hydrogen lines. We estimate errors on equivalent width using the methods of Vollmann & Eversberg (2006). We note that the analysis of Richardson et al. (2015) includes many optical wind lines near the 2009 periastron passage and phases far from periastron. These line profiles all show minimum line strength near periastron as the secondary's high ionizing radiation goes behind the primary star's optically thick wind. We use a phase convention in which the low-ionization state observed by Gaviola (1953) in 1948 is deemed to be cycle 1, so that the low-ionization state starting in Feb. 2020 marks the start of cycle 14. We leave the kinematic analysis of the metal lines, with higher signal-to-noise spectra, to a future study; here we aim to confirm the results of Grant et al. (2020) and Grant & Blundell (2022). ### H\(\alpha\) Richardson et al. (2010) examined the variability of the H\(\alpha\) profile of \(\eta\) Carinae across the 2009 periastron passage. They found that the profile's strength decreased during the periastron passage and reached a minimum a few days following the X-ray minimum.
They postulated that the changes were caused by the drop in the ionizing flux from the secondary when the companion moved to the far side. In addition, they observed an appearance of a P Cygni absorption profile and an absorption component at \(-\)145 km s\({}^{-1}\), which also appeared as the secondary's ionizing radiation was blocked by the primary star's optically thick wind. Richardson et al. (2015) expanded upon this model to describe the variations of the optical He i profiles while documenting the variability of the optical wind lines across the 2009 periastron passage. We measured the equivalent width of H\(\alpha\) for all of our spectra in the range 6500–6650 Å. These results are shown in Fig. 2, where we show the measurements both compared to time and to binary phase, assuming a period of 2022.7 d, and the epoch point given by Teodoro et al. (2016), which represents the time of the periastron passage based on a comparison of the He ii observations (Teodoro et al., 2016) to SPH models of the colliding winds. Broadly speaking, the strength of the line relative to the locally normalized continuum shows a fast decrease and recovery near each periastron passage. Richardson et al. (2010) found that the variability is smoother when considering the photometric flux in the determination of the equivalent widths. We did not make this correction in these data, but do see the similarities of the events in the context of the raw equivalent widths. There is no strong long-term variability in these observations, and the 2014 and 2020 observations were nearly identical in their variations. Recently, Damineli et al. (2019, 2021) found that there are long-term brightness and spectral changes of the system that have been ongoing for decades and accelerated since the mid-1990s, but now seem to be stabilizing. The shape of the H\(\alpha\) variability has remained similar over these three well-observed periastron passages, and the line strength has stabilized across the past two cycles, which could indicate that the system is mostly stable aside from the binary-induced variability. Richardson et al. (2010) also documented the timing of the appearance of the P Cygni absorption component for H\(\alpha\). In the 2009 observations we see the absorption occurring at approximately HJD 2454840.7 (\(\phi\approx 12.00\)) and still persisting through the last observation, 2454881.7 (\(\phi\approx 12.02\)). In 2014, a P Cygni absorption occurs at 2456874.5 (\(\phi\approx 13.00\)), persisting until the object was no longer observable at HJD 2456887.5 (\(\phi\approx 13.01\)). In 2020, the absorption is seen at 2458886.8 (\(\phi\approx 14.01\)) and still detected through the last observation on HJD 2458925.0 (\(\phi\approx 14.02\)). A narrow absorption component was observed near \(-\)145 km s\({}^{-1}\) in the 2009 observations (Richardson et al., 2010) from 2454836.7 (\(\phi\approx 12.00\)) through the last day of observation, 2454881.7 (\(\phi\approx 12.02\)). In 2014, an absorption in the same location is observed from 2456863.5 (\(\phi\approx 13.00\)) to 2456977.8 (\(\phi\approx 13.06\)). There is no absorption at this location strong enough to make a definitive detection in 2020. Pickett et al. (2022) documented the changes in absorption behavior for the Na D complex at these velocities, showing that the absorption from these components associated with the Little Homunculus, formed during the second eruption in the 1890s, is weakening with time and moving to bluer velocities.
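To make the equivalent-width measurement described above concrete, the following is a minimal sketch of the computation with the error estimate of Vollmann & Eversberg (2006), assuming a wavelength array in Å and continuum-normalized flux; the window bounds and S/N value are placeholders, not the exact values used for every spectrum.

```python
import numpy as np

def equivalent_width(wave, flux, w1=6500.0, w2=6650.0, snr=150.0):
    """Equivalent width over [w1, w2] (Angstrom) for continuum-normalized
    flux, with the error estimate of Vollmann & Eversberg (2006).

    With this sign convention an emission line gives W < 0; the text quotes
    line strengths, i.e. |W|.
    """
    m = (wave >= w1) & (wave <= w2)
    w, f = wave[m], flux[m]
    ew = np.trapz(1.0 - f, w)      # integrate (1 - F/Fc) over the window
    dlam = w2 - w1                 # width of the integration window
    fbar = f.mean()                # mean flux in the window (continuum Fc = 1)
    sigma = np.sqrt(1.0 + 1.0 / fbar) * (dlam - ew) / snr
    return ew, sigma
```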
### H\(\beta\) While some of the H\(\beta\) variability was documented for the 2009 periastron passage of \(\eta\) Car by Richardson et al. (2015), the full variability and timing of the changes are still not well documented in the literature. The lack of a more quantitative assessment of the variability is in part due to the lower signal-to-noise in the H\(\beta\) data from the 2009 event. Similar to the H\(\alpha\) profile, H\(\beta\) experiences a P Cygni type absorption near \(-500\) km s\({}^{-1}\) near periastron. We note the absorption appears in 2009 at approximately HJD 2454837.7 (\(\phi\approx 12.00\)) and persists through the last observation taken on 2454879.7 (\(\phi\approx 12.01\)). In 2014, it appears at approximately 2456863.6 (\(\phi\approx 13.00\)) and ends during a seasonal gap in observations beginning at 2456887.5 (\(\phi\approx 13.01\)). In 2020, the P Cygni absorption is observed beginning at 2458886.8 (\(\phi\approx 14.00\)) and continues through the last day of observations on 2458925.0 (\(\phi\approx 14.02\)). This transient absorption was determined to be originating from the downstream bowshock by Gull et al. (2022). A narrow absorption component, previously observed by Richardson et al. (2015), is detected near \(-145\) km s\({}^{-1}\) in the 2009 observations from 2454837.7 (\(\phi\approx 12.00\)) and proceeds through the end of observations on 2454879.7 (\(\phi\approx 12.01\)). In 2014, this absorption is observed beginning at 2456864.5 (\(\phi\approx 13.00\)) and also persists through the last day of observations, 2456887.5 (\(\phi\approx 13.01\)). As with H\(\alpha\), there is no discernible absorption at \(-145\) km s\({}^{-1}\) in 2020 observations. Figure 3 shows the time series variation in the H\(\beta\) equivalent width over the last two periastron cycles. We note that the 2009 observations are not included as they are recorded with the former echelle spectrograph and have lower signal-to-noise, though the appearance of the P Cygni absorption remains reliable. As with the H\(\alpha\) equivalent widths, there is a consistency in the decrease in equivalent width for the time period corresponding to times close to periastron. Figure 2: Variation in H\(\alpha\) emission line with respect to time (left) and phase (right); with the data taken between October 2008 and March 2020. Data taken from the previous echelle spectrograph is indicated by open squares and data from the new CHIRON spectrograph is indicated by solid dots. In the phase plot, we show the different cycles in different colors to clarify the timing of each data set. Furthermore, the errors are typically the size of the points or smaller. The phase convention shown in the right panel references the low-ionization spectrum near periastron first observed by Gaviola (1953). Figure 3: Variation in H\(\beta\) emission line with respect to time (left) and phase (right); with the data taken with CHIRON spectrograph as the FECH data were too noisy to determine equivalent widths. In the phase plot, we show the two recent cycles in different colors to clarify the timing of each data set. Furthermore, the errors are typically the size of the points or smaller. The phase convention shown in the right panel references the low-ionization spectrum near periastron first observed by Gaviola (1953). ## 4 Line Kinematics We measured the bisector velocity of H\(\beta\) and the centroid position of the He ii \(\lambda\)4686 line. H\(\beta\) measurements were taken during the 2009, 2014, and 2020 periastron events, and the He ii 4686 measurements were taken for 2014 and 2018 and do not include time within \(\phi=0.95-1.05\), to avoid observations affected by colliding-wind effects near periastron, which, to first order, behave with a \(D^{-1}\) trend for adiabatic and \(D^{-2}\) or steeper for radiative conditions, where \(D\) is the orbital separation, which is small and quickly changing at periastron. Teodoro et al. (2016) show the behavior of the He ii 4686 line near periastron in detail. All measurements are tabulated in online supplementary data. ### Bisector velocities of H\(\beta\) The process used to find the bisector velocity of H\(\beta\) is demonstrated in Fig. 4. Grant et al. (2020) and Grant & Blundell (2022) applied a method of Gaussian decomposition with many components to moderate-resolution spectroscopy taken with Gemini-South and GMOS. Their GMOS spectra of \(\eta\) Car are limited in that the highest resolving power available is \(\sim 4400\), whereas our spectroscopy has a resolving power of \(40,000\) from the fiber echelle, and \(80,000\) for the CHIRON data. The profiles become more complex at higher spectral resolution, making this multiple-Gaussian method more difficult to implement, likely requiring more than twice as many components compared to the work of Grant et al. (2020). In order to create a simpler measurement that has reproducible results for any spectroscopic data set, we implemented a bisector technique. We began by fitting two fourth-degree polynomials, one to the red side and another on the blue side of the profile, in order to smooth over any noise inherent in the data. Through this fit, we were then able to establish the bisecting velocity position at each emission level with higher precision. Example fits are shown in red in Fig. 4. Between heights of \(4\times\) and \(10\times\) the continuum, we calculated the bisecting velocity. This region was chosen based on the relatively vertical nature of the bisector there. We then created comparisons of all spectra and found that the bisecting line was nearly always vertical in the region of \(5-6\times\) the normalized continuum. We therefore used this region, measuring the velocity at every \(0.1\) increment between these values, and adopting an average measurement as the radial velocity for the spectrum. The choice of a common emission height with which to measure the bisector velocities allows us confidence in the results, as it would relate to gas emitting from the same region for all spectra, whether the line is weak or strong in that particular observation. The resulting velocities are shown in Fig. 5. Figure 4: Example polynomial fits to H\(\beta\) emission lines from 2009, 2014, and 2020 periastron events. The profiles are shown in black with the portion of the line wings fit with a polynomial shown in red. The bisector velocity is shown as a vertical line corresponding to the normalized flux at the same level as the measurements. Near the edges of these ranges, the bisector often appears to curve due to either profile asymmetries or larger errors in the polynomial fits. The bisector velocities between normalized flux levels of 5 and 6, indicated by the dashed lines, were averaged to obtain a final relative velocity for each day. Further details are given in Section 4.1. We provide this bisector code via GitHub2 for future use on comparable datasets.
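As an illustration of this procedure (a sketch under stated assumptions, not the published GitHub code), the following fits a fourth-degree polynomial to each wing and averages the bisector between normalized flux levels of 5 and 6; the rest-wavelength constant and grid sizes are assumptions.

```python
import numpy as np

C_KMS = 299792.458     # speed of light, km/s
HBETA = 4861.35        # assumed H-beta rest wavelength, Angstrom

def bisector_velocity(wave, flux, lo=5.0, hi=6.0, step=0.1, deg=4):
    """Average bisector velocity between normalized flux levels lo and hi.

    flux is normalized to the continuum; each wing of the profile is
    smoothed with a polynomial of degree deg before the bisector is taken.
    """
    vel = (wave - HBETA) / HBETA * C_KMS
    peak = int(np.argmax(flux))
    # Fit one polynomial per wing to smooth over noise in the data.
    blue = np.poly1d(np.polyfit(vel[:peak + 1], flux[:peak + 1], deg))
    red = np.poly1d(np.polyfit(vel[peak:], flux[peak:], deg))
    # Invert flux(velocity) -> velocity(flux) on a dense grid for each wing.
    vb = np.linspace(vel[0], vel[peak], 2000)
    vr = np.linspace(vel[peak], vel[-1], 2000)
    levels = np.arange(lo, hi + step / 2.0, step)
    v_blue = np.interp(levels, blue(vb), vb)            # blue wing: flux rises
    v_red = np.interp(levels, red(vr)[::-1], vr[::-1])  # red wing: flux falls
    return float(np.mean(0.5 * (v_blue + v_red)))
```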
Footnote 2: [https://github.com/EmilyScode/Radial-Velocity-from-a-Polynomial-Fit-Bisector.git](https://github.com/EmilyScode/Radial-Velocity-from-a-Polynomial-Fit-Bisector.git) ### He ii \(\lambda\)4686 The region surrounding the He ii \(\lambda\)4686 transition is complicated by several features that are near, but not blended with, the line, including narrow emission lines from the Weigelt knots (Weigelt & Ebersberger, 1986) along with wind emission from Fe ii and He i lines (for a figure showing that region of the spectrum, see Teodoro et al., 2016). While these do not directly overlap with the core of the He ii line, they can complicate the fitting if not properly avoided. The He ii \(\lambda\)4686 line has usually been observed near periastron passage, when the line is dominated by the wind-wind collisions, which has been documented and modeled by Teodoro et al. (2016). The line was discovered by Steiner & Damineli (2004). Since then, multiple studies have attempted to explain the formation of the stronger line observed near periastron (\(L_{\rm He\,II}\sim 300\,L_{\odot}\); Martin et al., 2006; Mehner et al., 2011, 2015; Teodoro et al., 2012; Davidson et al., 2015), but the colliding wind model best reproduces the emission near periastron. This emission is strongest for times within \(\pm\)0.05 in phase from periastron, as detailed in the recent analysis of Teodoro et al. (2016). Outside of the phase intervals near periastron, the He ii \(\lambda\)4686 line could only be properly observed with high spectral resolution and high signal-to-noise data (Teodoro et al., 2016). Our data taken with CHIRON after the 2014 periastron passage have the necessary sensitivity to detect this notably weak emission line. We measure the radial velocity of this line outside of \(\phi=\pm\)0.05 of periastron, to minimize the effects of the colliding winds, which peak at periastron. As shown in Fig. 6, we fit a Gaussian to the He ii emission line and use the centroid position to determine the radial velocity. Unfortunately, the continuum placement for the feature is not reliable enough to measure equivalent widths with precision, but the line was nearly constant in equivalent width when considering the errors of these measurements. Before fitting the 2018 observations near apastron, we needed to average up to ten observations to improve the signal-to-noise ratio. The resulting velocities are shown in Fig. 7, with a total of 19 data points. The averaging of the points from the 2018 data resulted in a smaller dispersion of the data than seen in the earlier points. The He ii line is normally absent in the spectra of luminous blue variables. The extreme mass-loss rate of \(\eta\) Car does not preclude this emission line originating in the primary star's wind, as some combinations of parameters can create this weak emission feature in CMFGEN models. These models are, however, very sensitive to the adopted mass-loss rate and stellar radii. The He ii can be formed through strong wind collisions at times close to periastron (e.g., Teodoro et al., 2016). However, this line moves in opposition to the primary star's motion, so we consider this feature as originating from the companion during these phases far from periastron for the remainder of this analysis.
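A minimal sketch of the centroid measurement just described, fitting a single Gaussian plus a constant continuum to a window around He ii \(\lambda\)4686; the rest wavelength and starting guesses here are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 299792.458
HEII = 4685.7   # assumed He II 4686 rest wavelength, Angstrom

def gaussian(x, amp, mu, sig, cont):
    return amp * np.exp(-0.5 * ((x - mu) / sig) ** 2) + cont

def heii_centroid_velocity(wave, flux):
    """Radial velocity from the centroid of a Gaussian fit to He II 4686,
    for continuum-normalized flux over a window clear of nearby features."""
    p0 = [flux.max() - 1.0, wave[np.argmax(flux)], 1.0, 1.0]  # initial guess
    popt, _ = curve_fit(gaussian, wave, flux, p0=p0)
    mu = popt[1]                       # fitted line center in Angstrom
    return (mu - HEII) / HEII * C_KMS
```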
### Orbital Kinematics and Observed Elements We began our fit of the kinematics of the primary star with the BinaryStarSolver software (Milson et al., 2020; Barton & Milson, 2020). The resulting orbit is broadly in agreement with the orbit derived with H\(\beta\) velocities by Grant et al. (2020), with the orbital elements given in Table 1. Our resulting fits are in agreement with those of Grant et al. (2020), so we did not perform the same correction for the stellar wind effects as in their analysis. Figure 5: Radial velocities from H\(\beta\) bisector measurements compared to time (left) and orbital phase (right). The orbital fit is described in Section 4.3 and typical errors are on the order of the size of the points. Figure 6: Gaussian fit to an example He ii emission line with a vertical line plotted at the fitted peak. This particular spectrum had a signal-to-noise ratio of 210 per resolution element. In an attempt to fully assess the errors of the parameters, we used the PHOEBE code (PHySics Of Eclipsing BinariEs; Prsa & Zwitter 2005; Prsa et al. 2016) to verify the orbital elements. The latest version of PHOEBE incorporates the Markov chain Monte Carlo package emcee (Foreman-Mackey et al., 2013). Unlike traditional orbit fitting routines, PHOEBE fits using the projected semi-major axis (\(a\sin i\)) rather than the semi-amplitude \(K\), but these are easily interchangeable using \[a\sin i=\frac{(1-e^{2})^{1/2}}{2\pi}KP.\] These orbital elements are also similar to the other published orbital elements measured with H\(\beta\), and the resulting orbit is shown in Fig. 5. The distribution of the errors from the Monte Carlo simulation, shown in Fig. 8, is tightly constrained but shows that various orbital elements have errors that are interdependent with other parameters. While this represents the best solution to the entire data set, we explored how the parameters change if we kept only the densest of the three periastra observed (the 2014 event). Running the PHOEBE code with the MCMC package on just those data resulted in the eccentricity being slightly larger (\(e=0.824\)), the time of periastron being later (HJD 2,456,935.31), and the value of \(a\sin i\) (hence \(K_{1}\)) being slightly larger at 2620.4 \(R_{\odot}\). These values are outside the limits given with our MCMC fit of all of the data, so we caution that the errors in Table 1 are likely underestimated. We include the fit parameters in the same style as Fig. 8 in the online Fig. A1. Once the orbital elements for H\(\beta\) were fit, we proceeded to run a simpler model for the He ii emission. For this PHOEBE model, we keep \(\omega\) fixed to the value representing the primary star from the upper Balmer line results of Grant et al. (2020). However, we do allow the semi-major axis, \(\gamma\)-velocity, \(e\), and time of periastron passage to vary. The resulting orbit is more eccentric than that of the primary star when derived using H\(\beta\) (and a bit more eccentric than the Grant et al. (2020) solution) and is shown in Fig. 7. With future observations of the He ii line at times away from periastron, a combined double-lined orbit of the system with \(\omega\) being consistent for the two stars will be possible. ## 5 Discussion The optical spectrum of \(\eta\) Car is dominated by emission lines from the wind of the primary and its ejecta. The dominant emission lines are the hydrogen Balmer lines, but there are strong lines from He i and Fe ii in the spectrum as well. The He i lines, when considered in non-LTE stellar wind models, are a strong function of the adopted value of the stellar radius.
However, if most of the He i emission comes from the colliding wind interaction region, it forces a larger stellar core radius value for the primary star, \(\sim 120R_{\odot}\) in the preferred models (see Hillier et al., 2001, for many further details). The model of Groh et al. (2012) improved previous spherically symmetric models of Hillier et al. (2001) in that the spectrum was modeled with a cavity carved from the wind of the secondary, which was included along with a central occulter or "coronagraph" that extended \(\sim 0.033^{\prime\prime}\) to allow for stronger He i emission, and better agreement for the P Cygni absorption lines. Given the spectral modeling agreement for the spectroscopically similar star HDE 316285 (Hillier et al., 1998), the strong disagreements for the absorption components and He i lines led to an interpretation that the He i lines are formed in the wind-wind collision region of the system (Nielsen et al., 2007). Indeed, the P Cygni absorption component variability of the optical He i lines seems to represent the outflowing shocked gas from the wind-wind collision region (Richardson et al., 2016). These results all indicate that the best lines in the optical for determination of the orbit may indeed be the upper hydrogen Balmer lines, even if they are likely modified by the wind collisions. All of the measured orbits, including ours, rely on measurements taken when the line profiles are most variable near periastron. This likely causes additional errors in the parameters derived, but we tried to always sample emission from the same line formation region by taking bisector velocities at the same height. \begin{table} \begin{tabular}{c c c c c c c} \hline Line & \(T_{0}\) (HJD\(-\)2400000) & \(e\) & \(K\) (km s\({}^{-1}\)) & \(\omega\) (degrees) & \(\gamma\) (km s\({}^{-1}\)) & Source \\ \hline Pa\(\gamma\) & 48800 \(\pm\) 33 & 0.63 \(\pm\) 0.08 & 53\(\pm\)6 & 286 \(\pm\)6 & \(-15\pm 3\) & Damineli et al. (1997) \\ Pa\(\gamma\), He i 6678 & 48829\(\pm\) 8 & 0.802 \(\pm\) 0.033 & 65.4 \(\pm\) 3.5 & 286 \(\pm\) 8 & -12.1 \(\pm\) 2.7 & Davidson (1997) \\ Pa\(\gamma\), Pa\(\delta\) & 50861 & 0.75 & 50 & 275 & -12 & Damineli et al. (2000) \\ H\(\beta\) & 54854.9 \(\pm^{4.5}_{-1.1}\) & 0.82 \(\pm\)0.02 & 53.0 \({}^{+2.1}_{-1.9}\) & 254 \(\pm\)4 & -25.5 \(\pm\)2.0 & Grant et al. (2020) \\ All Balmer lines & 54848.3 \(\pm\) 0.4 & 0.91 \(\pm\)0.00 & 69.0 \(\pm\) 0.9 & 241 \(\pm\)1 & & Grant et al. (2020) \\ Upper Balmer lines & 54848.4 \(\pm\) 0.4 & 0.89 \(\pm\)0.00 & 69.9 \(\pm\) 0.8 & 246 \(\pm\)1 & & Grant et al. (2020) \\ H\(\beta\) & 56912.2 \(\pm\) 0.3 & 0.8100 \(\pm\)0.0007 & 58.13 \(\pm\)0.08 & 251.43 \(\pm\)0.19 & 6.34 \(\pm\)0.10 & This work (BinaryStarSolver) \\ H\(\beta\) & 56927.4 \(\pm\) 0.5 & 0.8041 \(\pm\)0.0008 & 54.6\(\pm\)0.2 & 260.6 \(\pm\) 0.2 & 4.83 \(\pm\)0.09 & This work (PHOEBE) \\ He ii & 56973.5 \(\pm\) 0.2 & 0.937 \(\pm\)0.001 & 129.5\(\pm\)5.0 & 80.6 (fixed) & 63.1 \(\pm\)0.4 & This work \\ \hline \end{tabular} \end{table} Table 1: Orbital elements from previous publications and the results from this work. For the orbits of Grant et al. (2020), Grant & Blundell (2022), and our work, the period has been held constant at 2022.7 d, while it was fit in the earlier work of Damineli et al. (1997), Davidson (1997), and Damineli et al. (2000) with periods that agree with 2022.7 d within their errors. Note that our errors from the PHOEBE code may be underestimated, especially for the He ii line (see text for details). Figure 7: Radial velocity as determined using centroid positions in He ii emission at phases away from periastron during 2014–2018 with CHIRON. We overplotted the He ii orbit from Table 1, along with the H\(\beta\) solution from our work shifted to the same \(\gamma\)-velocity as the He ii orbit as a grey dashed line. Furthermore, our technique produces nearly the same orbital elements as those from Grant et al. (2020) in the case of H\(\beta\). Grant et al. (2020) proceeded to correct the orbital elements by considering the effects of the outflowing wind. These results all show that the system is a long-period and highly eccentric binary where the primary star is in front of the secondary at periastron, causing the ionization in our line of sight to drop during the "spectroscopic events" due to a wind occultation of the secondary at these times. The results of Grant et al. (2020) show that the higher-order Balmer lines give different results than the lower-level lines such as H\(\alpha\) or H\(\beta\), which is expected as the higher level lines form deeper in the wind (e.g. Hillier et al., 2001). As such, the results of Grant et al. (2020) and Grant & Blundell (2022) should be considered the best for the primary star at the current time. Similar differences in the orbital kinematics are sometimes inferred for Wolf-Rayet stars (e.g., \(\gamma^{2}\) Vel; Richardson et al., 2017). Despite the detection of the He ii \(\lambda\)4686 emission at times near apastron by Teodoro et al. (2016), the exact formation channel for this line remains unclear. The emission lines in colliding wind binaries often vary as a function of the orbit due to the colliding wind line excess (e.g., Hill et al., 2000), and the modeling of these variations has been done in the context of the so-called Luhrs model (Luhrs, 1997). Recently, the excess emission was observed to be a strong cooling contributor when X-ray cooling becomes less efficient in the colliding wind binary WR 140 (Pollock et al., 2021). In WR 140, the Luhrs model was used by Fahed et al. (2011) to explain the variations in the C III \(\lambda\)5696 line near periastron. Figure 8: Results of the Markov chain Monte Carlo fit for the H\(\beta\) velocities. Note that \(\omega_{0}\) refers to the value of \(\omega\) for the primary star. The Luhrs model can explain changes in the radial velocity and the width of the excess emission. As can be seen in Fig. 6, we detect the He ii line with our spectra, but the actual characterization of this line will have large errors in line width due to the limited signal-to-noise for the detection in the spectroscopy. We used the models for WR 140 (Fahed et al., 2011) as a starting point, changing stellar and binary parameters as appropriate to the \(\eta\) Carinae system to investigate if the He ii velocities in Fig. 7 were from colliding wind excess emission. For the velocity of the outflow, we can see that during the periastron passage of 2014, \(\eta\) Car's outflow reached velocities faster than the primary star's wind speed based on the optical He i lines (Richardson et al., 2016), which are slower than the excess absorption seen to reach nearly 2000 km s\({}^{-1}\) in the meta-stable He i \(\lambda\)10830 line (Groh et al., 2010). With these velocities, we expect to see the observed amplitude of the excess increase between the times of 2015 and 2018 like we see in Fig.
7, but with amplitudes of at least 1000 km s\({}^{-1}\), much greater than the \(\sim 100\) km s\({}^{-1}\) observed. Therefore, the analysis of the He ii \(\lambda\)4686 emission line at times away from periastron from the CHIRON spectra is an important observation towards understanding the nature of the companion. We note that the data indicate a narrower emission line profile than expected from the parameters inferred for the secondary. However, the primary star dominates the spectrum, and the motion of this peak opposite the primary indicates that the He ii excess could be from the secondary's wind. In particular, the Luhrs models of the kinematics of the He ii line seem to exclude the possibility that the line is formed in the colliding winds at times away from periastron. The models of Smith et al. (2018) suggest that the companion should be a classical Wolf-Rayet star. The classical hydrogen-free Wolf-Rayet stars can be split into the WN and WC subtypes. The WN stars show strong He and N lines, with the He ii \(\lambda\)4686 typically being the strongest optical line, whereas the WC subtype exhibits strong He, C, and O lines with the C IV \(\lambda\lambda\)5802,5812 doublet often being the strongest optical line. There is also the rare WO subtype, which is similar to the WC subtype but shows more dominant O lines. The WO stars were recently shown to have higher carbon and lower helium content than the WC stars, likely representing the final stages of the WR evolution (Tramper et al., 2015; Aadland et al., 2022). Given the generalized characteristics of WR stars, a WN star would seem the most likely companion star if the He ii \(\lambda\)4686 line is from the companion at times further from periastron. For contrast, the Carina nebula is also the home to several hydrogen-rich Wolf-Rayet stars: WR 22, WR 24, and WR 25 (Rosslowe & Crowther, 2015)3. This type of WR star tends to be considered the higher mass and luminosity extension of the main sequence. As such, these stars have masses in excess of \(\sim 60M_{\odot}\), with the R145 system in the LMC having masses of the two WN stars being 105 and 95 \(M_{\odot}\) (Shenar et al., 2017). Like the classical WN stars, these stars have similar nitrogen and helium spectra, along with stronger emission blended on the Balmer lines which overlap with Pickering He ii lines. The region surrounding the He ii \(\lambda\)5411 line in our \(\eta\) Carinae spectra does not exhibit emission lines at the same epochs as our observations of He ii \(\lambda\)4686, making it difficult to quantify the companion's properties without the higher order He ii lines, which would also be notably weaker than He ii \(\lambda\)4686. Footnote 3: [http://pacrowther.staff.shef.ac.uk/WKcat/](http://pacrowther.staff.shef.ac.uk/WKcat/) With the assumption that the He ii orbit shown in Table 1 is from the companion star, and that the semi-amplitude from the higher-order Balmer lines traces the primary star (Grant et al., 2020), the semi-amplitude ratio shows that the primary star is 2-3 times more massive than the secondary star. This is also an indicator that the companion is not likely a WNh star, as that would imply the primary star could have a mass in excess of 100 \(M_{\odot}\). Models of the system, such as those by Okazaki et al. (2008) and Madura et al. (2013), typically have the masses of the primary and secondary as 90 and 30 \(M_{\odot}\) respectively, broadly in agreement with the kinematics of the orbits presented here.
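As a numerical check on these mass constraints (the mass-function expression itself is written out in the next paragraph), a short sketch evaluating the standard spectroscopic mass function and the semi-amplitude ratio, assuming the "This work (BinaryStarSolver)" H\(\beta\) row and the He ii row of Table 1:

```python
def mass_function(K1, P, e):
    """Spectroscopic mass function in solar masses (K1 in km/s, P in days)."""
    return 1.0361e-7 * (1.0 - e ** 2) ** 1.5 * K1 ** 3 * P

# H-beta (BinaryStarSolver) elements from Table 1:
fM = mass_function(K1=58.13, P=2022.7, e=0.8100)   # ~8.3 Msun, as quoted
# Mass ratio from the two semi-amplitudes (upper Balmer K1, He II K2):
q = 129.5 / 69.0                                   # m1/m2 ~ 1.9
print(f"f(M) = {fM:.2f} Msun, m1/m2 = {q:.2f}")
```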
On the other hand, if \(\eta\) Carinae A has a mass of \(>100M_{\odot}\), the secondary would have a mass on the order of 50-60 \(M_{\odot}\). This is similar to the nearby WNh star in the Carina nebula: WR 22. The mass of this WNh star in an eclipsing system is 56-58 \(M_{\odot}\) (Lenoir-Craig et al., 2022). The tidally-induced pulsations observed by Richardson et al. (2018) were modeled with stars of masses 100 and 30 \(M_{\odot}\), and therefore may also support the higher masses suggested here. Most models for \(\eta\) Car have a preferred orbital inclination of 130-145\({}^{\circ}\) (Madura et al., 2012), which agrees with forbidden [Fe iii] emission observed with _Hubble Space Telescope_'s Space Telescope Imaging Spectrograph. This inclination can be used with the mass function derived from the primary star's orbit, \[f(M)=\frac{m_{2}^{3}\sin^{3}i}{(m_{1}+m_{2})^{2}}=(1.0361\times 10^{-7})(1-e^{2})^{3/2}K_{1}^{3}P\,[M_{\odot}],\] to constrain the system's masses, using the standard measured units and our measured H\(\beta\) orbit from PHOEBE (Table 1). The mass function is \(f(M)=8.30\pm 0.05\) M\({}_{\odot}\), and would indicate a companion star with a mass of at least 60 \(M_{\odot}\) if we assume a primary mass of \(\sim 90\)\(M_{\odot}\). Given the actual mass functions for the measured upper Balmer lines and He ii orbits, the minimum masses required for these measured orbits are \(M\sin^{3}i=102M_{\odot}\) for the LBV primary and \(M\sin^{3}i=55M_{\odot}\) for the secondary, making the companion star's identification as a WNh star more likely. These results are still preliminary and require follow-up observations to constrain the orbits. A WNh star can account for the mass of the secondary star in \(\eta\) Car, but could cause some difficulty for the Great Eruption models of Hirai et al. (2021). In that scenario, the companion star would be a hydrogen-stripped star, contrary to the hydrogen content of the WNh stars. Recently modelled WNh systems such as R144 (Shenar et al., 2021) show that the surface fraction of hydrogen is about 0.4. This does show some amount of lost hydrogen on the surface, so the scenario could still be relevant even if the final star is not a fully stripped classical Wolf-Rayet star, assuming that the evolution of the secondary star has not been significantly influenced by mass exchange prior to or during the merger event hypothesized by both Portegies Zwart & van den Heuvel (2016) and Hirai et al. (2021). ## 6 Conclusions In this paper, we provide an orbital ephemeris for \(\eta\) Carinae measured with a bisector method and high resolution ground-based spectroscopy of the H\(\beta\) emission line, along with an ephemeris for the He ii \(\lambda\)4686 emission line at times far from periastron. Our findings can be summarized as follows: * The H\(\beta\) emission profile tracks the primary star, and our bisector method provides similar results as the multiple-Gaussian fitting method used by Grant et al. (2020). The results show a high eccentricity orbit of the system with the primary star in front of the secondary at periastron. * The weak He ii \(\lambda\)4686 emission tracks opposite the kinematics of the primary star, suggesting it is formed in the secondary star's wind at times away from periastron. This could support the hypothesis of the scenarios presented by Hirai et al.
(2021) for a stellar merger being the cause of the Great Eruption, as the secondary could be a Wolf-Rayet star that has leftover hydrogen on its surface. * With the assumed inclination of 130-145\({}^{\circ}\), the masses of the stars could be around \(\sim\)100 \(M_{\odot}\) for the primary and at least 60 \(M_{\odot}\) for the secondary. However, the mass ratio derived by comparing the two semi-amplitudes is about 1.9. New observations will be needed to better determine precise masses. Future studies will be able to better measure the He ii 4686 orbit and refine its parameters. As shown in Grant et al. (2020), the upper Balmer lines are more likely to reflect the orbital motion of the stars, and the upper Paschen lines will also be useful. However, our work shows that a simpler bisector measurement of higher resolution spectroscopy results in the same derived orbital elements as that of Grant et al. (2020). Furthermore, with better signal-to-noise spectra, we can better determine if the He ii emission near periastron can be reproduced with a Luhrs model or if it is a signature of the companion. With this information, we will be able to more precisely measure the kinematics of the two stars and the mass function, and then we can begin to better understand the current evolutionary status of the system. ## Acknowledgements We thank our referee, Tomer Shenar, for many suggestions that improved this paper. These results are the result of many allocations of telescope time for the CTIO 1.5-m telescope and echelle spectrographs. We thank internal SMARTS allocations at Georgia State University, as well as NOIR Lab (formerly NOAO) allocations of NOAO-09B-153, NOAO-12A-216, NOAO-12B-194, NOAO-13B-328, NOAO-15A-0109, NOAO-18A-0295, NOAO-19B-204, NOIRLab-20A-0054, and NOIRLab-21B-0334. This research has used data from the CTIO/SMARTS 1.5m telescope, which is operated as part of the SMARTS Consortium by RECONS (www.recons.org) members Todd Henry, Hodari James, Wei-Chun Jao, and Leonardo Paredes. At the telescope, observations were carried out by Roberto Aviles and Rodrigo Hinojosa. C.S.P. and A.L. were partially supported by the Embry-Riddle Aeronautical University Undergraduate Research Institute. E.S. acknowledges support from the Arizona Space Grant program. N.D.R., C.S.P., A.L., E.S., and T.R.G. acknowledge support from the _HST_ GO Programs #15611 and #15992. AD thanks FAPESP (2011/51680-6 and 2019/02029-2) for support. AFJM is grateful for financial aid from NSERC (Canada). The material is based upon work supported by NASA under award number 80GSFC21M0002. The work of ANC is supported by NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. ## Data Availability All measurements can be found in Appendix A. Reasonable requests to use the reduced spectra will be granted by the corresponding author.
2309.12242
Weakly-supervised Automated Audio Captioning via text only training
In recent years, datasets of paired audio and captions have enabled remarkable success in automatically generating descriptions for audio clips, namely Automated Audio Captioning (AAC). However, it is labor-intensive and time-consuming to collect a sufficient number of paired audio and captions. Motivated by the recent advances in Contrastive Language-Audio Pretraining (CLAP), we propose a weakly-supervised approach to train an AAC model assuming only text data and a pre-trained CLAP model, alleviating the need for paired target data. Our approach leverages the similarity between audio and text embeddings in CLAP. During training, we learn to reconstruct the text from the CLAP text embedding, and during inference, we decode using the audio embeddings. To mitigate the modality gap between the audio and text embeddings we employ strategies to bridge the gap during training and inference stages. We evaluate our proposed method on Clotho and AudioCaps datasets demonstrating its ability to achieve a relative performance of up to ~83% compared to fully supervised approaches trained with paired target data.
Theodoros Kouzelis, Vassilis Katsouros
2023-09-21T16:40:46Z
http://arxiv.org/abs/2309.12242v1
# Weakly-supervised automated audio captioning via text only training ###### Abstract In recent years, datasets of paired audio and captions have enabled remarkable success in automatically generating descriptions for audio clips, namely Automated Audio Captioning (AAC). However, it is labor-intensive and time-consuming to collect a sufficient number of paired audio and captions. Motivated by the recent advances in Contrastive Language-Audio Pretraining (CLAP), we propose a weakly-supervised approach to train an AAC model assuming only text data and a pre-trained CLAP model, alleviating the need for paired target data. Our approach leverages the similarity between audio and text embeddings in CLAP. During training, we learn to reconstruct the text from the CLAP text embedding, and during inference, we decode using the audio embeddings. To mitigate the modality gap between the audio and text embeddings we employ strategies to bridge the gap during training and inference stages. We evaluate our proposed method on Clotho and AudioCaps datasets demonstrating its ability to achieve a relative performance of up to \(\sim 83\%\) compared to fully supervised approaches trained with paired target data. 1 Our code is available at: [https://github.com/zelaki/wsac](https://github.com/zelaki/wsac) Footnote 1: This work was conducted in the framework of the PREMIERE project (No. 101061303) that is funded by the European Union. Theodoros Kouzelis, Institute for Language and Speech Processing, Athena Research Center, 15125 Marousi, Greece. Automated audio captioning, multi-modal learning, contrastive learning. ## 1 Introduction Audio-Language tasks have recently gained the attention of the audio community with the introduction of Automated Audio Captioning and Language-Based Audio Retrieval in the DCASE Challenge and the release of publicly available Audio-Language datasets such as Clotho [1] and AudioCaps [2]. The intrinsic relationship between Audio and Language presents an opportunity for the development of models that can effectively establish a shared semantic space for the two modalities. Such an approach has recently achieved great success with models like COALA [3], AudioClip [4], and CLAP [5, 6, 7]. These models use parallel audio-text data to train a joint representation, where the embeddings of audio-text pairs are similar. Such models achieve high accuracy in a zero-shot setting in a variety of tasks including Sound Event Classification, Music tasks, and Speech-related tasks [5]. Automated Audio Captioning (AAC) is a multimodal task that aims to generate textual descriptions for a given audio clip. In order to generate meaningful descriptions, a method needs to capture the sound events present in an audio clip and generate a description in natural language. Training audio captioning models requires large datasets of audio-caption pairs, and these are challenging to collect. While great effort has been made, the data scarcity issue of audio captioning still persists. The common datasets in AAC, AudioCaps and Clotho, together contain 50k captions for training, whereas 400k captions are provided in COCO caption [8] for image captioning. Kim et al. [9] observe that due to the limited data, prior arts design decoders with shallow layers that fail to learn generalized language expressivity and are fitted to the small-scale target dataset. Due to this issue, their performance radically decreases when tested on out-of-domain data.
Motivated by these limitations we present an approach to AAC that only requires a pre-trained CLAP model and unpaired captions from a target domain. This alleviates the need for paired audio-text data, and also allows for simple and efficient domain adaptation. Our approach is inspired by recent advances in zero-shot image captioning [10, 11] that leverage the aligned multi-modal latent space provided by CLIP [12], obviating the need for image data during training, and by the recent success of Contrastive Language-Audio models such as CLAP [5] in many downstream tasks. We train a lightweight decoder model to reconstruct texts from their respective CLAP embeddings, and at inference use this decoder to decode the audio embeddings. Our findings align with prior studies in image captioning suggesting that such an approach is suboptimal due to the presence of a phenomenon known as _modality gap_ [13]. The _modality gap_ suggests that embeddings from different data modalities are located in two completely separate regions of the embedding space of multi-modal contrastive models [13]. To mitigate this issue we employ strategies that have been shown to effectively condense the gap in CLIP embeddings [10, 11] and show that they can be effectively utilized for CLAP models. These strategies can be divided into two categories: strategies that condense the gap during _training_ and during _inference_. Experiments on Clotho and AudioCaps datasets show that our weakly-supervised approach can achieve comparable performance to prior fully supervised arts, without requiring any target audio data during training. Our contributions can be summarized as follows: (1) We propose **WSAC:** Weakly-**S**upervised **A**udio **C**aptioning, an AAC approach that requires no auditory in-domain data for training, (2) we demonstrate that the _modality gap_ phenomenon is present in CLAP models, and (3) employ methods that effectively mitigate it. ## 2 Text-only training Our goal is to learn a model that produces a caption for a given audio clip. Unlike fully supervised approaches, during training we only assume that we have access to a set of target domain captions \(\mathcal{C}\). We further assume a pre-trained CLAP model with an audio encoder \(\mathcal{A}_{clap}\) and a text encoder \(\mathcal{T}_{clap}\) trained to project semantically similar audio-text pairs into similar embeddings in a shared embedding space, as presented in Fig. 1 (Left). Given an audio clip \(x_{a}\) and text \(x_{t}\), let \(\mathbf{z_{a}}=\mathcal{A}_{clap}(x_{a})\in\mathbb{R}^{d}\) and \(\mathbf{z_{t}}=\mathcal{T}_{clap}(x_{t})\in\mathbb{R}^{d}\) be their embeddings. First we extract text embeddings \(\mathbf{z}_{t}\) for all \(x_{t}\in\mathcal{C}\), keeping \(\mathcal{T}_{clap}\) frozen. During training, our goal is to learn a network that inverts the CLAP text encoder \(\mathcal{T}_{clap}\). We use a textual decoder \(D\), consisting of a mapping network \(f\) and an auto-regressive language model, to reconstruct the original text \(x_{t}\) from the CLAP text embedding \(\mathbf{z_{t}}\). Following recent work [9], we train our decoder using the prefix language modeling paradigm. Specifically, after passing the text embedding through the mapping network \(f\), we regard \(\mathbf{p}=f(\mathbf{z_{t}})\) as a prefix to the caption.
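To make the setup concrete, the following is a minimal PyTorch-style sketch of such a decoder and its training step; the class name, prefix length, and vocabulary size are illustrative assumptions rather than the exact configuration (the actual sizes are given in Section 4.2), and the loss it computes is the autoregressive cross-entropy formalized in Eq. (1) below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrefixDecoder(nn.Module):
    """Sketch of the decoder D: a mapping network f turns a frozen CLAP text
    embedding into a prefix that conditions a small autoregressive LM."""

    def __init__(self, clap_dim=512, hidden=768, vocab=30522, prefix_len=8):
        super().__init__()
        self.prefix_len, self.hidden = prefix_len, hidden
        self.map = nn.Sequential(                 # mapping network f (2-layer MLP)
            nn.Linear(clap_dim, hidden), nn.GELU(),
            nn.Linear(hidden, prefix_len * hidden))
        self.embed = nn.Embedding(vocab, hidden)
        layer = nn.TransformerEncoderLayer(hidden, nhead=4, batch_first=True)
        self.lm = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, z, tokens):
        # z: (B, clap_dim) CLAP embedding; tokens: (B, T) caption token ids.
        prefix = self.map(z).view(-1, self.prefix_len, self.hidden)
        x = torch.cat([prefix, self.embed(tokens)], dim=1)
        causal = torch.triu(torch.ones(x.size(1), x.size(1), dtype=torch.bool,
                                       device=x.device), diagonal=1)
        h = self.lm(x, mask=causal)
        # Position prefix_len-1 predicts token 0, and so on (Eq. 1 below).
        return self.head(h[:, self.prefix_len - 1:-1])

def training_step(decoder, z_t, tokens):
    logits = decoder(z_t, tokens)                 # (B, T, vocab)
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           tokens.reshape(-1))
```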
Given a text \(t=\{w_{1},w_{2},...,w_{T}\}\), our objective is to minimize the autoregressive cross-entropy loss: \[\mathcal{L}=-\sum_{i=1}^{T}\log D(w_{i}|w_{<i},\mathbf{p}) \tag{1}\] Since the CLAP text embedding is optimized to be similar to the CLAP audio embedding, we can directly run the trained decoder on the audio embeddings \(\mathbf{z}_{a}\) at inference, without any pairwise training on the target dataset. The training and inference stages are presented in Fig. 1 (middle) and (right) respectively. Figure 1: Overview of our proposed approach. **Left:** An illustration of the CLAP training paradigm. The encoders are trained to map semantically similar audio-caption pairs to similar embeddings in a joint representation space. **Middle:** Our proposed weakly supervised training. A frozen CLAP text encoder embeds a caption and a decoder learns to reconstruct the caption from its embedding. **Right:** At inference, we decode the audio embedding extracted from a frozen CLAP audio encoder, using the trained decoder. ## 3 Strategies to Bridge the Modality Gap Directly employing the audio embeddings to infer \(D\) is not optimal due to the presence of the modality gap. Fig. 2 is a visualization of embeddings generated by the pre-trained CLAP model on the Clotho training set. Paired inputs are fed into the pre-trained model and the embeddings are visualized in 2D using t-SNE [14]. This visualization clearly demonstrates the presence of the modality gap phenomenon, as a noticeable gap separates the paired audio and text embeddings. To address this issue, we utilize strategies that have demonstrated success in bridging the modality gap in the CLIP embedding space [10, 11, 13]. We show that these strategies can be adopted for CLAP and demonstrate their effectiveness in mitigating the modality gap. These approaches can be divided into two categories: bridging the gap either during the training phase or during the inference phase. ### Training strategies Attempting to reduce the modality gap during training, we adopt the following strategies: (a) noise injection [10] and (b) embedding shift [13]. These strategies aim to narrow the disparity between the modality used to train the decoder, which is text, and the target modality, which is audio. #### 3.1.1 Noise injection In [10], the authors show that injecting the text embedding with Gaussian noise during training has the effect of creating a region in the embedding space that will map to the same caption. This method assumes that the corresponding audio embedding is more likely to be inside this region. Following [10], we add zero-mean Gaussian noise of standard deviation \(\sigma\) to the text embedding before feeding it to the decoder. We set \(\sigma\) to the mean \(L_{\infty}\) norm of embedding differences between five captions that correspond to the same audio. Since we assume no access to target audio data, we estimate \(\sigma\) using 50 audio-caption pairs from the WavCaps dataset [7]. Thus the prefix in Eq. 1 becomes \(\mathbf{p}=f(\mathbf{z_{t}}+\mathbf{n})\), where \(\mathbf{n}\in\mathbb{R}^{d}\) is a zero-mean Gaussian noise vector with standard deviation \(\sigma\).
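Before turning to the second training strategy, here is a minimal sketch of the noise-injection prefix input, including the \(\sigma\) estimation from groups of captions describing the same clip; the function names and the encoder interface are assumptions.

```python
import torch

def estimate_sigma(caption_groups, clap_text_encoder):
    """Estimate sigma as the mean L-inf norm of embedding differences between
    captions of the same clip (here from a handful of held-out groups)."""
    norms = []
    for captions in caption_groups:          # each group: captions of one clip
        z = clap_text_encoder(captions)      # (n, d) frozen CLAP embeddings
        diff = z.unsqueeze(0) - z.unsqueeze(1)
        linf = diff.abs().amax(dim=-1)       # pairwise L-inf norms
        norms.append(linf[linf > 0].mean())  # ignore the zero diagonal
    return torch.stack(norms).mean()

def noisy_prefix_input(z_t, sigma):
    """z_t + n with n ~ N(0, sigma^2 I), applied before the mapping network."""
    return z_t + sigma * torch.randn_like(z_t)
```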
#### 3.1.2 Embedding shift Building upon the findings of [13], who investigated the impact of shifting embeddings in various multi-modal contrastive learning models on downstream tasks, we propose a method to align the text embeddings with the audio embeddings during training. First, we define the modality gap following [13], as the difference between the centers of the audio and text embeddings: \[\mathbf{\Delta_{gap}}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{z_{a_{i}}}-\frac{1}{n}\sum_{i=1}^{n}\mathbf{z_{t_{i}}} \tag{2}\] Then, we shift every text embedding toward closing the modality gap, and thus the prefix in Eq. 1 becomes \(\mathbf{p}=f(\mathbf{z_{t}}+\mathbf{\Delta_{gap}})\). ### Inference strategies At inference, we adopt two training-free strategies proposed in [11], and map an audio embedding extracted from the CLAP audio encoder \(\mathcal{A}_{clap}\) into the text embedding space. For both strategies, we will assume a decoder \(D\) trained on some target data as described in Section 2 and a set of text embeddings obtained from the target training set that we will refer to as _Memory_, \(\mathcal{M}=\{\mathbf{z_{t}^{1}},\mathbf{z_{t}^{2}},...,\mathbf{z_{t}^{N}}\}\), where \(N\) is the size of the training set. #### 3.2.1 Nearest-neighbor decoding A straightforward strategy that can be adopted at inference time to mitigate the modality gap is to use the nearest text embedding as the prefix, instead of the audio embedding. We calculate the cosine similarity between the audio embedding \(\mathbf{z_{a}}\) and the text embeddings in \(\mathcal{M}\) and decode with the most similar: \[\mathbf{p}=\mathbf{z_{t}^{i}},\quad i=\underset{j}{argmax}\ sim(\mathbf{z_{a}},\mathbf{z_{t}^{j}}) \tag{3}\] where \(sim(\mathbf{x},\mathbf{y})=\frac{\mathbf{x}\cdot\mathbf{y}}{\|\mathbf{x}\|\,\|\mathbf{y}\|}\). Since the decoder is trained to reconstruct the original text conditioned on the text embedding, nearest-neighbor decoding can be successful if a sufficiently similar text embedding is present in \(\mathcal{M}\). #### 3.2.2 Projection-based decoding A better approach is to project the audio embedding into the text embedding space. This involves representing the audio embedding as a weighted combination of the embeddings in \(\mathcal{M}\): \[\mathbf{p}=\sum_{i=1}^{|\mathcal{M}|}w_{i}\,\mathbf{z_{t_{i}}} \tag{4}\] The weights \(w_{i}\) for these text embeddings are determined by calculating the cosine similarity between the audio embedding \(\mathbf{z_{a}}\) and each embedding in \(\mathcal{M}\). Following [11], the similarity is then scaled by a temperature parameter \(\tau\) and normalized using a softmax function: \[w_{i}=\frac{\exp(sim(\mathbf{z_{a}},\mathbf{z_{t_{i}}})/\tau)}{\sum_{j=1}^{|\mathcal{M}|}\exp(sim(\mathbf{z_{a}},\mathbf{z_{t_{j}}})/\tau)} \tag{5}\]
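Both inference strategies reduce to a few lines given a matrix of memorized text embeddings; a minimal sketch follows, where the temperature value is an assumption and the returned vector is used in place of the audio embedding when decoding.

```python
import torch
import torch.nn.functional as F

def nearest_neighbor_prefix(z_a, memory):
    """Eq. (3): replace the audio embedding with its most similar entry in
    the memory of target-domain text embeddings (memory: (N, d) tensor)."""
    sims = F.cosine_similarity(z_a.unsqueeze(0), memory, dim=-1)   # (N,)
    return memory[sims.argmax()]

def projection_prefix(z_a, memory, tau=0.01):
    """Eqs. (4)-(5): softmax-weighted projection of the audio embedding onto
    the text memory; the temperature tau here is an assumed value."""
    sims = F.cosine_similarity(z_a.unsqueeze(0), memory, dim=-1)
    weights = torch.softmax(sims / tau, dim=0)                     # (N,)
    return weights @ memory                                        # (d,)
```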
## 4 Experiments

### Data

We conduct experiments using two benchmarks, AudioCaps and Clotho. AudioCaps contains 50k 10-second audio clips sourced from AudioSet [15]. Each audio is annotated with one caption in the training set and five captions in the evaluation set. Clotho consists of 4981 audio samples of 15 to 30 seconds duration. Each audio is annotated with five captions. We follow the standard recipes of training, validation, and test splits on each dataset for our experiments. To adhere to a weakly-supervised setting, we assume no access to audio data in the training and validation sets.

### Experimental setup

To extract audio and text embeddings we employ a frozen CLAP model2 trained on WavCaps [7]. The audio encoder is a CNN14 from Pre-trained Audio Neural Networks (PANNs) [16], and the text encoder is a BERT-based model [17]. We choose this model as the embedding extractor because the AudioCaps and Clotho datasets were not included in its training set. This choice is made under the assumption that target audio data are unavailable for training purposes. The decoder \(D\) consists of a mapping network \(f\), which is a 2-layered MLP, and the language model, which is a 4-layer Transformer [18] with 4 attention heads. The size of the hidden state is 768. The decoder \(D\) is trained from scratch on the target captions. The noise variance for _Noise Injection_ training is set to \(\sigma^{2}=0.013\). We train the proposed model for 30 epochs using the Adam optimizer [19] and a batch size of 64. The learning rate is linearly increased to \(2\times 10^{-5}\) in the first five epochs using warm-up, and is then multiplied by 0.2 every 10 epochs. We use greedy search for decoding.

Footnote 2: [https://github.com/XinhaoMei/WavCaps/tree/master](https://github.com/XinhaoMei/WavCaps/tree/master)
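For concreteness, the sketch below illustrates one _Noise Injection_ training step as described above. The `decoder` interface and the stand-in used in the usage example are hypothetical; only the noise variance \(\sigma^{2}=0.013\) and the 768-dimensional hidden state come from the setup above:

```python
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    """2-layer MLP mapping a CLAP embedding to a decoder prefix."""
    def __init__(self, clap_dim=512, hidden=768, prefix_len=1):
        super().__init__()
        self.prefix_len = prefix_len
        self.net = nn.Sequential(
            nn.Linear(clap_dim, hidden), nn.GELU(),
            nn.Linear(hidden, prefix_len * hidden))

    def forward(self, z):
        return self.net(z).view(z.size(0), self.prefix_len, -1)

def noise_injection_step(z_t, tokens, mapper, decoder, sigma2=0.013):
    """One training step: corrupt the text embedding with Gaussian noise,
    build the prefix p = f(z_t + noise), and reconstruct the caption."""
    z_noisy = z_t + (sigma2 ** 0.5) * torch.randn_like(z_t)
    prefix = mapper(z_noisy)                      # (B, prefix_len, hidden)
    logits = decoder(prefix, tokens)              # hypothetical decoder API
    return nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), tokens.reshape(-1))

# Illustration only: a stand-in "decoder" that scores each target position.
proj = nn.Linear(768, 1000)                       # 1000 = toy vocabulary size
decoder = lambda prefix, tokens: proj(prefix).expand(-1, tokens.size(1), -1)

mapper = MappingNetwork()
z_t = torch.randn(8, 512)                         # batch of CLAP text embeddings
tokens = torch.randint(0, 1000, (8, 20))          # tokenized captions
loss = noise_injection_step(z_t, tokens, mapper, decoder)
```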
### Compared methods and evaluation metrics

Since no previous work has addressed AAC in a similar supervision setting, we compare our methods against fully supervised approaches trained on paired data. Koh et al. [23] use a latent space similarity objective and train a model with a PANNs encoder and a transformer decoder. Xu et al. [22] design a GRU for the decoder. Mei et al. [20] propose a full transformer encoder-decoder architecture. Gontier et al. [21] utilize a pre-trained language model based on BART [21], and finetune it for AAC using guidance from AudioSet tags. Kim et al. [9] propose prefix tuning for AAC, learning a prefix to guide the caption generation of a frozen GPT-2 [24]. Mei et al. [7] utilize a CLAP audio encoder pre-trained on WavCaps and a BART decoder, achieving state-of-the-art results on both Clotho and AudioCaps. All the methods in this work are evaluated by the metrics widely used in captioning tasks, including BLEU [25], METEOR [26], ROUGE-L [27], CIDEr [28], SPICE [29], and SPIDEr [30].

Figure 2: Visualization of audio and text embedding pairs randomly sampled from the Clotho training set. The modality gap phenomenon is present as the audio and text modalities are embedded in two completely separate regions.

### Results and Discussion

In this section, we present the results of our proposed methods on the performance metrics and compare them with fully supervised approaches. Additionally, we illustrate the effectiveness of each strategy in reducing the modality gap. As shown in Table 1, our methods demonstrate comparable performance to prior state-of-the-art models despite never encountering in-domain audio data during training. We present the results of our baseline approach described in Section 2 and the results of the baseline approach in conjunction with the strategies presented in Section 3. It is evident that all the strategies boost the performance of our baseline approach on both evaluation sets. Interestingly, the _inference strategies_ outperform the _training strategies_ in most cases. We hypothesize that this is because they utilize the _Memory_ \(\mathcal{M}\), which consists of in-domain text embeddings, in order to bridge the modality gap. Our best-performing method, namely _Projection-based decoding_, achieves 80% and 83% of the SPIDEr performance of the current fully supervised state-of-the-art model on the Clotho and AudioCaps evaluation sets, respectively. Additionally, _Projection-based decoding_ matches the performance of the fully supervised approaches proposed by Kim et al. [9], Koh et al. [23], and Xu et al. [22] on the Clotho evaluation set.

**Visualization of embeddings:** To further examine the effectiveness of the proposed strategies, we illustrate the embeddings in 2D space using t-SNE. In Fig. 3a and 3b, we randomly sample audio and text embeddings from the Clotho training set after applying _Noise Injection_ and _Embedding Shift_ to the text embeddings. Fig. 3c and 3d illustrate randomly selected text embeddings from the Clotho evaluation set, alongside the embeddings utilized for decoding, namely the nearest neighbors and the projections, rather than the paired audio embeddings. It is evident that all strategies are effective in closing the modality gap showcased in Fig. 2, where the audio and text modalities are embedded in two separate regions of their shared representation space.

## 5 Conclusion and Future Work

In this work, we propose a weakly-supervised approach for Automated Audio Captioning that requires a pre-trained CLAP model and only additional text data to train on a target domain. Our method alleviates the necessity of paired data in a target domain, which are hard to collect. We demonstrate that by leveraging the shared embedding space of CLAP we can learn to reconstruct the text from the CLAP text embedding and, during inference, decode using the audio embeddings. We show that such an approach is suboptimal due to the presence of a modality gap and adopt strategies that effectively mitigate it. Our best-performing method achieves comparable results to prior approaches trained in a fully supervised manner. For future work, we plan to study the effectiveness of our proposed approach on other tasks, such as Music Captioning and Audio Question Answering. We further aim to train a mapping network to learn the gap between the two modalities in a supervised manner.
2309.04434
Physics-Informed Neural Networks for an optimal counterdiabatic quantum computation
We introduce a novel methodology that leverages the strength of Physics-Informed Neural Networks (PINNs) to address the counterdiabatic (CD) protocol in the optimization of quantum circuits comprised of systems with $N_{Q}$ qubits. The primary objective is to utilize physics-inspired deep learning techniques to accurately solve the time evolution of the different physical observables within the quantum system. To accomplish this objective, we embed the necessary physical information into an underlying neural network to effectively tackle the problem. In particular, we impose the hermiticity condition on all physical observables and make use of the principle of least action, guaranteeing the acquisition of the most appropriate counterdiabatic terms based on the underlying physics. The proposed approach offers a dependable alternative to address the CD driving problem, free from the constraints typically encountered in previous methodologies relying on classical numerical approximations. Our method provides a general framework to obtain optimal results from the physical observables relevant to the problem, including the external parameterization in time known as the scheduling function, the gauge potential or operator involving the non-adiabatic terms, as well as the temporal evolution of the energy levels of the system, among others. The main applications of this methodology have been the $\mathrm{H_{2}}$ and $\mathrm{LiH}$ molecules, represented by 2-qubit and 4-qubit systems employing the STO-3G basis. The presented results demonstrate the successful derivation of a desirable decomposition for the non-adiabatic terms, achieved through a linear combination utilizing Pauli operators. This attribute confers significant advantages to its practical implementation within quantum computing algorithms.
Antonio Ferrer-Sánchez, Carlos Flores-Garrigos, Carlos Hernani-Morales, José J. Orquín-Marqués, Narendra N. Hegade, Alejandro Gomez Cadavid, Iraitz Montalban, Enrique Solano, Yolanda Vives-Gilabert, José D. Martín-Guerrero
2023-09-08T16:55:39Z
http://arxiv.org/abs/2309.04434v2
# Physics-Informed Neural Networks for an Optimal Counterdiabatic Quantum Computation

###### Abstract

We introduce a novel methodology that leverages the strength of Physics-Informed Neural Networks (PINNs) to address the counterdiabatic (CD) protocol in the optimization of quantum circuits comprised of systems with \(N_{Q}\) qubits. The primary objective is to utilize physics-inspired deep learning techniques to accurately solve the time evolution of the different physical observables within the quantum system. To accomplish this objective, we embed the necessary physical information into an underlying neural network to effectively tackle the problem. In particular, we impose the hermiticity condition on all physical observables and make use of the principle of least action, guaranteeing the acquisition of the most appropriate counterdiabatic terms based on the underlying physics. The proposed approach offers a dependable alternative to address the CD driving problem, free from the constraints typically encountered in previous methodologies relying on classical numerical approximations. Our method provides a general framework to obtain optimal results from the physical observables relevant to the problem, including the external parameterization in time known as the scheduling function, the gauge potential or operator involving the non-adiabatic terms, as well as the temporal evolution of the energy levels of the system, among others. The main applications of this methodology have been the \(\mathrm{H_{2}}\) and \(\mathrm{LiH}\) molecules, represented by 2-qubit and 4-qubit systems employing the STO-3G basis. The presented results demonstrate the successful derivation of a desirable decomposition for the non-adiabatic terms, achieved through a linear combination utilizing Pauli operators. This attribute confers significant advantages to its practical implementation within quantum computing algorithms.

Neural networks, counterdiabatic driving, quantum computing, quantum information, PINNs

## 1 Introduction

Quantum computing has emerged as a dynamic and vibrant domain of research within the scientific community, primarily driven by notable achievements and advancements in applied quantum machine learning [1, 2, 3, 4], quantum simulation [5, 6], and optimization of circuits and systems [7, 8]. Optimization problems have garnered particular attention, given their pervasive presence in various domains, including medicine [9], economics [10], logistics [11], and numerous others [12, 13, 14]. Classical approaches to solving these problems from an industrial perspective often face challenges in terms of efficiency and speed, thereby motivating the exploration of quantum computing as a promising alternative. The escalating interest in these methods can be attributed primarily to recent experimental advancements. This surge in interest is particularly noticeable from an industrial and commercial perspective. Consequently, there is growing anticipation that both conventional computers and quantum computing, in general, could yield significant advantages, eventually achieving a state known as "quantum supremacy" [15]. This potential advantage and progress have, in turn, spurred developments in various scientific domains, wherein contemporary quantum computers serve as proof of concept.
However, it is essential to underscore that the applicability of these quantum algorithms warrants extensive research and investigation, particularly in the current state of quantum computing, which is commonly referred to as the Noisy Intermediate-Scale Quantum (NISQ) era [16], whose defining characteristic is the utilization of quantum processors with capacities of up to 1000 qubits. Hybrid classical-quantum algorithms leverage NISQ devices while offloading a portion of their computational workload onto classical devices, offering considerable potential for practical applications in the field of quantum computing. A prominent example worth highlighting is the Variational Quantum Eigensolver (VQE) [17]. The primary objective of VQE is to determine the lowest energy quantum state through hybrid optimization, utilizing a designated Hamiltonian operator in conjunction with variational quantum circuits. The realm of quantum machine learning also falls under the purview of these algorithms, seeking to adapt classical algorithms to their quantum counterparts to enhance and expedite computations by capitalizing on the principles of quantum superposition, entanglement, and interference. Within this domain, one can identify supervised classification algorithms like binary classification and those based on Grover's search algorithm [18]. Notably, Grover's algorithm has demonstrated quadratic acceleration in solving problems such as \(k\)-medians [19] or \(k\)-nearest neighbors [20, 21]. On the other hand, an alternative methodology that has significantly progressed in the literature of this field and has laid the foundation for numerous studies is the Quantum Approximate Optimization Algorithm (QAOA) [22, 23]. This approach presents a valuable alternative for tackling combinatorial optimization problems using shallow quantum circuits through classical optimization of the associated parameters. In recent literature, substantial endeavors have been dedicated to employing these methodologies to solve the ground-state challenges of physical systems [24, 25, 26], reflecting the ongoing efforts to enhance and adapt these techniques for broader applications in quantum optimization. In recent years, significant attention and interest have been directed towards the development of adiabatic quantum optimization (AQO) methodologies for confronting optimization problems [27, 28], with direct practical implementations in the branches of physics and chemistry [29, 30, 31]. These algorithms begin by initializing a quantum system in the ground state of a known Hamiltonian. The system's Hamiltonian is then slowly evolved into one that represents the problem to be solved, with its ground state encoding the solution. Leveraging the adiabatic theorem [32, 33], it becomes feasible to ensure that the quantum system remains in its instantaneous state of lowest energy, provided the evolution of the Hamiltonian is carried out in a sufficiently slow and gradual manner and within a sufficiently extended period of time. Nevertheless, implementing slow adiabatic evolution at the experimental level is typically not feasible, necessitating the development of methodologies that accelerate these processes. In pursuit of this objective, recent scientific literature puts forth various approaches based on Shortcuts To Adiabaticity (STA) [34, 35]. These methodologies encompass diverse techniques, including fast-forward methods [36, 37], invariant-based inverse engineering [38, 39], and counterdiabatic (CD) protocols [40, 41].
Despite the noticeable progress in the first two methodologies, this study primarily centers around the CD protocols. These techniques are specifically designed to speed up the adiabatic evolution process from an initial Hamiltonian to a final Hamiltonian. This is achieved by incorporating non-adiabatic terms following Equation (1), which effectively nullify the transition amplitudes between any energy eigenstates of the original Hamiltonian [42]. Consequently, the quantum system undergoes an accelerated adiabatic evolution in practical applications. The resulting Hamiltonian, including the CD term, is given by \[\mathbf{\mathcal{H}}(t):=\mathbf{\mathcal{H}}_{\text{AD}}(t)+\mathbf{\mathcal{H}}_{\text{CD}}(t). \tag{1}\] The operator designated as \(\mathbf{\mathcal{H}}_{\text{AD}}(t)\) in (1) will be tailored to facilitate the preparation of its ground energy state at the initial time of the evolution. Nevertheless, the main challenge of the CD protocols lies in the accurate and plausible determination of the operator encompassing the non-adiabatic terms of the process, denoted here as \(\mathbf{\mathcal{H}}_{\text{CD}}(t)\). In general, the computation and acquisition of this operator are exceedingly intricate tasks, particularly when dealing with many-body quantum systems [43]. As a customary approach in related literature, a time-dependent external parameterization, denoted as \(\mathbf{\lambda}(t)\), on which the operators depend [44], is introduced (explained in detail in Section 2.2). Efforts have been directed towards a method for the approximate determination of this operator, leading to significant progress, such as the development of the Nested Commutator (NC) methodology [45]. This advancement has led to recent contributions, exemplified by [23, 46]. Within this framework, the computation of these terms is simplified into a series of commutators involving \(\mathbf{\mathcal{H}}_{\mathrm{AD}}(t)\) and its derivatives concerning the mentioned external parameterization \(\mathbf{\lambda}(t)\). As a result, the non-adiabatic terms of these protocols are obtained approximately through an expansion in orders, where the complexity of obtaining them rises accordingly with the number of particles in the system and the order considered in the expansion. Nevertheless, these methodologies may exhibit problem-dependent characteristics, as their escalating complexity in non-trivial physical scenarios might necessitate the adoption of alternative perspectives to approach the issue at hand. In a distinct domain, amid the rapid progression of computer science, Deep Learning (DL) has achieved prominence as a highly potent instrument for constructing diverse models, owing to its direct applicability in domains such as image processing [47], natural language generation and processing [48], time series prediction and classification [49], among a plethora of other possibilities. Present-day technologies encompass Recurrent Neural Networks (RNNs) [50], Long Short-Term Memory architectures (LSTMs) [51], the well-known transformers [52], and sparse and submanifold convolutional networks [53] utilized for images with limited information load, among other advanced techniques. The Physics-Informed Neural Networks (PINNs) methodology has emerged as a highly intriguing DL application within the realm of physical sciences since its first appearances in the literature [54, 55].
This approach aims to address specific problems by employing neural networks as powerful universal approximators for systems of Partial Differential Equations (PDEs) that govern the physics describing the evolution. Due to their remarkable potential as numerical solvers, PINNs have garnered considerable attention and established themselves as a viable alternative to conventional numerical solving algorithms. Extensive efforts have been undertaken in diverse branches of physics to apply this methodology and its adaptations. These fields encompass classical hydrodynamics [56], relativistic hydrodynamics [57], electrodynamics [58], chemistry [59], and many others. PINNs demonstrate their utility wherever differential equations are employed to describe the underlying physics of a given scenario. Their ability to unravel complex physical phenomena and offer numerical solutions has positioned them as promising tools for tackling intricate problems across various scientific domains. Consequently, the motivation to employ this methodology for addressing the challenge of CD protocols in quantum systems arises organically. The investigation of potential applications of PINNs in quantum systems and circuits becomes a natural course of study. In this paper, we present an innovative approach for designing the counterdiabatic terms in CD protocols. Our proposed method relies on the utilization of PINNs, thereby representing a direct application of DL. We introduce a PINN-based methodology without supplementary alterations, enabling it to effectively resolve the underlying physics of the problem. Through the neural network, we obtain both the counterdiabatic terms and the temporal parameterization \(\lambda(t)\), as expounded in Section 2.2. Additionally, we directly explore the experimental feasibility by decomposing the non-adiabatic operator into tensor products of the set of Pauli and identity matrices. This approach offers a comprehensive and direct means to address the experimental applicability of the method. The rest of the paper is structured as follows: Section 2 provides an introduction to the foundational theoretical framework concerning the operation of baseline PINNs. It further presents a comprehensive exposition of the theoretical framework under consideration, encompassing CD protocols and the specific problem that serves as the motivation for our research. Additionally, a thorough literature review of prior work in this domain is presented. Section 3 delves into a meticulous presentation of the adapted PINN methodology, particularized to address our specific case, while taking into account all pertinent physical factors that the network must conform to. In Section 4, we present notable outcomes obtained through our methodology and juxtapose them with previous findings in the field. Furthermore, we conduct comparisons and explore the scalability of the proposed methodology. Finally, Section 5 serves as a concluding segment, summarizing the principal conclusions drawn from our research and offering insights into potential avenues for future investigations.

## 2 General Concepts

### Physics-Informed Neural Networks

The fundamental approach employed in PINN methodologies [55] involves leveraging neural networks as powerful tools for approximating functions and solving physical problems by fitting sets of differential equations, known as PDEs. PINNs derive their name from the fact that they incorporate physical knowledge through the use of inductive biases.
These biases are manifested in various aspects of the methodology, including the design of the underlying neural network architecture, the formulation of appropriate cost functions (losses), and other characteristics that aim to ensure or enhance the convergence of the neural model. The underlying algorithm of these networks leverages the automated differentiation capabilities found in contemporary frameworks [60] to construct differential equations based on the output variables obtained from the network. These variables are essential for computing the specific problem at hand. A minimization process, guided by a designated loss function, is subsequently employed to update the trainable parameters of the architecture. Consequently, this adjustment aligns the network with the requirements of the physical framework. Taking a broad perspective while maintaining generality, let us denote by \(\boldsymbol{\mathcal{U}}:=\boldsymbol{\mathcal{U}}(t,\boldsymbol{x})\) a collection of physical variables that serve as the output of the neural network. These variables, along with their derivatives, are the components of a system of PDEs defined within a domain of interest \(\Omega\) over a specific time interval \([0,T]\). Consequently, it is possible to write the following definition: \[\mathcal{F}\left(t,\boldsymbol{x};\frac{\partial\boldsymbol{\mathcal{U}}}{\partial t},\frac{\partial^{2}\boldsymbol{\mathcal{U}}}{\partial t^{2}},\dots;\frac{\partial\boldsymbol{\mathcal{U}}}{\partial x_{1}},\dots,\frac{\partial\boldsymbol{\mathcal{U}}}{\partial x_{D}};\frac{\partial^{2}\boldsymbol{\mathcal{U}}}{\partial x_{1}\partial x_{1}},\dots,\frac{\partial^{2}\boldsymbol{\mathcal{U}}}{\partial x_{1}\partial x_{D}},\dots\right)=0,\quad\boldsymbol{x}=(x_{1},\dots,x_{D})\in\Omega,\quad t\in[0,T], \tag{2}\] where \(D\) corresponds to the spatial dimension of the problem and \(\Omega\subset\mathbb{R}^{D}\). The operator \(\mathcal{F}\) defined in Equation (2) can be conceptualized as the comprehensive collection of physical constraints inherent to the system, which must be fulfilled in order to satisfy the underlying PDEs. It is worth noting that, in addition to these constraints, supplementary limitations can be established, such as the initial conditions that dictate the evolution of the system, or potential boundary conditions that may influence the behavior of the system at the spatial boundaries of the domain, namely, \[\mathcal{IC}(t,\boldsymbol{x})=0,\qquad(t,\boldsymbol{x})\in\{0\}\times\Omega. \tag{3}\] \[\mathcal{B}(t,\boldsymbol{x})=0,\qquad(t,\boldsymbol{x})\in(0,T]\times\partial\Omega. \tag{4}\] In addition to the aforementioned conditions, other factors can be taken into consideration, such as imposing additional constraints on the final time (final conditions), or at specific points of particular physical significance within the spatiotemporal framework. Furthermore, if there are actual experimental measurements available for a subset of the domain, they can also be incorporated.
Consequently, each of these physical conditions represents a segment of a priori knowledge regarding the physical scenario that can be integrated into the cost function as separate terms (referred to as "soft enforcement"), as denoted with \(\mathcal{L}_{i}\) in Equation (5): \[\mathcal{L}:=\omega_{\mathcal{F}}\mathcal{L}_{\mathcal{F}}+\sum_{i}\omega_{i}\mathcal{L}_{i}, \tag{5}\] where \(\mathcal{L}_{\mathcal{F}}\) corresponds to the metric pertaining to the underlying system of PDEs, while the collection \((\omega_{\mathcal{F}},\omega_{i},\dots)\) represents the weights assigned to each term within the mixture. The neural architecture employed in this methodology yields a set of essential physical variables, denoted as \(\boldsymbol{\mathcal{U}}(t,\boldsymbol{x};\Theta):=\boldsymbol{\mathcal{U}}_{\Theta}(t,\boldsymbol{x})\), where \(\Theta\) encompasses all the trainable parameters of the network that are updated during the training process. Consequently, the output aims to closely align with the corresponding real-world values: \[\boldsymbol{\mathcal{U}}_{\Theta}(t,\boldsymbol{x})\approx\boldsymbol{\mathcal{U}}(t,\boldsymbol{x}).\] The constraints expressed in (3) and (4) can be transformed into cost functions by employing a suitable difference measurement metric, such as _Mean Squared Error_ (MSE) or similar approaches. The determination of these cost functions together with \(\mathcal{L}_{\mathcal{F}}\) can be outlined as follows. \[\mathcal{L}_{\mathcal{IC}}:=\frac{1}{N_{\mathcal{IC}}}\sum_{\{0\}\times\Omega}|\boldsymbol{\mathcal{U}}_{\Theta}(0,\boldsymbol{x})-\boldsymbol{\mathcal{U}}(0,\boldsymbol{x})|^{2}, \tag{6}\] \[\mathcal{L}_{\mathcal{B}}:=\frac{1}{N_{\mathcal{B}}}\sum_{(0,T]\times\partial\Omega}|\boldsymbol{\mathcal{U}}_{\Theta}(t,\boldsymbol{x}\in\partial\Omega)-\boldsymbol{\mathcal{U}}(t,\boldsymbol{x}\in\partial\Omega)|^{2}, \tag{7}\] \[\mathcal{L}_{\mathcal{F}}:=\frac{1}{N_{\mathcal{F}}}\sum_{(0,T]\times\Omega}\underbrace{\sum_{k}|\mathcal{R}_{k}(t,\boldsymbol{x};\Theta)|^{2}}_{|\mathcal{F}|^{2}}. \tag{8}\] Here, the set \((N_{\mathcal{IC}},N_{\mathcal{B}},N_{\mathcal{F}})\) represents the number of points in the sample for each respective domain considered. Additionally, \(\mathcal{R}_{k}\) denotes the residual of the \(k\)-th physical constraint imposed at the level of the PDE. Through the utilization of the fundamental methodology employed in PINN models, it becomes feasible to smoothly enforce the imposed constraints by adjusting the PDEs associated with the given problem. Nonetheless, alternative approaches, such as the "hard enforcement" method [61], propose compelling the neural network to enforce predetermined constraints from the onset of training by modifying the output of the network. This technique necessitates incorporating several constraints, but it entails establishing a specific dependence on the problem at hand. Other researchers achieve a certain level of independence concerning the set of weights \((\omega_{\mathcal{F}},\omega_{i},\dots)\) in (5) through the application of the Augmented Lagrangian method [62]. This technique involves updating the corresponding multipliers during training in accordance with the degree of violation of each respective condition.
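To make the soft-enforcement scheme of Eqs. (5)-(8) concrete, the following minimal PyTorch sketch trains a small network on the toy ODE \(du/dt=-u\) with \(u(0)=1\); the architecture, sampling, and weights are illustrative choices unrelated to the model used later in this paper:

```python
import torch
import torch.nn as nn

# Small fully connected network u_theta(t) approximating the solution u(t).
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

def pinn_loss(w_f=1.0, w_ic=1.0):
    """Soft enforcement of Eqs. (5)-(8) for the toy ODE du/dt = -u, u(0) = 1."""
    t = torch.rand(64, 1, requires_grad=True)            # collocation points in (0, 1]
    u = net(t)
    du_dt = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    loss_f = ((du_dt + u) ** 2).mean()                   # PDE residual term, Eq. (8)
    u0 = net(torch.zeros(1, 1))
    loss_ic = ((u0 - 1.0) ** 2).mean()                   # initial condition, Eq. (6)
    return w_f * loss_f + w_ic * loss_ic

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = pinn_loss()
    loss.backward()
    opt.step()
```

Automatic differentiation supplies the derivative entering the residual, which is exactly the mechanism described above for constructing differential equations from the network output.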
### Physical background

Combinatorial and optimization problems are of great interest in many industry and research fields [23, 46]. Adiabatic Quantum Computing (AQC) algorithms are used to solve this kind of problem, and they are expected to outperform classical computers in the current NISQ era [63, 64]. In this paradigm, we prepare a quantum system in the ground state of an initial, driving or mixing, Hamiltonian and evolve it towards a desired final operator, whose ground state encodes the solution to the underlying problem, or may even be the solution itself [43, 46]. This initial or driving Hamiltonian operator should be easy to prepare and evolve. We shall commence by establishing the Hamiltonian operator associated with the adiabatic transition of the system, \(\mathbf{\mathcal{H}}_{\mathrm{AD}}(t)\), which is characterized by its energy eigenvalues, \(E_{n}(t)\), and corresponding eigenstates, \(\ket{n(t)}\), as determined by: \[\mathbf{\mathcal{H}}_{\mathrm{AD}}(t)\ket{n(t)}=E_{n}(t)\ket{n(t)}. \tag{9}\] A time-dependent Hamiltonian, such as the one defined in (9), could generally lead to modifications in the quantum states it governs. Nevertheless, when these changes are sufficiently minute, analytically tractable, and under controlled conditions, the adiabatic theorem [32] ensures that the system, assuming non-degeneracy in energy, will preserve its proximity to the initial energy state throughout its temporal evolution. Considering the aforementioned perspective, it is consistently feasible to formulate a Hamiltonian operator, denoted as \(\mathbf{\mathcal{H}}(t)\), that exhibits a direct correlation with \(\mathbf{\mathcal{H}}_{\mathrm{AD}}(t)\) and accurately reproduces the temporal progression of its eigenstates. In other words, it possesses transition amplitudes between energy levels that are precisely zero [42]. This operator will be designed to satisfy the following Schrödinger equation: \[i\hbar\,\partial_{t}\ket{\psi_{n}(t)}=\mathbf{\mathcal{H}}(t)\ket{\psi_{n}(t)},\] where \(\ket{\psi_{n}(t)}\) can be defined in terms of the corresponding eigenstate of the operator in (9), \(\ket{n(t)}\). These states represent each one of the \(n\) states driven by \(\mathbf{\mathcal{H}}_{\mathrm{AD}}(t)\) in a certain particular system. Through rigorous derivation and computation, one can obtain a plausible analytical representation for the operator \(\mathbf{\mathcal{H}}(t)\). For a detailed derivation, the interested reader is encouraged to refer to the works of [42, 65, 66]. This operator, defined in (10), effectively governs the evolution of energy states within the framework of \(\mathbf{\mathcal{H}}_{\mathrm{AD}}(t)\), ensuring the absence of transitions between them. \[\mathbf{\mathcal{H}}(t)=\underbrace{\sum_{n}\ket{n}E_{n}\bra{n}}_{\mathbf{\mathcal{H}}_{\mathrm{AD}}}+\underbrace{i\hbar\,\sum_{n}\left(\ket{\partial_{t}n}\bra{n}-\left\langle n|\partial_{t}n\right\rangle\ket{n}\bra{n}\right)}_{\mathbf{\mathcal{H}}_{\mathrm{CD}}}=\mathbf{\mathcal{H}}_{\mathrm{AD}}(t)+\mathbf{\mathcal{H}}_{\mathrm{CD}}(t). \tag{10}\] The Hamiltonian operator \(\mathbf{\mathcal{H}}_{\mathrm{CD}}(t)\), corresponding to the second term in (10), can be interpreted as the operator responsible for capturing the counterdiabatic effects during system evolution. These effects facilitate the acceleration of the underlying system dynamics by introducing additional accessible degrees of freedom, allowing the same results to be reached as with the entire adiabatic evolution of the system while eliminating any possible transition between eigenstates [64, 67], as previously stated.
These frameworks, well known as counterdiabatic protocols, effectively expedite adiabatic processes by introducing novel terms that precisely offset any excitations that may arise during system acceleration. These have been recently developed as part of the STA methods. With this in mind, it is feasible to extend the theoretical deduction by incorporating a set of time-dependent external parameters, denoted as \(\mathbf{\lambda}(t)\), upon which all our operators will be dependent [44]. By doing so, when retrieving Equation (10), it follows that the temporal derivatives of the \(\ket{n}\) states can be written as follows, \[\ket{\partial_{t}n}=\frac{d\mathbf{\lambda}}{dt}\ket{\mathbf{\nabla}_{\mathbf{\lambda}}n}. \tag{11}\] Equation (11) allows us to redefine the operator \(\mathbf{\mathcal{H}}(t)\) as \[\mathbf{\mathcal{H}}(t):=\mathbf{\mathcal{H}}_{\text{AD}}(t)+\underbrace{\frac{d\mathbf{\lambda}}{dt}\mathbf{\mathcal{A}}_{\text{CD}}(t)}_{\mathbf{\mathcal{H}}_{\text{CD}}},\qquad\mathbf{\mathcal{A}}_{\text{CD}}(t):=i\hbar\,\sum_{n}\left(\ket{\mathbf{\nabla}_{\lambda}n}\bra{n}-\left\langle n|\mathbf{\nabla}_{\lambda}n\right\rangle\ket{n}\bra{n}\right). \tag{12}\] (While the operators in this formulation exhibit time dependence through \(\mathbf{\lambda}\), for the sake of notational simplicity we will denote the time dependence explicitly.) In general, this parameterization could encompass a collection of values \(\mathbf{\lambda}:=(\lambda_{1},\dots,\lambda_{N})\). However, to align with the latest literature in the field, we will confine ourselves to a single scalar function. Consequently, \(\lambda(t)\), which usually receives the name of _scheduling function_ in the literature, carries out the counterdiabatic driving by means of its temporal derivative. Indeed, it is evident that in the limit of zero velocity, \(\frac{d\lambda}{dt}\to 0\), the Hamiltonian in Equation (12) simplifies to the adiabatic operator, as anticipated during its construction. When extrapolating this understanding to contemporary digitized versions of algorithms within the realm of quantum circuits [43], it is customary to initiate the process with a Hamiltonian whose ground state can be readily prepared in practical settings, denoted as \(\mathbf{\mathcal{H}}_{\text{initial}}\). Under adiabatic conditions, it is imperative for this operator to undergo a gradual transformation over time, eventually converging to the target Hamiltonian corresponding to the specific problem under investigation, written as \(\mathbf{\mathcal{H}}_{\text{problem}}\). \[\mathbf{\mathcal{H}}_{\text{AD}}(t):=(1-\lambda(t))\,\mathbf{\mathcal{H}}_{\text{initial}}+\lambda(t)\,\mathbf{\mathcal{H}}_{\text{problem}}. \tag{13}\] In Equation (13), the scheduling function can once again be identified. Within the time interval \(t\in[t_{\text{min}},t_{\text{max}}]\), it is necessary for \(\lambda(t)\) to fulfill the conditions \(\lambda(t_{\text{min}})=0\) and \(\lambda(t_{\text{max}})=1\) based on its functional definition, which enables the interpolation from the initial Hamiltonian to the desired endpoint of the process. On the other hand, the \(\mathbf{\mathcal{A}}_{\text{CD}}(t)\) operator (also written in the literature as \(\mathbf{\mathcal{A}}_{\lambda}\)) is defined as the Adiabatic Gauge Potential.
Obviously, this operator should fulfill the hermiticity condition, i.e., \(\mathbf{\mathcal{A}}_{\text{CD}}=\mathbf{\mathcal{A}}_{\text{CD}}^{\dagger}\), in order to be interpreted as a physical observable. It is also easy to show that the potential \(\mathbf{\mathcal{A}}_{\text{CD}}(t)\) satisfies the condition (15), which is equivalent to minimizing the action \(\mathcal{S}\) defined in Equation (14) for the Hilbert-Schmidt operator \(\mathbf{\mathcal{G}}_{\lambda}\) [44, 68]. Consequently, Equation (15) can be understood as the Euler-Lagrange equation resulting from the minimization of the physical action. \[\mathcal{S}=\text{Tr}\left[\mathbf{\mathcal{G}}_{\lambda}^{2}\right],\qquad\mathbf{\mathcal{G}}_{\lambda}\left(\mathbf{\mathcal{A}}_{\text{CD}}\right)=\partial_{\lambda}\mathbf{\mathcal{H}}_{\text{AD}}+\frac{i}{\hbar}\left[\mathbf{\mathcal{A}}_{\text{CD}},\mathbf{\mathcal{H}}_{\text{AD}}\right]. \tag{14}\] \[\left[i\hbar\frac{\partial\mathbf{\mathcal{H}}_{\text{AD}}}{\partial\lambda}-[\mathbf{\mathcal{A}}_{\text{CD}},\mathbf{\mathcal{H}}_{\text{AD}}],\mathbf{\mathcal{H}}_{\text{AD}}\right]=0. \tag{15}\] To provide clarity and consistency, from now on we will exclusively employ Planck units, wherein the reduced Planck constant, denoted as \(\hbar\), is established as unity (i.e., \(\hbar=1\)). This term establishes a connection between the aforementioned external parameter \(\lambda(t)\) and the instantaneous eigenstates. Finding these exact CD terms is not easy without spectral information of the system, and they are usually approximated using the NC approach [43, 46, 23] or through Variational Circuits [63]. This leads to a set of possible local CD terms, and different techniques have been developed to determine which of them are the most adequate for the particular problem. Nonetheless, even though these methods are of great interest and constitute state-of-the-art approaches, they are still approximations. Consequently, there could be other relevant terms and aspects that are not considered within these methodologies.

### Quantum circuit design for the \(\mathrm{H}_{2}\) ground state problem

The main numerical application of this paper will be to find the ground state of the \(\mathrm{H}_{2}\) molecule in the STO-3G basis assuming different bond distances; this STO-3G basis corresponds to a minimal set that uses three Gaussian functions to approximate the atomic orbitals [69]. This can be described with the 2-qubit Full Configuration Interaction (FCI) Hamiltonian in Equation (16), where the values of the coefficients vary with the bond distance. This Hamiltonian has been obtained using the well-known _Qiskit_ module [70]. In a different domain, it is well-known that the Pauli matrices comprise a set of Hermitian and unitary matrices, namely \(\{\sigma_{\text{X}},\sigma_{\text{Y}},\sigma_{\text{Z}}\}\in\mathcal{M}_{2\times 2}(\mathbb{C})\), which are highly suitable for representing quantum operators and physical observables. These matrices, together with the identity matrix \(\mathbf{\mathcal{I}}\) (also called \(\sigma_{0}\)), form an orthogonal basis of the Hilbert space of \(2\times 2\) Hermitian matrices. Similarly, when dealing with a system of \(N_{Q}\) qubits, the space of Hermitian \(\mathcal{M}_{2^{N_{Q}}\times 2^{N_{Q}}}(\mathbb{C})\) matrices can be decomposed by means of tensor products involving the aforementioned set (the interested reader is referred to [71] Chapter 2 and [72] Chapter 3).
This procedure, referred to as the Pauli basis expansion, enables us to achieve a comprehensive representation of any Hermitian operator on a system comprising a certain amount of qubits. From the perspective of quantum circuits, this approach offers a structured and concise depiction of quantum operations and physical observables, thereby facilitating the analysis and manipulation of quantum systems. \[\boldsymbol{\mathcal{H}}_{\text{FCI}}=c_{0}\boldsymbol{\mathcal{I}}^{(1)}\otimes\boldsymbol{\mathcal{I}}^{(2)}+c_{1}\,\boldsymbol{\mathcal{I}}^{(1)}\otimes\sigma_{\text{Z}}^{(2)}+c_{2}\,\sigma_{\text{Z}}^{(1)}\otimes\boldsymbol{\mathcal{I}}^{(2)}+c_{3}\,\sigma_{\text{Z}}^{(1)}\otimes\sigma_{\text{Z}}^{(2)}+c_{4}\,\sigma_{\text{X}}^{(1)}\otimes\sigma_{\text{X}}^{(2)}+c_{5}\,\boldsymbol{\mathcal{I}}^{(1)}\otimes\boldsymbol{\mathcal{I}}^{(2)}. \tag{16}\] This corresponds to \(\boldsymbol{\mathcal{H}}_{\text{problem}}\) in Equation (13). In this context, the numeric superscripts enclosed in parentheses mean that the written operator pertains to the specific qubit under consideration, thereby resulting in distinct matrices associated with each one. The symbol \(\otimes\) denotes the Kronecker product. Furthermore, the first and last coefficients are written separately with the same operator since they correspond to different atoms in the molecule, even though they have the same interaction. The FCI Hamiltonian in Equation (16) can also be written in matrix form, Equation (17). \[\boldsymbol{\mathcal{H}}_{\text{FCI}}=\begin{bmatrix}c_{0}+c_{1}+c_{2}+c_{3}+c_{5}&0&0&c_{4}\\ 0&c_{0}-c_{1}+c_{2}-c_{3}+c_{5}&c_{4}&0\\ 0&c_{4}&c_{0}+c_{1}-c_{2}-c_{3}+c_{5}&0\\ c_{4}&0&0&c_{0}-c_{1}-c_{2}+c_{3}+c_{5}\end{bmatrix}. \tag{17}\] As detailed before, we start from a _driving_ Hamiltonian that should be easy to prepare. For this particular problem, the Hartree-Fock (HF) approximation Hamiltonian (18) is used, with its matrix form written in Equation (19). It is easy to see that both the FCI and the HF Hamiltonians are real-valued, since they are composed only of the identity matrix, \(\boldsymbol{\mathcal{I}}\), and the Pauli matrices \(\sigma_{\text{X}}\) and \(\sigma_{\text{Z}}\). \[\boldsymbol{\mathcal{H}}_{\text{HF}}=p_{0}\,\boldsymbol{\mathcal{I}}^{(1)}\otimes\boldsymbol{\mathcal{I}}^{(2)}+p_{1}\,\boldsymbol{\mathcal{I}}^{(1)}\otimes\sigma_{\text{Z}}^{(2)}+p_{2}\,\sigma_{\text{Z}}^{(1)}\otimes\boldsymbol{\mathcal{I}}^{(2)}+p_{3}\,\sigma_{\text{Z}}^{(1)}\otimes\sigma_{\text{Z}}^{(2)}+p_{4}\,\boldsymbol{\mathcal{I}}^{(1)}\otimes\boldsymbol{\mathcal{I}}^{(2)}. \tag{18}\] \[\boldsymbol{\mathcal{H}}_{\text{HF}}=\begin{bmatrix}p_{0}+p_{1}+p_{2}+p_{3}+p_{4}&0&0&0\\ 0&p_{0}-p_{1}+p_{2}-p_{3}+p_{4}&0&0\\ 0&0&p_{0}+p_{1}-p_{2}-p_{3}+p_{4}&0\\ 0&0&0&p_{0}-p_{1}-p_{2}+p_{3}+p_{4}\end{bmatrix}. \tag{19}\] The HF method approximates the ground state of the system using a single Slater determinant. The contributions of both the initial and final Hamiltonian operators in the adiabatic evolution can be determined using advanced numerical algorithms [70].
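Since both operators are linear combinations of Kronecker products of identity and Pauli matrices, they are straightforward to assemble numerically. The sketch below (NumPy) builds the FCI Hamiltonian of Eq. (16); the coefficient values are placeholders for illustration, as the actual values come from the electronic-structure calculation and vary with the bond distance:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def h_fci(c):
    """Eq. (16): 2-qubit FCI Hamiltonian as a linear combination of
    Kronecker products of identity and Pauli matrices."""
    terms = [np.kron(I2, I2), np.kron(I2, Z), np.kron(Z, I2),
             np.kron(Z, Z), np.kron(X, X), np.kron(I2, I2)]
    return sum(ci * T for ci, T in zip(c, terms))

# Placeholder coefficients, for illustration only.
c = [-1.05, 0.40, -0.40, -0.01, 0.18, 0.0]
H_fci = h_fci(c)
assert np.allclose(H_fci, H_fci.conj().T)   # Hermitian by construction
```

The HF Hamiltonian of Eq. (18) follows the same pattern with the \(\sigma_{\text{X}}\otimes\sigma_{\text{X}}\) term removed, which is why it is diagonal in Eq. (19).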
As stated before, the primary objective is to solve the adiabatic temporal evolution while considering non-adiabatic terms. The goal is to accelerate this evolution without allowing unexpected transitions between energy states. By incorporating the adiabatic operator defined in Equation (13) into the total Hamiltonian operator described in Equation (12), and considering that the initial and final Hamiltonians are represented as shown in Equations (16-19) with \(\boldsymbol{\mathcal{H}}_{\text{initial}}:=\boldsymbol{\mathcal{H}}_{\text{HF}}\) and \(\boldsymbol{\mathcal{H}}_{\text{final}}:=\boldsymbol{\mathcal{H}}_{\text{FCI}}=\boldsymbol{\mathcal{H}}_{\text{problem}}\), we can establish the following expression, \[\boldsymbol{\mathcal{H}}(t):=(1-\lambda(t))\,\boldsymbol{\mathcal{H}}_{\text{HF}}+\lambda(t)\boldsymbol{\mathcal{H}}_{\text{problem}}+\frac{d\lambda}{dt}\boldsymbol{\mathcal{A}}_{\text{CD}}(t). \tag{20}\]

### Related work

The primary objective of CD driving processes is to attain a gauge potential, \(\boldsymbol{\mathcal{A}}_{\text{CD}}(t)\), by expediting the adiabatic process through a temporal velocity parameter, specifically the time derivative of the scheduling function, \(\frac{d\lambda}{dt}\). The aforementioned operator possesses the property that the product \(\frac{d\lambda}{dt}\boldsymbol{\mathcal{A}}_{\text{CD}}\) in Equation (20) fully mitigates the occurrence of non-adiabatic transitions that would otherwise manifest in a specific eigenstate \(|n(t)\rangle\) under the total \(\boldsymbol{\mathcal{H}}(t)\), which has undergone evolution from the initial state \(|n(t=0)\rangle\) of the control Hamiltonian, \(\boldsymbol{\mathcal{H}}_{\text{AD}}(t)\) [73]. Therefore, the aim of the related methodologies is to ensure precise compensation of the non-adiabatic terms in the evolutionary dynamics within the moving frame [40]. This potential, comprising the non-adiabatic compensations, represents an operator that is typically intricate to derive. While an exact numerical construction is possible for certain systems with a small number of particles, resolving it for systems with numerous bodies (a substantial number of qubits) becomes exceedingly challenging, if not impossible, due to the requirement of diagonalizing the resulting Hamiltonian observable across the entire Hilbert space. Furthermore, \(\mathbf{\mathcal{A}}_{\text{CD}}\) often entails the emergence of highly intricate and non-local couplings, thus limiting its implementation to specific cases [74, 75, 76]. Part of the research focused on robustly obtaining the potential \(\mathbf{\mathcal{A}}_{\text{CD}}\) has led to works such as [77], where fast-forward methods are presented through which it is possible to obtain a target wave function in a shorter time. This investigation is conducted within the field of microscopic quantum mechanics, aiming to delve into the macroscopic domain primarily governed by the Schrödinger equation. These inquiries are expounded upon in references such as [36, 78]. Despite these notable progressions, there presently exists no viable approach to extend these studies to the context of many-body systems. Consequently, these methodologies are not applicable to complex problem sets. On the other hand, several recent studies such as [63] have suggested alternative approaches for addressing the aforementioned issues. The authors suggest employing Variational Circuits (VC) in conjunction with classical optimizers to optimize the choice of CD terms, treating them as trainable parameters; the approach is tested on the specific case of an Ising model with nearest-neighbor interactions and compared with some of the state-of-the-art QAOA methodologies [79, 80, 23].
Some assumptions are made regarding the form of the function \(\lambda(t)\), thereby predefining a temporal parameterization, which may exhibit certain dependencies on the problem under investigation. Returning to the latest approaches to derive \(\mathbf{\mathcal{A}}_{\text{CD}}(t)\) as described in the existing literature, based on its definition in Equation (12) and assuming that the parameterization is completely determined by the function \(\lambda(t)\), by differentiation of Equation (9) [42], it is straightforward to arrive at \[\left\langle m\right|\mathbf{\mathcal{A}}_{\text{CD}}\left|n\right\rangle=i\left\langle m\middle|\partial_{\lambda}n\right\rangle=-i\frac{\left\langle m\right|\partial_{\lambda}\mathbf{\mathcal{H}}_{\text{AD}}\left|n\right\rangle}{E_{m}-E_{n}}. \tag{21}\] However, it can be readily observed from Equation (21) that determining the operator \(\mathbf{\mathcal{A}}_{\text{CD}}(t)\) can become highly intricate and computationally expensive, especially when dealing with many-body problems. This potential necessitates exact diagonalization of the operator \(\mathbf{\mathcal{H}}_{\text{AD}}(t)\) over time and, additionally, the difference in energies \(E_{m}-E_{n}\) can introduce complications, leading to divergent and mathematically ill-defined matrix elements. In an attempt to address these challenges, the authors in [45] suggest employing the so-called method of NC for approximating the representation of \(\mathbf{\mathcal{A}}_{\text{CD}}(t)\). \[\mathbf{\mathcal{A}}_{\text{CD}}^{(l)}=i\sum_{k=1}^{l}\alpha_{k}\underbrace{[\mathbf{\mathcal{H}}_{\text{AD}},[\mathbf{\mathcal{H}}_{\text{AD}},\ldots[\mathbf{\mathcal{H}}_{\text{AD}}}_{2k-1},\partial_{\lambda}\mathbf{\mathcal{H}}_{\text{AD}}]\ldots]]. \tag{22}\] Equation (22) presents a methodology for obtaining an approximate numerical approach for the operator \(\mathbf{\mathcal{A}}_{\text{CD}}\). In this expression, \(l\) represents the order of the expansion, and the set of coefficients \(\{\alpha_{k}\}_{k=1}^{l}\) can be determined by minimizing the action in Equation (14) specifically tailored for the \(l\)-th order. For further details and a comprehensive demonstration, interested readers are referred to [45], where it is demonstrated that the exact potential \(\mathbf{\mathcal{A}}_{\text{CD}}\) is recovered in the limit as \(l\) tends towards infinity. Notwithstanding its utilization as an approximation limited to an expansion order \(l\), this ansatz has proven to be a methodological framework for attaining cutting-edge outcomes in the field of research [23, 43, 46, 81, 82]. In the absence of an analytical solution to the problem of the CD protocol, the most compelling alternative at our disposal is to employ techniques rooted in DL, which possess the capacity to comprehend and acquire knowledge of the fundamental physics governing the problem.
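Numerically, the two routes contrasted above can be sketched in a few lines. The first function implements the exact matrix elements of Eq. (21) through diagonalization (regularizing vanishing gaps, which is precisely where the ill-defined elements arise); the second implements the nested-commutator ansatz of Eq. (22) for given coefficients \(\alpha_{k}\). Both are illustrative sketches rather than production implementations:

```python
import numpy as np

def gauge_potential_exact(H_ad, dH_dlam, eps=1e-9):
    """Eq. (21): matrix elements of A_CD from exact diagonalization of H_AD.
    dH_dlam is the derivative of H_AD with respect to lambda."""
    E, V = np.linalg.eigh(H_ad)
    dH_eig = V.conj().T @ dH_dlam @ V             # dH in the instantaneous eigenbasis
    gap = E[:, None] - E[None, :]                 # E_m - E_n
    safe = np.where(np.abs(gap) > eps, gap, 1.0)  # avoid dividing by zero gaps
    A = np.where(np.abs(gap) > eps, -1j * dH_eig / safe, 0.0)
    return V @ A @ V.conj().T                     # back to the computational basis

def gauge_potential_nc(H_ad, dH_dlam, alphas):
    """Eq. (22): nested-commutator ansatz; the k-th term carries 2k-1 commutators."""
    comm = lambda A, B: A @ B - B @ A
    term = comm(H_ad, dH_dlam)                    # k = 1: one commutator
    total = 1j * alphas[0] * term
    for a_k in alphas[1:]:
        term = comm(H_ad, comm(H_ad, term))       # two more commutators per order
        total = total + 1j * a_k * term
    return total
```

For the 2-qubit problem, `H_ad` and `dH_dlam` can be built directly from Eqs. (13) and (16)-(19), noting that \(\partial_{\lambda}\boldsymbol{\mathcal{H}}_{\text{AD}}=\boldsymbol{\mathcal{H}}_{\text{problem}}-\boldsymbol{\mathcal{H}}_{\text{HF}}\).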
## 3 Methodology

### PINN framework

Our approach involves employing a Physics-Informed Neural Network (PINN) that incorporates a neural network structure comprising multiple fully connected dense layers, with the total number of layers denoted as \(K\). This includes both the input and output layers. Each layer consists of a variable number of neurons, represented by \(N_{k}\), and is characterized by a common output activation function denoted as \(\sigma_{k}\). This activation function remains consistent across all neurons within a given layer after it is specified. Let \(\mathbf{W}^{(k)}\in\mathbb{R}^{N_{k}\times N_{k-1}}\) be the weight matrix connecting the \((k-1)\)-th layer to the \(k\)-th layer, and \(\mathbf{b}^{(k)}\in\mathbb{R}^{N_{k}}\) be the bias vector for the \(k\)-th layer. Consequently, the output of the \(k\)-th layer, denoted as \(\mathbf{\mathcal{U}}_{\Theta}^{(k)}\), can be expressed as the application of the activation function \(\sigma_{k}\) to the weighted sum of the output of the previous layer, followed by the addition of the biases, as denoted in Equation (23): \[\mathbf{\mathcal{U}}_{\Theta}^{(k)}=\sigma_{k}\left(\mathbf{W}^{(k)}\mathbf{\mathcal{U}}_{\Theta}^{(k-1)}+\mathbf{b}^{(k)}\right) \tag{23}\] In this manner, contingent upon the specific problem under consideration, additional network parameters beyond weights and biases may be taken into account. Nevertheless, if only these two sets are considered, the trainable variables would be limited to those typically found in a conventional network, denoted as \(\Theta:=\{\mathbf{W}^{(k)},\mathbf{b}^{(k)}\}_{1\leq k\leq K}\). Regarding the activation functions, particularly those employed at the output layer, they need to be tailored to suit the requirements of the specific physical problem at hand. For instance, certain physical variables may exhibit limitations within a defined range of values. An example of such a case is the scheduling function as described in (20), which is constrained to the interval \([0,1]\) by definition, or the velocity of a specific fluid within relativistic scenarios, which is subject to an upper limit that cannot surpass the speed of light [57]. In the context of our specific problem, the sole independent variable is time, thereby allowing us to exclude spatial dimensions as inputs to the neural model. Consequently, by considering (23) and the time interval \(t\in[0,T]\), we can express the ensemble of output variables of the architecture in the following manner: \[\mathbf{\mathcal{U}}(t)\approx\mathbf{\mathcal{U}}_{\Theta}(t)=\sigma_{K}\left(\mathbf{\mathcal{U}}_{\Theta}^{(K)}\circ\sigma_{K-1}\circ\mathbf{\mathcal{U}}_{\Theta}^{(K-1)}\circ\ldots\circ\sigma_{1}\circ\mathbf{\mathcal{U}}_{\Theta}^{(1)}\right)(t) \tag{24}\] Equation (24) showcases the mathematical representation of the approximation proposed by the underlying neural network in our methodology. This approach aims to closely resemble the actual physical solution following the completion of the training process. In this context, the symbol \(\circ\) represents the composition operator. Recent research in the field asserts that PINNs enhance their performance by incorporating dynamic activation functions that vary with the training process and are distinct for each neuron [83]. However, our focus lies primarily on establishing the definition of physical inductive biases within the network. Consequently, each layer \(k\) will be associated with an activation function denoted as \(\sigma_{k}\), which uniformly affects the output tensor of that particular layer.
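A possible realization of Eqs. (23)-(24) in PyTorch is sketched below, anticipating the outputs \((\lambda,\boldsymbol{\mathcal{A}}_{\text{CD}},\boldsymbol{\mathcal{C}})\) introduced in Eq. (25) of the next subsection. The depth, width, and activation choices are illustrative, not the configuration reported later; a sigmoid output head keeps \(\lambda\) in \([0,1]\), while the \(\boldsymbol{\mathcal{A}}_{\text{CD}}\) head outputs real and imaginary parts separately:

```python
import torch
import torch.nn as nn

class CDPinn(nn.Module):
    """Eqs. (23)-(25): time t -> (lambda, A_CD, C) for an N_Q-qubit system.
    Layer sizes are illustrative, not the architecture used in the paper."""
    def __init__(self, n_qubits=2, hidden=64, depth=4):
        super().__init__()
        self.dim = 2 ** n_qubits                         # Hilbert-space dimension
        blocks, d = [], 1
        for _ in range(depth):
            blocks += [nn.Linear(d, hidden), nn.Tanh()]
            d = hidden
        self.trunk = nn.Sequential(*blocks)
        self.lam_head = nn.Linear(hidden, 1)             # scheduling function
        self.a_head = nn.Linear(hidden, 2 * self.dim ** 2)   # Re/Im of A_CD
        self.c_head = nn.Linear(hidden, 4 ** n_qubits)       # Pauli coefficients

    def forward(self, t):
        h = self.trunk(t)
        lam = torch.sigmoid(self.lam_head(h))            # keeps lambda in [0, 1]
        a = self.a_head(h).view(-1, 2, self.dim, self.dim)
        A_cd = torch.complex(a[:, 0], a[:, 1])           # (B, dim, dim)
        return lam, A_cd, self.c_head(h)
```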
### Inductive biases and optimization

Our approach involves employing a methodology based on PINNs, which allows us to incorporate strong inductive biases and a priori knowledge into the neural network. This incorporation is intended to ensure that the underlying physics governing the problem is adequately satisfied. To achieve this objective, it is essential to consider that the output of the underlying network in our methodology will consist of a set of variables denoted as \[\mathbf{\mathcal{U}}_{\Theta}(t):=\left(\lambda,\mathbf{\mathcal{A}}_{\text{CD}},\mathbf{\mathcal{C}}\right)_{\Theta}. \tag{25}\] Henceforth, the utilization of the symbol \(\Theta\) to denote the network prediction will be omitted and presumed understood, except in cases where the terminology may potentially result in misconceptions. Here, \(\lambda\in\mathbb{R}\) denotes the scheduling function, while \(\mathbf{\mathcal{A}}_{\text{CD}}\in\mathcal{M}_{2^{N_{Q}}\times 2^{N_{Q}}}(\mathbb{C})\) represents the counterdiabatic terms of the evolution, and \(\mathbf{\mathcal{C}}\in\mathbb{R}^{4^{N_{Q}}}\) stands for the set of coefficients in which these counterdiabatic terms can be linearly decomposed, attending to all possible tensors arising from Kronecker products of the identity and Pauli matrices, \(\{\mathbf{\mathcal{I}},\sigma_{\text{X}},\sigma_{\text{Y}},\sigma_{\text{Z}}\}\), for the number of qubits considered (see [84] Chapter 2.1). In general, \(\mathbf{\mathcal{A}}_{\text{CD}}\) is an operator composed of complex terms and will be of size \(2^{N_{Q}}\times 2^{N_{Q}}\), with \(N_{Q}\) being the number of qubits under consideration. Moreover, notwithstanding that the dependency may not always be explicitly stated, all variables emerging from the PINN are contingent upon the input time. Hence, the network yields solutions for each of the considered inference times. The underlying neural network will be optimized using the so-called "soft enforcement" technique, as introduced in Equations (6)-(8). We will adapt this methodology to our specific scenario of Hamiltonian dynamics, where the global cost function can be decomposed into multiple terms. Specifically, we will address the initial and final time conditions for an input time interval \(t\in[t_{\text{min}},t_{\text{max}}]\), as described in Equations (26) and (27), respectively. \[\mathcal{L}_{\mathcal{IC}}:=\frac{\omega_{\mathcal{IC},1}}{N_{\mathcal{IC}}}\sum_{\{t_{\text{min}}\}}|\lambda(t_{\text{min}})|^{2}+\frac{\omega_{\mathcal{IC},2}}{N_{\mathcal{IC}}}\sum_{\{t_{\text{min}}\}}\left|\mathbf{\mathcal{H}}(t_{\text{min}})-\mathbf{\mathcal{H}}_{\text{HF}}\right|^{2}, \tag{26}\] \[\mathcal{L}_{\mathcal{FC}}:=\frac{\omega_{\mathcal{FC},1}}{N_{\mathcal{FC}}}\sum_{\{t_{\text{max}}\}}\left|\lambda(t_{\text{max}})-1\right|^{2}+\frac{\omega_{\mathcal{FC},2}}{N_{\mathcal{FC}}}\sum_{\{t_{\text{max}}\}}\left|\mathbf{\mathcal{H}}(t_{\text{max}})-\mathbf{\mathcal{H}}_{\text{problem}}\right|^{2}. \tag{27}\] In the definitions above, \((\omega_{\mathcal{IC},1},\omega_{\mathcal{IC},2})\) and \((\omega_{\mathcal{FC},1},\omega_{\mathcal{FC},2})\) are the weights of the mixture in the calculation of the total loss function, whose values will depend on the knowledge applied as well as on the problem being treated, while \(N_{\mathcal{IC}}\) and \(N_{\mathcal{FC}}\) represent the number of sample points at the initial and final instants, respectively.
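Reusing the `CDPinn` sketch above, the \(\lambda\)-dependent parts of Eqs. (26)-(27) reduce to two MSE penalties; the Hamiltonian terms follow the same pattern once \(\boldsymbol{\mathcal{H}}(t)\) is assembled from the network outputs via Eq. (28). A hedged sketch:

```python
import torch

def endpoint_losses(model, t_min=0.0, t_max=1.0):
    """Lambda terms of Eqs. (26)-(27); the terms |H(t_min) - H_HF|^2 and
    |H(t_max) - H_problem|^2 are analogous MSE penalties on the assembled
    Hamiltonian and are omitted here for brevity."""
    lam0, _, _ = model(torch.tensor([[t_min]]))
    lam1, _, _ = model(torch.tensor([[t_max]]))
    loss_ic = (lam0 ** 2).mean()                  # enforces lambda(t_min) = 0
    loss_fc = ((lam1 - 1.0) ** 2).mean()          # enforces lambda(t_max) = 1
    return loss_ic, loss_fc
```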
Regarding the scheduling function of the problem, denoted as \(\lambda(t)\), this function shall delineate the progression of physical states subsequent to the introduction of counterdiabatic terms as stipulated in Equation (20). At the initial time instant, we impose the condition \(\lambda(t_{\text{min}})=0\). Similarly, at the terminal time instant, we prescribe that \(\lambda(t_{\text{max}})\) assumes a value of one, i.e., \(\lambda(t_{\text{max}})=1\), as per its formal definition [46]. The aforementioned conditions correspond to the physical limitations that are inherently necessary to be satisfied within our specific scenario. At the initial moment, denoted as \(t=t_{\text{min}}\), we enforce the scheduling function to be zero, while ensuring that the resulting Hamiltonian operator (20) is equivalent to the one obtained through the Hartree-Fock method (see [85] Chapter 2.2), defined as \(\boldsymbol{\mathcal{H}}(t_{\text{min}})=\boldsymbol{\mathcal{H}}_{\text{HF}}\). Additionally, by incorporating counterdiabatic terms, our intention is to accelerate the adiabatic transition and reduce the computational complexity of the underlying circuits [23, 43]. This, however, does not impede our knowledge of the final Hamiltonian operator, denoted as \(\boldsymbol{\mathcal{H}}_{\text{problem}}\), which can be computed via advanced numerical methods in the chemistry field [70]. These conditions, combined with the requirement that the scheduling function must be equal to one at the conclusion of the evolution, collectively constitute the final conditions mandated by the methodology. After establishing the initial and final inductive biases explicitly, it becomes crucial to elucidate the physics governing the intermediate time periods within the interval. Upon acquiring all the aforementioned components, we possess the necessary elements to construct the complete Hamiltonian operator for the given problem, as delineated in Equation (20). However, for the purpose of coherence, we reiterate the expression here to avoid any disruption in the logical progression. \[\boldsymbol{\mathcal{H}}(t)=\boldsymbol{\mathcal{H}}_{\text{AD}}(t)+\frac{d\lambda}{dt}\boldsymbol{\mathcal{A}}_{\text{CD}}(t),\qquad\text{with}\qquad\boldsymbol{\mathcal{H}}_{\text{AD}}(t):=(1-\lambda(t))\,\boldsymbol{\mathcal{H}}_{\text{HF}}+\lambda(t)\boldsymbol{\mathcal{H}}_{\text{problem}}. \tag{28}\] As is well known, Hamiltonian operators in their general form comprise complex entries and must adhere to the property of Hermiticity. By satisfying this requirement, the operators can be appropriately interpreted as physical observables, enabling the extraction of purely real energy values. Consequently, our operators should possess Hermitian properties. Furthermore, it is imperative to impose the condition that the neural network yields the solution for the Gauge potential (counterdiabatic terms) that achieves the utmost reduction in the physical action within the given scenario. This entails selecting the solution that results in achieving \[\frac{\delta\mathcal{S}(\boldsymbol{\mathcal{A}}_{\text{CD}})}{\delta\boldsymbol{\mathcal{A}}_{\text{CD}}}=0. \tag{29}\]
The minimization of the physical action represents a crucial requirement for ensuring that the employed methodology yields an optimal operator \(\boldsymbol{\mathcal{A}}_{\text{CD}}(t)\) under specific conditions, encompassing local temporal development, robustness, availability, and experimental accessibility, among other factors. Research studies, such as [64] and related literature, demonstrate the role of the action in recovering the Euler-Lagrange Equation (15). Consequently, demanding that the neural network minimize the action is entirely equivalent to defining the term associated with the physical loss, as described in (30). Moreover, it is well-established that the temporal rate of change of the scheduling function, denoted as \(\frac{d\lambda}{dt}:=\dot{\lambda}\), represents the velocity or rate at which the non-adiabatic components drive the evolution of the system in the presence of the total Hamiltonian operator. Consequently, when the derivative approaches zero, i.e., \(\dot{\lambda}=0\), the conventional adiabatic Hamiltonian is recovered. However, it is undesirable for the counterdiabatic terms to greatly surpass the adiabatic counterparts, as their purpose is to expedite the process without exerting dominant influence. Hence, it is essential that the time derivative of \(\lambda\) remains small, yet not entirely nullified. This particular information must be communicated to the PINN through Equation (31). \[\mathcal{L}_{\text{Least Action}}:=\frac{\omega_{\text{Action}}}{N_{\mathcal{F}}}\sum_{(t_{\text{min}},t_{\text{max}})}\left|\left[i\frac{\partial\boldsymbol{\mathcal{H}}_{\text{AD}}(t)}{\partial\lambda}-\left[\boldsymbol{\mathcal{A}}_{\text{CD}}(t),\boldsymbol{\mathcal{H}}_{\text{AD}}(t)\right],\boldsymbol{\mathcal{H}}_{\text{AD}}(t)\right]\right|^{2}. \tag{30}\] \[\mathcal{L}_{\text{Adiabaticity}}:=\frac{\omega_{\text{Ad}}}{N_{\mathcal{F}}}\sum_{(t_{\text{min}},t_{\text{max}})}\left|\frac{d\lambda}{dt}\right|^{2}. \tag{31}\] As mentioned in Section 2.3, the set \(\{\sigma_{0},\sigma_{\text{X}},\sigma_{\text{Y}},\sigma_{\text{Z}}\}\subset\mathcal{M}_{2\times 2}(\mathbb{C})\) forms an orthogonal basis of the Hilbert space of \(2\times 2\) Hermitian matrices. Consequently, it is both logical and well-founded to seek the computation of the operator \(\boldsymbol{\mathcal{A}}_{\text{CD}}\) as a composite of tensor products involving the previously introduced Pauli matrices for a given system of qubits. By doing so, it is possible to construct a linear combination of tensor products involving these matrices, yielding a set of coefficients denoted as \(\mathbf{\mathcal{C}}\in\mathbb{R}^{4^{N_{Q}}}\). Each element within this set represents the relative magnitude of each term within the operator. Therefore, the decomposition expressed in Equation (32) enables us to perform efficient simulations and facilitates a more accessible analysis of physical systems through the utilization of quantum circuit models. \[\mathbf{\mathcal{A}}_{\text{CD}}^{{}^{\prime}}(t):=\sum_{i,j,\ldots,N_{Q}\in\{0,\text{X},\text{Y},\text{Z}\}}\mathbf{\mathcal{C}}_{i,j,\ldots,N_{Q}}(t)\left(\sigma_{i}\otimes\sigma_{j}\otimes\ldots\otimes\sigma_{N_{Q}}\right). \tag{32}\] In this study, we employ \(\mathbf{\mathcal{A}}_{\text{CD}}^{{}^{\prime}}(t)\) to represent the non-adiabatic terms. This notation serves to distinguish this expansion, which takes the form of a linear combination, from the operator that is output directly by the PINN.
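As a rough illustration of how Equations (28), (30), and (31) translate into code, the following PyTorch sketch builds \(\boldsymbol{\mathcal{H}}_{\text{AD}}(t)\) over a batch of collocation times and evaluates the least-action and adiabaticity residuals. Reading Eq. (30) as a squared Frobenius norm of the nested commutator is our interpretation; all names and shapes are assumptions rather than the authors' code.

```python
import torch

def comm(X, Y):
    """Batched matrix commutator [X, Y]."""
    return X @ Y - Y @ X

def total_hamiltonian(lam, dlam_dt, A_cd, H_hf, H_prob):
    """Eq. (28): H(t) = H_AD(t) + (dlam/dt) A_CD(t)."""
    H_hf, H_prob = H_hf.to(A_cd.dtype), H_prob.to(A_cd.dtype)
    H_ad = (1 - lam)[:, None, None] * H_hf + lam[:, None, None] * H_prob
    return H_ad + dlam_dt[:, None, None] * A_cd

def physics_losses(lam, dlam_dt, A_cd, H_hf, H_prob, w_action=1.0, w_ad=1.0):
    """Least-action residual (Eq. 30) and adiabaticity penalty (Eq. 31).

    lam, dlam_dt: (N,) real; A_cd: (N, d, d) complex; H_hf, H_prob: (d, d).
    """
    H_hf, H_prob = H_hf.to(A_cd.dtype), H_prob.to(A_cd.dtype)
    # H_AD is linear in lam, so dH_AD/dlam = H_problem - H_HF
    H_ad = (1 - lam)[:, None, None] * H_hf + lam[:, None, None] * H_prob
    dH_dlam = (H_prob - H_hf).expand_as(H_ad)
    # Euler-Lagrange condition for the gauge potential: [G, H_AD] = 0 with
    # G = i dH_AD/dlam - [A_CD, H_AD]; we penalise its squared norm
    G = 1j * dH_dlam - comm(A_cd, H_ad)
    L_action = w_action * (comm(G, H_ad).abs() ** 2).sum(dim=(-2, -1)).mean()
    # Eq. (31): keep dlam/dt small but not strictly zero
    L_adiab = w_ad * (dlam_dt ** 2).mean()
    return L_action, L_adiab
```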
To achieve this objective, it is necessary to introduce an additional term into the loss function of the neural network, which directly affects the set of coefficients denoted as \(\mathbf{\mathcal{C}}(t)\), as shown in Equation (33). Consequently, these terms are dynamically adjusted during the training process in order to construct the decomposition of the Gauge operator. By employing the specified procedure and adhering to the prescribed requirement, our methodology exhibits the capability to yield these scalar quantities not only at the initial and final moments but also throughout the entire temporal interval. This is attributable to the fact that these scalars, in a general sense, are functions contingent upon time. \[\mathcal{L}_{\text{Coupling}}:=\frac{\omega_{\text{Coupling}}}{N_{\mathcal{F}}}\sum_{(t_{\text{min}},t_{\text{max}})}\left|\mathbf{\mathcal{A}}_{\text{CD}}(t)-\mathbf{\mathcal{A}}_{\text{CD}}^{{}^{\prime}}(t)\right|^{2}. \tag{33}\] Once all the requisite components have been specified according to Equations (30), (31), and (32), it becomes feasible to establish the loss term for our PINN, Equation (34), which is solely linked to the underlying differential equations. In this context, the vector \((\omega_{\text{Action}},\omega_{\text{Adiabaticity}},\omega_{\text{Coupling}})\) denotes the weights employed in the combination process when constructing the resultant term. Thus, by considering Equation (5) and acknowledging the pre-established mixture weights within our loss terms, the formulation of the final loss function, Equation (35), becomes straightforward. This incorporates the loss associated with the PDEs outlined in Equation (34), as well as the temporal constraints at the initial and final temporal steps, as indicated in (26) and (27). Consequently, the defined loss function serves as the physical metric that guides the optimization process, dictating the objective for the neural network within our methodology to minimize. \[\mathcal{L}_{\mathcal{F}}:=\mathcal{L}_{\text{Least Action}}+\mathcal{L}_{\text{Adiabaticity}}+\mathcal{L}_{\text{Coupling}}. \tag{34}\] \[\mathcal{L}:=\mathcal{L}_{\mathcal{IC}}+\mathcal{L}_{\mathcal{FC}}+\mathcal{L}_{\mathcal{F}}. \tag{35}\] In addition, the network has not been explicitly required to satisfy the necessary condition of operator hermiticity, even though this could be achieved by including an additional term in Equation (34) that minimizes the difference \(\left|\mathbf{\mathcal{A}}_{\text{CD}}-\mathbf{\mathcal{A}}_{\text{CD}}^{\dagger}\right|^{2}\). Nonetheless, such a restriction is not obligatory and would be redundant for the PINN. If the coefficients \(\mathbf{\mathcal{C}}\in\mathbb{R}^{4^{N_{Q}}}\) are defined as real, i.e., \(\mathbf{\mathcal{C}}=\mathbf{\mathcal{C}}^{*}\), then it is evident from the decomposition of \(\mathbf{\mathcal{A}}_{\text{CD}}^{{}^{\prime}}\) in Equation (32) that we recover \(\mathbf{\mathcal{A}}_{\text{CD}}^{{}^{\prime}}-\mathbf{\mathcal{A}}_{\text{CD}}^{{}^{\prime}\dagger}=0\) naturally, without necessitating any additional requirements. Therefore, the physical condition expressed in Equation (33) is more than sufficient for the neural network to ensure the hermiticity of the operator \(\mathbf{\mathcal{A}}_{\text{CD}}\), and since \(\mathbf{\mathcal{H}}_{\text{HF}},\mathbf{\mathcal{H}}_{\text{problem}}\in\mathcal{M}_{2^{N_{Q}}\times 2^{N_{Q}}}(\mathbb{R})\) [23], the hermiticity of the complete Hamiltonian operator is also ensured.
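A possible implementation of the expansion (32) and of the coupling term (33) is sketched below: `pauli_basis` enumerates the \(4^{N_{Q}}\) Kronecker strings, and the closing comment assembles the totals of Eqs. (34)-(35). Function and variable names are ours, not the authors'.

```python
import itertools
import torch

PAULIS = [torch.eye(2, dtype=torch.cfloat),                       # I
          torch.tensor([[0, 1], [1, 0]], dtype=torch.cfloat),     # X
          torch.tensor([[0, -1j], [1j, 0]], dtype=torch.cfloat),  # Y
          torch.tensor([[1, 0], [0, -1]], dtype=torch.cfloat)]    # Z

def pauli_basis(n_qubits):
    """All 4**n_qubits Kronecker strings of {I, X, Y, Z}: (4**n, 2**n, 2**n)."""
    strings = []
    for combo in itertools.product(PAULIS, repeat=n_qubits):
        P = combo[0]
        for s in combo[1:]:
            P = torch.kron(P, s)
        strings.append(P)
    return torch.stack(strings)

def coupling_loss(A_cd, C, basis, w_coupling=1.0):
    """Eq. (33): match A_CD(t) to its real-coefficient expansion of Eq. (32)."""
    # A'_CD(t) = sum_k C_k(t) P_k, with C real so that A'_CD is Hermitian
    A_prime = torch.einsum('nk,kij->nij', C.to(basis.dtype), basis)
    return w_coupling * ((A_cd - A_prime).abs() ** 2).sum(dim=(-2, -1)).mean()

# Eqs. (34)-(35): L_F = L_action + L_adiab + L_coupling; L = L_IC + L_FC + L_F
```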
After considering all the relevant factors, a comprehensive description of our methodology can be provided. In order to accomplish this, we will utilize a visual aid in the form of a diagram (Figure 1), and a step-by-step algorithm outlined in Algorithm 1. The initial step involves identifying the independent variable(s) that will influence our outcomes. In our particular case, we only have the temporal variable, denoted as \(t\), defined within the interval \([t_{\text{min}},t_{\text{max}}]\), commonly specified as \([t_{\text{min}},t_{\text{max}}]=[0,1]\). Consequently, we need to select an appropriate sampling method. Various methods are available, including uniform sampling with equidistant points, random sampling, Latin hypercube sampling [86, 87], and Sobol sampling [88]. The choice of method is somewhat arbitrary, with greater impact when considering fewer time points, as its significance diminishes as the sample size approaches infinity. In particular, Sobol sampling has exhibited significant advancements in prior inquiries within the relevant body of literature [89]. In our investigation, we have employed this approach, which entails generating pseudo-random samples using powers of two. This technique results in a more homogeneous distribution of points within the space, with reduced overlap compared to completely random sampling. Nonetheless, it is important to acknowledge the inherent randomness involved in this approach. After generating the time domain, it serves as the input to the network, which will be constructed as a sequence of \(K\) fully connected dense layers, encompassing both the input and output layers. The number of layers and the number of neurons in each layer (\(N_{k}\)) are hyperparameters that will be predetermined prior to training. While these parameters can be adjusted, it is anticipated that a relatively conventional configuration, such as 6 or 7 layers with 30 to 40 neurons each, will suffice for characterizing the counterdiabatic driving problem. Nevertheless, the complexity of individual problems may necessitate varying numbers of layers and/or neurons. With the establishment of the input domain and the construction of the network, we are now capable of extracting the output and interpreting it as the tensor of physical variables denoted by \(\mathcal{U}_{\Theta}(t)=\left(\lambda,\boldsymbol{\mathcal{A}}_{\text{CD}},\boldsymbol{\mathcal{C}}\right)_{\Theta}\), where \(\lambda\in\mathbb{R}\), \(\boldsymbol{\mathcal{A}}_{\text{CD}}\in\mathcal{M}_{2^{N_{Q}}\times 2^{N_{Q}}}(\mathbb{C})\), and \(\boldsymbol{\mathcal{C}}\in\mathbb{R}^{4^{N_{Q}}}\).

Figure 1: The methodology follows a general procedure where time, represented by the variable \(t\), is the only independent physical variable considered, incorporated into our network as a tensor. The output of the methodology comprises three elements: the scalar variable of the scheduling function, denoted as \(\lambda(t)\); the operator representing the counterdiabatic terms of the process, expressed as \(\boldsymbol{\mathcal{A}}_{\text{CD}}(t)\), with each of its components treated as an independent output; and the coefficients \(\boldsymbol{\mathcal{C}}(t)\), which denote the general decomposition coefficients of the operator \(\boldsymbol{\mathcal{A}}_{\text{CD}}\) expressed as a linear combination of tensors resulting from all possible Kronecker products formed between the set of operators comprising the identity matrix and the Pauli operators. Subsequently, the computations required to construct the total Hamiltonian, denoted as \(\boldsymbol{\mathcal{H}}(t)\), are carried out. Various physical constraints are imposed during these computations, including inductive biases. These constraints adhere to the principle of minimum action, satisfy the initial and final conditions, and ensure the hermiticity of the physical operators, among other specifications. At each step of the training process, the total loss is calculated by aggregating the contributions from all imposed conditions, denoted as \(\mathcal{L}\). This calculation continues until a specific training period is reached or until a predetermined error threshold is achieved.
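A minimal PyTorch realisation of the dense trunk just described might read as follows. The three output heads map the shared features to \(\lambda\), to the real and imaginary parts of \(\boldsymbol{\mathcal{A}}_{\text{CD}}\), and to the coefficients \(\boldsymbol{\mathcal{C}}\); the Tanh activation is our own choice, since the text does not fix one, and the class name is hypothetical.

```python
import torch
import torch.nn as nn

class CDPinn(nn.Module):
    """Illustrative PINN: t -> (lambda(t), A_CD(t), C(t)) for n_qubits qubits."""
    def __init__(self, n_qubits=2, hidden=30, depth=6):
        super().__init__()
        self.dim = 2 ** n_qubits
        layers, width = [], 1
        for _ in range(depth):
            layers += [nn.Linear(width, hidden), nn.Tanh()]
            width = hidden
        self.trunk = nn.Sequential(*layers)
        self.head_lam = nn.Linear(hidden, 1)
        self.head_A = nn.Linear(hidden, 2 * self.dim ** 2)  # Re and Im of A_CD
        self.head_C = nn.Linear(hidden, 4 ** n_qubits)

    def forward(self, t):                       # t: (N, 1) sampled times
        h = self.trunk(t)
        lam = self.head_lam(h).squeeze(-1)
        a = self.head_A(h).view(-1, 2, self.dim, self.dim)
        A_cd = torch.complex(a[:, 0], a[:, 1])
        return lam, A_cd, self.head_C(h)
```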
Subsequently, the derivative \(\frac{d\lambda}{dt}\) can be straightforwardly computed using automatic differentiation, yielding the Hamiltonian operator \(\mathbf{\mathcal{H}}(t)\) as described in (20). Furthermore, the initial and final temporal constraints, defined in Equations (26) and (27), are necessary. These constraints are accompanied by the calculation of the loss associated with the underlying PDEs stated in Equation (34), which incorporates the physical constraints outlined in Equations (30), (31), and (33). These physical restrictions represent the fulfillment of the Principle of Least Action, the agreement with adiabaticity, and the decomposition of the operator \(\mathbf{\mathcal{A}}_{\text{CD}}(t)\), as previously explained. By combining all these components, the final loss metric \(\mathcal{L}\) given in Equation (35) is computed, and the set of trainable variables \(\Theta\) is updated via backpropagation, minimizing the loss through an optimization process.

## 4 Numerical experiments and results

In this section, various numerical results obtained through our DL-based methodology are presented, as outlined in Section 3. This approach has been applied to address the \(\mathrm{H}_{2}\) molecule problem within the STO-3G basis, considering different bond distances between the particles and utilizing a 2-qubit representation. The initial and final Hamiltonian operators used for the evolution, as denoted by the PDE in (20), are listed in Table 1 and were configured following the guidelines in [70]. Since our numerical example involves a 2-qubit representation, these operators will possess a matrix size of \(4\times 4\). Furthermore, it is pertinent to note that these operators are real-valued by definition, i.e., \(\mathbf{\mathcal{H}}_{\text{HF}},\mathbf{\mathcal{H}}_{\text{problem}}\in\mathcal{M}_{4\times 4}\left(\mathbb{R}\right)\). In a general context, and unless explicitly stated otherwise, all conducted trainings comprised a total of 500,000 epochs (iterations) dedicated to updating the \(\Theta\) parameters of the used PINN. The training procedure utilized the _PyTorch_ module [91], employing the _Adam_ optimizer [92] with a fixed learning rate of \(10^{-5}\). This learning rate was chosen to be sufficiently small, ensuring convergence without excessive numerical noise. Throughout all examples, the sampling method used has been Sobol sampling [88], as stated in Section 3. The number of sampling points at the initial and final time instances of the evolution, denoted as \(N_{\mathcal{I}\mathcal{C}}\) and \(N_{\mathcal{F}\mathcal{C}}\) respectively, has remained constant at \(2^{13}\).
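Putting the pieces together, the optimisation loop just described can be sketched as below, reusing the illustrative helpers from the previous listings. The Sobol sampler, learning rate, and epoch count follow the values quoted in the text, and the \(d=1.0\) Å matrices are taken from Table 1; everything else (variable names, the choice \(N_{\mathcal{F}}=2^{11}\)) is an assumption for illustration.

```python
import torch

# Hartree-Fock and problem Hamiltonians for H2 at d = 1.0 Angstrom (Table 1)
H_hf = torch.diag(torch.tensor([-0.5490812, -1.0661087, 0.00400595, -0.5490812]))
g = 0.19679058
H_prob = H_hf.clone()
H_prob[0, 3] = H_prob[3, 0] = g
H_prob[1, 2] = H_prob[2, 1] = g

t_min, t_max = 0.0, 1.0
sobol = torch.quasirandom.SobolEngine(dimension=1, scramble=True, seed=0)
t_col = t_min + (t_max - t_min) * sobol.draw(2 ** 11)   # N_F interior points
t_col.requires_grad_(True)

model = CDPinn(n_qubits=2)
basis = pauli_basis(2)
opt = torch.optim.Adam(model.parameters(), lr=1e-5)

for epoch in range(500_000):
    lam, A_cd, C = model(t_col)
    # dlambda/dt via automatic differentiation on the sampled times
    dlam_dt = torch.autograd.grad(lam.sum(), t_col, create_graph=True)[0].squeeze(-1)

    L_action, L_adiab = physics_losses(lam, dlam_dt, A_cd, H_hf, H_prob)
    L_coupling = coupling_loss(A_cd, C, basis)
    # ... plus the boundary terms of Eqs. (26)-(27) evaluated at t_min / t_max
    loss = L_action + L_adiab + L_coupling

    opt.zero_grad()
    loss.backward()
    opt.step()
```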
However, the number of points in the inner part of the interval, which is linked to the underlying PDE and represented as \(N_{\mathcal{F}}\), was subject to variation and examination in the experiments. Moreover, if not otherwise specified, the neural architecture used for all the examples consists of six hidden layers with 30 neurons each. To illustrate initial general results, Figure 2 presents the physical loss functions employed for optimizing the neural network, as defined in Section 3. It is essential to emphasize that each training process would, in general, exhibit distinctive loss curves, influenced by factors such as the number of qubits used for representation and the specific molecular system being studied, including the considered bond distances between the particles, among other features. However, the figure showcases the cost function for the specific scenario under investigation: the \(\mathrm{H}_{2}\) molecule, represented by a 2-qubit system in the STO-3G basis, with specifications outlined in Section 2.3. The left subfigure displays the three constituents comprising the total loss \(\mathcal{L}\) (35), namely, \(\mathcal{L}_{\mathcal{F}}\), \(\mathcal{L}_{\mathcal{I}\mathcal{C}}\), and \(\mathcal{L}_{\mathcal{F}\mathcal{C}}\), corresponding to the residual conditions of the PDE, the initial conditions over time, and the final conditions, respectively. The latter two diminish rapidly, converging to magnitudes on the order of \(10^{-4}\) or even lower, especially evident for \(\mathcal{L}_{\mathcal{FC}}\), where the error is so minute that any variation induces substantial noise on a logarithmic scale.

\begin{table}
\begin{tabular}{c c c c c c c c c}
\hline \hline
 & \multicolumn{4}{c}{\(\mathbf{\mathcal{H}}_{\text{HF}}\)} & \multicolumn{4}{c}{\(\mathbf{\mathcal{H}}_{\text{problem}}\)} \\
\cline{2-9}
\multirow{4}{*}{\(d=1.0\) Å} & -0.5490812 & 0 & 0 & 0 & -0.5490812 & 0 & 0 & 0.19679058 \\
 & 0 & -1.0661087 & 0 & 0 & 0 & -1.0661087 & 0.19679058 & 0 \\
 & 0 & 0 & 0.00400595 & 0 & 0 & 0.19679058 & 0.00400595 & 0 \\
 & 0 & 0 & 0 & -0.5490812 & 0.19679058 & 0 & 0 & -0.5490812 \\
\hline
\multirow{4}{*}{\(d=1.5\) Å} & -0.6610488 & 0 & 0 & 0 & -0.6610488 & 0 & 0 & 0.22953594 \\
 & 0 & -0.91083753 & 0 & 0 & 0 & -0.91083753 & 0.22953594 & 0 \\
 & 0 & 0 & -0.3944683 & 0 & 0 & 0.22953594 & -0.3944683 & 0 \\
 & 0 & 0 & 0 & -0.6610488 & 0.22953594 & 0 & 0 & -0.6610488 \\
\hline
\multirow{4}{*}{\(d=2.0\) Å} & -0.66539884 & 0 & 0 & 0 & -0.66539884 & 0 & 0 & 0.25913846 \\
 & 0 & -0.7837927 & 0 & 0 & 0 & -0.7837927 & 0.25913846 & 0 \\
 & 0 & 0 & -0.5412806 & 0 & 0 & 0.25913846 & -0.5412806 & 0 \\
 & 0 & 0 & 0 & -0.66539884 & 0.25913846 & 0 & 0 & -0.66539884 \\
\hline
\multirow{4}{*}{\(d=2.5\) Å} & -0.649429 & 0 & 0 & 0 & -0.649429 & 0 & 0 & 0.28221005 \\
 & 0 & -0.7029436 & 0 & 0 & 0 & -0.7029436 & 0.28221005 & 0 \\
 & 0 & 0 & -0.5944048 & 0 & 0 & 0.28221005 & -0.5944048 & 0 \\
 & 0 & 0 & 0 & -0.649429 & 0.28221005 & 0 & 0 & -0.649429 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Numerical configurations for the initial Hamiltonian operator, denoted as \(\mathbf{\mathcal{H}}_{\text{HF}}\), and the final Hamiltonian operator, denoted as \(\mathbf{\mathcal{H}}_{\text{problem}}\), obtained using the quantum computing library _Qiskit_ [70]. These configurations correspond to the evolution of the molecule \(\mathrm{H}_{2}\) in the STO-3G basis and are represented by their respective matrix descriptions given in Equations (19) and (17), respectively.
Conversely, the residual loss \(\mathcal{L}_{\mathcal{F}}\) (34) encompasses three internal terms, generating discernible tension among them, which impedes its reduction to such small orders of magnitude. This becomes more evident in the right subfigure, which provides a detailed breakdown of these terms. It is evident that the loss term responsible for minimizing the Principle of Least Action or, equivalently, the Euler-Lagrange equations (30), attains the lowest value, thus ensuring a highly satisfactory minimization of the action and, consequently, the attainment of a potential gauge \(\mathbf{\mathcal{A}}_{\text{CD}}(t)\) that adheres closely to its prescribed guidelines. Additionally, the graphical representation includes a loss term denoted as \(\mathcal{L}_{\text{Hermiticity}}\), which assesses the quadratic discrepancy between \(\mathbf{\mathcal{A}}_{\text{CD}}(t)\) and its adjoint operator, defined in a similar way to the other terms. However, this factor is not incorporated in the total loss; instead, it serves as a representation allowing us to confirm that minimizing the loss \(\mathcal{L}_{\text{Coupling}}\) (33) through the decomposition of the potential gauge directly enforces its hermiticity, given the strictly real-valued nature of the coefficients \(\mathbf{\mathcal{C}}(t)\) involved in the decomposition (32). Thus, for all the examples shown in this article, the mixture weights considered for each of the individual loss terms have been preset as: \[\left(\omega_{\mathcal{IC}},\omega_{\mathcal{FC}},\omega_{\text{Action}},\omega_{\text{Ad}},\omega_{\text{Coupling}}\right)=\left(10^{3},10^{3},10^{2},5\times 10^{-1},2.5\times 10^{2}\right), \tag{36}\] where \(\omega_{\mathcal{IC},1}=\omega_{\mathcal{IC},2}=\omega_{\mathcal{IC}}\) and \(\omega_{\mathcal{FC},1}=\omega_{\mathcal{FC},2}=\omega_{\mathcal{FC}}\) in Equations (26) and (27). These weights play a crucial role in determining the relative emphasis that the neural network places on each term, essentially representing their respective priorities. For our specific objectives, ensuring prompt and robust satisfaction of the initial and final conditions in time is of crucial importance. Consequently, we set \(\omega_{\mathcal{IC}}\) and \(\omega_{\mathcal{FC}}\) to significantly larger values than \(\omega_{\text{Action}}\), \(\omega_{\text{Ad}}\), and \(\omega_{\text{Coupling}}\). Following the same reasoning, the minimization of the action is also essential in our methodology, as it leads us to identify the most appropriate operator \(\mathbf{\mathcal{A}}_{\text{CD}}(t)\) for the system [64], which we express through its linear combination (32). Consequently, these two weights will be considerably higher, yet an order of magnitude lower than the first two. Particularly, \(\omega_{\text{Coupling}}\) is set slightly higher, as this condition inherently encompasses the constraint on the hermiticity of the operator. Lastly, the condition of adiabaticity is assigned the lowest weight, reflecting our intent to recover the limit of the adiabatic theory without permitting it to dominate the overall metric. We can continue our analysis of the right-hand subfigure by focusing on the curve that corresponds to the loss of recovery of the adiabatic theory, denoted as \(\mathcal{L}_{\text{Adiabaticity}}\) (31) and represented as a solid black curve. This loss exhibits a marginal decline, commencing from a value proximal to zero at the onset of the training process.
Subsequently, it rises to approximately \(10^{2}\) before descending to around \(10^{0}\), eventually maintaining a constant order of magnitude for the remainder of the evolution. The physical reason behind this phenomenon is clear: while the theory necessitates a decrease in the adiabatic speed, \(\frac{d\lambda}{dt}\), the PINN finds no benefit in driving it to values lower than those observed in our results. This is primarily attributed to the presence of multiple terms within the loss function, not solely related to adiabaticity recovery, thereby creating inherent tension in defining the physical problem as a whole. Consequently, the solution predicted by the methodology does not significantly benefit from further reductions in the adiabatic speed. The fast saturation of adiabatic recovery is the main cause of the considerably elevated value for \(\mathcal{L}_{\mathcal{F}}\) observed in the left subfigure, underscoring the importance of isolating and analyzing each term separately. Thus, a zoom has been applied to \(\mathcal{L}_{\text{Adiabaticity}}\) between 30,000 and 50,000 iterations, a range during which the \(\lambda\) function transitions from a sigmoidal to a linear behavior. Consequently, it assumes a temporal derivative of 1, i.e., \(\frac{d\lambda}{dt}=1\). A more comprehensive discussion on this topic will be presented in the following section. It is important to note that unless otherwise specified, all the results shown in this section have been obtained for the system formed by 2 qubits representing the \(\mathrm{H}_{2}\) molecule in the STO-3G basis, following the guidelines of Section 2.3.

Figure 2: Analysis of the evolution of the loss function during the training process. On the left we illustrate the dynamic changes of the components contributing to the total loss, \(\mathcal{L}\), as defined in Equation (35). On the right side of the graph, each individual constituent of the loss \(\mathcal{L}_{\mathcal{F}}\) is presented, corresponding to the different physical aspects under consideration. It is important to note that the loss term \(\mathcal{L}_{\text{Hermiticity}}\) is included in the plot, although it remains undefined and unused throughout the training. This term quantifies the discrepancy between \(\mathbf{\mathcal{A}}_{\text{CD}}\) and its adjoint operator but is solely provided for visualization purposes, tracking its reduction relative to \(\mathcal{L}_{\text{Coupling}}\) due to their mathematical equivalence.

### Scheduling function, \(\lambda(t)\)

The output of our methodology is threefold. Firstly, it enables the retrieval of the scheduling function \(\lambda\) as a time-dependent variable. Secondly, it facilitates the extraction of the matrix components \((i,j)\) of the potential gauge \(\mathbf{\mathcal{A}}_{\text{CD}}(t)\) as functions of time. Thirdly, it yields the components \(\mathbf{\mathcal{C}}(t)\) of its decomposition through time. This is made possible by the utilization of a cost metric defined on the underlying PDE comprising three distinct terms, as stated in Equation (34). Each of these terms compels the neural network to update one of the outputs while adhering to the imposed physical constraints.
In particular, the term \(\mathcal{L}_{\text{Adiabaticity}}\) plays a crucial role in enforcing the adherence of the function \(\lambda(t)\) to the physical evolution, thereby recovering, in the most accurate approximation, the adiabatic theory that has been empirically demonstrated to hold true. Among all the terms taken into account within the total cost function, the neural network pays the least attention to fulfilling the requirement of recovering the fully adiabatic theory, \(\mathcal{L}_{\text{Adiabaticity}}\). As illustrated in Figure 2, our PINN ceases to actively optimize this particular term after approximately 50,000 iterations. Consequently, the scheduling function \(\lambda(t)\) converges to an optimal solution wherein its time derivative, representing the adiabatic velocity, maintains an approximately constant value of 1 throughout the temporal evolution. As a consequence, the optimal form of the \(\lambda\) function predicted for the system is the one that adheres to the condition \(\lambda(t)=t\) for the evolution, as governed by the counterdiabatic differential Equation (20). This phenomenon is exemplified in Figure 3, where we show the values of \(\lambda(t)\) and its temporal derivative in the left and right subfigures, respectively, for different training steps, in both cases concerning the 2-qubit system representing the \(\mathrm{H}_{2}\) molecule. As observed, even after 15,000 and 25,000 epochs, the function maintains a sigmoidal shape, a widely employed representation in the existing literature [23, 46]. This sigmoidal form is considered appropriate from a theoretical perspective due to its derivative taking the form of a "bell curve", allowing the counterdiabatic terms \(\mathbf{\mathcal{A}}_{\text{CD}}\) to present higher values at intermediate points of the time evolution while effectively being turned off at the initial and final temporal instants. It should be noted, however, that this sigmoidal shape for \(\lambda\) emerges predominantly during the early phases of the training, thereby helping to drive the convergence of the methodology towards a more optimal solution. From a theoretical point of view, our ultimate goal is to recover the fully adiabatic theory while incorporating the non-zero presence of counterdiabatic terms. Consequently, our neural network converges towards a function \(\lambda\) that precisely matches the temporal variable \(t\). This outcome signifies the restoration of the original formulation of counterdiabatic driving, as explained in [42], thereby undoing the temporal parameterization of physical operators through a specific set of parameters \(\mathbf{\Lambda}(t)\), which, in our case, corresponds to a single scalar parameterization. Through this procedure, our objective is to begin with a differential equation containing non-zero counterdiabatic terms and then strive to recover the adiabatic theory in the limiting case. By doing so, we can obtain all the necessary results in accordance with the theory of adiabaticity. In Figure 4, we present the profiles of \(\lambda(t)\) and its time derivative for the counterdiabatic (CD) protocol from Equation (20) on the left, while on the right subfigure, we depict the same results without the presence of counterdiabatic terms, i.e., working directly with the adiabatic theory (13).
It is evident that in both cases we obtain \(\lambda(t)=t\), except for some numerical noise at the edges of the time interval, which is directly related to the nature of the automatic differentiation process [60, 93, 94]. Notably, during the initial stages of training, the neural network capitalizes on the sigmoid-shaped \(\lambda(t)\), while simultaneously adjusting other physical conditions. This aids the network in achieving a more optimal solution in recovering the adiabatic theory.

Figure 3: Analysis of the scheduling function \(\lambda\) and its temporal derivative for distinct iterations (epochs) of the training process. Observations reveal that during the initial stages of neural optimization, \(\lambda\) exhibits characteristics resembling a sigmoidal function. However, as the training advances, it converges towards a linear behavior \(\left(\frac{d\lambda}{dt}\approx 1\right)\).

Figure 4: Evolution over time of the _scheduling function_ \(\lambda(t)\) (in solid black) along with its derivative (in dashed blue) predicted by our methodology for the \(\mathrm{H}_{2}\) molecule in the STO-3G basis set, using the CD protocol on the left. On the right, the same is depicted for a fully adiabatic evolution according to expression (13).

### Temporal evolution of the \(\boldsymbol{\mathcal{H}}(t)\) operator and the energy levels

From the three outputs obtained from the network (see diagram in Figure 1), all the necessary information can be derived. As an initial step, considering the potential gauge \(\boldsymbol{\mathcal{A}}_{\text{CD}}(t)\) that minimizes the physical action of the system, the total Hamiltonian operator \(\boldsymbol{\mathcal{H}}(t)\) can be computed. The time evolution of all its components is illustrated in Figure 5. Notably, \(\boldsymbol{\mathcal{H}}\in\mathcal{M}_{2^{N_{Q}}\times 2^{N_{Q}}}\left(\mathbb{C}\right)\), where \(N_{Q}\) represents the number of qubits, implying that for a 2-qubit system, both \(\boldsymbol{\mathcal{H}}(t)\) and the remaining operators will have a matrix size of \(4\times 4\). In both depictions, the inherent hermiticity of the observable is evident; the real part of the components exhibits symmetry with respect to the diagonal, while the imaginary part displays complete antisymmetry, leading to diagonal elements being as close to zero as possible (around the order of \(10^{-3}\sim 10^{-4}\)). This condition can be further reinforced by increasing its relative weight, \(\omega_{\text{Coupling}}\). In the mentioned figure, the real and imaginary parts of distinct components of the operator are depicted within the same graph, distinguished by black and blue colors, respectively, and with individual scales. By considering the fulfillment of hermiticity for \(\mathbf{\mathcal{H}}(t)\), we can now extract its instantaneous eigenvalues, thereby obtaining information about the energy levels of the 2-qubit physical system representing the \(\mathrm{H}_{2}\) molecule with which we are currently conducting our investigation. The operator \(\mathbf{\mathcal{H}}(t)\), as defined in Equation (20), is specifically designed to yield the eigenstates \(|n(t)\rangle\) of \(\mathbf{\mathcal{H}}_{\text{AD}}(t)\) exactly over time. It ensures that no transitions between energy levels, \(E_{n}(t)\), are possible [42]. This property holds for all eigenstates of \(\mathbf{\mathcal{H}}_{\text{AD}}(t)\), allowing us to interpret the set of states \(|n(t)\rangle\) as "moving eigenstates" of the total operator \(\mathbf{\mathcal{H}}(t)\).
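Numerically, such instantaneous levels can be read off by diagonalising the predicted \(\boldsymbol{\mathcal{H}}(t)\) at every sampled time. A possible sketch, reusing the tensors and the `total_hamiltonian` helper from the training listings above (so all names are our assumptions), is:

```python
import torch

# H has shape (N, d, d); its eigenvalues are complex whenever H is not
# exactly Hermitian, which is how a residual Im(E_n) can arise numerically
H = total_hamiltonian(lam, dlam_dt, A_cd, H_hf, H_prob)
E = torch.linalg.eigvals(H)
E = E.gather(-1, E.real.argsort(dim=-1))       # sort levels by Re(E_n)

# Fully adiabatic reference: dlam/dt = 0 reduces H(t) to H_AD(t) of Eq. (13)
H_ad = total_hamiltonian(lam, torch.zeros_like(dlam_dt), A_cd, H_hf, H_prob)
E_ad = torch.linalg.eigvals(H_ad).real
```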
Figure 5: The evolution of the real and imaginary components of the Hamiltonian operator \(\mathbf{\mathcal{H}}(t)\) for the \(\mathrm{H}_{2}\) molecule is examined using a bond distance of \(d=1.0\) Å. Black and blue colors have been used for the real and imaginary parts, respectively, using dashed lines for the latter and a different scale for each one. The findings reveal substantial fluctuations in the values across various time scales. Notably, the natural symmetry and antisymmetry of the real and imaginary components, respectively, arise due to the hermiticity of the operator.

Consequently, we can extract the energy levels corresponding to \(\mathbf{\mathcal{H}}(t)\) throughout the entire time evolution and compare them to the energy levels obtained under the assumption of a completely adiabatic transition, i.e., considering \(\frac{d\lambda}{dt}=0\). Figures 6 and 7 depict the real and imaginary components, respectively, corresponding to the two distinct scenarios considered. The first one employs the CD protocol and is shown in the top row, while the second scenario involves a totally adiabatic transition and is displayed in the bottom row. These investigations were conducted for the \(\mathrm{H}_{2}\) molecule. To broaden the scope of our study, we examined different bond distance values between the particles, specifically \(d\in\{1.0,1.5,2.0,2.5\}\) Å. The numerical values of the corresponding initial (\(\mathbf{\mathcal{H}}_{\mathrm{HF}}\)) and final (\(\mathbf{\mathcal{H}}_{\mathrm{problem}}\)) Hamiltonian operators are described in Table 1. Notably, Figure 7 reveals that the imaginary part of the eigenvalues (energies) obtained is on the order of \(10^{-3}\), indicating their proximity to zero. As such, these values have negligible influence on observable analyses. However, it is essential to recognize that this outcome directly arises from the hermiticity of the physical observable. Furthermore, by adjusting the weight \(\omega_{\mathrm{Coupling}}\), it is possible to further fortify the enforcement of this property. Moreover, in view of the formulation of the Hamiltonian operator under the entirely adiabatic scenario, \(\mathbf{\mathcal{H}}_{\mathrm{AD}}\) (13), and the consideration of the initial and final operators as detailed in Table 1, it is important to note that the scalar nature of the function \(\lambda(t)\) ensures the absence of complex numbers. As a result, the imaginary component of the energy levels in the completely adiabatic case is strictly zero, as evidenced in the bottom row of Figure 7. Regarding the real component of the energies, which is of primary interest, it is observed that in the case of a fully adiabatic transition, these energies demonstrate nearly linear temporal evolution, with diminishing separation as the bond distance increases. The close proximity of energy levels leads to an unfavorable outcome wherein transitions between them become more likely for the system. However, this challenge is addressed by the CD protocol, wherein energy levels tend to be notably more separated throughout the entire evolution domain. This phenomenon becomes more noticeable for the ground state energy, \(E_{0}\), and the first excited level, \(E_{1}\). Notably, when the bond distance (\(d\)) is set at \(2.0\) Å and \(2.5\) Å, it is evident that these two energy levels remain substantially distant, especially at the initial stages of the evolution of the system.
This occurrence is highly desirable from an experimental perspective, as it minimizes the probability of transitions between energy levels, especially between the ground and the first excited state, given that the system is initially prepared in the \(E_{0}\) level.

Figure 6: Temporal evolution of the real component of the instantaneous energy levels, namely the eigenvalues \(E_{n}(t)\), describing the molecule \(\mathrm{H}_{2}\) within a system of 2 qubits utilizing the STO-3G basis. These computations are conducted for diverse values of the interparticle bond distance, \(d\). A comparative analysis of the energy levels is presented, showing the results obtained from the CD protocol (20) in the top row, juxtaposed against the levels obtained from the same methodology but with a fully adiabatic transition (13) in the bottom row. It is noteworthy that the energy levels demonstrate a tendency to exhibit greater separation under the CD protocol, a phenomenon that becomes particularly pronounced at \(d=2.0\) Å and \(d=2.5\) Å.

Figure 7: Time-dependent variation of the imaginary component of the eigenvalues \(E_{n}(t)\) investigated for the molecular system \(\mathrm{H}_{2}\), represented by a 2-qubit configuration in the STO-3G basis. The computational analysis encompasses two scenarios: one employing the CD protocol in the top row, and the other considering a purely adiabatic transition in the bottom row. In the former case, the imaginary components exhibit magnitudes on the order of \(10^{-3}\), whereas in the latter case, these are precisely zero, as dictated by the definition of the underlying PDE (13).

### \(\mathbf{\mathcal{A}_{\text{CD}}}(t)\) operator and its decomposition

Our methodology enables us to directly obtain the components of the potential gauge, denoted as \(\mathbf{\mathcal{A}_{\text{CD}}}(t)\). Consequently, we are capable of visualizing the temporal evolution of both its real and imaginary components. Analogous to the presentation of \(\mathbf{\mathcal{H}}(t)\) in Figure 5 above, we display the temporal evolution of the corresponding operator \(\mathbf{\mathcal{A}_{\text{CD}}}\) in Figure 8 of this section. We differentiate their respective real and imaginary components on two distinct scales. Observing the plot, it becomes evident that, as a physical observable from an experimental standpoint, the real part of the operator, \(\text{Re}\left(\mathbf{\mathcal{A}_{\text{CD}}}\right)\), exhibits complete symmetry, while its imaginary part, \(\text{Im}\left(\mathbf{\mathcal{A}_{\text{CD}}}\right)\), is fully antisymmetric. This property comes directly from the natural hermiticity of the operator, which, in turn, is imposed indirectly by the condition \(\mathcal{L}_{\text{Coupling}}\) (33), together with \(\mathbf{\mathcal{C}}(t)\in\mathbb{R}^{4^{N_{Q}}}\). While exploring the components of \(\mathbf{\mathcal{A}_{\text{CD}}}(t)\) holds theoretical significance, our primary focus from an experimental point of view lies in obtaining the set of coefficients \(\mathbf{\mathcal{C}}(t)\) over time. These coefficients play a crucial role as they enable the gauge potential to be expressed as a linear combination of all the possible combinations and interactions between the qubits of the system, thereby allowing a better implementation in a real quantum circuit [23, 46]. The theoretical formulation of this decomposition is represented by Equation (32), wherein the potential is expressed using the required Kronecker products. This formulation takes into account all possible combinations for the 2-qubit system under current investigation. Given the relatively small size of our example system, it remains feasible to consider all the possible combinations of the Pauli matrices, as the number of these combinations scales with \(4^{N_{Q}}\), with \(N_{Q}\) being the number of qubits. Nevertheless, in certain experimental scenarios, it may be unnecessary to explore all combinations and interactions. In such cases, specific specializations and additional requirements can be applied, guided by the methodology we present.
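Since the Pauli strings are orthogonal under the Hilbert-Schmidt inner product, \(\operatorname{Tr}[P_{k}P_{l}]=2^{N_{Q}}\delta_{kl}\), the coefficients of Equation (32) can also be recovered from any Hermitian \(\mathbf{\mathcal{A}}_{\text{CD}}(t)\) by direct projection, which offers a consistency check on the network's coefficient head. A sketch, reusing the `pauli_basis` helper from the earlier listing (names are ours):

```python
import torch

def project_coefficients(A_cd, basis):
    """C_k(t) = Tr[P_k A_CD(t)] / 2**N_Q for the Pauli strings P_k (Eq. 32)."""
    d = A_cd.shape[-1]                             # d = 2**N_Q
    C = torch.einsum('kij,nji->nk', basis, A_cd) / d
    return C.real   # real for Hermitian A_CD, cf. the hermiticity argument above
```

Comparing these projected values with the \(\mathbf{\mathcal{C}}(t)\) output of the network gives an independent estimate of how well the coupling condition (33) is satisfied.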
In Figure 9, we present the temporal evolution of the coefficients derived from the decomposition of the \(\mathrm{H}_{2}\) system in the STO-3G basis, as a function of the bond distance (\(d\)) between the particles. The upper row illustrates the evolution itself, while the lower row displays a bar chart presenting the specific coefficients arranged in descending order based on their average values throughout the observed time interval. This visualization enables us to identify the most significant contributions in terms of absolute value when implementing this system in a real quantum circuit. Notably, the two coefficients that exhibit the highest values are \(\mathcal{C}_{\text{XY}}\) and its symmetric counterpart \(\mathcal{C}_{\text{YX}}\). These findings align with previous literature that employed the NC methodology [23, 46]. Our approach naturally reveals these prominent contributions, facilitating an explicit understanding of their respective orders of magnitude. Additionally, there are less prominent contributions, such as \(\mathcal{C}_{\text{ZZ}}\), \(\mathcal{C}_{\text{XX}}\), and \(\mathcal{C}_{\text{H}}\), which warrant consideration as well. The determination of these coefficients is universally applicable, i.e., it emerges as an outcome of implementing our methodology for any system, irrespective of the number of qubits considered. Nevertheless, as commented in the preceding section, certain specific instances may arise in experimental scenarios where it becomes impractical to account for all possible contributions in the decomposition of \(\mathbf{\mathcal{A}}_{\text{CD}}(t)\). In such circumstances, it would be interesting to adapt the method and restrict the output \(\mathbf{\mathcal{C}}(t)\) of the neural network to a subset of the entire set, which becomes particularly relevant when dealing with an increasing number of qubits. Consequently, a trade-off between purely theoretical outcomes, exemplified herein, and those of particular experimental significance may always exist.

Figure 8: Time evolution of the real and imaginary components of the gauge potential \(\mathbf{\mathcal{A}}_{\text{CD}}(t)\) for the \(\text{H}_{2}\) molecule using the STO-3G basis and a bond distance of \(d=1.0\) Å, represented respectively in black and blue colors using two different scales. It is observed that the values exhibit variations across different orders of magnitude. Notably, the natural symmetry and antisymmetry arise in these components over time due to the hermiticity of the operator.

### Scalability

So far, we have conducted a study on a 2-qubit system representing the \(\mathrm{H}_{2}\) molecule in the STO-3G basis. However, it is straightforward to modify the basis or adjust the number of qubits used in our methodology. This process primarily involves examining the matrix dimensions of the respective network outputs.
Specifically, the matrices \(\mathcal{H},\mathbf{\mathcal{A}}_{\mathrm{CD}},\mathbf{\mathcal{H}}_{\mathrm{HF}},\mathbf{\mathcal{H}}_{\mathrm{problem}}\) are elements of the matrix space \(\mathcal{M}_{2^{N_{Q}}\times 2^{N_{Q}}}(\mathbb{C})\), while \(\mathbf{\mathcal{C}}(t)\) belongs to \(\mathbb{R}^{4^{N_{Q}}}\). Consequently, the number of components of the variables obtained may increase with the number of qubits, but it is important to note that the neural architecture for all calculations has consistently comprised six internal layers, each containing 30 neurons. This architectural choice is widely documented in the literature and has resulted in state-of-the-art outcomes across various domains of research [56, 57, 95]. In general, the scale and arrangement of the neural architecture of the PINN will depend on the specific problem being addressed, i.e., it is more or less directly related to the difficulty presented by the underlying PDEs. In our case, we are dealing with a single differential equation, which essentially defines \(\mathbf{\mathcal{H}}(t)\) in Equation (20) and involves only one derivative function. Consequently, our chosen neural network architecture efficiently addresses the problem of the CD protocol, as increasing the number of trainable weights usually does not translate into better results. Moreover, such an increase could even negatively impact computation time and model convergence during the backpropagation process [96]. Apart from the \(\mathrm{H}_{2}\) molecule, others can also be considered in the STO-3G basis, such as lithium hydride, \(\mathrm{LiH}\), which can be represented using 4 qubits. The initial and final Hamiltonian operators of the process can again be computed using [70]. Discussing scalability within these methodologies holds substantial importance, as it elucidates the extent to which an approach can be experimentally applicable. Undoubtedly, our DL-based methodology provides a wealth of information, as we have shown throughout the entire paper. However, when addressing the scalability of the method, we need to examine two key factors: the number of qubits and the quantity of points encompassed within the time interval (\(N_{\mathcal{F}}\)), in conjunction with the graphics processing unit (GPU) or hardware employed, as a whole. All computations considered employed an NVIDIA A16 card with a memory capacity of 16 GB. Consequently, we present, in Figure 10 (top row), a comprehensive analysis of the final physical loss (or final total residual) at the conclusion of network training, as a function of the number of points within the interval \((t_{\text{min}},t_{\text{max}})\), denoted as \(N_{\mathcal{F}}\). This analysis encompasses diverse bond distances for the \(\mathrm{H}_{2}\) molecule (left) and a single value of \(d\) for the \(\mathrm{LiH}\) molecule (right), solely for the purpose of facilitating comparisons.

Figure 9: In the upper row, we present the temporal evolutions of the coefficients \(\mathbf{\mathcal{C}}(t)\) resulting from the decomposition of the operator \(\mathbf{\mathcal{A}}_{\mathrm{CD}}(t)\) (32) applied to the \(\mathrm{H}_{2}\) molecule utilizing a 2-qubit configuration in the STO-3G basis. In the lower row, a bar chart illustrates the average values of each coefficient, arranged in descending order of magnitude. It is evident that both the coefficient \(\mathcal{C}_{\text{XY}}\) and its symmetric counterpart \(\mathcal{C}_{\text{YX}}\) exert the most substantial influence throughout the entire process, followed by \(\mathcal{C}_{\text{H}}\), \(\mathcal{C}_{\text{ZZ}}\) and \(\mathcal{C}_{\text{XX}}\).
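The combinatorial growth underlying this scalability discussion is easy to quantify: the operator outputs scale as \(2^{N_{Q}}\times 2^{N_{Q}}\), while the decomposition carries \(4^{N_{Q}}\) coefficients, matching the 16-versus-256 term counts quoted further below for \(\mathrm{H}_{2}\) and \(\mathrm{LiH}\). A small illustrative check:

```python
# Growth of the network outputs with the qubit count (Eq. 32)
for n_q in (2, 4):                 # H2 (2 qubits) and LiH (4 qubits)
    dim, n_terms = 2 ** n_q, 4 ** n_q
    print(f"{n_q} qubits: A_CD is {dim}x{dim} complex, {n_terms} Pauli strings")
# 2 qubits: A_CD is 4x4 complex, 16 Pauli strings
# 4 qubits: A_CD is 16x16 complex, 256 Pauli strings
```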
The physical loss depicted in the top row of the figure has been normalized with respect to the minimum value observed for that particular magnitude across all training sessions considered in this section, encompassing both the \(\mathrm{H}_{2}\) and \(\mathrm{LiH}\) simulations. This normalization, denoted as \(\mathcal{L}_{\mathrm{min}}\), allows us to represent the loss in a manner independent of specific training instances. An analysis of the results reveals that the general loss for the \(\mathrm{LiH}\) case is marginally higher, although the values are within the same order of magnitude as in the \(\mathrm{H}_{2}\) case. It is worth noting that both simulations employed an identical neural architecture comprising 6 internal layers, each containing 30 neurons. Furthermore, a common training regimen of 500,000 iterations (epochs) was applied to all cases. Consequently, a longer training duration would likely result in reduced final physical errors. The consistency in architecture and training duration enables us to draw meaningful comparisons between both simulations and to fairly evaluate their respective final performances. On the other hand, for the purpose of effectively quantifying the computational time involved in various training sessions and facilitating a meaningful comparison, it is imperative to consider external factors that could impact the simulations. These factors include the specific GPU utilized and the available free memory in the CPU (as some calculations are delegated to the CPU), among others. In the lower section of the figure, we present a comparative analysis between the two molecules concerning the time consumed (\(\Delta T\)), which is also normalized with respect to the minimum value obtained for this metric, denoted as \(T_{\mathrm{min}}\). The results demonstrate that increasing the number of data points, \(N_{\mathcal{F}}\), from \(2^{7}\) to \(2^{14}\) multiplies the compute time consumed by approximately 3.5 for the \(\mathrm{H}_{2}\) molecule. However, it is crucial to highlight that with only \(N_{\mathcal{F}}=2^{11}\) points the final physical error \(\mathcal{L}\) obtained is minimal. This observation indicates that augmenting the sampled time domain does not necessarily enhance the performance of the model. In other words, with this particular number of data points in the training domain, the PINN exhibits significant capabilities in making inferences, extrapolating information to new temporal instances, and interpolating. Consequently, the adoption of \(2^{11}\) data points implies a multiplicative factor of slightly more than 1.5 with respect to the minimum elapsed computation time, and it can be regarded as a favorable trade-off between training time and performance.

Figure 10: Graphical investigation of the scalability of our methodology for the \(\mathrm{H}_{2}\) molecule in the STO-3G basis. Various bond distances of the hydrogen atoms are considered, represented by distinct colors as indicated in the legend to the right. The top side of the graph illustrates the total physical loss \(\mathcal{L}\) (35) after completing the training process, plotted against the number of points \(N_{\mathcal{F}}\) considered within the domain \(t\in(t_{\mathrm{min}},t_{\mathrm{max}})\). On the bottom side of the graph, we present the time required to complete the entire training process, normalized to the minimum time. We conducted these experiments using an NVIDIA A16 GPU.
Concerning the training of the \(\mathrm{LiH}\) molecule, which involves 4 qubits, a significant increase in the required time is observed compared to the case of \(\mathrm{H}_{2}\) with 2 qubits. This discrepancy arises because the latter requires a decomposition of the potential gauge \(\mathbf{\mathcal{A}}_{\text{CD}}(t)\) comprising 16 possible combinations, each represented by a \(4\times 4\) matrix, whereas in the case of lithium hydride there are 256 possible combinations in total, each represented by a matrix of size \(16\times 16\). Consequently, both hardware memory and training time experience a considerable surge from one case study to the other. It is important to note, however, that this increase in resources is essential for extracting all possible information from the problem. This includes the scheduling function, all components of the potential gauge at each time instant, as well as the instantaneous values of each coefficient of the decomposition. In practical situations, theoretical analysis often focuses on a subset of all the possible interactions of the qubits constituting the system. By reducing the number of interactions from 256 to a more manageable figure, the problem becomes more amenable to study. Under these circumstances, the primary contributors to memory and computing time consumption are both \(\mathcal{L}_{\text{Coupling}}\) (33) and \(\mathcal{L}_{\text{Least Action}}\) (30). The former involves the simultaneous manipulation of numerous matrices, while the latter involves performing two commutators. Moreover, both terms span the \(N_{\mathcal{F}}\) defined points in time, which further adds to the computational complexity.

## 5 Discussion and conclusions

In this study we have shown that deep learning methodologies such as Physics-Informed Neural Networks (PINNs) can be used to tackle the problem of counterdiabatic (CD) driving for analog quantum computation, deviating from conventional numerical methodologies [45]. Our proposed method is a generalized, problem-independent methodology that offers a comprehensive solution, encompassing the determination of the counterdiabatic terms and of the temporal parametrization, which empirically demonstrates that the adiabatic theory holds true. The suggested approach provides a unified and effective way to resolve the underlying physics of the problem while also giving the theoretical Pauli matrix decomposition needed for experimental implementation. Furthermore, using the CD approach allows for a greater separation between the ground state, \(E_{0}\), and the first excited level, \(E_{1}\), throughout a significant portion of the temporal progression. This is a desirable property from an experimental perspective, as mentioned in Section 4.2. Several questions emerge from these findings. Firstly, an exploration of the computational capacity regarding the maximum qubit count achievable through PINN-based simulation is desirable, i.e., scalability.
Secondly, there is an opportunity to enhance our methodology by integrating recent PINN advancements, including the incorporation of causality as an additional constraint and enabling the network to autonomously learn activation functions. Lastly, restricting the theoretical Pauli matrix decomposition to encompass hardware-specific limitations introduces the prospect of improving the operational performance of analog quantum computers within experimental contexts. Currently, all methodologies formulated within this context have been exclusively applied to systems comprising two and four qubits, as discussed in Section 4.4. Indeed, our study reveals that the training duration of a PINN for a 4-qubit system is approximately 15 times greater than that required for the 2-qubit counterpart. However, it is imperative to acknowledge that, empirically, the possible permutations of gates and qubit interconnections are usually restricted. Hence, despite the potential exponential increase in training time for an N-qubit system, the imposed experimental limitations render it feasible to effectively train the methodology for a substantial quantity of qubits. Aside from reducing the problem, it is also possible to improve the overall performance of our proposal by forcing it to respect the causality principle [97] to further restrict the time evolution of our Hamiltonian operators. Using this approach is sufficient to achieve substantial improvements in terms of physical accuracy. Therefore, it is feasible to reduce the number of temporal points used during the training process without altering the performance of the network. Moreover, implementing dynamically trainable activation functions may help improve performance and convergence time [83]. Furthermore, the physical losses presented in Figure 2 are around \(10^{-4}\), thus underscoring the imperative to comprehend the attainable precision of a base PINN in the context of CD protocols. Enhancing the presented methodology could entail the imposition of temporal causal prerequisites and the optimization of the neural architecture. The incorporation of the mixture coefficients (36) within the loss function is a predetermined selection based on inductive physical biases, thereby specifying greater weights for the initial and final loss components to steer the progression of the system. Other constituents associated with physical conditions encompass hyperparameters that can be selected through iterative experimentation. The said coefficients are amenable to alteration during the training process, potentially employing techniques such as the Augmented Lagrangian approach [62], which adapts them according to the deviation from each physical condition. Consequently, the presented approach offers opportunities for enhancing the achievements and mitigating physical losses, improving, where feasible, the robustness of the model from a physical perspective. In conclusion, our work has shown that PINN-based approaches are a promising methodology that can be effectively used to optimize CD protocols for adiabatic quantum computing. However, despite the substantial advancements achieved within this context, it is evident that ample opportunities for enhancement exist. In particular, it would be worth studying the impact of these results on digitized-counterdiabatic quantum computing (DCQC) algorithms [98, 99]. The aforementioned questions stand as open paths for our future research, aiming to evolve and elaborate upon them.
## Acknowledgements The authors express their sincere appreciation for the thoughtful attention provided by the Kipu Quantum team. AFS and JDMG are partially supported by the agreement funded by the European Union, between the Valencian Ministry of Innovation, Universities, Science and Digital Society, and the network of research centers in Artificial Intelligence (Valencian Foundation valgrAI). It has also been funded by the Valencian Government grant with reference number CIAICO/2021/184; the Spanish Ministry of Economic Affairs and Digital Transformation through the QUANTUM ENIA project call - Quantum Spain project, and the European Union through the Recovery, Transformation and Resilience Plan - NextGenerationEU within the framework of the Digital Spain 2025 Agenda.
2309.16030
Protein container disassembly pathways depend on geometric design
The majority of viruses are organised according to the structural blueprints of the seminal Caspar-Klug theory. However, there are a number of notable exceptions to this geometric design principle. Prominent examples are the cancer-causing papilloma viridae and the \textit{de novo} designed AaLS cages that exhibit non-quasiequivalent capsid structures with protein numbers excluded by Caspar-Klug theory. The biophysical properties of these geometrically distinct architectures and the fitness advantages driving their evolution are currently unclear. We investigate here the resilience to fragmentation and disassembly behaviour of these capsid geometries by introducing a percolation theory on weighted graphs. We show that these cage architectures follow one of two distinct disassembly pathways, preferring either hole formation or capsid fragmentation. This suggests that preference for specific disassembly scenarios could be a driving force for the evolution of the non Caspar-Klug protein container architectures.
Q. Roussel, S. Benbedra, R. Twarock
2023-09-27T21:20:17Z
http://arxiv.org/abs/2309.16030v1
# Protein container disassembly pathways depend on geometric design ###### Abstract The majority of viruses are organised according to the structural blueprints of the seminal Caspar-Klug theory. However, there are a number of notable exceptions to this geometric design principle. Prominent examples are the cancer-causing papilloma viridae and the _de novo_ designed AaLS cages that exhibit non-quasiequivalent capsid structures with protein numbers excluded by Caspar-Klug theory. The biophysical properties of these geometrically distinct architectures and the fitness advantages driving their evolution are currently unclear. We investigate here the resilience to fragmentation and disassembly behaviour of these capsid geometries by introducing a percolation theory on weighted graphs. We show that these cage architectures follow one of two distinct disassembly pathways, preferring either hole formation or capsid fragmentation. This suggests that preference for specific disassembly scenarios could be a driving force for the evolution of the non-Caspar-Klug protein container architectures. Keywords: virus structure, virus disassembly, percolation theory, fragmentation threshold, weighted graphs ## 1 Introduction The majority of viruses package their genomes into icosahedral protein containers, called viral capsids, that provide protection for their genetic material during rounds of infection. These containers must be stable enough to protect their genetic cargo, yet also sufficiently unstable to enable its timely release at the appropriate point in the viral life cycle. Recently, we showed that capsids organised according to distinct types of surface lattices can have widely different resilience to fragmentation [1]. This analysis was limited to capsids abiding by the quasiequivalence principle introduced by Caspar and Klug [3], i.e. to those in which protein subunits make the same type of interaction across the entire capsid surface. A comparative analysis of three quasiequivalent surface lattice architectures - a triangulation, a rhomb and a kite tiling - was carried out, revealing different propensities to fragment for these distinct surface lattice types. The majority of icosahedral viruses are quasiequivalent, including those following Archimedean surface lattice architectures [8], and they can therefore all be studied with the approach reported earlier [1]. However, it is not directly applicable to the non-quasiequivalent architectures, in which protein units make several distinct types of interactions with other capsid proteins. Prominent examples are the cancer-causing papilloma viridae, which exhibit two distinct types of interaction mediated by the C-terminal arms of the protein units. We address here the question of whether such non-quasiequivalent cage architectures have stability properties, in terms of their propensity to fragment and their disassembly pathways, that differ from those of the quasiequivalent cage structures. For this, we generalise the percolation theory for quasiequivalent surface structures in Ref. [1] in two ways. First, we introduce a percolation theory approach based on weighted graphs, which tracks the fragmentation threshold as a function of the "energy" equivalent of the total number of bonds removed, rather than the number of bonds removed as had previously been the case. Second, we adapt our computational strategy to correct the "energy" of protein units in disassembly intermediates to account for partially broken bonds.
Both are required to adequately model the non-quasiequivalent surface architecture of these viruses because distinct interaction types make different contributions to container disassembly. We start by introducing our mathematical model of papillomavirus according to Viral Tiling theory [6], and the graph modelling its interaction network. We then compute the fragmentation threshold at which the particle breaks into two disjoint components both under the removal of protein units, and as a consequence of bond breakage. The result is shown over a three-dimensional landscape, representing the three distinct types of bonds that occur in the capsid. Comparison with the Caspar-Klug geometry, corresponding to the special case in which all bonds have equal strength, sheds new light on the possible evolutionary driving forces underpinning non-quasiequivalent viral architectures. ## 2 The Structure of Papillomavirus in Viral Tiling Theory Caspar-Klug theory models virus architecture in terms of triangulations [3] that indicate the positions of the capsid proteins (CPs) in the capsid surface. Geometrically distinct cage architectures are labelled by the triangulation number \(T\), and correspond to different planar embeddings of the icosahedral surface into a hexagonal lattice (Fig. 1a). By construction, Caspar-Klug capsid architectures are formed from \(60T\) CPs that are organised as 12 pentagonal, and \(10(T-1)\) hexagonal protein clusters, called pentamers and hexamers, respectively. Papillomavirus capsids are formed from 72 pentamers and therefore cannot be modelled using the Caspar-Klug construction. Such capsid architectures are not quasi-equivalent in the sense of Caspar and Klug, because their CPs (indicated schematically by dots) are involved in two distinct types of interactions, mediated by C-terminal arm extensions, with neighbouring pentamers: dimer interactions between two protein subunits, and trimer interactions between three. Viral Tiling Theory models the surface architectures of these non-quasiequivalent viral capsids in terms of different types of tiles that each represent a distinct interaction type [6, 7, 8]: rhombi representing dimer, and kites trimer interactions. Note that the centres of the pentamers in the papillomavirus tiling coincide with those of the pentamers and hexamers in a \(T=7\) Caspar-Klug structure (compare Figs. 1b & 1c). However, in contrast to the Caspar-Klug geometry, this capsid is formed from only 360 proteins (dots in Fig. 1d), a number that is not possible in the framework of the Caspar-Klug construction. There are three distinct types of bonds between pentamers in the papilloma capsid: a bond corresponding to two C-terminal arms connecting a pair of proteins in a pentamer with a pair in a neighbouring pentamer (type \(a\), red); a single C-terminal arm on a kite tile connecting two individual capsid proteins (type \(b\), blue); and a dimer interaction, represented by a rhomb tile, with two C-terminal arms between two individual proteins (type \(c\), yellow) (Fig. 1e). In particular, a type \(a\) bond corresponds to two C-terminal arms between two pairs of proteins along the shared edge of two kite-shaped tiles. Type \(b\) refers to the bond between the two proteins on a kite-shaped tile that are not involved in a type \(a\) interaction with each other. Figure 1: Geometric models of virus architecture. (a) The Caspar-Klug surface lattice of a \(T=7\) capsid architecture.
(b) It provides the layout for the spherical architecture shown as a polyhedron formed from 12 pentagons (magenta) and 60 hexagons. (c) The surface organisation of the papilloma capsid is shown with tiling superimposed. (d) Pentamer positions in the papilloma virus tiling coincide with pentamers and hexamers (grey) of the \(T=7\) Caspar-Klug virus architecture. (e) Kite and rhomb tiles represent three types of interactions that are mediated by C-terminal arm extensions: type \(a\) in red, type \(b\) in blue, type \(c\) in orange. (f) The weighted interaction network (wIN) of the papilloma virus capsid is obtained by placing vertices at the centres of the pentamers, and edges between pentamers that are connected via the interactions shown in (b); colours (weights) refer to the three distinct types of bonds. Geometric representations have been rendered using purpose-designed software. Type \(c\) bonds correspond to the bonds between the two proteins of a rhombic tile. ## 3 A percolation theory model of virus disassembly for weighted interaction networks In this section, we introduce a percolation theory model for the disassembly of weighted interaction networks. The procedure broadly follows previous work for quasiequivalent capsid architectures [2, 1]. However, as the network has different weights reflecting different types of bonds in the capsid, we modify the method to account for differences in the bond strengths. We start by formally introducing the weighted interaction network, and then present our method for both pentamer and bond removal scenarios. ### The weighted interaction network A prerequisite for modelling capsid disassembly is to encode the structural information in Fig. 1e as an interaction network, which captures topological information regarding the locations of the assembly units (capsomers) and the interactions between them. The interaction network is represented as a graph, in which pentamers are represented as vertices, and interactions between pentamers as edges. In the case of non-quasiequivalent capsid architectures, such as the papillomavirus capsid considered here as an example, it is a weighted interaction network (wIN), in which edges are labelled according to different bond strengths. For the papillomavirus wIN, different weights are indicated by colours (Fig. 1f) matching the three interaction types \(a\), \(b\), and \(c\) in Fig. 1e. In the following, we will investigate the propensity of the network to fragment when pentamers (vertices) or interactions (edges) are randomly removed from the wIN. We therefore attribute a weight to each edge that reflects the energy required to break that bond. The energies associated with type \(a\), \(b\), and \(c\) bonds (shown in red, blue and yellow respectively in Fig. 1e) will be referred to as \(E_{a}\), \(E_{b}\) and \(E_{c}\). Since proteins of rhombic tiles are involved in dimer interactions, whereas proteins of kite-shaped tiles are involved in the weaker trimer interactions, the corresponding bonds have different strengths. In particular, type \(a\) bonds correspond to two C-terminal arm extensions in a trimer (two red lines), while type \(b\) bonds are associated with a single C-terminal arm (blue line). Therefore, red edges in the interaction network have about double the bond energy of the blue edges. Moreover, type \(c\) bonds correspond to a dimer interaction that is mediated by two C-terminal arm extensions.
As yellow and red edges in the interaction network are both mediated by two C-terminal arm extensions, we assume that their energies are roughly equal. However, the dimer interactions are likely slightly stronger than two C-terminal arms in neighbouring trimer interactions. Therefore, we assume the following relations between the bond energies \(E_{a}\), \(E_{b}\) and \(E_{c}\): \[2E_{b}=E_{a}<E_{c}\,, \tag{1}\] where the difference between \(E_{a}\) and \(E_{c}\) is assumed to be small. Note that in this case the 12 pentamers at the particle 5-fold axes and the 60 additional pentamers all have a similar energy in the capsid, since \(5E_{a}\approx E_{a}+2E_{b}+3E_{c}\). This reflects the fact that they all interact with neighbouring pentamers via five C-terminal arm extensions. ### Models of capsid disassembly We consider two distinct ways of modelling virus disassembly: either by removing bonds, or by removing vertices (i.e. pentamers) from the graph of the wIN. Both methods have been implemented before for the quasiequivalent capsid architectures in Caspar-Klug theory [2] and its extensions in the framework of Archimedean lattices [1]. In the computation of the fragmentation threshold of the viral capsids under bond removal, all bond energies had been assumed to be equal, so that bonds were broken randomly with a fixed known probability. We introduce below an approach that takes weighting of the edges according to their bond strengths into account. #### 3.2.1 Capsid fragmentation under bond breakage As the papillomavirus capsid has three distinct types of bonds with different bond energies, we associate with each pentamer an energy that is equal to the sum of the energies of its bonds to neighbouring pentamers. Each of the 12 pentamers at the particle 5-fold axes therefore has energy \(5E_{a}\), and the 60 other pentamers are associated with energy \(E_{a}+2E_{b}+3E_{c}\). The total energy of the viral capsid is therefore \[E=60E_{a}+60E_{b}+90E_{c}\,. \tag{2}\] Since the energy needed to break a bond is different for each type of bond, it is reasonable to assume that bonds are not being removed in an equal manner. In order to account for this, each bond is given a probability weight which is inversely proportional to its bond energy. The process of bond removal applied in previous publications therefore has to be adapted. Instead of removing a certain fraction of bonds, we choose to remove a certain fraction \(E_{r}\) (\(r\) denoting removal) of the total capsid energy \(E\). To do so, we pick a bond in a random manner (the probability for a bond to be chosen is directly proportional to its probability weight) and check if there is enough energy to break the bond, i.e. if \(E_{r}>E_{i}\) where \(E_{i}\) is the bond energy under consideration. If so, we remove the bond and subtract its energy from \(E_{r}\). We continue the process until no bond can be removed because the leftover energy is insufficient to do so. We then test the connectivity of the graph: if there are two or more isolated subgraphs, the graph is considered to be fragmented. This process is repeated a sufficient number of times to obtain a value for the probability of graph fragmentation, depending on the energy of bonds removed \(E_{r}\), within a certain range of accuracy (see Methods). We then use the values obtained to find the energy fragmentation threshold, i.e. the fraction of energy that needs to be removed for the probability of graph fragmentation to be equal to 0.5, using a classic bisection method.
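The procedure just described is straightforward to prototype. Below is a minimal sketch (ours, not the authors' published code), assuming the wIN is stored as a networkx graph whose edges carry an `energy` attribute; the stopping rule inside the bisection anticipates the Chebyshev-based condition detailed in Sec. 5.3.

```python
import random
import networkx as nx

def fragmented_after_bond_removal(G: nx.Graph, E_r: float) -> bool:
    """One Monte Carlo trial of Sec. 3.2.1: break bonds, chosen with probability
    proportional to 1/energy, until the removal budget E_r is exhausted, then
    report whether the wIN has fragmented."""
    H = G.copy()
    budget = E_r
    while True:
        affordable = [(u, v, e) for u, v, e in H.edges(data="energy") if e <= budget]
        if not affordable:
            break
        u, v, e = random.choices(affordable, weights=[1.0 / e for *_, e in affordable], k=1)[0]
        H.remove_edge(u, v)
        budget -= e
    return not nx.is_connected(H)  # isolated pentamers count as fragments

def p_fragmentation_above_half(G: nx.Graph, E_r: float, eps: float = 0.01,
                               max_trials: int = 50_000) -> bool:
    """Decide whether p_f(E_r) > 0.5, stopping as soon as the Chebyshev-based
    bound of Sec. 5.3 guarantees an error probability below eps."""
    s = 0
    for n in range(1, max_trials + 1):
        s += fragmented_after_bond_removal(G, E_r)
        if 4 * n * (s / n - 0.5) ** 2 > 1.0 / eps:
            break
    return s / n > 0.5

def energy_fragmentation_threshold(G: nx.Graph, steps: int = 12) -> float:
    """Bisection for the fraction of total capsid energy at which the
    fragmentation probability crosses 0.5."""
    E = sum(e for *_, e in G.edges(data="energy"))
    lo, hi = 0.0, 1.0
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if p_fragmentation_above_half(G, mid * E):
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For the papillomavirus wIN one would build the 72-node graph of Fig. 1f and call `energy_fragmentation_threshold` for each choice of \((E_{a},E_{b},E_{c})\) to obtain a point in a landscape such as Fig. 2c.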
For this, the outcome of the simulation is benchmarked against the fragmentation threshold curve. Chebyshev's inequality is used to determine a condition on the number of iterations required for each step of the bisection process (see Methods). #### 3.2.2 Capsid fragmentation under pentamer removal As viral capsids in the papillomavirus family disassemble into pentamers, we also consider pentamer removal, which corresponds to removal of nodes, rather than edges, from the wIN. For this, we associate with each node a probability weight that is inversely proportional to the pentamer's total bond energy, as defined above. Then, in analogy to the procedure for edge removal, given a fraction of energy \(E_{r}\) to remove, we remove nodes and their associated edges until we cannot do so, as there are no nodes of the appropriate energy remaining in the wIN. As nodes are removed, some bonds that were previously connected to neighbouring nodes are now broken, thus reducing the energy of the remaining nodes. We have therefore included a routine in our simulations that updates the energy of any remaining nodes, and consequently their probability weights, after a node has been removed from the wIN. By repeating this fragmentation process, we obtain a value for the probability of graph fragmentation depending on the fraction of energy removed (\(E_{r}\)), but this time in terms of pentamer/node removal, rather than bond removal. ## 4 Results ### Stability of quasiequivalent versus non-quasiequivalent capsid architectures We applied the above-described methods of edge and node removal to the papillomavirus wIN in Fig. 1f. The results depend on the relative values of the three bond strengths \(E_{a}\), \(E_{b}\) and \(E_{c}\) (Fig. 2). Equal weights (\(E_{a}=E_{b}=E_{c}\), black) represent the quasiequivalent interaction network of a \(T=7\) Caspar-Klug (CK) geometry, and \(E_{a}=2E_{b}=E_{c}\) (green) the non-quasiequivalent papillomavirus (P) scenario. Both are more resilient to fragmentation than most other scenarios (e.g., \(2E_{a}=4E_{b}=E_{c}\), blue), albeit with CK being slightly more resilient than P (note the displacement of the black line to the right of the green curve). The positions of these scenarios in the energy landscape are indicated by black and green dots, respectively. These results suggest that viruses have evolved geometries that confer more stability to the capsid than most alternatives. They also reveal how protein container architectures might be designed in virus nanotechnology, by configuring bond energies appropriately, to achieve less stable cage architectures if desired. We note that the probability of fragmentation in Fig. 2b tends to 0 as the fraction of energy removed approaches 1. This is a consequence of our model set-up. In contrast to previous methods, the energy of neighbouring nodes decreases when a node is removed, reflecting the loss of the broken bonds. Therefore, the probability weights of such nodes increase and they are more likely to be chosen, consistent with expectations. The larger the fraction of the total energy removed, the larger the number of nodes removed. As a result, it is likely that the subgraph obtained after removal of a large fraction of the total energy is composed of only a small number of connected nodes. Such graphs are naturally connected, leading to a decreasing probability of fragmentation.
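The node-removal trial, including this energy-update step, can be sketched in the same style as the bond-removal code above (again our illustration under the same networkx conventions, not the authors' code):

```python
import random
import networkx as nx

def fragmented_after_node_removal(G: nx.Graph, E_r: float) -> bool:
    """One Monte Carlo trial of pentamer (node) removal, Sec. 3.2.2."""
    H = G.copy()
    budget = E_r
    while True:
        # recompute node energies so that already-broken bonds no longer count
        energy = {v: sum(e for _, _, e in H.edges(v, data="energy")) for v in H.nodes}
        # isolated pentamers (zero energy) fall off at no cost
        H.remove_nodes_from([v for v, en in energy.items() if en == 0])
        affordable = [v for v in H.nodes if energy[v] <= budget]
        if not affordable:
            break
        v = random.choices(affordable, weights=[1.0 / energy[v] for v in affordable], k=1)[0]
        budget -= energy[v]
        H.remove_node(v)
    return H.number_of_nodes() > 0 and not nx.is_connected(H)
```

Because the energies (and hence the 1/energy probability weights) are recomputed after every removal, partially detached pentamers become progressively cheaper and more likely to be removed, which produces the downturn of the fragmentation probability at large \(E_{r}\) noted above.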
However, at that stage, the remaining graph is so small that the cargo has already been released, so this does not pose any problem for the biological conclusions from this work. ### Comparing hole formation with capsid fragmentation Before the capsid fragments into two disjoint parts, it is possible that a hole that is large enough to enable cargo release forms via removal of individual pentamers, before capsid fragmentation takes place. We therefore study here the process of hole formation, and investigate whether the formation of a large hole occurs prior to or after capsid fragmentation for different wINs. Figure 2: Resilience to fragmentation of the papillomavirus wIN under bond breakage and pentamer removal. Probability of fragmentation for various bond strengths (choices of weights in the wIN) under (a) edge and (b) node removal. The energy percolation threshold for edge (c) and node (d) removal for different combinations of bond strengths reveals the Caspar-Klug (CK) scenario (\(E_{a}=E_{b}=E_{c}\), black dot) and the papillomavirus (P) scenario (\(E_{a}=2E_{b}=E_{c}\), green dot) to be located in the more stable range (red values in the landscape), with the CK geometry being more resilient to fragmentation than the P architecture. Figure 3: Comparison of hole formation with capsid fragmentation. (a) Fragmentation probabilities (dashed curves) are compared with the probability of having a hole of size larger than half the capsid (solid curve) for different edge energy distributions in the papilloma wIN; in all cases, hole formation occurs prior to capsid fragmentation. (b) Curves corresponding to the interaction network of the AaLS cage made of 72 pentamers, shown in (c), and the HK97 virus, shown in (d), show the opposite trend, with capsid fragmentation (dashed lines) occurring prior to hole formation (solid curve). For this we compute the probability that the size of the largest hole in the capsid is larger than half of the capsid. We compare the "removal" energy \(E_{r}\) for which this probability surpasses 0.5, a proxy for the transition from small to large hole sizes, with the fragmentation probability, see Fig. 3. Interestingly, the papillomavirus wIN exhibits a different behaviour from that of other protein cages of similar size: a non-quasiequivalent _de novo_ designed protein cage (AaLS, shown in Fig. 3c), and a quasiequivalent \(T=7\) viral cage (HK97, Fig. 3d) formed from rhombic building blocks. Whilst hole formation occurs prior to capsid fragmentation in the papillomavirus architecture, the opposite is the case for the other cages. This hints at a principally different disassembly mechanism in the papillomavirus. This conclusion is further supported by ternary graphs comparing the energies \(E_{F}\) at which fragmentation occurs, with \(E_{H}\) when hole formation occurs, for both node removal (Fig. 4, top row) and edge removal (bottom row). Denoting as \(f_{a}\), \(f_{b}\) and \(f_{c}\) the fractions associated with each type of bond in the total capsid energy, i.e. \[\begin{split}& f_{a}=\frac{60E_{a}}{E}=\frac{E_{a}}{E_{a}+E_{b}+\frac{3}{2}E_{c}}\\ & f_{b}=\frac{E_{b}}{E_{a}+E_{b}+\frac{3}{2}E_{c}}\\ & f_{c}=\frac{E_{c}}{\frac{2}{3}E_{a}+\frac{2}{3}E_{b}+E_{c}}\end{split} \tag{3}\] we plot the energy fragmentation threshold for different energy distributions in Fig. 4. Figure 4: Comparison of fragmentation and hole formation.
(a) Ternary graphs [5] for fragmentation energy \(E_{F}\) (left) and energy of hole formation \(E_{H}\) (right) when removing nodes/pentamers (top row) or edges/bonds (bottom row) from the interaction network/capsid. The black dot indicates the CK scenario of identical bond strengths, while the red dot corresponds to the papilloma scenario \(E_{a}=2E_{b}=E_{c}\). The red line indicates parameters with \(E_{a}=2E_{b}\) and \(E_{c}>E_{a}\). The actual value for the papillomavirus capsid is on this line and close to the red dot. Using the relations (1) and (3), we deduce the following conditions for \(f_{a}\), \(f_{b}\) and \(f_{c}\): \[f_{a}=2f_{b} \tag{4}\] \[f_{c}>\frac{3}{2}f_{a}\] These relations define the red line in the ternary graph: it connects the point \((f_{a}=0,f_{b}=0,f_{c}=1)\), corresponding to bond energies \(E_{a}=E_{b}=0\), with \((f_{a}=2/6,f_{b}=1/6,f_{c}=3/6)\), which corresponds to the ideal scenario of bond strength \(E_{a}=2E_{b}=E_{c}\). The realistic value will be in the vicinity of this line, close to the ideal value (red dot). Note that this is in the region corresponding to higher fragmentation energies, indicating capsid structures that are more resilient to fragmentation. It is interesting to compare the ternary graphs for node and edge removal. Whilst the graphs for \(E_{F}\) and \(E_{H}\) are similar for the node removal case, they differ markedly for edge removal. The capsid now opens a hole before fragmentation (the two thresholds differing on average by a factor of 1.71). This difference is particularly pronounced for capsids with weak \(a\) bonds: their resistance to fragmentation diminishes rapidly to 0, in contrast to their resistance to hole formation. This makes sense, as removal of \(a\) bonds from the wIN results in "floating" nodes that fragment the graph. As those holes are only of size 1, this does not affect the largest hole size significantly. Unlike \(a\) bonds, \(c\) bonds have a crucial role in the structure of the capsid, in terms of resistance to both fragmentation and hole formation. This is consistent with the fact that \(c\) bonds form a connected subgraph, which corresponds to a "whiffle ball" architecture [4], and the fact that they are the strongest bonds in the wIN. For comparison, the CK scenario of a \(T=7\) capsid with equal bond strengths \(E_{a}=E_{b}=E_{c}\) corresponds to \[f_{a}=f_{b}=\frac{2}{7} \tag{5}\] \[f_{c}=\frac{3}{7}\,, \tag{6}\] which is indicated by a black dot. In all graphs, the non-quasiequivalent geometry of the papilloma capsid is less resilient to fragmentation than its quasiequivalent counterpart. However, it is still relatively stable (yellow/green range), consistent with its function to offer sufficient protection to its genetic material, while enabling its timely release when infecting its host. ### Analysis of disassembly pathways As hole formation occurs prior to capsid fragmentation in papillomavirus according to Fig. 3a, we further analyse the process of hole formation. Fig. 5 shows the distribution of hole sizes for different values of the removal energy \(E_{r}\). Up to a certain threshold of energy removed (\(E_{r}=E_{H}\)), the holes in the capsid do not exceed a third of the capsid in size and the probability distribution retains a low standard deviation. Above that threshold, the size of the largest hole is consistently above 2/3 of the capsid size.
For removal energies close to \(E_{H}\) we observe a transition regime where the standard deviation increases and the average size of the largest hole rapidly increases. For the _de novo_ designed AaLS72 cage, no hole size is significantly favoured during this regime (see the flat distribution in black), i.e., no particular intermediary value is favoured for transitioning from a small to a large hole size (Fig. 5a and 5b). However, the papillomavirus capsid exhibits a peak for capsid intermediates with a hole size close to half the capsid size during this regime (see arrow in Fig. 5d). This can also be seen quantitatively by comparing the normalized entropies of the hole size distribution at \(E_{r}=E_{H}\): this value is approximately 0.61 for the AaLS72 capsid, but 0.56 for the papillomavirus capsid (\(E_{a}=E_{c}=2E_{b}\)). The maximal peak height over the average peak height is 5.39 for the papillomavirus wIN, but only 1.98 for the AaLS72 cage. Interestingly, a similar distribution (and indeed a similar peak-height ratio of 5.36) occurs also for the unweighted interaction network, i.e. for the \(T=7\) CK architecture. This shows that the papillomavirus capsid and the CK geometry structurally favour an intermediary state during disassembly in which the capsid is missing half of the pentamers. An example of a capsid intermediate with a hole size of 36 is shown in Fig. 5e. ### De novo designed versus natural protein cages The difference in disassembly behaviour between the _de novo_ designed AaLS72 cage and the virus examples is striking, and raises the question of whether this phenomenon occurs more widely in _de novo_ designed cage architectures. The AaLS pentamer is known to assemble into a wide range of cage structures with distinct symmetries and shapes (Fig. 6a). Whilst the smallest and largest cages have icosahedral symmetry, the four intermediate-sized cages exhibit tetrahedral symmetry. Resilience to fragmentation drops rapidly amongst the tetrahedral cage architectures with increasing size. However, there is a gain in resilience in the transition from the tetrahedral 60-pentamer cage to the icosahedral 72-pentamer cage, suggesting that symmetry has an impact on stability (Fig. 6b). A similar trend is observed for hole formation, but generally that curve is flatter, suggesting only limited variation in hole formation across the ensemble of AaLS cage architectures. Figure 5: Hole size distribution during disassembly. (a) Hole size distributions in AaLS72 disassembly intermediates under node removal for different values of \(E_{r}\). (b) Close-up of the distributions for energies \(E_{r}\) corresponding to 20%, 65% and 95% of the total capsid energy \(E\), respectively. (c) & (d) Equivalent data for the papillomavirus capsid graph with \(E_{a}=E_{c}=2E_{b}\). (e) Example of the interaction network of a papilloma disassembly intermediate with a hole size of 36, corresponding to the peak of the distribution in (d), see arrow. Figure 6: Fragmentation behaviour of the _de novo_ designed AaLS cages. (a) The interaction networks of symmetric AaLS cage architectures, computed using purpose-designed software. (b) The hole formation thresholds \(E_{H}\) and fragmentation thresholds \(E_{F}\) for the different cage structures. Note the lack of a data point for the fragmentation threshold for the smallest cage, as the fragmentation probability remains consistently under 0.5.
(c) The AaLS cages exhibit increasing entropies with size in their distributions at the cusp between small and large holes (red), resulting in a significantly higher value than for the papillomavirus capsid (green). (d)-(g) Hole size distributions for disassembly intermediates for tetrahedral AaLS cages with 24, 36, 48 and 60 pentamers. There is a cross-over in the curves between the 24-pentamer and the 36-pentamer cage, making hole formation more likely in the smaller cages, and fragmentation more likely in the larger ones. This analysis reveals distinct disassembly pathways for different capsid sizes. The maximal normalised entropy increases with cage size (Fig. 6c), consistent with the individual hole size distributions for the tetrahedral intermediate-sized cages shown in Fig. 6d-6g. These reveal a pattern similar to the icosahedral 72-pentamer AaLS cage in Fig. 5a. It is characterised by the absence of a defined pathway of hole formation for these architectures, in contrast to the papillomavirus case. This suggests that _de novo_ designed containers can exhibit disassembly behaviour that is principally different from that of naturally occurring cage structures. ## 5 Methods ### Generation of the interaction network in 3D The following geometric approach was used to visualise capsid architectures and their interaction networks. Starting with a list of edges corresponding to a tile, this tile was translated along two given vectors \(\overrightarrow{T}_{x}\) and \(\overrightarrow{T}_{y}\) to generate the lattice grid. Then three 6-fold symmetry axes of the grid were chosen to indicate the vertices of an equilateral triangle. Only edges that intersect with, or are contained within, this triangle were identified, effectively "cutting" this triangle out of the underlying planar lattice. The position of this triangle in the capsid surface was then defined by two integers \((h,k)\), where \(h\overrightarrow{T}_{x}+k\overrightarrow{T}_{y}\) is the vector between two vertices of the triangle. This algorithm was used for a triangular tiling with \((h,k)=(2,1)\) to generate one of the twenty faces of the papillomavirus capsid. We then manually assign weights to the edges before copying this face twenty times. After assembling the icosahedral faces in 3D, we obtain the graph of the viral capsid. A similar method has been used for the generation of the AaLS cages in Fig. 6a (see also GitHub). ### Edge and node removal from a weighted graph For edge/node removal from a weighted interaction network (wIN), we assign probability weights to each edge which are inversely proportional to their bond energy. Instead of working with a probability of removal, we pick an amount of "energy equivalent" \(E_{r}\) to randomly remove from the wIN, which is typically indicated as a percentage of the total energy \(E\). The Monte Carlo simulation is conducted as follows: we randomly choose bonds until we find one which has less energy than \(E_{r}\). We remove this bond and subtract its energy from \(E_{r}\). We repeat this process until all bonds have more energy than \(E_{r}\) or \(E_{r}=0\). We then check whether the graph is fragmented or not (see the README on GitHub), and compute the fragmentation threshold of such a capsid using the bisection method described in Sec. 5.3 below. Similarly, for node removal, we first compute the energy of each node by adding up the bond energies of each edge connected to it, and then compute its reciprocal to obtain its probability weight.
We again choose an amount of energy to randomly remove (\(E_{r}\)) and randomly select a node for removal. For each edge connected to the chosen node, we subtract its bond energy from the energy of its neighbouring nodes. If a node is now isolated, i.e. its energy is zero, it is removed from the graph and its energy subtracted from \(E_{r}\). We stop this process once the energy of each remaining node is greater than \(E_{r}\), and then check if the graph is fragmented. The fragmentation threshold is then again determined with the bisection method described in Sec. 5.3. ### The bisection method To determine the fragmentation threshold, we use a bisection method. For each step of the algorithm, we determine whether the probability of fragmentation \(p_{f}\) is above or below 0.5 with a certain accuracy, i.e., with a high enough probability. For this, let \(N\) be the number of simulations, and \(\epsilon\) the upper bound for the probability of having a wrong value for the next step (i.e. for getting a value above 0.5 where the actual one is below, or vice versa). Let \(F(f_{r})\) be a random variable which returns 1 if the graph is fragmented after removing a node/edge with probability \(f_{r}\), or 0 otherwise. This variable has a Bernoulli distribution \(F(f_{r})\sim\text{B}(p_{f})\). Let \((F_{i})_{i\in[1,N]}\) be \(N\) independent variables such that \(\forall i\in[1,N],F_{i}\sim\text{B}(p_{f})\); then \(S_{N}=\sum_{i=1}^{N}F_{i}\sim\text{B}(N,p_{f})\). \(S_{N}\) is a new random variable that represents the number of simulations that resulted in a fragmented capsid after \(N\) tries. We know that \(\text{E}(\frac{S_{N}}{N})=p_{f}\). Chebyshev's inequality then yields \(\forall a>0\): \[\begin{split}\text{P}(|\frac{S_{N}}{N}-p_{f}|>a)&\leq\frac{\text{V}(\frac{S_{N}}{N})}{a^{2}}\\ &=\frac{Np_{f}(1-p_{f})}{N^{2}a^{2}}\leq\frac{1}{4Na^{2}}\end{split} \tag{7}\] If \(|\frac{S_{N}}{N}-p_{f}|<|\frac{S_{N}}{N}-0.5|\), then \(\frac{S_{N}}{N}\) lies in the red area in Fig. 7, i.e. closer to the black than the blue curve, implying that \(\frac{S_{N}}{N}\) is in the correct range for the next step of the bisection method, and we therefore stop the simulation at this point. This gives us \[\mathbf{P}(\text{error})\leq\mathbf{P}(|\frac{S_{N}}{N}-p_{f}|>|\frac{S_{N}}{N}-0.5|) \tag{8}\] By applying (7) with \(a=|\frac{S_{N}}{N}-0.5|\) we get \[\mathbf{P}(|\frac{S_{N}}{N}-p_{f}|>|\frac{S_{N}}{N}-0.5|)\leq\frac{1}{4N|\frac{S_{N}}{N}-0.5|^{2}}\,, \tag{9}\] hence \[4N|\frac{S_{N}}{N}-0.5|^{2}>\frac{1}{\epsilon}\implies\mathbf{P}(\text{error})<\epsilon \tag{10}\] This inequality defines the stop condition for each step of the bisection method. As long as \(\lim_{N\rightarrow\infty}\frac{S_{N}}{N}\neq 0.5\), the algorithm will stop. However, the number of iterations this will take is potentially unbounded. Therefore, a maximal number of iterations is set at which the bisection process terminates. In none of our simulations was that value ever reached. ### Definition of the largest hole size For algorithmic purposes, we need a formal definition of the largest hole size. **Definition 5.1**: _Let \(G=(V,E)\) be a connected graph, and \(G^{\prime}=(V^{\prime},E^{\prime})\) a subgraph of \(G\) where \(G\neq G^{\prime}\) and \(V^{\prime}\neq\emptyset\). Consider the set of connected components of maximal size (i.e., with the largest number of nodes) \(\{C_{0},...,C_{p-1}\}\). Let \(i\in\{0,...,p-1\}\), \(C_{i}=(V_{i},E_{i})\) and \(\bar{C}_{i}=(\bar{V}_{i},\bar{E}_{i})\) where \(\bar{V}_{i}=V\setminus V_{i}\) and \(\bar{E}_{i}=\{\{u,v\}:\{u,v\}\in E,u\in\bar{V}_{i},v\in\bar{V}_{i}\}\).
Further, let \(H_{i}\) be the size of the largest connected component of \(\bar{C}_{i}\). Then the hole size of \(G^{\prime}\) is defined as_ \[H_{G}(G^{\prime})=\max_{0\leq j\leq p-1}H_{j}\,.\] _By convention, we set \(H_{G}(G)=0\) and \(H_{G}(\emptyset)=|V|\)._ Some instructive examples illustrate the rationale underpinning this definition. In order to describe the size of the largest hole in the bulk ("main component") of the capsid, one approach would be to compute the size of the largest connected component of \(G\setminus G^{\prime}\). However, note that this definition would find the graph of Fig. 8 as having a hole size of 1, because the isolated node of \(G^{\prime}\) is still considered part of the graph, even though it is no longer part of the "main component" that corresponds to the bulk of the capsid. Figure 7: A step in the bisection method stops when the probability of \(\frac{S_{N}}{N}\) being in the red area is above \(1-\epsilon\). For this reason we only consider the largest connected component. We denote by \(\bar{C_{0}}\) the graph made of the "missing" pieces from \(C_{0}\), i.e. the graph corresponding to the "holes" in \(C_{0}\). In case there are multiple largest connected components, as in Fig. 9, the algorithm has to decide which to pick. Intuitively, this is equivalent to choosing which is the main component. This can happen in practice, for instance, if a capsid graph breaks into three equal-sized pieces with a "middle ring" connecting two "disks". The question we need to ask is whether we consider such a graph as having two holes 1/3 of the capsid size, or one hole 2/3 of the capsid size. By using \(H_{G}(G^{\prime})=\max_{0\leq j\leq p-1}H_{j}\) in Def. 5.1, we opt for the latter case. However, we note that these cases are rare. Typically, we can easily determine the size of the largest "hole" present in the capsid by considering the largest connected component of the fragmented capsid as the "main part" or "bulk" of the capsid. Any group of neighbouring missing subunits would then be a "hole", and the largest group would correspond to the largest hole, as illustrated by an example in Fig. 10. Figure 8: An example of a graph \(G^{\prime}\) with two connected components. The set of connected components contains one subgraph \(C_{0}\). We have \(H_{0}=3\), hence \(H_{G}(G^{\prime})=3\). Figure 9: An example of a graph \(G^{\prime}\) with three connected components with the same number of vertices, \(\{C_{0},C_{1},C_{2}\}\). We have \(H_{0}=H_{2}=6\) and \(H_{1}=3\). Thus, \(H_{G}(G^{\prime})=6\). When only a small fraction of the capsid energy is removed, the hole sizes tend to be consistently small. On the other hand, when removing most of the capsid energy, the largest hole tends to consist of most of the capsid. However, the transition between these two regimes is not linear and happens abruptly for a specific energy value \(E_{H}\). This value can be formally defined as the removal energy for which the probability of the largest hole being larger than half of the capsid is 0.5. This value can be interpreted as the energy needed to break the structure of the capsid and is a measure of the graph's resilience to fragmentation.
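For illustration, Definition 5.1 can be implemented in a few lines; the sketch below is ours (again assuming networkx) and is not the purpose-designed software used for the figures.

```python
import networkx as nx

def hole_size(G: nx.Graph, H: nx.Graph) -> int:
    """Largest hole size H_G(H) of a subgraph H of G, per Definition 5.1."""
    if H.number_of_nodes() == 0:
        return G.number_of_nodes()          # convention: H_G(empty) = |V|
    if set(H.nodes) == set(G.nodes) and nx.is_connected(H):
        return 0                            # convention: H_G(G) = 0
    comps = list(nx.connected_components(H))
    m = max(len(c) for c in comps)          # only components of maximal size
    best = 0
    for c in (c for c in comps if len(c) == m):
        complement = G.subgraph(set(G.nodes) - c)   # \bar{C}_i, induced with the edges of G
        if complement.number_of_nodes() > 0:
            best = max(best, max(len(cc) for cc in nx.connected_components(complement)))
    return best
```

Taking the maximum over all maximal components implements the convention discussed above: a capsid graph broken into a ring plus two equal disks is counted as having one hole of 2/3 of the capsid size, rather than two holes of 1/3 each.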
### The entropy of the hole distribution in disassembly intermediates The randomness of each distribution can be quantitatively estimated using its entropy. For a capsid of size \(n\), with a hole size ranging from 0 to \(n\), this entropy ranges from 0 for the distribution of a deterministic random variable (i.e., the hole size is always the same for this distribution) to \(\log_{2}(n+1)\) for a uniform distribution over all hole sizes. This entropy value \(H\) is observed to be maximal for \(E_{r}=E_{H}\). For this value to be comparable between graphs of different sizes, we need to normalize it by \(\log_{2}(n+1)\). ### Simulation parameters When computing fragmentation and hole formation probabilities for given values of energy removed (\(E_{r}\)), the only free parameter is the number of Monte Carlo steps. Estimation of fragmentation and hole formation thresholds is done using a bisection method, which is characterized by its number of steps, the probability of error in each step, and the maximal number of simulations per step. These parameters are given in Tables 5.6 & 5.6. ## 6 Discussion The distinctive disassembly behaviour of the AaLS cages may be rooted in the fact that, in contrast to viral capsids, some protein subunits in their capsomers do not interact with other capsomers in the cage, leading to the formation of larger holes in the cage surface. Interestingly, a similar behaviour is seen also in viruses formed from 72 capsomers (12 pentamers and 60 hexamers) that are organised according to a rhomb tiling, as for example in bacteriophage Hong-Kong 97 (HK97). This might explain why these viruses have evolved additional capsid features, such as the chain-mail organisation in HK97 [9], to stabilise their capsids. In summary, different capsid architectures follow principally different disassembly mechanisms, with a preference for either hole formation or fragmentation. Our analysis shows evidence of both in naturally occurring viruses depending on their geometric design principles. These results provide a guide for protein nanoparticle design targeted at specific applications, contributing to the rational design of specific desired cargo release mechanisms. ## Acknowledgements RT thanks the Wellcome Trust for financial support through the Joint Investigator Award (110145 & 110146), the EPSRC for an Established Career Fellowship (EP/R023204/1) which also provided funding for QR and the Royal Society for a Royal Society Wolfson Fellowship (RSWF/R1/180009), which provided funding for QR and SB.
2309.06223
Unveiling Single-Bit-Flip Attacks on DNN Executables
Recent research has shown that bit-flip attacks (BFAs) can manipulate deep neural networks (DNNs) via DRAM Rowhammer exploitations. Existing attacks are primarily launched over high-level DNN frameworks like PyTorch and flip bits in model weight files. Nevertheless, DNNs are frequently compiled into low-level executables by deep learning (DL) compilers to fully leverage low-level hardware primitives. The compiled code is usually high-speed and manifests dramatically distinct execution paradigms from high-level DNN frameworks. In this paper, we launch the first systematic study on the attack surface of BFA specifically for DNN executables compiled by DL compilers. We design an automated search tool to identify vulnerable bits in DNN executables and identify practical attack vectors that exploit the model structure in DNN executables with BFAs (whereas prior works make likely strong assumptions to attack model weights). DNN executables appear more "opaque" than models in high-level DNN frameworks. Nevertheless, we find that DNN executables contain extensive, severe (e.g., single-bit flip), and transferrable attack surfaces that are not present in high-level DNN models and can be exploited to deplete full model intelligence and control output labels. Our finding calls for incorporating security mechanisms in future DNN compilation toolchains.
Yanzuo Chen, Zhibo Liu, Yuanyuan Yuan, Sihang Hu, Tianxiang Li, Shuai Wang
2023-09-12T13:42:20Z
http://arxiv.org/abs/2309.06223v2
# Unveiling Single-Bit-Flip Attacks on DNN Executables ###### Abstract Recent research has shown that bit-flip attacks (BFAs) can manipulate deep neural networks (DNNs) via DRAM Rowhammer exploitations. Existing attacks are primarily launched over high-level DNN frameworks like PyTorch and flip bits in model weight files. Nevertheless, DNNs are frequently compiled into low-level executables by deep learning (DL) compilers to fully leverage low-level hardware primitives. The compiled code is usually high-speed and manifests dramatically distinct execution paradigms from high-level DNN frameworks. In this paper, we launch the first systematic study on the attack surface of BFA specifically for DNN executables compiled by DL compilers. We design an automated search tool to identify vulnerable bits in DNN executables and identify practical attack vectors that exploit the model structure in DNN executables with BFAs (whereas prior works make likely strong assumptions to attack model weights). DNN executables appear more "opaque" than models in high-level DNN frameworks. Nevertheless, we find that DNN executables contain extensive, severe (e.g., single-bit flip), and transferable attack surfaces that are not present in high-level DNN models and can be exploited to deplete full model intelligence and control output labels. Our finding calls for incorporating security mechanisms in future DNN compilation toolchains. ## 1 Introduction Recent years have witnessed increasing demand for applications of deep learning (DL) in real-world scenarios. This demand has led to extensive deployment of deep neural network (DNN) models in a wide spectrum of computing platforms, ranging from cloud servers to embedded devices. To date, a promising trend is to use DL compilers to compile DNN models from high-level model specifications into optimized machine code for a variety of hardware backends [63, 42, 12]. Hence, instead of being interpreted in frameworks like PyTorch, DNN models can be shipped in a "standalone" binary format and executed directly on CPUs, GPUs, or other hardware accelerators. More and more DNN executables have been deployed on mobile devices [47, 48, 30, 54] and cloud computing scenarios [76, 3]. Despite the prosperous adoption of DNN executables in real-world scenarios, their attack surface is largely unexplored. In particular, existing research has demonstrated that bit-flip attacks (BFAs) enabled by DRAM Rowhammer (RH) are effective in manipulating DNN models [58, 28]. However, existing works only launch BFAs toward DNN models in DL frameworks like PyTorch. Since DNN models executed as low-level machine code have distinct execution paradigms, runtime systems, and more "obscure" code formats than those in the DL frameworks, it is highly timely that a systematic study on the attack surface of BFAs on DNN executables be conducted. To this end, our work provides the first and in-depth understanding of the severity of BFAs on DNN executables, whose findings suggest the need to incorporate comprehensive mechanisms in DNN compilation toolchains to harden real-world DNN executables against exploitations. We identify practical attack vectors that exploit the model structures in DNN executables (often stored in the .text section of DNN executables) with BFAs. Previous attacks rely on rather strong assumptions of the knowledge of full model details. In contrast, we make a practical assumption that the attacker only knows the model structure, which is usually public or recoverable [79, 7, 29].
Thus, benefiting from mechanisms that maximize memory utilization on commercial clouds (e.g., Kernel Same-Page Merging [4]), we unveil that BFAs can be launched in a cross-virtual-machine (cross-VM) manner. We also propose a set of strategies to enable effective and efficient BFAs on DNN executables. We particularly show that a "vulnerable bit" searcher can be launched over DNN executables sharing the same model structure with the victim DNN executable, which automates the process of locating vulnerable bits in a given DNN executable. Moreover, we adopt and augment an RH attack technique [33] to launch exploitations on DNN executables deployed on real-world DDR4 devices. Our extensive study is conducted over DNN executables emitted by two commonly used production DL compilers, TVM [12] and Glow [63], developed by Amazon and Facebook, respectively. We assess the attack surface on diverse combinations of different configurations, including DNN models, datasets, and compilation options, covered by a total of 21 DNN executables in this study. We made important observations, including 1) _Pervasiveness_. We identify on average 16,599 vulnerable bits in each of the DNN executables we studied, even for quantized models, which have been known to be more robust than full-precision models [28, 78]. 2) _Effectiveness & Stealthiness_. 71.1% of the RH attacks reported in this work succeed with only a single bit flip, while 95.6% succeed within 3 flips, making RH attacks highly effective in practice. 3) _Versatility_. We show that BFAs can achieve various attack end-goals over both classification and generative models. 4) _Transferability_. We also find many "super-bits" -- vulnerable bits that exist across the .text section of DNN executables sharing the same structure but with different weights. We further conduct reverse engineering and manual analysis to characterize those vulnerable bits. Our work highlights the need to incorporate security mechanisms in future DNN compilation toolchains to enhance the reliability of real-world DNN executables against exploitations. In summary, we contribute the following: * This paper launches the first in-depth study on the attack surface of DNN executables under BFA, one major and practical hardware threat toward DNN models. * We show how BFAs can be carried out in realistic, cross-VM scenarios. We extend de facto RH techniques on DDR4 devices and design a novel vulnerable bit searcher to automate locating vulnerable bits. * Our empirical findings uncover the pervasiveness, stealthiness, versatility, and transferability of BFA vectors on DNN executables. We show that DNN executables contain severe attack surfaces that are not present in corresponding DNN models in high-level DL frameworks. ## 2 Preliminary and Related Works ### DL Compiler and DNN Executable **DNN Compilation.** DL compilers typically accept a high-level description of a _well-trained_ DNN model, exported from DL frameworks like PyTorch, as their input. During the compilation, DL compilers often convert the DNN model into intermediate representations (IRs) for optimizations. High-level, platform-agnostic IRs are often graph-based, specifying the DNN model's computation flow. Platform-specific IRs, such as TVM's TensorIR and Glow's High Level Optimizer (HLO), specify how the DNN model is implemented on a specific hardware backend and support hardware-specific optimizations.
Finally, DL compilers convert their low-level IRs into assembly code (or first into standard LLVM/CUDA IR [37, 46]) for execution. **DNN Executable and Runtime.** Popular DL compilers like TVM [12] and Glow [63] emit DNN executables in the standard ELF format, which can be executed on mainstream CPUs and GPUs. The emitted DNN executables can be in a standalone executable or a shared library (a .so file that can be loaded by other programs). Without loss of generality, we focus on the .so format in this paper, but our attack pipeline and findings can be easily extended to standalone executables. Fig. 1 compares deploying DNN executables in CPU & main memory with running DNN models in PyTorch. Typically, a DNN executable contains all computation logic of a DNN model in a single executable. DNN executables show a distinct paradigm compared to DL frameworks, which essentially interpret the DNN model in a computational graph and offload low-level computation to external kernel libraries like cuDNN [13] and MKL-DNN [31] on GPUs and CPUs. As will be reviewed later, existing BFAs on DL frameworks (e.g., [28, 58]) primarily target pre-trained weights of DNN models (which are secret in our assumption, marked using the gray lock in Fig. 1). Moreover, to locate vulnerable bits in model weights, prior works often use gradient-based searching and thus require model weights to be known to the attacker. On the other hand, launching hardware exploitations toward interpretation-based environments is an open problem [52], since the Python runtime contains a more obscure runtime environment, and it is much harder, if not infeasible, to identify BFA vectors in it. Also, it is unclear if GPU memories are vulnerable to BFAs. In contrast, this research analyzes the attack vectors in DNN executables, particularly their .text sections (the 2nd "attackable" box) that contain the model structure information. As noted in Sec. 3, we assume that the .rodata section, which often contains the model weights, is unknown ("secret") to the attacker. **Real-World Usage.** The real-world usage of DL compilers and DNN executables has been illustrated in recent research [12, 32, 42, 63] and industry practice. Figure 1: Comparing the runtime systems of DNN models in DL frameworks and DNN executables. Different from prior work, we assume the trained weights are unknown to attackers and thus marked as "secret." The TVM community has reported that TVM has received code contributions from many companies, including Amazon, Facebook, Microsoft, and Qualcomm [15]. TVM has been used to compile DNN models for CPUs [32, 41]. Facebook has deployed Glow-compiled DNN models on CPUs [45]. Overall, DL compilers are increasingly vital to boost DL on CPUs, embedded devices, and other heterogeneous hardware backends [3, 76]. This work exploits the output of DL compilers, i.e., DNN executables, and we, for the first time, show the pervasiveness and severity of BFA vectors in DNN executables. ### Bit-Flip Attacks **BFA via Rowhammer Attack.** BFA is a type of hardware fault injection attack that corrupts the memory content of a target system (by flipping its bits). While BFA can be initiated via a variety of hardware faults [16], RH [35] manifests as one highly effective, practical, and controlled fault injection attack. In short, RH exploits DRAM disturbance errors, such that for some modern mainstream DRAM devices, repeatedly accessing a row of memory can cause bit flips in adjacent rows.
RH is rooted in the fact that frequent accesses on one DRAM row introduce voltage toggling on DRAM word lines. This results in quicker leakage of capacitor charge for DRAM cells in the neighboring rows. If sufficient charge is leaked before the next scheduled refresh, the memory cell will eventually lose its state, and a _bit flip_ is induced. As a result, by carefully selecting neighboring rows (aggressor rows) and intentionally performing frequent row activations ("hammering"), attackers can manipulate some bits without directly accessing them. To date, RH attacks have been successfully applied to a variety of systems, such as kernel memory regions and browsers [14, 21, 33, 35, 65, 72]. It has also led to the rise of corresponding security mitigation techniques [73, 25, 75, 34]. For DDR3 DRAM, RH attacks are most commonly launched by alternating accesses to the two rows adjacent to the victim row ("double-sided hammering"). However, DDR4 DRAM devices have been widely deployed for many years, and due to the implementation of on-chip RH mitigations such as Target Row Refresh (TRR) [19], techniques applicable to DDR3 are no longer effective. More recent research has pointed out that, to launch RH on DDR4, attackers need to precisely control the frequency-domain parameters (phase, frequency, and amplitude) of the hammering procedure in order to induce bit flips [33], which is much more challenging than on DDR3. While contemporary BFA research is mostly demonstrated on DDR3 DRAM devices, this work demonstrates RH attacks on recent DDR4 DRAM devices; see Sec. 5.1. **BFA over DNN.** Recent research has specifically launched BFA toward DNN models [11, 78, 58, 59, 60, 71]. The attack goals of BFAs can be generally divided into two categories: first, BFA may be launched to extensively manipulate model predictions over all inputs, possibly reducing the model accuracy to random guesses. In contrast to typical DNN training that aims to minimize the loss, such a BFA strives to maximize the loss function \(\mathcal{L}\) over the inputs [78, 58]. Meanwhile, targeted BFA (T-BFA) aims to manipulate prediction output over specific inputs [60]. T-BFA retains the original predictions over the other inputs to offer a stealthy attack. Therefore, existing works primarily launch BFA to flip bits in the model weights. For example, ProFlip [11] identifies and flips the weights of specific salient neurons to extremely large values, which can control the overall predictions for certain classes. RH attacks are inherently costly, and launching extensive amounts of RH attacks may cause obvious hardware faults and be detected [73, 75, 34, 25]. In this regard, recent BFA-based research strives to manipulate model predictions while flipping as few bits as possible. Prior works have shown that gradient-based bit search methods [5, 11] can facilitate both types of BFAs with varying end goals to identify a small set of bits that, once flipped, can satisfy the attack objectives. The bit search methods often perform intra-layer bit searching toward a layer \(l\), and then identify the top \(n\) weights with the largest gradients as important. Given that weights are stored as floating-point numbers, BFA often flips the bits of the exponent part of those weights to maximize the impact of bit flips.
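To see concretely why exponent bits are the prime target, consider the following self-contained illustration (ours, not taken from the paper) of what a single bit flip does to an IEEE-754 float32 weight compared with an int8-quantized one:

```python
import struct

def flip_bit_f32(x: float, bit: int) -> float:
    """Return x (as IEEE-754 float32) with one bit flipped.
    Bit 31 is the sign, bits 30-23 the exponent, bits 22-0 the mantissa."""
    (raw,) = struct.unpack("<I", struct.pack("<f", x))
    (y,) = struct.unpack("<f", struct.pack("<I", raw ^ (1 << bit)))
    return y

w = 0.125                       # a typical small weight magnitude
print(flip_bit_f32(w, 30))      # top exponent bit: 0.125 -> ~4.25e+37
print(flip_bit_f32(w, 23))      # lowest exponent bit: 0.125 -> 0.25

q = 23                          # an int8-quantized weight
flipped = (q ^ 0x80) - 0x100 if (q ^ 0x80) >= 0x80 else q ^ 0x80
print(flipped)                  # MSB flip shifts it by only 128: 23 -> -105
```

A single flip of the top exponent bit catapults a small weight into the \(10^{37}\) range, whereas any single flip in an int8 weight changes its value by at most 128 -- the intuition behind the relative robustness of quantized models discussed next.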
**BFA over Quantized DNN.** Recent BFAs (with different end goals) are increasingly demonstrated on quantized DNN models, mainly because quantized DNN models are, by design, more robust and harder to attack by BFA [78, 57]. More specifically, quantized DNN models are created by mapping the original floating-point weights and computations to their discrete, integer versions. This way, the possible ranges for numerical values inside quantized models are significantly compressed (usually down to 8-bit integers), and bit flips occurring in model weights have a much more limited impact on the overall model behavior. Recent research has also shown that it takes more bit flips to achieve the same attack objectives on quantized DNN models than on their floating-point versions (2x to 23x more bit flips) [78]. In this work, we show that quantization does not grant DNN executables more robustness against BFAs, and we can still successfully deplete their intelligence by flipping a single bit, as shown in Sec. 7.5. ## 3 Threat Model and Assumptions **Environment.** Our attack targets DNN executables compiled by DL compilers. We assume that the attackers are able to perform BFA using currently mature RH exploitation techniques on the DRAM of a host. Moreover, we for the first time illustrate the feasibility of launching practical cross-VM BFAs on DNN executables; see relevant assumptions and details in Sec. 5.1. For the attacker, the steps for RH include memory templating [62], memory massaging [71, 65], and hammering. Our assumption is reasonable and shared by prior BFAs over DNN models. Often, the attacker can place a malicious program on a PC or cloud host machine co-located with the victim DNN executable, then launch RH [75, 28, 78] attacks while not requiring any software-level vulnerabilities on the victim DNN executables or the attack host. Moreover, launching hardware attacks does not involve preparing malicious DNN inputs ("adversarial examples" [20]) and feeding them to the victim DNN executables. The victim DNN executable exposes its public interface (e.g., for medical image diagnosis); while the attacker does not need to interact with the victim DNN executable, she can query its outputs via its public interface during RH. **Knowledge of the Victim DNN Model.** Our attack is distinct from _all_ prior BFAs in terms of knowledge of the victim DNN model. Existing works generally assume that attackers have full knowledge of the victim DNN model, including the model structure and the trained weights [56, 59, 75, 11, 5]. For example, DeepHammer [78] requires the trained weights to compute gradients. Nevertheless, we deem this an overly strong assumption: DNN weights are generally trained on private data and viewed as the key intellectual property of DNNs. In practice, only DNN owners have access to the trained weights, and no existing (query-based model extraction) attacks can fully recover DNN weights.1 Footnote 1: Query-based model extraction [49, 70] only trains a new DNN of similar _functionality_ with the victim DNN on a limited set of inputs; they cannot recover the exact values of DNN weights.
Besides, given that DL compilers only provide very limited compilation settings (e.g., full optimizations and AVX2) compared with traditional C/C++ compilers, we assume that replicating the same compilation setting is not difficult (e.g., by iterating all possible settings to "match" the victim executable, as sketched below). In other words, knowing the model structure and the compilation setting, attackers can obtain the .text section of the victim DNN executable, but not the .rodata section, as shown in Fig. 1.

Footnote 2: Most commercial DNNs are built on public, well-defined backbones; e.g., Transformer [74] is the building block of the GPT models [8].

Overall, our requirements are quite permissive for the attacker; it is indeed a looser set of assumptions compared to prior techniques launching BFAs toward DNN models in high-level DL frameworks, which work in a purely white-box scenario. See our attack details in Sec. 5.1.

**Attacker's Goal.** For DNN models in DL frameworks, their weights are often stored in a so-called "weights file." Existing works assume full knowledge of the victim DNN models, and accordingly, they strive to flip bits in that weights file. As noted in Fig. 1, attacking other components appears technically challenging and unscalable [52]: DNN models are interpreted in the DL framework, forming a complex runtime environment, and attacking certain bits in the process memory space will likely lead to unexpected behaviors instead of a deterministic attack controlled by the attacker. In contrast, we show that launching effective and controllable BFAs is feasible against the model structure, i.e., the .text section of the victim DNN executable. In fact, this paper uncovers a much larger attack surface than that of a "weights file." We are the first to consider attacking both classification and generative models. Accordingly, we demonstrate two representative end goals: for classifiers, we successfully downgrade the inference accuracy of the victim DNN model to that of a random guess, and for generative models, we tamper with the generation results to produce biased or corrupted outputs. As clarified in Sec. 5.1 and empirically shown in Sec. 7.3, our downgrading attack can be easily extended to more targeted attacks (often referred to as T-BFA), e.g., manipulating the classification outputs of specific inputs to a target class. In addition, tampered generation results can consequently enable model poisoning attacks when the outputs are used for data augmentation [66, 9, 43]. While adversarial example inputs [20] are deliberately crafted to mislead DNN models, our attacks make DNN models malfunction when provided with normal, legitimate inputs.

## 4 Attack and Key Findings Overview

We first present the attack overview. As mentioned earlier, we require that the attacker has a DNN executable that shares an identical DNN structure with the victim DNN executable. This enables the attacker to conduct offline bit searching and identify bits that are highly likely vulnerable (details in Sec. 5). Our bit searching is an offline process, meaning that the attacker can perform it in a local simulation environment, where she can easily perform BFAs at will (see details in Sec. 5.1). In contrast, once bit searching is finished, the online attack is conducted via RH exploitation in the real-world environment; we successfully demonstrate the feasibility of the attack on a mainstream DDR4 DRAM by following and extending a state-of-the-art RH exploitation technique [33].
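As referenced in Sec. 3, the setting-matching step could be sketched as follows. This is a minimal sketch assuming pyelftools for section extraction; `compile_model` is an assumed helper wrapping the DL compiler, and we assume some reference digest of the victim's .text is observable (e.g., from a distributed binary or a deduplication side channel).

```python
import hashlib
from itertools import product
from elftools.elf.elffile import ELFFile  # pip install pyelftools

def text_sha256(path):
    # Hash only the .text section: per Fig. 1, code is identical across
    # executables compiled from the same structure and setting, while the
    # (unknown) weights live in .rodata.
    with open(path, "rb") as f:
        data = ELFFile(f).get_section_by_name(".text").data()
    return hashlib.sha256(data).hexdigest()

def compile_model(structure, opt_level, use_avx2):
    # Assumed helper: invoke the DL compiler (e.g., TVM) on the known model
    # structure with dummy weights and return the output executable's path.
    raise NotImplementedError

def match_setting(victim_text_digest, structure):
    # Brute-force the few available compilation settings until the local
    # .text matches the victim's.
    for opt, avx2 in product(["-O0", "-O3"], [True, False]):
        local = compile_model(structure, opt, avx2)
        if text_sha256(local) == victim_text_digest:
            return opt, avx2
    return None
```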
Before presenting the attack details, we first introduce the key findings of this research, whose corresponding empirical studies are presented in Sec. 7.

**Pervasive Vulnerabilities in DNN Executables.** DNN executables compiled by mainstream DL compilers, TVM and Glow, are pervasively vulnerable under BFAs. Moreover, with reverse engineering and extensive manual effort, we present characteristics of the vulnerable bits in Sec. 7.6, illustrating that vulnerable bits can originate from various binary code patterns that commonly exist in DNN executables.

_"Reflections on Trusting Trust": Compiler-Introduced Backdoors._ Following the above finding, and comparing with DNN models in high-level DL frameworks, we confirm that the vulnerable bits we found in DNN executables do not exist in their corresponding DNN models in high-level DL frameworks. We view these shared, common issues as hardware "backdoors" introduced by DL compilers and the runtime, which conceptually correlates with the classic backdoor attack noted in Ken Thompson's Turing Award lecture [69], "Reflections on Trusting Trust." This observation advocates low-level, binary-centric security analysis and mitigation against BFAs, for which today's BFA mitigation techniques, which mainly focus on model weights and are deployed in DL frameworks [75, 81], are hardly applicable.
**Offline: Bit Searching.** To profile vulnerable bits locally, the attacker prepares a set of executables \(\mathcal{E}=\{e_{i}\}\) that share the victim's model structure; training their weights would ordinarily require multiple real-world datasets with distinct features. Thus, we obtain trained weights in a novel data-free manner; see details in Sec. 5.2. After obtaining \(\mathcal{E}\), the attacker starts to use them as local profiling targets. Ultimately, our goal is to find vulnerable bits in \(e\) that can be flipped to cause a desired effect. This is challenging because it is normally hard to tell whether an arbitrary bit will be a vulnerable bit in the victim executable \(e\), whose weights are never exposed to the attacker. Recall that, as introduced in Sec. 4, we identify superbits that are transferable among DNN executables with distinct weights; attackers can leverage this transferability to search for vulnerable bits. More specifically, searching for vulnerable bits in the victim \(e\) can be transformed into finding superbits shared by \(\mathcal{E}\) that are likely also shared by \(e\); we give details below.

For each \(e_{i}\in\mathcal{E}\), if the attacker obtains a set of its vulnerable bits \(\mathcal{V}_{i}\), she can then calculate the superbits over \(\mathcal{E}\), denoted as \(\mathcal{S}_{\mathcal{E}}\stackrel{{\text{def}}}{{=}}\bigcap_{i=1}^{n}\mathcal{V}_{i}\), as illustrated in Fig. 2(a). As the size of \(\mathcal{E}\) increases, \(\mathcal{S}_{\mathcal{E}}\) becomes a set of superbits that are shared by more and more \(e_{i}\)'s. Intuitively, these superbits are also more likely to affect the victim \(e\), since \(e\) has the same model structure as each \(e_{i}\). Although the superbits' vulnerability in \(e\) is not guaranteed, and the attacker will not know until she launches the online attack toward \(e\), our empirical results show that this approach finds vulnerable bits in \(e\) with high enough accuracy for real-world attackers to conveniently launch practical attacks, as we demonstrate in Sec. 7.5.

To obtain \(\mathcal{V}_{i}\) for every \(e_{i}\), a naive attacker would need to sweep all bits in each \(e_{i}\)'s .text section and check whether the vulnerability condition is met, then repeat this process for all \(e_{i}\in\mathcal{E}\). However, this can be very time-consuming, especially for complex DNNs, which can have more than 6.8 million bits in their .text sections according to our preliminary study. To speed up the search, the sweeping process and the intersection calculation can be interleaved so that the attacker iteratively shrinks \(\mathcal{S}_{\mathcal{E}}\): starting with \(\mathcal{S}_{\mathcal{E}}=\mathcal{V}_{1}\), each iteration tries the bits already in \(\mathcal{S}_{\mathcal{E}}\) on an unswept binary \(e^{\prime}\) and removes those that are not vulnerable in \(e^{\prime}\), instead of trying all bits in \(e^{\prime}\). During this process, the attacker will need to flip bits in the local executables \(\mathcal{E}\) and assess their effects, as sketched below.
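A minimal sketch of this interleaved search follows; `is_vulnerable` is an assumed local flip-and-evaluate harness (made concrete in Sec. 7.1), and all names are illustrative.

```python
def flip_bit(path, byte_off, bit):
    # Flip a single bit of a local executable in place; trivial for the
    # attacker, who fully controls her own copies.
    with open(path, "r+b") as f:
        f.seek(byte_off)
        b = f.read(1)[0]
        f.seek(byte_off)
        f.write(bytes([b ^ (1 << bit)]))

def is_vulnerable(exe, byte_off, bit):
    # Assumed harness: run a flipped copy of `exe` on a test set and check
    # whether the attack condition holds (see Sec. 7.1 for one criterion).
    raise NotImplementedError

def search_superbits(executables, text_offsets):
    # Interleaved search: sweep only e_1 exhaustively (S_E = V_1), then
    # shrink S_E on each remaining unswept binary e' instead of sweeping it.
    s_e = [(off, bit) for off in text_offsets for bit in range(8)
           if is_vulnerable(executables[0], off, bit)]
    for e_prime in executables[1:]:
        s_e = [c for c in s_e if is_vulnerable(e_prime, *c)]
    return s_e
```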
We clarify that this is simple: as the attacker has full control over \(\mathcal{E}\) and her local environment, she can easily achieve bit flipping by editing the binary files.

**Online Preparation: Memory Templating.** Our attack is achievable in practical and challenging settings: we consider the attacker and victim to be in separate VMs on the same multi-tenant cloud server, so they cannot access each other's files or processes, as in typical machine-learning-as-a-service (MLaaS) environments. As a "warm up," the attacker needs to scan the DRAM module in the host machine for bit locations that can be flipped using RH, a procedure called memory templating [62]. A bit is flippable using RH only if there is a "template" in the DRAM module with the same bit location and flip direction. Moreover, for DDR4 platforms, an effective "DRAM access pattern" [33] containing the frequency-domain information needed to launch RH on the platform must also be found and applied to trigger a flip. The set of metadata describing where RH can flip bits and how to flip each bit is called the _memory templates_. Currently, multiple tools are available to find these templates on DDR4, including Blacksmith [33] and TRRespass [19]. In our pipeline, we leverage and slightly extend Blacksmith, the state-of-the-art RH technique for DDR4, to perform memory templating, and we use the access patterns it finds in the later attack step.

**Online Attack: RH.** Since the attacker and victim are in two different VMs on the same host, we also assume that Transparent Huge Pages (THP) and memory deduplication are enabled on the host, as is commonly done to maximize memory utilization in commercial environments [23, 64, 2]. Here, memory deduplication refers to techniques such as Kernel Same-page Merging (KSM) on Linux or Transparent Page Sharing (TPS) on VMware ESXi, both of which identify and merge identical memory pages across different virtual machines. During the attack, KSM or TPS will merge the pages containing the .text section of \(e\) and any binary in \(\mathcal{E}\) because these pages are _identical_. This creates a unique opportunity for the attacker to modify bits in \(e\): by flipping the bits in a merged page, the modification will also be directly visible in the victim's page, bypassing software protections such as copy-on-write (CoW) that are normally triggered when the content of a merged page is modified through OS APIs.

Figure 2: The attack pipeline. The "lock" symbol in the top right means attackers cannot access the weights of the victim.

Nevertheless, if the host has disabled THP or memory deduplication (rare in practice), the attacker can still exploit the vulnerable bits in \(e\) in line with previous work's threat model that assumes co-location of the attacker and victim [78]. After the attacker has determined the set of superbits \(\mathcal{S}_{\mathcal{E}}\), she can launch RH to flip the vulnerable bits in \(e\). She does so by first choosing a superbit \(s\in\mathcal{S}_{\mathcal{E}}\) that has at least one available template, whose information has been obtained in the "warm-up" phase. Then, as shown in Fig. 2(b), she also needs to check that the bit belongs to an unflipped page: due to the implementation of KSM, the attacker can only flip one bit per merged page [62], unless the victim executable crashed and restarted, in which case its associated memory is considered to be reset.
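A minimal sketch of staging identical page content for deduplication is given below, assuming a Linux host where Python's `mmap` exposes `MADV_MERGEABLE` (available since Python 3.8); note that in the cross-VM setting it is typically the hypervisor, not the guest, that marks memory mergeable.

```python
import mmap

PAGE = 4096

def stage_text_for_merge(text_bytes):
    # Place page-aligned copies of the known .text content in anonymous
    # memory so content-based deduplication (KSM/TPS) can merge them with
    # the victim's identical pages.
    size = -(-len(text_bytes) // PAGE) * PAGE   # round up to whole pages
    region = mmap.mmap(-1, size)                # anonymous mapping
    region.write(text_bytes)
    # On a shared Linux kernel, pages must be opted in explicitly; across
    # VMs, the hypervisor typically marks all guest memory mergeable.
    region.madvise(mmap.MADV_MERGEABLE)
    # An ordinary write through this mapping after merging would trigger
    # copy-on-write; RH instead modifies the merged frame physically,
    # which is what makes the flip visible to the victim.
    return region                               # keep alive while KSM scans
```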
If all these requirements are met, the attacker can then use the template to flip the bit via standard RH, whose steps include memory massaging, setting up aggressor rows, and applying the effective access pattern. Here, memory massaging refers to the process of precisely placing the memory page containing the bit to flip at the location specified by the template; it can be done via one of many existing techniques [78, 62, 71, 36, 65] according to the attacker's needs. Overall, after the bit flip is triggered by RH, the attacker queries the victim executable via its public interface to check whether its behavior has changed as expected. As shown in Fig. 2(b), there are three possible outcomes: (1) the victim's behavior changes as expected, which is the desired outcome; (2) the victim crashes, which is undesirable since attackers wish to manipulate the victim's behavior without crashing it; and (3) the victim's behavior does not change, which is also undesirable. In the latter two cases, the attacker has to repeat the above process with a different superbit \(s\) until she finds a bit that can be flipped and changes the victim's behavior as expected, or runs out of superbits to try. Our attack can be performed directly on DDR4 DRAM and does not require any hardware modification. Moreover, we clarify that this step is not limited to Blacksmith and can be replaced by any other RH technique if needed [40, 21, 35].

### Preparing Well-Trained Weights without Distinct Training Datasets

Following the discussion of the need for trained weights in Sec. 5.1, this section elaborates on their definition, the reason and challenges for obtaining them, and how we efficiently and effectively obtain a set \(\mathcal{E}\) in which each \(e_{i}\in\mathcal{E}\) has well-trained weights, even when \(|\mathcal{E}|\) is large (e.g., around ten).

**DNN Functionality.** We start by formulating a DNN's functionality. In general, a DNN \(f_{\mathbf{\theta}}:\mathcal{X}\rightarrow\mathcal{Y}\) can be viewed as a parameterized function mapping an input \(x\in\mathcal{X}\) to an output \(y\in\mathcal{Y}\); for example, mapping an image to a label for image classifiers. The DNN structure (denoted by \(f\)) is a non-linear function, whereas the weights \(\mathbf{\theta}\) are learned from training data, implicitly representing the rules for mapping \(\mathcal{X}\) to \(\mathcal{Y}\).

```
1   function Conv2D_DNN(x, θ):
2     kernel ← [2, 2]; stride ← 1
3     out ← 0
4     for i ← 0 to |x| − kernel[0] by stride do
5       for j ← 0 to |x| − kernel[1] by stride do
6         for k ← 0 to kernel[0] − 1 by 1 do
7           for l ← 0 to kernel[1] − 1 by 1 do
8             out ← out + x_{i+k, j+l} · θ_{k,l}
9           end for
10        end for
11      end for
12    end for
13    out ← ReLU(out)
14    return out > 0
```

**Algorithm 1:** Execution of a sample Conv DNN.

**Well-Trained vs. Randomly-Initialized Weights.** A group of well-trained weights \(\mathbf{\theta}\) should encode a fixed mapping (implicitly) defined by the training data. Also, because the randomness (i.e., entropy) in weights gradually reduces during training [17, 39, 6, 50], the trained mapping typically focuses more on some input elements while caring less about others (i.e., it has preferences). This distinguishes a well-trained DNN from a DNN with randomly initialized weights: the latter usually treats different input elements approximately equally.3 We give the definition of trained weights below.

Footnote 3: That is why it performs random prediction.
Note that this "equal" case is associated with the highest entropy (randomness). We illustrate this in Fig. 3, where we show the computation of the same convolutional DNN (see lines 4-12 in Alg. 1) with trained weights (Fig. 3(a)) and randomly initialized weights (Fig. 3(b)). When given the same input \(x\), the trained weights focus only on \(x_{0,1}\), \(x_{0,2}\), \(x_{1,1}\), and \(x_{1,2}\), because only \(\mathbf{\theta}_{0,1}\) is non-zero, whereas the randomly initialized weights treat all input elements \(x_{0,0}\) through \(x_{2,2}\) approximately equally.

Figure 3: Comparing DNNs with trained/random weights \(\mathbf{\theta}\).

We now make the connection to show why this difference is important for BFAs on DNN executables: although Fig. 3 shows the same DNN model \(f\), just with different weights, when compiled into DNN executables they will have distinctly distributed vulnerable bits. Consider the execution process of Fig. 3 (see Alg. 1). For the trained weights (Fig. 3(a)), skipping either of the two inner loops (lines 6 and 7)4 will change the value of \(out\) to \(0\). As a result, the prediction output changes from \(true\) to \(false\). In contrast, if randomly initialized weights (Fig. 3(b)) are used, the prediction function will still produce the same output after the same mutation.

Footnote 4: For simplicity, we omit the discussion on whether this is possible via BFA here; we show various ways BFA can corrupt machine code in Sec. 7.6.

On the other hand, for different trained weights that have distinct preferences (e.g., those having larger weight values on \(\theta_{0,0}\), \(\theta_{1,0}\), or \(\theta_{1,1}\)), all of them have vulnerable bits on lines 6 and 7 of Alg. 1. These transferable bits (i.e., superbits) among different DNN executables can be leveraged to conduct real attacks. More concretely, if we have identified some vulnerable bits on lines 6 and 7 that are shared by multiple executables with different trained weights, we have high confidence that these bits are also vulnerable in the victim executable (we empirically confirm this in Sec. 7.5). Therefore, we argue that superbits \(\mathcal{S}_{\mathcal{E}}\) found over a set of executables \(\mathcal{E}\) with trained weights are more likely to be shared by the victim executable, which we reasonably assume to also have trained weights (since it is adopted for a real application). As a result, offline profiling must use a set of DNN executables \(\mathcal{E}\) in which each executable has well-trained weights rather than randomly initialized weights.

**Constructing Fake Datasets.** Given the usually limited number of available datasets, the key challenge at this step is preparing trained weights that encode distinct mappings/preferences. Note that training weights of different values on the same dataset (e.g., by varying training setups) is ineffective, because the encoded mappings will be similar (mappings/preferences are learned from the same training data); the searched vulnerable bits would then be biased and unable to transfer to weights trained on unknown datasets. One unique opportunity, as observed in our study, is that the mapping encoded in a DNN need not be semantically meaningful (e.g., perform a reasonable classification). Since we only focus on the distinction between mappings, it is unnecessary to train weights on different _real datasets_ (see the sketch below).
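Anticipating the construction detailed in the next paragraph, a minimal PyTorch-style sketch of building and training on such a fake dataset might look as follows; all names and hyperparameters are illustrative.

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

def make_fake_dataset(n=10000, shape=(3, 32, 32), n_classes=10, seed=0):
    # Fixed random-noise inputs with fixed random labels; different seeds
    # yield completely different input-label mappings.
    g = torch.Generator().manual_seed(seed)
    x = torch.randn(n, *shape, generator=g)
    y = torch.randint(0, n_classes, (n,), generator=g)
    return TensorDataset(x, y)

def train_on_fake(model, seed, epochs=5):
    loader = DataLoader(make_fake_dataset(seed=seed), batch_size=128,
                        shuffle=True)
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model  # weights now encode the mapping fixed by `seed`
```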
Previous works in the machine learning community have shown that pre-training DNNs on random datasets can speed up the subsequent fine-tuning on real, meaningful datasets, because during pre-training, DNN weights are "regulated" to become similar to those trained on real datasets [44, 51]. Inspired by this observation, we use random noise to construct fake datasets with distinct mappings. To do so, we first generate random noise as inputs and assign random labels to them. Once this step is done, the inputs and their labels are fixed; a fake dataset is thereby constructed. Then, when training on a fake dataset, randomly initialized weights are gradually updated to encode the mapping in the fake dataset, and certain preferences are accordingly formed. Since the mappings in different fake datasets are completely different, DNN weights with distinct preferences can be obtained by training on different fake datasets.

## 6 Study Setup

**DL Compilers.** This research uses two DL compilers, TVM (version 0.9.0) and Glow (revision b91ddff10), which are developed by Amazon and Meta (Facebook), respectively. To the best of our knowledge, these two DL compilers represent the best available DL compilers, with broad application scope and support for various hardware platforms. Both DL compilers are studied in their standard settings without any modifications. TVM allows users to specify the optimization level, and we study both the lowest optimization level (-O0) and the highest optimization level (-O3). Moreover, we also study the effect of the AVX2 switch, which controls whether the generated code uses AVX2 instructions, on the attack surface of BFAs. This is because these instructions enable many vectorized-calculation optimizations and are widely used in DNNs to accelerate computation; we confirm that TVM emits binaries with different assembly instructions with and without this switch. On the other hand, Glow does not allow users to specify the optimization level, and it enables full optimizations by default. Sec. 2.1 has clarified the high-level workflow of DL compilation. TVM and Glow compile DNN models into standalone 64-bit x86 executables that can be directly executed on CPUs.

**DNN Models and Datasets.** This research uses five representative DNN models. We pick ResNet50, GoogLeNet, DenseNet121, and LeNet, four popular image classification models, all of which are widely used with varying model structures and a diverse set of DNN operators. Each of them has up to 121 layers and up to 23.5M weights. In addition, we also include the quantized versions of these models to evaluate whether their robustness against traditional weight-based BFAs still holds for DNN executables. However, we do not compile quantized models with Glow, because Glow does not support them. To explore BFAs on generative models, we focus on generative adversarial networks (GANs), as they are the most popular ones; we select DCGAN, which is the backbone of nearly all modern GANs. For the image classification models, we train them on three popular and representative datasets: CIFAR10, MNIST, and Fashion-MNIST. For DCGAN, we train it on the MNIST dataset and evaluate its outputs in terms of image quality and semantics. These trained models, after being compiled as executables, are treated as the victims for attacks. With different compilers and configurations, we have 21 victim DNN executables, as listed in Table 2. As noted in Sec. 5.2, for our practical attack experiments (see Sec. 7.5), we require DNNs trained on fake datasets for local profiling. To do so, we train each DNN on ten fake datasets for offline bit searching.

Table 1 lists all seed DNN models, their numbers of weights and accuracies, as well as the sizes of their compiled executables and their .text sections. The (victim) classification models have average accuracies of 91.34%, 89.68%, 87.53%, and 98.50%, respectively, across all datasets. In terms of file size, executables compiled by Glow are much smaller than those compiled by TVM, since Glow does not embed the weights into the executable but instead stores them in a separate file. When weights are embedded into the executable, the file size is generally proportional to the number of weights of the model; this also means that the quantized variants usually have much smaller files. As for the size of the .text section, it is influenced by both the complexity of the model structure and the enabled compiler options: more complex structures, such as quantized models and models with more layers, tend to have larger .text sections, while enabling optimizations reduces the size of the .text section. For example, the binary with the largest .text section is the quantized DenseNet121 (844.5K), and ResNet50's .text section size reduces from 215.7K to 86.4K after optimizations are enabled.

| Model | #Weights | Avg %Acc. | File size | .text size |
|---|---|---|---|---|
| ResNet50 [24] | 23.5M | 91.34 | 0.39-07.7M | 80.82-125.7K |
| GoogLeNet [67] | 5.5M | 89.68 | 6.02-11.4M | 221.5-337.9K |
| DenseNet121 [53] | 7.0M | 87.53 | 8.92-73.3M | 427.3-844.5K |
| LeNet [38] | 3.2K | 98.50 | 78.0-90.0K | 17.7-25.0K |
| DCGAN [55] | 3.6M | - | 13.7M | 42.4K |

Table 1: Statistics of DNN models and their compiled executables evaluated in our study.

## 7 Evaluation

In this section, we report the evaluation results in accordance with the five key findings listed in Sec. 4.

### Pervasive Vulnerabilities

DNN executables compiled by mainstream DL compilers are extensively vulnerable under BFAs. Table 2 shows the list of binaries we have trained, compiled, and evaluated based on our study setup in Sec. 6.

Table 2: Vulnerable bits in our evaluated DNN executables.

To give an in-depth analysis, we form different comparison groups as follows:

(1) **Model Objectives and Structures.** Overall, for both classifiers and generative models, all models have from about 0.52% to 7.54% (weighted average 2.21%) of their .text region bits vulnerable to BFAs, where flipping each single such bit can successfully manipulate the model output. Here, we consider a bit vulnerable if flipping it causes an image classification model to degrade to a random guesser (i.e., the accuracy drops to \(\frac{1}{\text{\#classes}}\)). For a GAN model, if, after flipping a bit, 85% of its outputs' labels change, or either the Fréchet Inception Distance (FID) [26] or the average Learned Perceptual Image Patch Similarity (LPIPS) [80] score rises above its 85th percentile value, we consider the bit vulnerable (a minimal checking harness for the classifier criterion is sketched at the end of this subsection). For quantized models, we notice that their percentages of vulnerable bits are significantly lower than those of their full-precision versions, especially for Quantized GoogLeNet and Quantized DenseNet121, which have only 0.84% and 0.52% of .text region bits vulnerable, respectively. In a way, this result confirms the observation from prior works that the BFA surface on quantized models is smaller than on full-precision models. However, we also note that because quantized models are significantly more complicated in terms of structure, they produce binaries with much larger .text regions after compilation (e.g., for the first ResNet50 binary in Table 2, its quantized version has a .text section that is 98,044 bytes larger). Thus, the absolute number of vulnerable bits is still significant (over 10,000), leaving attackers plenty of room to launch attacks.
(2) **Compiler Configurations.** In Table 2, we have two compilers, TVM and Glow, where for TVM we also vary two options (the optimization level and the AVX2 code generation toggle). We find that turning off either optimizations or AVX2 increases the vulnerability of the DNN executable. Since the TVM optimization pipeline simplifies and fuses many operators in the computational graph [12], the large number of unfused individual operators in an unoptimized executable may introduce more structures vulnerable to BFAs, such as more loops and more exploitable loop variables, and hence more vulnerable bits. On the other hand, turning off AVX2 replaces all AVX2 instructions in an executable with simpler SSE instructions, for example replacing VMOVAPS with MOVAPS. While there must be more instructions, because SSE is less vectorized than AVX2, these new instructions are shorter in machine code, so the binaries' .text regions still shrink, as shown in Table 2. We also noticed that the opcode bits in these SSE instructions are more "flippable" than those in AVX2 instructions, i.e., an opcode is more easily flipped to become another still _valid_ opcode that does something, as can be seen by comparing the most common flip types before and after turning off AVX2 in Table 3. This is likely because shorter instructions have a smaller opcode space, so different opcodes are closer to each other (in terms of Hamming distance).

| Rank | AVX2: Instruction (Flip Type) | AVX2: Pct. (%) | Non-AVX2: Instruction (Flip Type) | Non-AVX2: Pct. (%) |
|---|---|---|---|---|
| 1 | VMOVUPS (Data) | 12.34 | MULPS (Opcode) | 16.96 |
| 2 | MOV (Data) | 8.82 | MULPS (Data) | 15.96 |
| 3 | ADD (Data) | 6.23 | ADDPS (Opcode) | 9.12 |
| 4 | VBROADCASTSS (Data) | 5.83 | MOV (Data) | 8.02 |
| 5 | VXORPS (Data) | 5.17 | MOVAPS (Opcode) | 5.63 |
| 6 | VFMADD231PS (Data) | 4.88 | XORPS (Data) | 4.88 |
| 7 | LEA (Data) | 4.86 | ADD (Data) | 4.37 |
| 8 | VMOVAPS (Data) | 3.47 | ADDPS (Data) | 4.07 |
| 9 | VADDPS (Opcode) | 3.35 | MOVAPS (Data) | 3.96 |
| 10 | VADDPS (Data) | 3.31 | MOVSS (Opcode) | 3.78 |

Table 3: Top 10 most common flip types in AVX2 and non-AVX2 binaries. "Pct." stands for percentage.

Additionally, we report that the vulnerable bits are distributed throughout the entire .text region of a binary rather than being concentrated in a small region. The distribution of vulnerable bits in the .text region is plotted in Fig. 4, where the darkness of the color indicates the portion of vulnerable bits found in the corresponding address range inside .text. For Glow, the regions near the beginning and end of the .text region are mainly auxiliary functions (e.g., PNG image loading), so we do not consider vulnerable bits in these regions. Other than that, for both TVM and Glow, vulnerable bits are distributed relatively evenly inside .text; this translates to higher success rates for attackers, as the "one bit per page" constraint in Sec. 5.1 will have much less impact on them.

Figure 4: Distribution of vulnerable bits in DNN executables.
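As referenced above, the flip-and-evaluate check behind the vulnerability criterion can be made concrete as follows; `accuracy_of` is an assumed helper, `flip_bit` is from the earlier bit-searching sketch, and the 0.02 tolerance is our own illustrative margin.

```python
import shutil

def accuracy_of(exe_path, test_set):
    # Assumed helper: run the executable on each test input (e.g., via its
    # command-line interface) and return top-1 accuracy, or None on crash.
    raise NotImplementedError

def bit_is_vulnerable(exe_path, byte_off, bit, test_set, n_classes=10):
    # Classifier criterion from this subsection: the flip must degrade the
    # model to (roughly) a random guesser, i.e., accuracy ~ 1/#classes.
    work = shutil.copy(exe_path, exe_path + ".flipped")
    flip_bit(work, byte_off, bit)
    acc = accuracy_of(work, test_set)
    return acc is not None and acc <= 1.0 / n_classes + 0.02
```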
### Effective & Stealthy Corruption

As mentioned in the "conditions for vulnerable bits" in Sec. 5.1, all of our findings are vulnerable bits that can be used to deplete full DNN model intelligence with only a single-bit BFA. For example, flipping bit 1 of the byte at offset 0x1022f6 in the first binary in Table 2 causes the model's prediction accuracy to drop from 87.20% to 11.00%, equivalent to a random guesser. To the best of our knowledge, this is the first work to report such single-bit corruptions in DNN models. We believe the ability to corrupt a model using one single bit flip is an important motivation for real-world attackers: under the assumption that BFAs are mostly instantiated using RH, which is a probabilistic process, it greatly reduces the cost of the attack and increases the success rate, as we will see in our practical attack experiments in Sec. 7.5. It also largely reduces the risk of being detected by the victim (i.e., it is stealthier), as the corruption is much subtler than in the multi-bit case.

One intriguing fact related to launching single-bit BFA is that the flip direction of the vulnerable bits leans toward 0\(\rightarrow\)1. Intuitively, a more uniform distribution might be preferred by attackers, so that they can maximize their attack opportunities using the observation that vulnerable DRAM modules tend to have about the same number of 0\(\rightarrow\)1 and 1\(\rightarrow\)0 flips [33, 35]. That said, we believe the skewness we found is still in an acceptable range and does not impede attacks, as we show later.

### Versatile End-Goals

As mentioned earlier, the consequences BFA can cause are not limited to downgrading a classification model's inference accuracy. We observe that BFA also shows a high potential to _manipulate_ the DNN executable's prediction results or generative outputs, and this phenomenon is also widespread in our study. First, in terms of classification models, Table 4 summarizes the different predicted classes that can be controlled by single-bit BFA. When a model is pinned to a class by BFA, it has the highest probability of outputting that class for any input, granting attackers the ability to control model outputs (a detection sketch follows below). We notice that, for most of the models in the list, there are frequently hundreds or even thousands of vulnerable bits that can pin the model to each of the classes, although in rare cases there are few or no bits available for a specific class. While we do not claim this to be a targeted BFA (T-BFA) in the standard sense [11] because of its non-deterministic nature, we point out that no existing work has demonstrated a practical T-BFA that can pin a model's output using one bit, whereas all of our findings are achieved via single-bit BFA. Thus, we see this more as a "high risk, high return" strategy for sophisticated attackers.

| Model | Dataset | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ResNet50 | CIFAR10 | 742 | 447 | 3192 | 881 | 121 | 1500 | 381 | 47 | 4290 | 502 |
| ResNet50 | MNIST | 12 | 7699 | 33 | 17 | 8 | 77 | 14 | 150 | 284 | 1509 |
| ResNet50 | Fashion | 0 | 8543 | 96 | 6 | 2 | 87 | 71 | 19 | 336 | 425 |
| GoogLeNet | CIFAR10 | 1747 | 2022 | 8430 | 5358 | 614 | 1148 | 1088 | 619 | 209 | 3675 |
| GoogLeNet | MNIST | 188 | 10602 | 1270 | 1726 | 256 | 1991 | 1235 | 526 | 2062 | 29953 |
| GoogLeNet | Fashion | 6165 | 388 | 722 | 302 | 2989 | 1123 | 505 | 1366 | 417 | 9398 |
| DenseNet121 | CIFAR10 | 602 | 3425 | 2681 | 1881 | 3228 | 504 | 1321 | 1439 | 1128 | |
| DenseNet121 | MNIST | 3563 | 3895 | 874 | 5066 | 1935 | 3202 | 2723 | 3740 | 5377 | |
| DenseNet121 | Fashion | 16718 | 1014 | 551 | 1282 | 152 | 1945 | 5402 | 26 | 2141 | 1001 |
| QResNet50 | CIFAR10 | 866 | 1534 | 43752 | 2409 | 397 | 3127 | 1295 | 488 | 389 | 966 |
| QGoogLeNet | CIFAR10 | 3612 | 165 | 1707 | 2336 | 266 | 348 | 394 | 600 | 311 | 1229 |
| QDenseNet121 | CIFAR10 | 1546 | 2395 | 1051 | 2794 | 1840 | 74 | 1021 | 578 | 705 | 1040 |
| ResNet50 | CIFAR10 | 1369 | 1023 | 1940 | 1278 | 331 | 2225 | 496 | 357 | 647 | 630 |
| ResNet50 | MNIST | 36 | 7010 | 47 | 39 | 31 | 102 | 23 | 130 | 305 | 1891 |

Table 4: Number of vulnerable bits by output class. The last two rows are Glow-compiled executables, whereas the rest are TVM-compiled.
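As referenced above, whether a flipped executable has been pinned to a class can be checked by a majority vote over its predictions; `predict` is an assumed per-input inference helper, and the 0.9 threshold is illustrative.

```python
from collections import Counter

def predict(exe_path, x):
    # Assumed helper: single-input inference through the executable's
    # public interface.
    raise NotImplementedError

def pinned_class(exe_path, test_set, threshold=0.9):
    # A flipped executable is "pinned" if one class dominates its outputs
    # for (nearly) all inputs.
    preds = [predict(exe_path, x) for x, _ in test_set]
    cls, cnt = Counter(preds).most_common(1)[0]
    return cls if cnt / len(preds) >= threshold else None
```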
For GANs, Fig. 5(a) shows the original output of our DCGAN model before being corrupted by BFA, and Fig. 5(b) shows three different types of outcomes the model produces when given the same input after flipping three different bits in the model. Among the three types, the first may be the most interesting: not only does it almost completely change the semantics of the output, but it also pins the model output to only two semantic classes (1 and 9). Furthermore, notice that the 9's in the output sample have two distinct looks, suggesting that this flip is not merely causing duplications at the image pixel level but is also manipulating the model's learned semantics. The second type degrades the output image quality while still preserving the semantics, and the third destroys both the semantics and the image quality.

Figure 5: Output samples of DCGAN before and after BFA.
For the first case, it is anticipated that when a DNN model is augmented with data from this GAN under our attack, the augmented DNN will lean toward predicting "1" or "9" for any input, since its training data are dominated by the digits one and nine. In addition, under the last two cases, the augmented DNN should be largely degraded because its training data are less recognizable.

### Superbits

As briefly mentioned before, during our study we find that some vulnerable bits exist in more than one DNN executable, even when the models are trained on different datasets and share only the same DNN structure; we call these bits _superbits_. In Table 5, we summarize the existence of superbits over 7 different models, each trained on 3 different datasets: for each model structure, we train three versions of the model, each using a different dataset (CIFAR10, MNIST, or Fashion-MNIST); after compiling them into DNN executables, we search for superbits across all 3 executables sharing the same DNN structure. All TVM-compiled executables here are compiled with full optimizations and AVX2 code generation enabled. Generally, comparing with the results in Table 2, we find that about half of the vulnerable bits found in a DNN executable trained on one dataset also exist in the DNN executables trained on the other two datasets. Since a superbit is located at the same offset and has the same flip direction in all executables that share it, an attacker will find it much more convenient to launch BFAs if she can find superbits shared by the victim DNN executable. In fact, we show in Sec. 5.1 that it is indeed feasible to find superbits that are highly likely to be shared by a set of DNN executables the attacker possesses _plus_ the victim DNN executable, and we describe a systematic search method to achieve this effectively and efficiently. Moreover, in Sec. 7.5, we further use the existence of superbits and our novel search method to launch practical BFAs without relying on knowledge of the victim model's weights. Finally, our case study in Sec. 7.6 provides more insights into why superbits can effectively disrupt the behavior of even different DNN executables.

### Practical Attack

This section demonstrates that practical BFAs can be launched against DNN executables. We run our experiments on a server with an Intel i7-8700 CPU and a Samsung 8GB DDR4 DRAM module, without any hardware modifications. Before launching our attacks, we first extend Blacksmith to launch RH attacks on DDR4 platforms more smoothly: in our preliminary experiments, we find that the timing function in Blacksmith is rather easily affected by noise, in the sense that it is hard to distinguish the timing difference between DRAM accesses with and without row conflicts. Thus, we replace the timing function in Blacksmith with the one in TRRespass, which we find to be more resilient to noise on our platform. Our practical attack experiments cover in total 9 different DNN executables to evaluate the attack effectiveness on different model structures, datasets, and compilers. For TVM-compiled executables, we enable full optimizations and AVX2 to better represent the real-world scenario. Then, for each executable, we launch the attack five times and collect the results. Following our attack pipeline in Sec. 5.1, we first perform memory templating by running Blacksmith with the default settings to search for the most effective access pattern on our platform and then use this pattern to sweep for flippable bit templates in a 256MB region.
In total, 17,366 flippable bits are identified after about 17.5 hours, 8,855 of which are 0\(\rightarrow\)1 flips. We then obtain the set of superbits \(\mathcal{S}_{\mathcal{E}}\) using the search method described in Sec. 5; these are the bits we attempt to flip during the attack. To empirically determine the number of "fake datasets" used to calculate \(\mathcal{S}_{\mathcal{E}}\) (Sec. 5.2), we plot in Fig. 6 the relationship between the number of fake datasets used and the accuracy of the superbits found, i.e., how many bits in \(\mathcal{S}_{\mathcal{E}}\) are actually also vulnerable in the victim executable. We observe that the accuracy becomes stable at around 70% after 8 datasets, and the confidence interval is tightest at 10 datasets. Thus, we use 10 datasets to calculate \(\mathcal{S}_{\mathcal{E}}\).

| Model | Datasets | Compiler | #Superbits | %Superbits |
|---|---|---|---|---|
| ResNet50 | CIFAR10 / MNIST / Fashion | TVM | 4334 | 1.61 |
| GoogLeNet | CIFAR10 / MNIST / Fashion | TVM | 12422 | 1.38 |
| DenseNet121 | CIFAR10 / MNIST / Fashion | TVM | 18349 | 1.39 |
| QResNet50 | CIFAR10 / MNIST / Fashion | TVM | 7579 | 1.04 |
| QGoogLeNet | CIFAR10 / MNIST / Fashion | TVM | 1994 | 0.14 |
| QDenseNet121 | CIFAR10 / MNIST / Fashion | TVM | 6517 | 0.24 |
| ResNet50 | CIFAR10 / MNIST / Fashion | Glow | 5223 | 1.26 |

Table 5: Statistics of superbits in different DNN executables.

Figure 6: The relation between the number of fake datasets (Sec. 5.2) used and the accuracy of the superbits found. The line is the average for _all executables_ and the shaded region shows the 95% confidence interval.

The statistics of the attack results are shown in Table 6. On average, we successfully degrade each victim executable to a random guesser (prediction accuracy of 10%) with 1.4 flip attempts while causing no crashes at all, regardless of the original prediction accuracy of the victim executable. In the case of DenseNet121 on CIFAR10, we consistently succeed with just one flip attempt in all five runs, decreasing its accuracy from 80.00% to 11.40% and ruining its inference capabilities. As for quantized models, they appear slightly more resilient to our attacks than non-quantized models, requiring 1.4 to 1.6 flips to succeed on average. In fact, in one run for the quantized DenseNet121 model, four flips were required before the goal was achieved. However, our results show that attacking quantized models is not a challenging target for attackers, as most quantized models are still successfully attacked within two flips. In summary, our observations suggest that BFA is a severe _and_ practical threat to DNN executables.

| Model | Dataset | #Flips | #Crashes | %Acc. Change |
|---|---|---|---|---|
| ResNet50 | CIFAR10 | 1.4 | 0.0 | 87.20 → 10.00 |
| GoogLeNet | CIFAR10 | 1.4 | 0.0 | 84.80 → 10.00 |
| DenseNet121 | CIFAR10 | 1.0 | 0.0 | 80.00 → 11.40 |
| DenseNet121 | MNIST | 1.2 | 0.0 | 99.10 → 11.20 |
| DenseNet121 | Fashion | 1.2 | 0.0 | 92.50 → 10.60 |
| QResNet50 | CIFAR10 | 1.6 | 0.0 | 86.90 → 9.60 |
| QGoogLeNet | CIFAR10 | 1.4 | 0.0 | 84.60 → 11.20 |
| QDenseNet121 | CIFAR10 | 1.6 | 0.0 | 78.50 → 10.20 |
| ResNet50 | CIFAR10 | 1.4 | 0.0 | 78.80 → 10.00 |

Table 6: Statistics of the 5 attack runs on 9 executables. The last row is for a Glow-compiled executable, whereas the rest are for TVM-compiled ones.

### Case Study

To understand the underlying root cause of how a single bit flip can impair the full capabilities of a DNN executable, we randomly select 60 cases from our findings, 30 each for non-superbits and superbits. After an extensive manual study, Table 7 lists four categories of destructive consequences of single-bit corruption, including:

* **Data Flow Broken:** the calculation of a specific layer's input or output address is corrupted, resulting in reading incorrect inputs or writing layer outputs to wrong memory regions that are never read by subsequent layers.
* **Control Flow Broken:** a branch is changed to be always false, causing the corresponding calculation to be skipped and thus producing a large number of incorrect outputs.
* **Data Alignment Broken:** the offset of memory read and write instructions is shifted, causing data to be read or written in an unaligned manner.
* **Instruction Alignment Broken:** the bit flip converts alignment-purpose data bytes embedded in the .text section into instructions, causing subsequent instructions to be corrupted.

Our case study reveals common code patterns exhibited across models and datasets. Although the analyzed cases come from the same DNN executable (i.e., LeNet1), in our observation the results apply to other DNN executables compiled by TVM and Glow and offer a comprehensive understanding of the reasons behind successful BFAs. Below, for each category, we discuss one representative case.

**Data Flow Broken.** We observed many different patterns of how a single bit flip can break the data flow of model inference. Here, we provide one straightforward example related to the parallelism of DNN executables. Typically, a DNN executable runs in parallel: multiple threads are launched to carry out the computation of a DNN layer, with each thread computing a portion of the output. When a thread is initialized, its corresponding output offset is calculated from the thread ID. Consider the example in Fig. 7, where r14 is the register used to store the offset and [rcx+10h] stores the base address of the output. Before BFA, the offset (r14) is calculated as base_address+r12*192 (r12 stores the thread ID). After BFA, however, the instruction at 0xC9 is split into two instructions irrelevant to r14 (at 0xC9 and 0xCB), resulting in all threads simultaneously writing to the same output region.

Table 6: Statistics of the 5 attack runs on 9 executables. The last row is for the Glow-compiled executable, whereas the rest are TVM-compiled.

| Model | Dataset | #Flips | #Crashes | Acc. Change (%) |
| --- | --- | --- | --- | --- |
| ResNet50 | CIFAR10 | 1.4 | 0.0 | 87.20 → 10.00 |
| GoogLeNet | CIFAR10 | 1.4 | 0.0 | 84.80 → 10.00 |
| DenseNet121 | CIFAR10 | 1.0 | 0.0 | 80.00 → 11.40 |
| DenseNet121 | MNIST | 1.2 | 0.0 | 99.10 → 11.20 |
| DenseNet121 | Fashion | 1.2 | 0.0 | 92.50 → 10.60 |
| QResNet50 | CIFAR10 | 1.6 | 0.0 | 86.90 → 9.60 |
| QGoogLeNet | CIFAR10 | 1.4 | 0.0 | 84.60 → 11.20 |
| QDenseNet121 | CIFAR10 | 1.6 | 0.0 | 78.50 → 10.20 |
| ResNet50 | CIFAR10 | 1.4 | 0.0 | 78.80 → 10.00 |

Table 7: Classification of manually analyzed BFA cases.

Figure 7: Case 1: the multi-thread data flow is broken. (a) Assembly code before BFA. (b) Assembly code after BFA.
**Control Flow Broken.** The number of threads launched by a DNN executable is determined by a predefined environment variable. When more threads are launched than a DNN layer (e.g., a convolutional layer) requires, the redundant threads directly jump to the function end after the thread-ID check. However, such a check is vulnerable to BFA. As shown in Fig. 8, the instruction at 0x70 originally compares 0x28 with eax; after BFA, it compares with the esp register, which always holds a very large stack address. The following comparisons are thus always true, making all threads skip the execution of the function.

**Data Alignment Broken.** In the computation of DNN executables, all floating-point numbers are represented as 4-byte-aligned data. However, such alignment can be violated by even a single bit flip. As shown in Fig. 9, the vmovups instruction moves 32 bytes of data from the ymm1 register to memory. The BFA increases the offset of the target memory address by 2 bytes, resulting in unaligned data being written into memory. The next time the data is read from memory in an aligned manner, the corrupted data is interpreted as an extremely large float value (e.g., 1e8). These large numbers propagate through the DNN model inference and dominate the final result.
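To see why a 2-byte shift of a store offset is so destructive, consider the following minimal Python sketch (our own illustration, not from the paper): floats written 2 bytes off and later read back aligned are reinterpreted as garbage values, which then propagate through inference.

```python
import struct

# Three ordinary layer outputs, stored as 4-byte little-endian floats.
outputs = [1.5, -2.25, 3.0]
packed = struct.pack("<3f", *outputs)

# Emulate the corrupted vmovups: the store lands 2 bytes past the
# intended address, so the data sits at a 2-byte-shifted position.
buf = bytearray(16)
buf[2:2 + len(packed)] = packed

# A later aligned 4-byte read reinterprets mixed bytes of adjacent
# floats; depending on the stored bytes, the result can be a denormal,
# a huge magnitude, or a NaN -- in any case garbage that corrupts
# everything computed from it downstream.
for i in range(3):
    (v,) = struct.unpack_from("<f", buf, 4 * i)
    print(f"aligned read {i}: {v!r}")
```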
**Instruction Alignment Broken.** During compilation, nop instructions are often used to align instruction addresses, and similar cases are observed in DNN executables. Consider the example in Fig. 10, where a nop instruction is used to align the next instruction to address 0xD0. After a single bit flip, however, the nop instruction is converted into a shorter variant, leaving its last byte (at 0xCF) to be decoded as the start of an add instruction. The instruction alignment is thus broken, leading to an uninitialized register (ymm0) being used in the computation. In this case, the presence of nan values in ymm0 directly destroys the subsequent DNN model inference.

## 8 Discussion

**Attacking Other DNN Models.** In line with the focus of prior BFA works [28, 78], this work mostly studies CV models, including four classification models and one generative model, all of which production DL compilers can smoothly compile. Some other types of DNN models are not discussed in this work. For instance, NLP models may adopt recurrent structures, such as RNNs and long short-term memory (LSTM) [27], to manage sequential inputs. We tentatively tried to evaluate our methods on them but found that TVM and Glow show immature support for RNN models. Nevertheless, our attack method should be general enough to cover RNN models, since the vulnerable bits and related binary code patterns are model/operator-agnostic (see Sec. 7.6). The attack surface revealed by our attack widely exists in executables compiled from different types of DNN models.

**Countermeasures.** Our attack extensively manipulates DNN executable outputs and therefore raises serious security concerns, given the wide adoption of DNN executables in business fields. Manipulating model outputs (e.g., generated images) also facilitates other attacks like training data pollution, as noted in Sec. 4. To mitigate adversaries launching BFAs, code obfuscation could be explored. Besides, since flipping vulnerable bits mainly corrupts data and control flow (as discussed in Sec. 7.6), one potential defense against BFAs might be designing DNN-executable-specific data/control flow integrity [1, 10]. We leave exploring techniques to mitigate our proposed attack as future work.

## 9 Conclusion

Despite the prosperous usage of DL compilers, their security remains largely unexplored. In this paper, we have launched the first systematic study on the attack surface of BFA over DNN executables. We show that DNN executables are pervasively vulnerable to BFAs and can be exploited in a practical, single-bit, and transferable manner. Our findings call for incorporating security mechanisms into future DL compilation toolchains.

Figure 8: Case 2: the control flow is broken.

Figure 9: Case 3: the data alignment is broken.

Figure 10: Case 4: the instruction alignment is broken.

## Acknowledgments

The authors would like to thank Patrick Jattke for providing valuable help on performing Rowhammer attacks on DDR4 DRAM modules.
2309.10441
Coreset selection can accelerate quantum machine learning models with provable generalization
Quantum neural networks (QNNs) and quantum kernels stand as prominent figures in the realm of quantum machine learning, poised to leverage the nascent capabilities of near-term quantum computers to surmount classical machine learning challenges. Nonetheless, the training efficiency challenge poses a limitation on both QNNs and quantum kernels, curbing their efficacy when applied to extensive datasets. To confront this concern, we present a unified approach: coreset selection, aimed at expediting the training of QNNs and quantum kernels by distilling a judicious subset from the original training dataset. Furthermore, we analyze the generalization error bounds of QNNs and quantum kernels when trained on such coresets, unveiling performance comparable to training on the complete original dataset. Through systematic numerical simulations, we illuminate the potential of coreset selection in expediting tasks encompassing synthetic data classification, identification of quantum correlations, and quantum compiling. Our work offers a useful way to improve diverse quantum machine learning models with a theoretical guarantee while reducing the training cost.
Yiming Huang, Huiyuan Wang, Yuxuan Du, Xiao Yuan
2023-09-19T08:59:46Z
http://arxiv.org/abs/2309.10441v2
# Coreset selection can accelerate quantum machine learning models with provable generalization

###### Abstract

Quantum neural networks (QNNs) and quantum kernels stand as prominent figures in the realm of quantum machine learning, poised to leverage the nascent capabilities of near-term quantum computers to surmount classical machine learning challenges. Nonetheless, the training efficiency challenge poses a limitation on both QNNs and quantum kernels, curbing their efficacy when applied to extensive datasets. To confront this concern, we present a unified approach: coreset selection, aimed at expediting the training of QNNs and quantum kernels by distilling a judicious subset from the original training dataset. Furthermore, we analyze the generalization error bounds of QNNs and quantum kernels when trained on such coresets, unveiling performance comparable to training on the complete original dataset. Through systematic numerical simulations, we illuminate the potential of coreset selection in expediting tasks encompassing synthetic data classification, identification of quantum correlations, and quantum compiling. Our work offers a useful way to improve diverse quantum machine learning models with a theoretical guarantee while reducing the training cost.

## 1 Introduction

Quantum neural networks (QNNs) [1, 2, 3, 4] and quantum kernels [5, 6] have emerged as pivotal models in the burgeoning field of quantum machine learning (QML) [7, 8, 9], poised to unlock the power of near-term quantum computers to address challenges that elude classical machine learning paradigms [10, 11]. The allure of these models is rooted in a fusion of theoretical advances and practical adaptability. That is, theoretical evidence showcases their superiority over classical counterparts in diverse scenarios, spanning synthetic datasets, discrete logarithmic problems, and quantum information processing tasks [6, 12, 13, 14, 15, 16], as measured by sample complexity and runtime considerations. Complementing their theoretical strength, their implementation displays flexibility, adeptly accommodating constraints posed by contemporary quantum hardware, including qubit connectivity and limited circuit depth. This convergence of theoretical promise and practical flexibility has spurred a wave of empirical investigations, substantiating the viability and potential benefits of QNNs and quantum kernels across real-world applications such as computer vision [17, 18, 19] and quantum physics [20, 21]. Despite their promising potential, QNNs and quantum kernels confront a pertinent challenge concerning training efficiency, resulting in constrained practical applicability to large-scale datasets [22]. This limitation is particularly evident due to the absence, in the majority of QNNs, of fundamental training mechanisms like back-propagation and batch gradient descent, which are imperative for the swift training of deep neural networks [23]. Similarly, the training process of quantum kernels necessitates the collection of a kernel matrix of size \(O(N^{2})\), with \(N\) being the number of training examples and each entry demanding independent evaluation via a specific quantum circuit. Consequently, the capacities of both QNNs and quantum kernels to effectively navigate vast training datasets, characterized by millions of data points, are compromised. In response to the above challenge, several research lines have emerged to enhance the training efficiency of QNNs.
The first line embarks on improving the optimizer or the initialization methods, seeking to expedite convergence towards the minimal empirical risk through a reduction in measurements and iterations [24, 25, 26, 27, 28]. Nonetheless, the non-convex nature of the loss landscape cautions against potential entrapment within saddle points during the optimization process [29, 30, 31]. The second avenue delves into feature dimension reduction techniques, enabling more streamlined utilization of quantum resources for each data point [32, 33]. However, this approach does not necessarily alleviate the overall runtime complexity and may even exacerbate it. The third research pathway navigates the realm of QNN expressivity, imposing judicious constraints to facilitate the integration of back-propagation through meticulously engineered ansatzes [34, 35]. Nevertheless, the extent to which these constrained ansatzes might impact QNN performance on unseen data remains an active, unresolved question. The endeavor to enhance the training efficiency of quantum kernels has received less attention than that of QNNs. This divergence stems from the shared observation that both classical and quantum kernels demand \(O(N^{2})\) runtime for the collection of the kernel matrix. Existing literature on enhancing the training efficiency of quantum kernels has predominantly centered on the development of advanced encoding methods [36, 37]. These methods aim to mitigate the usage of quantum resources and attenuate the manifestation of barren plateaus [38]. However, despite the various research trajectories directed at enhancing QML models, these approaches often remain model-specific and may harbor unforeseen side effects. This realization begets a pivotal inquiry: Can a unified approach be devised that systematically enhances the training efficiency of both QNNs and quantum kernels while safeguarding a steadfast theoretical guarantee?

In this study, we answer the above question affirmatively by introducing coreset selection techniques into QML. Conceptually, a coreset is an effective preprocessing approach to distill a weighted subset from a large training dataset, which guarantees that models fitting the coreset also provide a good fit for the original data. Considering that the training efficiency of both QNNs and quantum kernels hinges on the number of training examples, coresets provide a unified way to enhance their training efficiency. Moreover, the theoretical foundation of coresets is established on the recent generalization error analysis of QNNs and quantum kernels, indicating that few training examples are sufficient to attain good test performance. In this regard, we provide a rigorous analysis of the generalization ability of QNNs and quantum kernels trained on the coreset. The achieved bounds exhibit comparable generalization performance of QNN and quantum kernel learning whether they are optimized on the original dataset or on the coreset. Numerical simulations on synthetic data, identification of non-classical correlations in quantum states, and quantum circuit compilation confirm the effectiveness of our proposal.

## 2 Related works

The prior literature related to our work can be divided into two classes: algorithms for accelerating the optimization of QML models, and generalization error analysis of QML models.
In the following, we separately explain how our work relates to and differs from the previous studies.

**Acceleration algorithms for QML models**. As aforementioned, various algorithms have been introduced to expedite the optimization of QNNs rather than quantum kernels. These algorithms can be classified into three distinct categories, each addressing a different facet of optimization improvement: data feature engineering, QNN architecture design, and optimizer enhancement. In the realm of data feature engineering, the core concept revolves around implementing dimensionality reduction techniques such as principal component analysis and feature selection during the pre-processing stage [39]. This strategy effectively reduces the quantum resources required for processing data points compared to their unprocessed counterparts. Within the domain of architecture design, there exists a dual focus on encoding strategy design [40, 41] and ansatz design [42, 43, 44, 45, 46]. The underlying principle emphasizes the utilization of a minimal number of qubits, few trainable parameters, and shallower circuit depths, all geared towards achieving competent performance in learning tasks. In parallel, efforts to enhance optimizers have also garnered attention. The deployment of higher-order gradient descent or machine-learning-assisted optimizers [24, 25, 26, 27, 28] and distributed learning schemes [47] has been advocated as a means to accelerate convergence rates and wall-clock time, thereby further augmenting the efficiency of QNN optimization. Our work is complementary to the above approaches, since the reduced size of the data is independent of data feature engineering, QNN architecture design, and optimizer enhancement. In other words, QNNs trained on a coreset can be further accelerated by the above approaches. Moreover, different from prior literature that concentrates on the acceleration of QNNs, coreset selection can be directly employed to accelerate the collection of quantum kernels.

**Generalization of QML models**. Several studies have undertaken the task of quantifying the generalization error of QNNs through the lens of the foundational learning-theoretic technique known as uniform convergence [48, 49, 50, 51, 52]. In general, the resulting generalization bound adheres to the format \(O(\sqrt{p/N})\), where \(p\) denotes the number of trainable parameters and \(N\) signifies the number of training instances. For quantum kernels, the generalization error upper bound scales with \(O(\sqrt{C_{1}/N})\), where \(C_{1}\) depends on the labels of the data and the values of the quantum kernel [6]. When system noise is considered, the generalization error upper bound degrades to \(O(\sqrt{C_{1}/N}+N/(C_{2}\sqrt{m}))\), where \(m\) is the shot number and \(C_{2}\) depends on the kernel values, the shot number, and the noise level [53]. Our work leverages the above analyses to quantify how coreset selection affects the learning performance of QNNs and quantum kernels, respectively. We note that although Ref. [54] also discusses quantum coresets, its primary emphasis lies in employing fault-tolerant quantum computers to speed up the construction of coresets, a subject that falls beyond the scope of our work. In addition, Ref. [55] and Ref. [56] illustrated the potential of coresets in specific applications, i.e., clustering and image classification, without a unified view or generalization error analysis.

## 3 Preliminaries

In this section, we first introduce the concepts of machine learning and coresets.
Then, we formally introduce the mechanisms of quantum neural networks (QNNs) and quantum kernels under the supervised learning paradigm.

### Foundations of machine learning

Let \(\mathcal{S}_{t}=\{\mathbf{x}_{i},y_{i}\}_{i=1}^{N_{t}}\) be the dataset in which each paired training example \((\mathbf{x}_{i},y_{i})\) is independently and identically sampled over \(\mathcal{Z}=\mathcal{X}\times\mathcal{Y}\) with probability distribution \(p_{\mathcal{Z}}\), where \(\mathcal{X}\) is the feature space and \(\mathcal{Y}\) the label space. The goal of supervised learning algorithms is to find a hypothesis \(f:\mathcal{X}\rightarrow\mathcal{Y}\) with trainable parameters \(\mathbf{w}\) such that the true risk \(R\) on the distribution \(p_{\mathcal{Z}}\) is minimized, with \[R=\mathbb{E}_{(\mathbf{x},y)\sim p_{\mathcal{Z}}}[l(f_{\mathbf{w}}(\mathbf{x}),y)], \tag{1}\] where \(l(\cdot,\cdot)\) is the loss function used to measure the degree of fit between the output of the hypothesis and the corresponding ground truth. As the distribution \(p_{\mathcal{Z}}\) is unknown and, even if it were given, accessing all the data over \(\mathcal{Z}\) would be impractical, in practice the optimal hypothesis is estimated by optimizing \(\mathbf{w}\) to minimize the empirical risk \(R_{e}\) over the training dataset, i.e., \[R_{e}=\frac{1}{N_{t}}\sum_{(\mathbf{x}_{i},y_{i})\in\mathcal{S}_{t}}l(f_{\mathbf{w}}(\mathbf{x}_{i}),y_{i}). \tag{2}\]

### Coreset in machine learning

As learning algorithms evolve, not only are models becoming increasingly complex, but training datasets are also growing larger. With growing volumes of data, the challenge of organizing and analyzing massive data forces us to devise effective approaches to condense enormous datasets. Coreset selection, a paradigm for extracting a subset of informative samples such that the generalization performance of a model trained over this reduced set is close to that of a model trained on the entire dataset [57], is a promising solution to this issue.

**Definition 1**.: **(Coreset)** _Let \(P\) be a set of points in space \(\mathcal{V}\), and \(f\) be a monotone measure function. We call a subset \(Q\subseteq P\) an \(\epsilon\)-coreset of \(P\) if_ \[|f(Q)-f(P)|\leq\epsilon\cdot f(P). \tag{3}\]

Various coreset selection approaches have been exploited to deal with computationally intractable problems across different learning tasks and data types [58, 59]. Throughout this work, we consider a geometry-based method to construct the coreset [60]. As shown in Fig. 1, the goal of coreset construction is to find \(k\) data points as the centers \(\mathcal{C}\) such that the maximum distance between any point \(s\in\mathcal{S}\) and its nearest center is minimized, i.e., selecting \(\mathcal{C}\) such that its covering radius \(\delta_{c}\) is minimized. The optimization of finding the coreset can be formulated as \[\mathcal{S}_{c}=\arg\min_{\mathcal{C}\subseteq\mathcal{S},|\mathcal{C}|=k}\max_{\mathbf{x}_{j}\in\mathcal{S}}D(\mathbf{x}_{j},\mathbf{x}_{c}), \tag{4}\] where \(D(\mathbf{x}_{i},\mathbf{x}_{c})=\min_{\mathbf{x}_{c}\in\mathcal{C}}d(\mathbf{x}_{i},\mathbf{x}_{c})\) denotes the distance from point \(i\) to its closest center. Although finding \(\mathcal{S}_{c}\) is an NP-hard problem, there is a provable greedy algorithm that can efficiently obtain a 2-approximate solution, i.e., if \(\mathcal{C}^{*}\) is the optimal solution of Eq.
(4), we can efficiently find a solution \(\mathcal{C}\) such that \(\delta_{\mathcal{C}^{*}}\leq\delta_{\mathcal{C}}\leq 2\cdot\delta_{\mathcal{C}^{*}}\).

Figure 1: Coreset construction as a \(k\)-center problem. The red stars are \(k\) picked points covering the entire set within radius \(\delta_{c}\). In the case shown in the figure, there are 5 center data points \(P_{k}\in\mathcal{S}\) with \(k\in\{1,2,...,5\}\) such that the maximum distance from any point in \(\mathcal{D}\) to its closest center is minimized.

### Quantum neural networks

Quantum neural network (QNN) refers to a class of neural networks that leverages the power of variational quantum circuits and classical optimizers to tackle learning problems. A QNN is mainly composed of three components: feature encoding, a variational parametrized circuit, and measurement, as depicted in Fig. 2. Generally, feature encoding utilizes an encoding circuit \(U_{\mathbf{x}}\) to map the classical input \(\mathbf{x}\) into an \(n\)-qubit state \(|\mathbf{x}\rangle\); the concrete approaches to feature encoding are diverse, as outlined in [41, 61]. A variational parametrized circuit \(U_{\theta}\) evolves the feature state \(|\mathbf{x}\rangle\) to a parametrized state \(|\mathbf{x},\theta\rangle\), where the parameters \(\theta\) are tuned to minimize the training loss. Measurement extracts the processed information stored in the parametrized state \(|\mathbf{x},\theta\rangle\) into the classical register and may be combined with a post-processing operation to form the output of the QNN.

Figure 2: A general feed-forward process of quantum neural networks. The input state \(|\phi_{0}\rangle\) is first fed to the feature encoding block to map the classical data \(\mathbf{x}\) to \(|\mathbf{x}\rangle\); the variational circuit \(U_{\theta}\) then forms a parametrized state \(|\mathbf{x},\theta\rangle\), which is used for minimizing the loss function. In the end, the measurement output is used to estimate the prediction residuals.

In this work, we consider a QNN implemented by the data-reuploading strategy [40, 61, 62], alternating the feature encoding circuit \(U_{\mathbf{x}}\) and the variational circuit \(U_{\theta}\) to generate the parametrized state \(|\mathbf{x},\theta\rangle\), i.e., \[|\mathbf{x},\theta\rangle=U_{\theta}U_{\mathbf{x}}...U_{\mathbf{x}}U_{\theta}U_{\mathbf{x}}|\varphi\rangle. \tag{5}\] Without loss of generality, the feature encoding circuit and the variational circuit take the form \[U_{\mathbf{x}}=\bigotimes_{j=1}^{n}\exp{(-i\mathbf{x}_{j}H)}\quad\text{and}\quad U_{\theta}=\bigotimes_{j=1}^{n}\exp(-i\theta_{j}H)V, \tag{6}\] where \(H\in\{\sigma_{x},\sigma_{y},\sigma_{z}\}\) is a Hermitian operator and \(V\) refers to fixed gates such as a sequence of CNOT gates. Once the variational state \(|\mathbf{x},\theta\rangle\) is prepared, the measurement operator \(M\) is applied to obtain the estimated expectation value. Given the training dataset \(\mathcal{S}=\{\mathbf{x}_{i},y_{i}\}_{i=1}^{N_{t}}\), the empirical risk of the QNN over \(\mathcal{S}\) is \[R_{e}^{\text{QNN}}(\theta)=\frac{1}{|\mathcal{S}|}\sum_{\mathbf{x}_{i},y_{i}\in\mathcal{S}}l(tr[M|\mathbf{x}_{i},\theta\rangle\langle\mathbf{x}_{i},\theta|],y_{i}). \tag{7}\] A possible choice of \(l\) is the mean square error, i.e., \(l(a,b)=(a-b)^{2}\). The optimization of the QNN, i.e., the minimization of \(R_{e}^{\text{QNN}}\), can be carried out by a gradient-descent optimizer based on the parameter-shift rule [22].
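To illustrate, here is a minimal single-qubit NumPy sketch of the parameter-shift gradient (our own example, not code from the paper). Since the gates in Eq. (6) have the form \(\exp(-i\theta P)\) with a Pauli generator \(P\) (eigenvalues \(\pm 1\)), the exact gradient follows from two evaluations shifted by \(\pm\pi/4\):

```python
import numpy as np

Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

def expectation(theta: float) -> float:
    """f(theta) = <0| U(theta)^dag Z U(theta) |0> with U(theta) = exp(-i*theta*Y),
    matching the single-gate form of Eq. (6)."""
    # exp(-i*theta*Y) = cos(theta) I - i sin(theta) Y
    U = np.cos(theta) * np.eye(2) - 1j * np.sin(theta) * Y
    psi = U @ np.array([1.0, 0.0])
    return float(np.real(psi.conj() @ Z @ psi))

def parameter_shift_grad(theta: float) -> float:
    """For exp(-i*theta*P) with a Pauli generator P (eigenvalues +/-1),
    the gradient is exactly f(theta + pi/4) - f(theta - pi/4)."""
    return expectation(theta + np.pi / 4) - expectation(theta - np.pi / 4)

theta = 0.37
analytic = -2 * np.sin(2 * theta)   # here f(theta) = cos(2*theta)
assert abs(parameter_shift_grad(theta) - analytic) < 1e-12
```

Only expectation values are needed, so the same two-evaluation recipe applies on quantum hardware, where analytic derivatives are unavailable.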
### Quantum kernels

Kernel methods have been extensively studied in classical machine learning and applied to various scenarios such as face recognition and the interpretation of the dynamics of deep neural networks [63, 64]. Their quantum counterparts have also been extensively investigated from both experimental and theoretical perspectives [3, 53]. Formally, quantum kernels leverage quantum feature maps that encode the classical vector \(\mathbf{x}\) into a higher-dimensional Hilbert space to perform the kernel trick. One well-known quantum kernel function \(\kappa(\mathbf{x},\mathbf{x}^{\prime})\) is defined as the overlap between the quantum states \(|\mathbf{x}\rangle\) and \(|\mathbf{x}^{\prime}\rangle\) that encode \(\mathbf{x}\) and \(\mathbf{x}^{\prime}\) via an \(n\)-qubit feature encoding circuit \(U_{e}\), i.e., \(\kappa(\mathbf{x},\mathbf{x}^{\prime})=|\langle\mathbf{x}^{\prime}|\mathbf{x}\rangle|^{2}\) with \(|\mathbf{x}\rangle=U_{e}(\mathbf{x})|+\rangle^{\otimes N}\) and \(|+\rangle=H|0\rangle\), as shown in Fig. 3. In this work, we consider a generic quantum feature map proposed in [3], i.e., \[U_{e}(\mathbf{x})=\exp{\Big(\sum_{i}\mathbf{x}_{i}\sigma_{i}^{Z}+\sum_{i,j}(\pi-\mathbf{x}_{i})(\pi-\mathbf{x}_{j})\sigma_{i}^{Z}\sigma_{j}^{Z}\Big)}. \tag{8}\] We note that other symmetric positive-definite functions are also valid candidates for implementing quantum kernels. Different forms of the quantum kernel correspond to different feature maps in Hilbert space, which could provide quantum merits for classification tasks if designed properly [65].

Figure 3: The quantum circuit implementation of the quantum kernel \(\kappa(\mathbf{x},\mathbf{x}^{\prime})\) considered in this work. The circuit separately encodes \(\mathbf{x}\) and \(\mathbf{x}^{\prime}\), as the parameters of an \(h\)-layer variational circuit \(U\) and its adjoint \(U^{\dagger}\), into the states \(|\mathbf{x}\rangle\) and \(|\mathbf{x}^{\prime}\rangle\).

## 4 Coreset selection for QML models

This section first introduces the algorithmic implementation of coreset selection. Then, we elucidate how to use the constructed coreset to train QNNs and quantum kernels. The pseudocode of coreset selection for QML models is summarized in Algorithm 1. The main idea is to regard coreset selection as a data preprocessing strategy that aims to find \(k\) balls, centered at points \(\{\mathbf{x}_{s}\}_{s=1}^{k}\) in the data space \(\mathcal{P}\), covering the whole dataset with radius \(\delta_{c}\). Let \(\mathcal{S}=\{(\mathbf{x}_{1},y_{1}),\ldots,(\mathbf{x}_{N_{t}},y_{N_{t}})\}\) be an \(n_{c}\)-class dataset and let the hyper-parameter \(k\) satisfy \(k\gg n_{c}\). We apply the following procedure to each class to construct the coreset.

```
input : Data set S = {(x_1, y_1), ..., (x_{N_t}, y_{N_t})}, covering number k.
output: The subset S_c ⊆ S with |S_c| = k and corresponding weights {γ_s}_{s=1}^k.

1   for i ← 1 to n_c do
2       S_c^(i) ← ∅                                  /* initialize the coreset of each class */
3   end
4   for i ← 1 to n_c do
5       S_c^(i) ← {x}, x drawn at random from S^(i)  /* seed with a random point of class i */
6       while |S_c^(i)| < ⌈(|S^(i)| / |S|) · k⌉ do
7           x ← argmax_{x ∈ S^(i) \ S_c^(i)}  min_{x' ∈ S_c^(i)} d(x, x')
8           S_c^(i) ← S_c^(i) ∪ {x}
9       end
10  end
11  s ← 1
12  for j ← 1 to n_c do
13      for x ∈ S_c^(j) do
14          γ_s ← Σ_{x' ∈ S^(j)} 1[ |x − x'| ≤ δ_c^(j) ]   /* count same-class samples within radius δ_c */
15          s ← s + 1
16      end
17  end
18  S_c ← ∪_{i=1}^{n_c} S_c^(i)
19  return S_c and {γ_s}_{s=1}^k
```

**Algorithm 1** The greedy algorithm for \(k\)-center coreset selection.
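A minimal NumPy sketch of the greedy per-class selection in Algorithm 1 follows (our own illustration; the function names and toy data are hypothetical):

```python
import numpy as np

def greedy_k_center(X: np.ndarray, k: int, seed: int = 0):
    """Greedy 2-approximate k-center selection on one class.

    Returns the indices of the k selected centers and the covering radius."""
    rng = np.random.default_rng(seed)
    centers = [int(rng.integers(len(X)))]        # seed with a random point
    # distance of every point to its nearest chosen center so far
    d = np.linalg.norm(X - X[centers[0]], axis=1)
    while len(centers) < k:
        nxt = int(np.argmax(d))                  # farthest-point traversal
        centers.append(nxt)
        d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
    return centers, float(d.max())               # d.max() is the radius delta_c

def coreset_weights(X: np.ndarray, centers, radius: float):
    """Weight gamma_s = number of same-class samples within radius delta_c."""
    return [int((np.linalg.norm(X - X[c], axis=1) <= radius).sum())
            for c in centers]

# toy usage on a single class
X = np.random.default_rng(1).normal(size=(500, 8))
centers, delta_c = greedy_k_center(X, k=25)
weights = coreset_weights(X, centers, delta_c)
```

The farthest-point traversal realizes the classic 2-approximation to the optimal \(k\)-center radius, and the returned \(\delta_{c}\) feeds directly into the weight computation of Algorithm 1.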
For each class \(i\in[n_{c}]\), in the first step we randomly pick a data point \(\mathbf{x}\) from the set \(\mathcal{S}^{(i)}\subseteq\mathcal{S}\) and put it into the initially empty set \(\mathcal{S}_{c}^{(i)}\), where \(\mathcal{S}^{(i)}\) refers to the set of all training data points associated with the label \(i\). In the second step, we iteratively choose the data point of \(\mathcal{S}^{(i)}\) that is as far away as possible from the centers already in \(\mathcal{S}^{(i)}_{c}\). That is, for each class \(i\), we repeatedly find the data point \(\mathbf{x}\in\mathcal{S}^{(i)}\) that maximizes the distance \(d(\mathbf{x},\mathbf{x}^{\prime})\) to its nearest center \(\mathbf{x}^{\prime}\in\mathcal{S}^{(i)}_{c}\), and the selected data point is then appended to \(\mathcal{S}^{(i)}_{c}\). This iteration terminates when \(|\mathcal{S}^{(i)}_{c}|\geq\left\lceil\frac{|\mathcal{S}^{(i)}|}{|\mathcal{S}|}\cdot k\right\rceil\); in other words, the coreset size of each class \(i\) is restricted to be proportional to the ratio of the size of that class \(|\mathcal{S}^{(i)}|\) to the overall amount of data \(|\mathcal{S}|\). When the coreset of each class is collected, we merge these sets to create the coreset \(\mathcal{S}_{c}=\cup_{i=1}^{n_{c}}\mathcal{S}^{(i)}_{c}\) and set the weight \(\gamma_{s}\) to the number of samples covered by the coreset example \(\mathbf{x}_{s}\) within radius \(\delta_{c}\).

Once the coreset \(\mathcal{S}_{c}\) is built, we can integrate it into QML models. In the following, we introduce how it works for QNNs and quantum kernels. The pseudocode of a QNN trained over the coreset is summarized in Algorithm 2. Conceptually, we only need to replace the full training data \(\mathcal{S}\) with the coreset \(\mathcal{S}_{c}\) and rewrite the cost function by introducing the weight \(\gamma_{s}\) for each corresponding data point \(\mathbf{x}_{s}\) in the coreset \(\mathcal{S}_{c}\), i.e., \[R^{\text{QNN}}_{c}=\frac{1}{|\mathcal{S}_{c}|}\sum_{\mathbf{x}_{s},y_{s}\in\mathcal{S}_{c}}\gamma_{s}\cdot l(tr[M\cdot|\mathbf{x}_{s},\theta\rangle\langle\mathbf{x}_{s},\theta|],y_{s}), \tag{9}\] where \(|\mathbf{x}_{s},\theta\rangle\) and \(M\) are identical to those in Eq. (7).
```
input : Coreset S_c, weights {γ_s}_{s=1}^k, maximum epoch number K_max, learning rate η.
output: Trained QNN.

1   Initialize the parametrized quantum circuit U(θ)
2   l ← 1
3   while l < K_max do
4       for (x_j, y_j) ∈ S_c do
5           estimate the gradient of R_c with respect to θ, ∇_θ f(x_j, y_j)
6           θ_{l+1} ← θ_l − η · γ_j · ∇_θ f(x_j, y_j)
7       end
8       l ← l + 1
9   end
10  return the QNN tr[M · U(θ) U(x_i) ρ_0 U†(θ) U†(x_i)]
```

**Algorithm 2** QNN with coreset selection.

We next explain the implementation of a quantum-kernel-based support vector machine (SVM) classifier over the coreset. As we employ the coreset \(\mathcal{S}_{c}\) and introduce the weights \(\{\gamma_{s}\}\), there is a slight difference between the original SVM and the coreset-enhanced SVM: the Lagrange multipliers \(\alpha_{i}\) in the dual problem are upper bounded by \(C\cdot\gamma_{i}\) instead of \(C\). Mathematically, we have \[\max_{\alpha}\ \sum_{i}\alpha_{i}-\frac{1}{2}\sum_{i,j}\alpha_{i}\alpha_{j}y_{i}y_{j}K(\mathbf{x}_{i},\mathbf{x}_{j}),\quad\text{s.t.}\quad\sum_{i}y_{i}\alpha_{i}=0,\quad 0\leq\alpha_{i}\leq C\cdot\gamma_{i},\ i=1,\cdots,k, \tag{10}\] where \(K(\mathbf{x},\mathbf{x}^{\prime})=|\langle\mathbf{x}|\mathbf{x}^{\prime}\rangle|^{2}\) is the quantum kernel function. The training process is similar to that of the original SVM and is depicted in Algorithm 3, where \[W^{*}=\sum_{i}\alpha_{i}y_{i}\mathbf{x}_{i}, \tag{11}\] \[b^{*}=y_{j}-\sum_{i}\alpha_{i}y_{i}K(\mathbf{x}_{i},\mathbf{x}_{j}). \tag{12}\]

```
input : Coreset S_c, weights {γ_s}_{s=1}^k, encoding circuit U, regularization parameter C.
output: SVM with quantum kernel.

1   for x ∈ S_c do
2       for x' ∈ S_c do
3           K_{x,x'} ← |⟨0| U†(x') U(x) |0⟩|²      /* construct the kernel matrix K */
4       end
5   end
6   Solve the dual problem in Eq. (10) to obtain the optimal parameters α*
7   Calculate the parameters W*, b* via Eqs. (11) and (12)
8   return the quantum-kernel SVM f(x) = Σ_i α_i y_i K(x, x_i) + b*
```

**Algorithm 3** Quantum kernel with coreset selection.

## 5 Generalization ability of QML models under coreset selection

In machine learning, generalization analysis is a crucial theoretical tool to measure how accurately a learning model predicts previously unseen data [66]. It serves as a guiding metric to choose the best-performing model when comparing different model variations. As explained in Section 3.1, the purpose of learning algorithms is to find a hypothesis \(f\) such that the true risk \(R\) on the data distribution \(\mathcal{Z}\) is as small as possible. However, it is infeasible to directly estimate the true risk \(R\) since the distribution \(\mathcal{Z}\) is unknown; an alternative is to minimize the empirical risk \(R_{e}\) over the given samples. The generalization error quantifies the gap between the true risk \(R\) and the empirical risk \(R_{e}\), i.e., \[|R-R_{e}|=\left|\mathbb{E}_{(\mathbf{x},y)\sim p_{\mathcal{Z}}}[l(f_{\mathbf{w}}(\mathbf{x}),y)]-\frac{1}{|\mathcal{S}_{t}|}\sum_{(\mathbf{x}_{i},y_{i})\in\mathcal{S}_{t}}l(f_{\mathbf{w}}(\mathbf{x}_{i}),y_{i})\right|. \tag{13}\] Thus, the true risk \(R\) can be upper bounded by its empirical error and the generalization error, i.e., \[R\leq\underbrace{|R-R_{e}|}_{\text{generalization error}}+\underbrace{|R_{e}|}_{\text{empirical error}}. \tag{14}\]
Since the empirical error, namely the training loss, is generally close to zero, we need only consider the upper bound of the generalization error. Thus, a natural question is whether coreset selection can provide a tighter generalization bound than random sampling under the same pruned training size. To answer this question, we first define the empirical error \(R_{r}\) on a sub-training set \(\mathcal{S}_{r}\) generated by random sampling from the full dataset \(\mathcal{S}\) as \[R_{r}=\frac{1}{|\mathcal{S}_{r}|}\sum_{\mathbf{x}_{i},y_{i}\in\mathcal{S}_{r}}l(f_{\mathbf{w}}(\mathbf{x}_{i}),y_{i}), \tag{15}\] where \(|\mathcal{S}_{r}|=N_{r}\) is the size of \(\mathcal{S}_{r}\). Hence, the generalization error \(G_{r}\) over \(\mathcal{S}_{r}\) is \[G_{r}=|R-R_{r}|=\left|\mathbb{E}_{(\mathbf{x},y)\sim p_{\mathcal{Z}}}[l(f_{\mathbf{w}}(\mathbf{x}),y)]-\frac{1}{|\mathcal{S}_{r}|}\sum_{\mathbf{x}_{i},y_{i}\in\mathcal{S}_{r}}l(f_{\mathbf{w}}(\mathbf{x}_{i}),y_{i})\right|. \tag{16}\] According to previous studies on the generalization analysis of QNNs and quantum kernels [6, 52], with probability at least \(1-\delta\) over \(\mathcal{S}_{r}\), the generalization error of QNNs satisfies \[G_{r}^{\text{QNN}}\leq\mathcal{O}\left(\sqrt{\frac{m\log\left(m\right)}{N_{r}}}+\sqrt{\frac{\log\left(1/\delta\right)}{N_{r}}}\right), \tag{17}\] where \(m\) is the number of trainable parameters of the QNN, and the generalization error of quantum kernels satisfies \[G_{r}^{\text{qkernel}}\leq\mathcal{O}\left(\sqrt{\frac{\|\mathbf{w}\|^{2}}{N_{r}}}+\sqrt{\frac{\log\left(4/\delta\right)}{N_{r}}}\right). \tag{18}\] To bound the generalization error \(G_{c}\) on the coreset \(\mathcal{S}_{c}\), we similarly define the empirical error \(R_{c}\) on the coreset \(\mathcal{S}_{c}\) as \[R_{c}=\frac{1}{|\mathcal{S}_{c}|}\sum_{\mathbf{x}_{s},y_{s}\in\mathcal{S}_{c}}\gamma_{s}\cdot l(f_{\mathbf{w}}(\mathbf{x}_{s}),y_{s}), \tag{19}\] where \(|\mathcal{S}_{c}|=N_{c}\) is the size of \(\mathcal{S}_{c}\) and \(\gamma_{s}\) is the weight of each coreset example \(\mathbf{x}_{s}\), used to make \(R_{c}\) approximate the empirical risk \(R_{e}\) over the full dataset \(\mathcal{S}\). Then, we can present \(G_{c}\) as \[G_{c}=|R-R_{c}|=\left|\mathbb{E}_{(\mathbf{x},y)\sim p_{\mathcal{Z}}}[l(f_{\mathbf{w}}(\mathbf{x}),y)]-\frac{1}{|\mathcal{S}_{c}|}\sum_{\mathbf{x}_{s},y_{s}\in\mathcal{S}_{c}}\gamma_{s}\cdot l(f_{\mathbf{w}}(\mathbf{x}_{s}),y_{s})\right|. \tag{20}\] By the triangle inequality, \(G_{c}\) can be bounded as \[G_{c}=|R-R_{c}|\leq\underbrace{|R-R_{e}|}_{\text{generalization error}}+\underbrace{|R_{e}-R_{c}|}_{\text{coreset error}}. \tag{21}\] Here, we name the second term, \(|R_{e}-R_{c}|\), the coreset error; it describes the gap between the empirical loss on the full dataset and on the coreset. As the bound on the generalization error term \(|R-R_{e}|\) is given by previous works [6, 52], we need only focus on bounding the coreset error. By analyzing the coreset error, we first provide the generalization error bound of QNNs.

**Theorem 1**.: **(Generalization error bound of QNN on coreset)** _Given a sample set \(\mathcal{S}=\{\mathbf{x}_{i},y_{i}\}_{i=1}^{N_{t}}\) drawn i.i.d. from the distribution \(\mathcal{Z}\), let \(\mathcal{S}_{c}\) be a \(\delta_{c}\)-cover of \(\mathcal{S}\)._
_Assume there is a \(\lambda_{\eta}\)-Lipschitz continuous class-specific regression function \(\eta(\mathbf{x})=p(y=c|\mathbf{x})\), that the loss \(l(f_{\mathbf{w}}(\mathbf{x}_{s}),y_{s})\) over the coreset \(\mathcal{S}_{c}\) is zero, and that \(l\) is bounded by \(L\). Then, with probability \(1-\delta\), the generalization error of a QNN trained on the coreset obeys_ \[G_{c}^{\text{QNN}}\leq\mathcal{O}\left(\sqrt{\frac{m\log(m)}{N_{t}}}+\sqrt{\frac{\log(1/\delta)}{N_{t}}}+\delta_{c}\Big(\lambda_{\eta}Ln_{c}+d\sqrt{d_{\mathbf{x}}}\max_{j}|\mathbf{w}_{j}|\,|M|\big(|M|+\max|y|\big)\Big)\right), \tag{22}\] _where \(m\) is the number of parameters in the QNN, \(d\) is the number of QNN layers, \(n_{c}\) is the number of classes, \(M\) is the measurement operator, and \(d_{\mathbf{x}}\) is the feature dimension._

For the coreset error \(|R_{e}-R_{c}|\), we assume the training error on the coreset equals zero; the coreset error then becomes the average error over the entire dataset, which can be bounded with the radius \(\delta_{c}\) determined by the \(k\)-center covering problem shown in Fig. 1 and related to the data prune rate \(\zeta=|\mathcal{S}_{c}|/|\mathcal{S}|\). Combining the generalization bound on the full dataset with the bound on the risk gap between the full dataset and the coreset, the generalization error of a QNN on the coreset is mainly bounded by two terms, i.e., \(\mathcal{O}(\sqrt{m\log(m)/N_{t}}+\delta_{c})\). Thus, \(G_{c}^{\text{QNN}}\) gives a tighter bound than \(G_{r}^{\text{QNN}}\) when we carefully choose the data prune rate \(\zeta\). It is clear that \(G_{c}^{\text{QNN}}\) gives a tighter first term, since \(N_{r}<N_{t}\). For the second term, as \(\delta_{c}\) is related to \(\zeta\), a low \(\zeta\) causes \(\delta_{c}\) to become large, leading to a high approximation error; conversely, a high \(\zeta\) decreases the approximation error, but the training acceleration disappears because \(N_{c}\) approaches \(N_{t}\). We next provide the generalization error bound of quantum kernels under the coreset.

**Theorem 2**.: **(Generalization error bound of quantum kernels on coreset)** _Given a sample set \(\mathcal{S}=\{\mathbf{x}_{i},y_{i}\}_{i=1}^{N_{t}}\) drawn i.i.d. from the distribution \(\mathcal{Z}\), let \(\mathcal{S}_{c}\) be a \(\delta_{c}\)-cover of \(\mathcal{S}\). Assume there is a \(\lambda_{\eta}\)-Lipschitz continuous class-specific regression function \(\eta(\mathbf{x})=p(y=c|\mathbf{x})\), that the loss \(l(f_{\mathbf{w}}(\mathbf{x}_{s}),y_{s})\) over the coreset \(\mathcal{S}_{c}\) is zero, and that \(l\) is bounded by \(L\). Then, with probability \(1-\delta\), the generalization error of an SVM with a quantum kernel trained on the coreset obeys_ \[G_{c}^{\text{qkernel}}\leq\mathcal{O}\left(\sqrt{\frac{\|\mathbf{w}\|^{2}}{N_{t}}}+\sqrt{\frac{\log{(4/\delta)}}{N_{t}}}+\delta_{c}\Big(\lambda_{\eta}Ln_{c}+N_{c}\sqrt{d_{\mathbf{x}}}\max_{j}|\mathbf{w}_{j}|\cdot(1+(N_{q}-1)r)\Big)\right), \tag{23}\] _where \(n_{c}\) is the number of classes, \(d_{\mathbf{x}}\) is the feature dimension of \(\mathbf{x}\), \(N_{q}\) is the size of the mapped quantum state \(|\mathbf{x}\rangle\), and \(r\) is the maximum value of the feature \(\mathbf{x}\)._

Similar to the analysis of the generalization error of QNNs on the coreset, the generalization error of the SVM with a quantum kernel is also mainly bounded by two terms, i.e.,
\(\mathcal{O}(\sqrt{\|\mathbf{w}\|^{2}/N_{t}}+\delta_{c})\), which indicates results similar to those for \(G_{c}^{\text{QNN}}\). Moreover, when we employ coreset selection to reduce the number of training examples, we reduce the complexity of collecting the kernel matrix, providing an \(\mathcal{O}(k^{2})\) speedup with \(k=N_{t}/N_{c}\), while also retaining a provable generalization guarantee. It should be pointed out that it is not easy to propose a universal rule for selecting an appropriate data prune rate \(\zeta\) (which determines \(\delta_{c}\)) to achieve good generalization performance, because it depends on the distribution of the given training set \(\mathcal{S}\) and the unknown true data distribution \(\mathcal{Z}\). Nevertheless, we present numerical experiments on various datasets and models; the results not only support the analytical findings but may also provide practical advice for the selection of \(\zeta\). Consequently, if we carefully choose the data prune rate and scale the size of the training set at least quasi-linearly with the number of gates employed, we are able to accelerate quantum machine learning models with a performance guarantee. These results provide effective and practical guidance on achieving accurate and reliable performance with reasonable model complexity and sample complexity.

## 6 Numerical results

In this section, we conduct extensive numerical simulations to explore the performance of the proposed coreset selection method. Specifically, we employ our proposal to accomplish three learning tasks: synthetic data classification, quantum correlation identification, and quantum compiling.

### Synthetic data classification by quantum kernels

We first utilize the quantum kernel to classify a synthetic dataset with a coreset. The construction rule of the synthetic dataset mainly follows Ref. [6]. Specifically, given the dataset \(\{\mathbf{x}_{i},y_{i}\}_{i=1}^{N}\) independently sampled from the distribution \(\mathcal{X}\), the corresponding labels are modified according to the maximized geometric difference given by \[\max_{y\in\mathbb{R}^{N}}\frac{\sum_{i=1}^{N}\sum_{j=1}^{N}(K^{Q})_{ij}^{-1}y_{i}y_{j}}{\sum_{i=1}^{N}\sum_{j=1}^{N}(K^{C})_{ij}^{-1}y_{i}y_{j}}, \tag{24}\] where \(K^{Q}\) and \(K^{C}\) denote the quantum and classical kernels, respectively. The optimal solution of Eq. (24) yields the modified labels \(\mathbf{y}^{*}\) that maximize the geometric difference, \[\mathbf{y}^{*}=\text{sign}(\sqrt{K^{Q}}v), \tag{25}\] where \(v\) is the eigenvector of \(\sqrt{K^{Q}}(K^{C})^{-1}\sqrt{K^{Q}}\) with the maximum eigenvalue, and \(\text{sign}(\mathbf{z})\) is the element-wise function that sets the \(i\)-th element to \(+1\) if \(z_{i}>median(\mathbf{z})\) and to \(-1\) otherwise. As proved in Ref. [6], quantum kernels can achieve quantum advantages when learning this dataset. An illustration of the synthetic dataset construction is shown in Fig. 4.

Figure 4: Illustration of the employed synthetic dataset adapted from fashion-MNIST [67]. We use principal component analysis to obtain the low-dimensional representation, then embed the reduced data into quantum Hilbert space. In the end, we relabel the data by maximizing the geometric difference between classical and quantum kernels in Eq. (24).

Concretely, in our numerical simulations, the synthetic dataset is based on fashion-MNIST [67]. As the dimension of the vectorized fashion-MNIST data is too high for NISQ devices, we preprocess it into a low-dimensional representation by principal component analysis and then relabel the class of each data point according to Eq. (24). Besides, the elements of the classical kernel \(K^{C}_{ij}\) are given by the radial basis function kernel \(K^{C}_{ij}=\exp(-\frac{\|\mathbf{x}_{i}-\mathbf{x}_{j}\|^{2}}{2\sigma^{2}})\).
The quantum kernel \(K^{Q}_{ij}\) is generated by encoding the data points into the \(N_{q}\)-qubit Hilbert space through a quantum circuit \(U_{e}\), \[K^{Q}_{ij}=tr(\rho(\mathbf{x}_{i})\rho(\mathbf{x}_{j})), \tag{26}\] where \(\rho(\mathbf{x})=U_{e}(\mathbf{x})|0\rangle\langle 0|U_{e}^{\dagger}(\mathbf{x})\). We further assume the following form of \(U_{\mathbf{x}}\), implemented through quantum gates in an \(N_{q}\)-qubit circuit: \(U_{\mathbf{x}}=(U(\mathbf{x})H^{\otimes N_{q}})^{2}|0\rangle^{\otimes N_{q}}\), where \(U(\mathbf{x})=\exp(\sum_{j=1}^{N_{q}}\mathbf{x}_{j}\sigma_{j}^{Z}+\sum_{j,j^{\prime}=1}^{N_{q}}\mathbf{x}_{j}\mathbf{x}_{j^{\prime}}\sigma_{j}^{Z}\sigma_{j^{\prime}}^{Z})\).

Once the synthetic dataset was prepared, we conducted experiments on training sets of various sizes and subsequently tested on 200 unseen examples to obtain the test accuracy. Instead of independently and randomly choosing training data from the entire set \(\mathcal{S}\), we first solve the \(k\)-center problem over the set \(\mathcal{S}\) of 1000 examples, which is equivalent to forming the coreset \(\mathcal{S}_{c}\). In Fig. 5, we show the comparison of the classification performance under these settings, depicting the relation between the size of the training set, obtained using random sampling or coreset selection, and the corresponding test accuracy.

Figure 5: The comparison between the proposed model with random sampling and with coreset selection. It shows the average performance on the classification of synthetic data, where the solid lines are the test accuracy of the models trained over the coreset and over random samples. The shaded area refers to the range of the test accuracy. The red dashed line denotes the maximum accuracy achieved by random sampling.

For experiments involving the same number of training examples, we carried out five independent trials, each employing randomized initialization, resulting in the shaded areas shown in Fig. 5; the average test performance of these trials is plotted as a solid line. We also highlight the best test accuracy of random sampling as the red dashed line. From the results shown in Fig. 5, when the models are trained on a set with 250 examples (i.e., \(\zeta=0.25\)), the test accuracy of those trained on the data points obtained by coreset selection is higher than that of the model trained via random sampling. Moreover, to achieve the same test accuracy, the model needs only 250 coreset examples while requiring 400 randomly sampled examples. These results support our analysis: models trained on the coreset have better generalization performance than those trained on randomly picked data under an appropriate data prune rate. The coreset-enhanced classifier significantly improves the training efficiency, achieving competitive performance while utilizing only approximately 50% of the training examples compared to random sampling.
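To make the coreset-enhanced quantum-kernel classifier concrete, the sketch below trains the weighted SVM of Algorithm 3 on a classically simulated quantum kernel using scikit-learn. The one-qubit feature map, toy data, and weights are hypothetical stand-ins, not the paper's setup; note that scikit-learn's `sample_weight` rescales the per-sample penalty, realizing the constraint \(\alpha_{i}\leq C\cdot\gamma_{i}\) of Eq. (10).

```python
import numpy as np
from sklearn.svm import SVC

def state(x):
    """Toy 1-qubit feature map |x> = RY(x)|0> (stand-in for U_e)."""
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def quantum_kernel(A, B):
    """K(x, x') = |<x|x'>|^2, simulated classically."""
    SA = np.stack([state(x) for x in A])
    SB = np.stack([state(x) for x in B])
    return np.abs(SA @ SB.T) ** 2

# hypothetical coreset (X_c, y_c) with covering weights gamma from Algorithm 1
rng = np.random.default_rng(0)
X_c = rng.uniform(0, np.pi, size=25)
y_c = np.sign(np.cos(X_c))            # toy labels
gamma = rng.integers(1, 20, size=25)  # toy covering counts

K = quantum_kernel(X_c, X_c)
clf = SVC(C=1.0, kernel="precomputed")
# sample_weight rescales C per sample, i.e. alpha_i <= C * gamma_i
clf.fit(K, y_c, sample_weight=gamma)

X_test = rng.uniform(0, np.pi, size=10)
pred = clf.predict(quantum_kernel(X_test, X_c))
```

On hardware, the only change is that each entry of `K` would be estimated from a kernel-evaluation circuit instead of the classical simulation above.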
### Correlation identification by QNNs

Non-classical correlations play a core role in quantum information and quantum computation [68]. Nevertheless, identifying the non-classical correlations of a given quantum state is a challenging task. Ref. [69] explored classifying non-classical correlations experimentally with machine learning techniques. Consider the family of quantum states characterized by \(p\) and \(\theta\) of the following form, \[\rho_{AB}(p,\theta)=p|\psi_{\theta}\rangle\langle\psi_{\theta}|+(1-p)\frac{\mathbb{I}}{2}\otimes tr_{A}(|\psi_{\theta}\rangle\langle\psi_{\theta}|), \tag{27}\] where \(p\in(0,1)\), \(\theta\in(0,2\pi)\), and \(|\psi_{\theta}\rangle=\cos(\theta)|00\rangle+\sin(\theta)|11\rangle\). The following rules determine the non-classical correlation of the quantum state \(\rho_{AB}\), namely separable, entangled, one-way steerable, or non-local:

1. According to the PPT criterion, the states are _separable_ when \(p<\frac{1}{3}\); otherwise they are entangled.
2. When \(\frac{1}{\sqrt{2}}<p<\frac{1}{\sqrt{1+\sin^{2}(2\theta)}}\), the quantum state is _one-way steerable_.
3. When \(p>\frac{1}{\sqrt{1+\sin^{2}(2\theta)}}\), the state is _non-local_.

Therefore, to identify the non-classical correlation of a given state under the learning framework, we label the quantum states with their correlation type according to the above criteria and create the dataset \(\{\rho_{j}^{AB},y_{j}\}_{j=1}^{N}\), where \(y_{j}\) represents the type of correlation, i.e., separable, entangled, one-way steerable, or non-local. Since fewer training samples mean less runtime, we apply coreset selection to this learning task to further enhance the method. For classifying the quantum correlations, we uniformly pick the parameters \(p\in(0,1)\) and \(\theta\in(0,2\pi)\) to generate 1000 quantum states as the full training set, as shown in subfigure (a) of Fig. 6, and then label the correlation class according to the criteria listed above.

Figure 6: Results related to correlation identification. (a) The states are represented as dots in polar plots. The radius of the polar plot represents the parameter \(p\), which varies from 0 to 1; the phase stands for the parameter \(\theta\), which varies from 0 to \(2\pi\). The blue and orange dots indicate the separable and entangled states, respectively. (b) The comparison of test accuracy between the classifier with random sampling and with coreset selection under different sizes of the training set.

Here, we continue with the same experimental setup as in the previous section, examining the test accuracy of models trained over various sample sizes on 200 unseen random samples. The red dashed lines in subfigures (i) and (ii) denote the maximum test accuracy attained by the classifier through random sampling, with a maximum of 900 training samples. When utilizing the coreset method to prune the dataset, for sample sizes exceeding 180, the average test accuracy obtained with the coreset is almost always higher than the best accuracy achieved by random sampling. This indicates that when \(\zeta\geq 0.18\), coreset selection provides better performance than training on randomly sampled data, which matches our theoretical findings.

### Quantum compiling by QNN

Compiling a unitary into a sequence of gates is a challenging and high-profile task for NISQ quantum devices. Given the limitations of current NISQ hardware, compiling a unitary should not only account for its functionality but also consider the connectivity and depth of the output circuit. Recently, various methods for quantum compiling have been proposed
under the framework of variational quantum algorithms [42, 52, 70]. In general, these algorithms cast the compiling task as an optimization problem over a given compact quantum gate set consisting of fixed and parametrized quantum gates. The goal is to optimize the structure and gate parameters such that the proposed quantum circuit approximates the given unitary. Here, we consider a method that tackles the compiling task via a quantum machine learning protocol. Given an \(n\)-qubit target unitary \(U\), the training data consist of random input states and their corresponding outputs when \(U\) is applied, i.e., \(\{|\psi_{j}\rangle,U|\psi_{j}\rangle\}_{j=1}^{N}\). To approximate the target unitary \(U\), one can simply minimize the empirical loss of the squared trace distance between the target states \(U|\psi_{j}\rangle\) and the parametrized output states \(V(\mathbf{\theta})|\psi_{j}\rangle\) over randomly sampled states in Hilbert space.

Concretely, we randomly pick quantum gates from a gate pool consisting of single-qubit parametrized gates \(\{R_{x},R_{y},R_{z}\}\) and the CNOT gate to build the target quantum circuit \(U\). We then set the inputs to random states \(|\psi_{i}\rangle\) in Hilbert space and obtain training pairs \(\{|\psi_{j}\rangle,U|\psi_{j}\rangle\}_{j=1}^{1000}\). In subfigure (b) of Fig. 7, we present the comparison between the models trained on random sampling and on the coreset with various data sizes and pruning ratios. We separately estimate the performance of the model with \(\zeta=\{0.8,0.4,0.6,0.2\}\); each data point with a different color on the plot corresponds to a different \(\zeta\). For the compiling task on a specific unitary, we found that 1000 training examples are redundant, and the effective size of training data scales linearly with the system size, similar to the findings of previous work [52]. Around 20 examples are sufficient to train the variational compiler to a reasonable performance for a 6-qubit system. Since the randomly sampled training points are likely to be uniformly located in Hilbert space, and the proposed coreset selection approach also uses \(k\) centers to uniformly cover the entire set, the compilers trained on the coreset with different \(\zeta\) have performance similar to those trained on the randomly sampled set.

Figure 7: Results of unitary compiling. (a) The target unitary exploited in numerical simulations. (b) The performance comparison between the different prune ratios \(\zeta\) of the coreset on the compiling task. The solid dark and light lines represent the models trained on the coreset and on random samples. The vertical axis represents the percentage of states in the test set for which the trace distance to their corresponding targets is below \(10^{-5}\).
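For pure states, the squared trace distance equals the infidelity, \(D(U|\psi\rangle,V(\mathbf{\theta})|\psi\rangle)^{2}=1-|\langle\psi|U^{\dagger}V(\mathbf{\theta})|\psi\rangle|^{2}\), so the compiling loss can be prototyped in a few lines of NumPy. The 2-qubit target, ansatz, and parameters below are our own illustrative choices, not the circuits used in the simulations.

```python
import numpy as np

def haar_state(n_qubits, rng):
    """Random pure state via a normalized complex Gaussian vector."""
    v = rng.normal(size=2**n_qubits) + 1j * rng.normal(size=2**n_qubits)
    return v / np.linalg.norm(v)

def compiling_loss(U, V, states):
    """Mean squared trace distance between U|psi> and V|psi>;
    for pure states, D^2 = 1 - |<psi|U^dag V|psi>|^2 (the infidelity)."""
    return float(np.mean(
        [1 - np.abs(np.vdot(U @ s, V @ s)) ** 2 for s in states]))

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

rng = np.random.default_rng(0)
U = np.kron(ry(0.7), ry(-0.3)) @ CNOT                 # "unknown" target
V = lambda th: np.kron(ry(th[0]), ry(th[1])) @ CNOT   # trial ansatz V(theta)

states = [haar_state(2, rng) for _ in range(20)]
print(compiling_loss(U, V([0.7, -0.3]), states))  # ~0 at the true parameters
print(compiling_loss(U, V([0.0,  0.0]), states))  # > 0 away from them
```

Coreset selection then simply replaces `states` with the \(k\)-center subset, with each per-state term weighted by its \(\gamma_{s}\).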
Our findings highlight the considerable enhancement in model training achieved through the coreset method. Data pruning improves training efficiency and also helps filter out noisy data, thereby enhancing model performance. Selecting a sparser coreset enforces a more rigorous upper bound on the number of trainable gates and on the appropriate data prune rate; the size of the training set should scale at least quasi-linearly with the number of gates. These findings provide effective and practical guidelines for achieving accurate and reliable results with a reasonable configuration of gates and training data. Although we have improved model training efficiency through data pruning, there is still significant room for improvement. For instance, introducing influence functions could better characterize the impact of data variations on the model, thus achieving more precise data filtering. We hope the work presented in this article offers valuable insights and guidance, from both theoretical and practical aspects, for future research in quantum machine learning on NISQ devices.
2309.17448
SMPLer-X: Scaling Up Expressive Human Pose and Shape Estimation
Expressive human pose and shape estimation (EHPS) unifies body, hands, and face motion capture with numerous applications. Despite encouraging progress, current state-of-the-art methods still depend largely on a confined set of training datasets. In this work, we investigate scaling up EHPS towards the first generalist foundation model (dubbed SMPLer-X), with up to ViT-Huge as the backbone and training with up to 4.5M instances from diverse data sources. With big data and the large model, SMPLer-X exhibits strong performance across diverse test benchmarks and excellent transferability to even unseen environments. 1) For the data scaling, we perform a systematic investigation on 32 EHPS datasets, including a wide range of scenarios that a model trained on any single dataset cannot handle. More importantly, capitalizing on insights obtained from the extensive benchmarking process, we optimize our training scheme and select datasets that lead to a significant leap in EHPS capabilities. 2) For the model scaling, we take advantage of vision transformers to study the scaling law of model sizes in EHPS. Moreover, our finetuning strategy turn SMPLer-X into specialist models, allowing them to achieve further performance boosts. Notably, our foundation model SMPLer-X consistently delivers state-of-the-art results on seven benchmarks such as AGORA (107.2 mm NMVE), UBody (57.4 mm PVE), EgoBody (63.6 mm PVE), and EHF (62.3 mm PVE without finetuning). Homepage: https://caizhongang.github.io/projects/SMPLer-X/
Zhongang Cai, Wanqi Yin, Ailing Zeng, Chen Wei, Qingping Sun, Yanjun Wang, Hui En Pang, Haiyi Mei, Mingyuan Zhang, Lei Zhang, Chen Change Loy, Lei Yang, Ziwei Liu
2023-09-29T17:58:06Z
http://arxiv.org/abs/2309.17448v3
# SMPLer-X: Scaling Up Expressive Human Pose and Shape Estimation

###### Abstract

Expressive human pose and shape estimation (EHPS) unifies body, hands, and face motion capture with numerous applications. Despite encouraging progress, current state-of-the-art methods still depend largely on a confined set of training datasets. In this work, we investigate scaling up EHPS towards the first _generalist_ foundation model (dubbed **SMPLer-X**), with up to ViT-Huge as the backbone and training with up to 4.5M instances from diverse data sources. With big data and the large model, SMPLer-X exhibits strong performance across diverse test benchmarks and excellent transferability to even unseen environments. _1) For the data scaling_, we perform a systematic investigation on 32 EHPS datasets, including a wide range of scenarios that a model trained on any single dataset cannot handle. More importantly, capitalizing on insights obtained from the extensive benchmarking process, we optimize our training scheme and select datasets that lead to a significant leap in EHPS capabilities. _2) For the model scaling,_ we take advantage of vision transformers to study the scaling law of model sizes in EHPS. Moreover, our finetuning strategy turns SMPLer-X into _specialist_ models, allowing them to achieve further performance boosts. Notably, our foundation model SMPLer-X consistently delivers state-of-the-art results on seven benchmarks such as AGORA (107.2 _mm_ NMVE), UBody (57.4 _mm_ PVE), EgoBody (63.6 _mm_ PVE), and EHF (62.3 _mm_ PVE without finetuning). 2 Footnote 2: Homepage: [https://caizhongang.github.io/projects/SMPLer-X/](https://caizhongang.github.io/projects/SMPLer-X/).

## 1 Introduction

The recent progress in expressive human pose and shape estimation (EHPS) from monocular images or videos offers transformative applications for the animation, gaming, and fashion industries. This task typically employs parametric human models (_e.g._, SMPL-X [51]) to adeptly represent the highly complicated human body, face, and hands. In recent years, a large number of diverse datasets have entered the field [5; 8; 63; 68; 39; 3; 14; 16; 64; 9], providing the community with new opportunities to study various aspects such as capture environment, pose distribution, body visibility, and camera views. Yet, the state-of-the-art methods remain tethered to a limited selection of these datasets, creating a bottleneck in performance across varied scenarios and hindering the ability to generalize to unseen situations. Our mission in this study is to explore existing data resources comprehensively, providing key insights crucial for establishing robust, universally applicable models for EHPS. Accordingly, we establish the first systematic benchmark for EHPS, utilizing 32 datasets and evaluating their performance across five major benchmarks. We find that there are significant inconsistencies among benchmarks, revealing the overall complicated landscape of EHPS, and calling for data scaling to combat the domain gaps between scenarios. This detailed examination emphasizes the need to reassess the utilization of available datasets for EHPS, advocating for a shift towards more competitive alternatives that offer superior generalization capabilities, and highlights the importance of harnessing a large number of datasets to capitalize on their complementary nature. Moreover, we systematically investigate the contributing factors that determine the transferability of these datasets.
Our investigation yields useful tips for future dataset collection: 1) the more is not necessarily, the merrier: datasets do not have to be very large to be useful as long as they exceed approximately 100K instances based on our observation. 2) Varying indoor scenes is a good alternative if an in-the-wild (including outdoor) collection is not viable. 3) synthetic datasets, despite having traceable domain gaps, are becoming increasingly potent to a surprising extent. 4) Pseudo-SMPL-X labels are useful when ground truth SMPL-X annotations are unavailable. Equipped with the knowledge procured from the benchmark, we exhibit the strength of massive data with SMPLer-X, a _generalist_ foundation model that is trained using a diverse range of datasets and achieves exceptionally balanced results across various scenarios. To decouple from algorithmic research works, we design SMPLer-X with a minimalist mindset: SMPLer-X has a very simple architecture with only the most essential components for EHPS. We hope SMPLer-X could facilitate massive data and parameter scaling and serve as a baseline for future explorations in the field instead of a stringent investigation into the algorithmic aspect. Experiments with various data combinations and model sizes lead us to a well-rounded model that excels across all benchmarks that contests the community norm of limited-dataset training. Specifically, our foundation models demonstrate significant performance boost through both data scaling and model size scaling, reducing the mean primary errors on five major benchmarks (AGORA [50], UBody [39], EgoB-ody [68], 3DPW [58], and EHF [51]) from over 110 mm to below 70 mm (demonstrated in Fig. 1), and showcases impressive generalization capabilities by effectively transferring to new scenarios, such as DNA-Rendering [9] and ARCTIC [14]. Furthermore, we validate the efficacy of finetuning our _generalist_ foundation models to evolve into domain-specific _specialists_, delivering outstanding performance on all benchmarks. Specifically, we follow the same data selection strategy that empowers our specialist models to set new records on the AGORA leaderboard by being the first model to hit 107.2mm in NMVE (an 11.0% improvement) and achieving SOTA performance on EgoBody, UBody, and EHF. Our contributions are three-fold. **1)** We build the first systematic and comprehensive benchmark on EHPS datasets, which provides critical guidance for scaling up the training data toward robust and transferable EHPS. **2)** We explore both data and model scaling in building the _generalist_ foundation model that delivers balanced results across various scenarios and extends successfully to unseen datasets. **3)** We extend the data selection strategy to finetune the foundation model into potent _specialists_, catering to various benchmark scenarios. ## 2 Related Work **Expressive Human Pose and Shape Estimation (EHPS).** Due to the erupting 3D virtual human research applications [66; 67; 21; 20; 7] and the parametric models (e.g., SMPL [42] and SMPL-X [51]), capturing the human pose and shape (HPS) [28; 33; 30; 31; 38; 59; 60], and additionally hands and face (EHPS) [51; 61; 10; 53; 70; 15; 56; 65] from images and videos have attracted increasing Figure 1: **Scaling up EHPS. Both data and model scaling are effective in reducing mean errors on primary metrics across key benchmarks: AGORA [50], UBody [39], EgoBody [68], 3DPW [58] and EHF [51]. OSX [39] and H4W [46] are SOTA methods. 
Area of the circle indicates model size, with ViT variants as the reference (top right).** attention. Optimization-based methods (e.g., SMPLify-X [51]) detect 2D features corresponding to the whole body and fit the SMPL-X model. However, they suffer from slow speed and are ultimately limited by the quality of the 2D keypoint detectors. Hence, learning-based models are proposed. One of the key challenges of EHPS is the low resolution of hands and face compared with the body-only estimation, making the articulated hand pose estimation and high-quality expression capture difficult. Accordingly, mainstream whole-body models first detect and crop the hands and face image patches, then resize them to higher resolutions and feed them into specific hand and face networks to estimate the corresponding parameters [10; 53; 70; 15; 56; 46; 65; 35]. Due to the highly complex multi-stage pipelines, they inevitably cause inconsistent and unnatural articulation of the mesh and implausible 3D wrist rotations, especially in occluded, truncated, and blurry scenes. Recently, OSX [39] proposes the first one-stage framework based on ViT-based backbone [13] to relieve the issues in previous multi-stage pipelines. This method provides a promising and concise way to scale up the model. However, they only use confined training datasets for a fair comparison and do not explore the combination of more data toward generalizable and precise EHPS. **Multi-dataset Training for Human-centric Vision.** Recent efforts have been using multiple datasets in pretraining a general model for a wide range of downstream human-centric tasks. For example, HumanBench [57] leverages 37 datasets, whereas UniHCP [11] utilizes 33 datasets for tasks such as ReID, pedestrian detection, and 2D pose estimation. However, these works have only evaluated the efficacy of 2D tasks. Sarandi _et al._[54] take advantage of 28 datasets in training a strong model for 3D keypoint detection, which recovers only the skeleton of subjects without estimating body shapes and meshes. Pang _et al._[49] analyze 31 datasets for human pose and shape estimation (_i.e._, SMPL estimation). However, hands and face estimation is not included, and only fewer than ten datasets are used concurrently in the most diverse training. This paper targets to scale training data and model size for EHPS, that simultaneously recovers the expressive pose and shape of the human body, hands, and face. ## 3 Benchmarking EHPS Datasets ### Preliminaries **SMPL-X.** We study expressive human pose and shape estimation via 3D parametric human model SMPL-X [51], which models the human body, hands, and face geometries with parameters. Specifically, our goal is to estimate pose parameters \(\theta\in\mathbb{R}^{55\times 3}\) that include body, hands, eyes, and jaw poses; joint body, hands and face shape \(\beta\in\mathbb{R}^{10}\), and facial expression \(\psi\in\mathbb{R}^{10}\). The joint regressor \(\mathcal{J}\) is used to obtain 3D keypoints from parameters via \(R_{\theta}(\mathcal{J}(\beta))\) where \(R_{\theta}\) is a transformation function along the kinematic tree. Figure 2: **Dataset attribute distributions.** a) and d) are image feature extracted by HumanBench [57] and OSX [39] pretrained ViT-L backbone. b) Global orientation (represented by rotation matrix) distribution. c) Body pose (represented by 3D skeleton joints) distribution. Both e) scenes and f) Real/Synthetic are drawn on the same distribution as d). All: all datasets. 
UMAP [43] dimension reduction is used with the x and y-axis as the dimensions of the embedded space (no unit).

**Evaluation Metrics.** We use standard metrics for EHPS. PVE (per-vertex error) and MPJPE (mean per-joint position error) measure the mean L2 error for vertices and regressed joints, respectively. The "PA" prefix indicates Procrustes Alignment is conducted before error computation. The AGORA Leaderboard [50] introduces NMVE (normalized mean vertex error) and NMJE (normalized mean joint error), which take the detection performance (F1 score) into consideration. Moreover, we propose MPE (mean primary error), which takes the mean of multiple primary metrics (MPJPE for the 3DPW [58] test set, and PVE for AGORA, UBody, EgoBody, and EHF) to gauge generalizability. All errors are reported in millimeters (mm).
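As a concrete reading of the metric definitions above, the following is a minimal NumPy sketch (function names are ours) of MPJPE and its Procrustes-aligned variant; PVE is the same computation applied to corresponding mesh vertices instead of joints. Inputs are assumed to be matched \((J,3)\) arrays in millimeters.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint L2 error for (J, 3) arrays, in the input unit (mm)."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

def procrustes_align(pred, gt):
    """Similarity alignment (scale, rotation, translation) of pred onto gt."""
    mu_p, mu_g = pred.mean(axis=0), gt.mean(axis=0)
    p, g = pred - mu_p, gt - mu_g                  # center both point sets
    U, S, Vt = np.linalg.svd(p.T @ g)              # SVD of the cross-covariance
    if np.linalg.det(Vt.T @ U.T) < 0:              # guard against reflections
        Vt[-1] *= -1
        S[-1] *= -1
    R = Vt.T @ U.T                                 # optimal rotation
    scale = S.sum() / (p ** 2).sum()               # optimal isotropic scale
    return scale * p @ R.T + mu_g

def pa_mpjpe(pred, gt):
    """MPJPE after Procrustes alignment (the 'PA' prefix above)."""
    return mpjpe(procrustes_align(pred, gt), gt)
```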
### Overview of Data Sources

In this work, we study three major types of datasets. 1) Motion capture datasets that leverage optical [23; 16; 14; 17; 18; 44] or vision-based [68; 19; 9; 7] multi-view motion capture systems are typically collected in a studio environment, though it is possible to include an outdoor setup or utilize additional sensors such as IMUs [58]. These datasets generally provide high-quality 3D annotations but are less flexible due to physical constraints, especially those built with immobile capture systems that require accurate sensor calibrations. 2) Pseudo-annotated datasets [40; 1; 36; 39; 45; 29; 69; 2; 25; 48; 64; 55] re-annotate existing image datasets with parametric human annotations [26; 47; 39]. These datasets take advantage of the diversity of 2D datasets, and the pseudo-3D annotations, albeit typically not as high-quality, have been proven effective [49; 33; 26]. 3) Synthetic datasets [5; 8; 32; 50; 63] are produced with rendering engines (_e.g._, Unreal Engine). These datasets provide the most accurate 3D annotations and can easily scale up with high diversity. However, the synthetic-real gap is not fully addressed. Key attributes of the datasets are included in Table 1.

Table 1: Key attributes, primary metrics, and rankings of the 32 benchmarked EHPS datasets on the five evaluation benchmarks, with MPE for cross-dataset comparison.

To evaluate the EHPS capability across diverse scenarios, we select multiple key datasets to form a comprehensive benchmark. They should possess desirable traits such as 1) having accurate SMPL or SMPL-X annotations, 2) being representative of certain aspects of real-life scenarios, 3) being widely used (this requirement is relaxed for new datasets released within two years), and 4) having a clearly defined test set. To this end, five datasets (AGORA [50], UBody [39], EgoBody [68], 3DPW [58], and EHF [51]) representing different aspects are selected as the evaluation datasets. We briefly introduce these five datasets here and the rest in the Supplementary Material.

**AGORA** is the most widely-used benchmark for SMPL-X evaluation. It is a synthetic dataset featuring diverse subject appearances, poses, and environments with high-quality annotation. We evaluate on both the validation and test set (leaderboard), as the latter has a monthly limit of submissions. **UBody** is the latest large-scale dataset with pseudo-SMPL-X annotations that covers fifteen real-life scenarios, such as talk shows, video conferences, and vlogs, which primarily consist of the upper body in images. We follow the intra-scene protocol in training and testing, where all scenarios are seen. **EgoBody** captures human motions in social interactions in 3D scenes with pseudo-SMPL-X annotations. It comprises a first-person egocentric set (EgoSet) and a third-person multi-camera set (MVSet). We test on the EgoSet with heavy truncation and invisibility. **3DPW** is the most popular in-the-wild dataset with SMPL annotations. Since SMPL-X annotation is not available, we map SMPL-X keypoints and test on 14 LSP [24] keypoints following the conventional protocol [28; 33]. **EHF** is a classic dataset with 100 curated frames of one subject in an indoor studio setup, with diverse body poses and especially hand poses annotated in SMPL-X vertices. It has a test set but no training or validation sets; hence, it is only used to evaluate cross-dataset performance.

Besides being popular or the latest evaluation sets for EHPS, we further analyze whether these five datasets collectively provide wide coverage of existing datasets. In Fig. 2, we randomly downsample all datasets to equal length (1K examples) and employ UMAP [43] to visualize several key aspects. We use pretrained ViT-L from HumanBench [57] and OSX [39] to process patch tokens flattened as feature vectors from images cropped by bounding boxes. HumanBench is trained for various human-centric tasks (_e.g._, Re-ID, part segmentation, and 2D pose estimation), whereas OSX is an expert model on EHPS. As for global orientation, it is closely associated with camera pose as we convert all data into the camera coordinate frame; we plot its distribution using flattened rotation matrix representations. Moreover, we follow [52; 8; 49] to represent poses as 3D keypoints regressed from the parametric model. Specifically, we flatten 21 SMPL-X body keypoints, and 15 hand keypoints from each hand, regressed with zero parameters except for the body pose and hand poses. It is shown that 1) the five benchmark datasets have varied distributions, which is expected due to their different designated purposes, and 2) collectively, the five datasets provide a wide, near-complete coverage of the entire dataset pool.
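The embedding protocol above can be sketched as follows; this assumes per-dataset feature matrices have already been extracted (e.g., flattened ViT patch tokens, rotation matrices, or regressed keypoints) and relies on the umap-learn package. The helper name and data layout are our assumptions, not the released implementation.

```python
import numpy as np
import umap  # pip install umap-learn

def embed_features(features_by_dataset):
    """Jointly embed {dataset_name: (N, D) array} into one shared 2-D space."""
    names = list(features_by_dataset)
    stacked = np.concatenate([features_by_dataset[n] for n in names])
    embedded = umap.UMAP(n_components=2, random_state=0).fit_transform(stacked)
    # Split the shared embedding back into per-dataset chunks for plotting.
    out, start = {}, 0
    for n in names:
        size = len(features_by_dataset[n])
        out[n] = embedded[start:start + size]
        start += size
    return out
```

Fitting a single UMAP over the concatenated pool, rather than one per dataset, is what makes the per-dataset point clouds directly comparable in Fig. 2.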
### Benchmarking on Individual Datasets

In this section, we aim to benchmark datasets and find those that do well in various scenarios. To gauge the performance of each dataset, we train a SMPLer-X model with the training set of that dataset and evaluate the model on the _val/testing_ sets of the five evaluation datasets: AGORA, UBody, EgoBody, 3DPW, and EHF. Here, the benchmarking model is standardized to use ViT-S as the backbone, trained on 4 V100 GPUs for 5 epochs with a total batch size of 128 and a learning rate of \(1\times 10^{-5}\). The dataset preprocessing details are included in the Supplementary Material. In Table 1, we report the primary metrics (Sec. 3.1) and rankings of the 32 datasets; the complete results are in the Supplementary Material. We also compute the mean primary error (MPE) to facilitate easy comparison between individual datasets. Note that for AGORA, UBody, EgoBody, and 3DPW, their performances on their own test sets are excluded from computing MPE. This is because in-domain evaluation results are typically much better than cross-domain ones, leading to significant error drops. In addition, note that some datasets are designed for specific purposes (_e.g._, Talkshow [64] for gesture generation, DNA-Rendering [9] for human NeRF reconstruction); their lower ranking on our benchmark, which focuses on EHPS (a perception task), does not reduce their unique value and contributions to the computer vision community. From the benchmark, we observe that models trained on a single dataset tend to perform well on the same domain but often cannot do well on other domains. For example, the model trained on AGORA is ranked \(1^{st}\) on AGORA (val), but \(6^{th}\) on UBody, \(6^{th}\) on EgoBody, \(12^{th}\) on 3DPW, and \(24^{th}\) on EHF. This observation indicates that 1) the test scenarios are diverse, showcasing the challenging landscape of EHPS, and 2) data scaling is essential for training a robust and transferable model for EHPS due to the significant gaps between different domains.

### Analyses on Dataset Attributes

In this section, we study the attributes that contribute to generalizability. However, such analyses are not straightforward: the attributes often exhibit coupled effects. Consequently, counter-examples are inevitable (_e.g._, we observe that InstaVariety, an in-the-wild dataset, demonstrates strong performance, whereas LSPET, another in-the-wild dataset, does not perform as well).
Despite the challenges in pinpointing the exact factors that determine the success of an individual dataset, we adopt a collective perspective and aim to identify general trends along several key factors [49, 41, 8] in Fig. 3, as discussed below. **First**, Fig. 3a) shows that the performance of a dataset (in terms of ranking) is not strongly associated with the number of training instances once the instance number exceeds approximately 100K. Although a very small amount of training data is insufficient to train a strong model, having an exceedingly large amount of data does not guarantee good performance either. For example, MSCOCO comprises only 149.8K training instances but achieves a higher ranking than datasets with 10\(\times\) larger scales. This may be attributed to the diverse appearance and complex scenes present in the MSCOCO dataset. Hence, it is more cost-effective to channel resources into improving diversity and quality once the dataset has become adequately large. **Second**, we categorize datasets into 1) in-the-wild, which contains data from diverse environments; 2) indoor, with several scenes; and 3) studio, which has a fixed multi-view setup. Particularly, Fig. 3b) shows that the top 10 are mostly in-the-wild datasets, indoor datasets concentrate in the top 20, and studio datasets tend to be ranked lower in the benchmark. Moreover, Fig. 2e) illustrates that in-the-wild datasets exhibit the most diverse distribution, covering both indoor and studio datasets. Indoor datasets display a reasonable spread, and studio datasets have the least diversity. Our findings validate previous studies that suggest an indoor-outdoor domain gap [27]. Differing from Pang _et al._[49], which does not differentiate between indoor and studio datasets, we argue that categorizing all datasets collected indoors into a single class oversimplifies the analysis. For example, consider EgoBody [68] and Human3.6M [23]. Neither dataset has outdoor data; however, EgoBody consists of a wide variety of indoor scenes, whereas Human3.6M consists of only one scene, which may contribute to the better ranking of EgoBody compared to Human3.6M. Hence, this suggests that in-the-wild data collection is the most ideal, but diversifying indoor scenes is the best alternative. **Third**, most of the five contemporary synthetic datasets [5, 63, 50, 32, 8] demonstrate surprising strength and are ranked highly in Fig. 3c). It is worth noting that four (UBody, EgoBody, 3DPW, and EHF) of the five evaluation benchmarks used are real datasets, indicating that knowledge learned from synthetic data is transferable to real scenarios. To explain this observation, we take a close look at Fig. 2f): although real and synthetic datasets do not have extensive overlap, synthetic data possesses two ideal characteristics. First, there is a high overlap between real and synthetic data at the rightmost cluster. Referring to Fig. 2e), which is drawn from the same distribution, we find that this cluster primarily represents in-the-wild data. Therefore, synthetic data includes a substantial number of in-the-wild images that closely resemble real in-the-wild scenarios. Second, synthetic data also has scatters of image features on other clusters, indicating that synthetic data provides coverage to some extent for various real-world scenarios. **Fourth**, Fig. 3d) reveals that a dataset can be valuable with accurate or pseudo-SMPL-X annotations, as they constitute most of the top 10 datasets.
A prominent example is InstaVariety [29], which has only pseudo-SMPL-X annotations produced by NeuralAnnot [47], yet is ranked third in our benchmark. However, due to the differences in parameter spaces, SMPL annotations are less effective: it is observed that datasets with SMPL annotations tend to cluster in the lower bracket of the benchmark, especially those with pseudo-SMPL annotations. This observation suggests that SMPL-X annotations are critical to EHPS; fitting pseudo labels is a useful strategy even if they could be noisy. Moreover, using SMPL labels effectively for SMPL-X estimation remains a challenge.

Figure 3: **Analysis on dataset attributes.** We study the impact of a) the number of training instances, b) scenes, c) real or synthetic appearance, and d) annotation type, on dataset ranking in Table 1.

## 4 Scaling up EHPS

### Model Architectures

Catering to our investigation, we design a minimalistic framework (dubbed SMPLer-X) that only retains the most essential parts, for two reasons. First, it must be scalable and efficient as we train with a large amount of data. Second, we aim to create a framework that is decoupled from specific algorithm designs, providing a clean foundation for future research. To this end, SMPLer-X consists of three parts: a _backbone_ that extracts image features, for which we employ a Vision Transformer [13] for its scalability; a _neck_ that predicts bounding boxes and crops regions of interest from the feature map for hands and face; and regression _heads_ that estimate parameters for each part. Note that SMPLer-X does not require third-party detectors [53], cross-part feature interaction modules [10; 15], projection of coarse SMPL-X estimations [65], or a heavy decoder [39]. As the design of SMPLer-X is not the focus of our investigation, more details are included in the Supplementary Material.

Figure 4: **Architecture of SMPLer-X**, which upholds the idea that "simplicity is beauty". SMPLer-X contains a backbone that allows for easy investigation on model scaling, a neck for hand and face feature cropping, and heads for different body parts. Note that we wish to show in this work that model and data scaling are effective, even with a straightforward architecture.

### Training the Generalist Foundation Models

The SOTA methods [39; 46] usually train with only a few datasets (_e.g._, MSCOCO, MPII, and Human3.6M), whereas we investigate training with many more. However, we highlight that the dataset benchmark in Table 1 cannot be used here: selecting datasets based on their performance on the test sets of the evaluation benchmarks leaks information about the test sets. Hence, we construct another dataset benchmark in the Supplementary Material that ranks individual datasets on the _training_ sets of the major EHPS benchmarks. We use four data amounts: 5, 10, 20, and 32 datasets as the training set, with total lengths of 0.75M, 1.5M, 3.0M, and 4.5M instances, and we always prioritize higher-ranked datasets. To prevent larger datasets from shadowing smaller datasets, we adopt a balanced sampling strategy: all selected datasets are uniformly upsampled or downsampled to the same length and add up to the designated total length, as sketched below.
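The following is a minimal sketch of this balanced sampling strategy; the helper and its signature are illustrative assumptions rather than the released implementation.

```python
import random

def balanced_indices(dataset_sizes, total_length, seed=0):
    """Give every dataset an equal share of `total_length` training slots.

    Sampling with replacement upsamples small datasets and downsamples
    large ones; returns a shuffled list of (dataset_id, sample_id) pairs.
    """
    rng = random.Random(seed)
    per_dataset = total_length // len(dataset_sizes)
    pairs = [(d, rng.randrange(n))
             for d, n in enumerate(dataset_sizes)
             for _ in range(per_dataset)]
    rng.shuffle(pairs)
    return pairs

# E.g., 20 selected datasets and a 3.0M-instance budget: 150K samples each.
```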
To facilitate training, we follow OSX [39] to use AGORA, UBody, MPII, 3DPW, and Human3.6M in COCO format [40], and standardize all other datasets into the HumanData [12] format. We also study four ViT backbones of different sizes (ViT-Small, Base, Large, and Huge), pretrained by ViTPose [62]. The training is conducted on 16 V100 GPUs, with a total batch size of 512 (256 for ViT-Huge) for 10 epochs. More training details, such as adapting SMPL or gendered SMPL-X annotations in the training, are included in the Supplementary Material.

| #Datasets | #Inst. | Model | #Param. | FPS | AGORA [50] | EgoBody [68] | UBody [39] | 3DPW [58] | EHF [51] | MPE |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 5 | 0.75M | SMPLer-X-S5 | 32M | 36.2 | 119.0 | 114.2 | 110.1 | 110.2 | 100.5 | 110.8 |
| 10 | 1.5M | SMPLer-X-S10 | 32M | 36.2 | 116.0 | 88.6 | 107.7 | 97.4 | 89.9 | 99.9 |
| 20 | 3.0M | SMPLer-X-S20 | 32M | 36.2 | 109.2 | 84.3 | 70.7 | 87.5 | 86.6 | 87.7 |
| 32 | 4.5M | SMPLer-X-S32 | 32M | 36.2 | 105.2 | 82.5 | 68.1 | 83.2 | 74.1 | 82.6 |
| 5 | 0.75M | SMPLer-X-B5 | 103M | 33.1 | 102.7 | 108.1 | 105.8 | 104.8 | 96.1 | 103.5 |
| 10 | 1.5M | SMPLer-X-B10 | 103M | 33.1 | 97.8 | 76.4 | 107.3 | 89.9 | 74.7 | 89.2 |
| 20 | 3.0M | SMPLer-X-B20 | 103M | 33.1 | 95.6 | 75.5 | 65.3 | 83.5 | 73.0 | 78.6 |
| 32 | 4.5M | SMPLer-X-B32 | 103M | 33.1 | 88.0 | 72.7 | 63.3 | 80.3 | 67.3 | 74.3 |
| 5 | 0.75M | SMPLer-X-L5 | 327M | 24.4 | 88.3 | 98.7 | 110.8 | 97.8 | 89.5 | 97.0 |
| 10 | 1.5M | SMPLer-X-L10 | 327M | 24.4 | 82.6 | 69.7 | 104.0 | 82.5 | 64.0 | 80.6 |
| 20 | 3.0M | SMPLer-X-L20 | 327M | 24.4 | 80.7 | 66.6 | 61.5 | 78.3 | 65.4 | 70.5 |
| 32 | 4.5M | SMPLer-X-L32 | 327M | 24.4 | 74.2 | 62.2 | 57.3 | 75.2 | 62.4 | 66.2 |
| 5 | 0.75M | SMPLer-X-H5 | 662M | 17.5 | 89.0 | 87.4 | 102.1 | 88.3 | 68.3 | 87.0 |
| 10 | 1.5M | SMPLer-X-H10 | 662M | 17.5 | 81.4 | 65.7 | 100.7 | 78.7 | **56.6** | 76.6 |
| 20 | 3.0M | SMPLer-X-H20 | 662M | 17.5 | 77.5 | 63.5 | 59.9 | **74.4** | 59.4 | 67.0 |
| 32 | 4.5M | SMPLer-X-H32 | 662M | 17.5 | **69.5** | **59.5** | **54.5** | 75.0 | 56.8 | **63.1** |

Table 2: **Foundation Models.** We study the scaling law of the amount of data and the model sizes. The metrics are MPJPE for 3DPW and PVE for the other evaluation benchmarks. Foundation models are named "SMPLer-X-MN", where M indicates the size of the ViT backbone (S, B, L, H) and N is the number of datasets used in the training. FPS: inference speed (frames per second) on a V100 GPU. MPE: mean primary error. AGORA uses the validation set, and EgoBody uses the EgoSet.

In Table 2, we show experimental results with various numbers of datasets and foundation model sizes. Foundation models are named "SMPLer-X-MN", where M can be S, B, L, or H, indicating the size of the ViT backbone, and N indicates the number of datasets used in training.
For example, SMPLer-X-L10 means the foundation model takes ViT-L as the backbone and is trained with the Top 10 datasets (ranked according to individual dataset performance on the training sets of the key evaluation benchmarks). It is observed that **1)** more training data (data scaling) leads to better performance in terms of MPE. The model performance improves gradually as the number of training datasets increases. However, besides the increment in training instances, more datasets provide a richer collection of diverse scenarios, which we argue is also a key contributor to the performance gain across evaluation benchmarks. **2)** A larger foundation model (model scaling) performs better at any given amount of data. However, the marginal benefits of scaling up decrease beyond model size L. Specifically, ViT-H has more than twice the parameters of ViT-L, but the performance gain is not prominent. **3)** The foundation model always performs better than in-domain training on a single training set. For example, SMPLer-X-B20 performs better on the validation set of AGORA, and the test sets of UBody, EgoBody, and 3DPW, than models trained specifically on the corresponding training set in Table 1. This is useful for real-life applications: instead of training a model for each use case, a generalist foundation model contains rich knowledge to be a one-size-fits-all alternative.
Table 3: **AGORA test set.** † denotes methods finetuned on the AGORA training set; * denotes methods trained on the AGORA training set only.

Table 4: **AGORA val set.** † and * denote methods finetuned on, and trained only on, the AGORA training set, respectively.

Table 5: **EHF.** As EHF does not have a training set to benchmark datasets, we do not perform finetuning. Moreover, EHF is not seen in our training and can be used to validate our foundation models' transferability.

Table 10: **3DPW.** ‡ denotes methods that use a head for SMPL regression; † and * denote methods finetuned on the 3DPW training set and trained on the 3DPW training set only, respectively. Unit: mm.

Besides the errors on key benchmarks, we also report the inference speed (in terms of FPS, or frames per second) of the SMPLer-X model family in Table 2.
The testing is conducted on a single V100 GPU with batch size 1, excluding data loading. The SMPLer-X family is faster than OSX (12.2 FPS on a single V100 GPU) under the same test setting, and the smaller versions such as SMPLer-X-S and SMPLer-X-B achieve real-time performance, with SMPLer-X-L on the verge of real-time speeds. The high inference speed is attributed to the minimalistic architecture of SMPLer-X, which only retains the most essential components for EHPS. Moreover, we show detailed by-part results for body, hands, and face on the main benchmarks, i.e., the AGORA test set (Table 3), AGORA validation set (Table 4), UBody (Table 6), EgoBody-EgoSet (Table 7), and EHF (Table 5). We also compare our results with whole-body methods on 3DPW (Table 10). We highlight that the foundation models show strong and balanced performance on all benchmarks. Furthermore, we evaluate the transferability of our foundation models on two more benchmarks: ARCTIC (Table 8) and DNA-Rendering (Table 9). ARCTIC features complicated hand-object interaction with whole-body annotations, and DNA-Rendering includes diverse subjects, motions, and garments. Note that ARCTIC is not seen by foundation models trained on the Top 10 datasets, and DNA-Rendering is not seen by foundation models trained on the Top 20 datasets. The foundation models, however, achieve much better performance than SOTAs with conventional data sampling strategies. In addition, we compare our foundation model with SOTA methods, such as Hand4Whole [46] and OSX [39], in various scenarios in Fig. 5. These scenarios feature challenging aspects such as heavy truncation (from only half of the body visible to only the arms visible), difficult body poses in diverse backgrounds, and rare camera angles (extremely high or low elevation angles). SMPLer-X demonstrates the strength of massive training data and consistently produces robust estimations. Finally, to turn the generalist foundation models into specialists, we conduct finetuning experiments on ViT-L to match the backbone of the current SOTA [39]. The results are shown in the same tables as the foundation models (Tables 3, 4, 5, 6, 7, and 10), where finetuning always leads to substantial performance enhancement over the foundation models.

## 5 Conclusion

In this work, we benchmark datasets for EHPS, which provides us with insights for training and finetuning a foundation model. Our work is useful in three ways. First, our pretrained model (especially the backbone) can be a plug-and-play component of a larger system for EHPS and beyond. Second, our benchmark serves to gauge the performance of future generalization studies. Third, our benchmarking-finetuning paradigm can be useful for the rapid adaptation of any foundation model to specific scenarios. Specifically, users may collect a training set, evaluate pretrained models of various other datasets on it, and select the most relevant datasets to finetune a foundation model. **Limitations.** First, although we use five comprehensive benchmark datasets to gauge the generalization capability, they may still be insufficient to represent the real-world distribution. Second, our experiments do not fully investigate the impact of various model architectures due to the prohibitive cost of training the foundation model. **Potential negative societal impact.** As we study training strong EHPS models and release the pretrained models, they may be used for unwarranted surveillance or privacy violation.
Figure 5: **Visualization.** We compare SMPLer-X-L32 with OSX [39] and Hand4Whole [46] (trained with the MSCOCO, MPII, and Human3.6M) in various scenarios such as those with heavy truncation, hard poses, and rare camera angles. ## Acknowledgement This study is supported under the RIE2020 Industry Alignment Fund - Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s). The project is also supported by NTU NAP and Singapore MOE AcRF Tier 2 (MOET2EP20221-0012).
2309.08568
Denoising Diffusion Probabilistic Models for Hardware-Impaired Communications
Generative AI has received significant attention among a spectrum of diverse industrial and academic domains, thanks to the magnificent results achieved from deep generative models such as generative pre-trained transformers (GPT) and diffusion models. In this paper, we explore the applications of denoising diffusion probabilistic models (DDPMs) in wireless communication systems under practical assumptions such as hardware impairments (HWI), low-SNR regime, and quantization error. Diffusion models are a new class of state-of-the-art generative models that have already showcased notable success with some of the popular examples by OpenAI1 and Google Brain2. The intuition behind DDPM is to decompose the data generation process over small ``denoising'' steps. Inspired by this, we propose using denoising diffusion model-based receiver for a practical wireless communication scheme, while providing network resilience in low-SNR regimes, non-Gaussian noise, different HWI levels, and quantization error. We evaluate the reconstruction performance of our scheme in terms of mean-squared error (MSE) metric. Our results show that more than 25 dB improvement in MSE is achieved compared to deep neural network (DNN)-based receivers. We also highlight robust out-of-distribution performance under non-Gaussian noise.
Mehdi Letafati, Samad Ali, Matti Latva-aho
2023-09-15T17:35:50Z
http://arxiv.org/abs/2309.08568v2
# Denoising Diffusion Probabilistic Models for Hardware-Impaired Communications

###### Abstract

Generative AI has received significant attention among a spectrum of diverse industrial and academic domains, thanks to the magnificent results achieved from deep generative models such as generative pre-trained transformers (GPT) and diffusion models. In this paper, we explore the applications of denoising diffusion probabilistic models (DDPMs) in wireless communication systems under practical assumptions such as hardware impairments (HWI), low-SNR regime, and quantization error. Diffusion models are a new class of state-of-the-art generative models that have already showcased notable success, with some of the popular examples by OpenAI1 and Google Brain2. The intuition behind DDPM is to decompose the data generation process over small "denoising" steps. Inspired by this, we propose using a denoising diffusion model-based receiver for a practical wireless communication scheme, while providing _network resilience_ in low-SNR regimes, non-Gaussian noise, different HWI levels, and quantization error. We evaluate the reconstruction performance of our scheme in terms of the mean-squared error (MSE) metric. Our results show that more than \(25\) dB improvement in MSE is achieved compared to deep neural network (DNN)-based receivers. We also highlight _robust out-of-distribution_ performance under non-Gaussian noise. AI-native wireless, diffusion models, generative AI, network resilience, wireless AI. Footnote 1: [https://openai.com/dall-e-2](https://openai.com/dall-e-2) Footnote 2: [https://imagen.research.google/](https://imagen.research.google/)

## I Introduction

The emergence of innovative approaches in generative artificial intelligence (GenAI) has led to the development of novel ideas for AI-based systems [1]. At the same time, from a data communication and networking perspective, "connected intelligence" is envisioned as the most significant driving force in the sixth generation (6G) of communications--machine learning (ML) and AI algorithms are envisioned to be widely incorporated into 6G networks, realizing "AI-native" wireless systems [2, 3]. This underscores the need for novel AI/ML-based solutions to be tailored for the emerging communication scenarios [4, 5]. Most of the research carried out so far on AI-native wireless has been focused on "discriminative models". One can consider [6] and [7] as two seminal papers that have attracted remarkable attention in both academia and industry. From a very high-level perspective, the goal of such models is to simply learn the "boundaries" between classes or latent spaces of high-dimensional signals. On the other hand, "generative models" aim to learn the "representations" of signals, and generate the desired samples accordingly. In this paper, we take a radically different approach, and our aim is to unleash the power of GenAI for wireless systems. One of the recent breakthroughs in GenAI is the evolution of diffusion models, as the new state-of-the-art family of generative models [8]. It has led to unprecedented results in different applications such as computer vision, natural language processing (NLP), and medical imaging [9].
The key idea behind diffusion models is that _if we could develop an ML model that can learn the systematic decay of information then it should be possible to "reverse" the process and recover the information back from the noisy/erroneous data._ The close underlying relation between the key concepts on how diffusion models work and the problems in wireless communications has motivated us to carry out this research. Notably, _the incorporation of diffusion models into wireless communication problems is still in its infancy, hoping that this paper would shed light on some of the possible directions._ Ongoing research on diffusion models encompasses both theoretical advancements and practical applications across different domains of computer science society. However, there have been only a few papers in wireless communications literature that have started looking into the potential merits of diffusion models for wireless systems [5, 10, 11, 12, 13]. The authors in [5] study a workflow for utilizing diffusion models in wireless network management. They exploit the flexibility and exploration ability of diffusion models for generating contracts using diffusion models in mobile AI-generated content services as a use-case. Diffusion models are utilized in [10] to generate synthetic channel realizations. The authors tackle the problem of differentiable channel model within the training process of end-to-end ML-based communications. The results highlight the performance of diffusion models as an alternative to generative adversarial network (GAN)-based schemes. The authors show that GANs experience unstable training and less diversity in generation performance, while diffusion models maintain a more stable training process and a better generalization in inference. Noise-conditioned neural networks are employed in [11] for channel estimation in multi-input-multi-output (MIMO) wireless communications. The authors employ RefineNet neural architecture and run posterior sampling to generate channel estimations based on the pilot signals observed at the receiver. The results in [11] highlight a competitive performance for both in- and out-of-distribution (OOD) scenarios compared to GANs. In [12], deep learning-based joint source-channel coding (Deep-JSCC) is combined with diffusion models to complement digital communication schemes with a generative component. The results indicate that the perceptual quality of reconstruction can be improved by employing diffusion models. Despite the close relation between the "denoise-and-generate" characteristics of diffusion models and the problems in communication theory, to the best of our knowledge, there is only one preprint [13] in the literature that studies the application of denoising diffusion models in wireless to help improve the receiver's performance in terms of noise removal. The authors utilize a diffusion model and call it channel denoising diffusion model (CDDM). However, the paper has some drawbacks, which need further considerations. Specifically, i) the authors do not evaluate the performance of their CDDM module under realistic scenarios such as hardware impairments (HWI), low-SNR regimes, and non-Gaussian noise. Rather, their goal is to simply compensate for the channel noise and equalization errors. ii) To reconstruct and generate samples, the authors implement another neural decoder block at the receiver in addition to CDDM. 
However, having two different ML models, each maintaining a distinct objective function can impose computational overhead to the network. iii) The proposed scheme in [13] relies on the channel state information (CSI) knowledge at the diffusion module, and the transmitter has to feed back the CSI data to the receiver, which can cause communication overhead and might not be aligned with communication standards. _Our Work:_ In this paper, we study the implementation of denoising diffusion probabilistic models (DDPM), proposed by Ho _et al._ in 2020 [8], for a practical wireless communication systems with HWIs. The key idea of DDPM is to decompose the data generation process into subsequent "denoising" steps and then gradually generating the desired samples out of noise. Inspired by the recent visions on AI-native wireless for 6G [2], our general idea in this paper is that instead of designing a communication system which avoids HWI and estimation/decoding errors, we can train the network to handle such distortions, aiming to introduce _native resilience_ for wireless AI [2, 3, 4]. In our proposed approach, a DDPM is employed at the receiver side (without relying on any other conventional autoencoder in contrast to [13]) to enhance the resilience of the wireless system against practical non-idealities such as HWI and quantization error. Our DDPM is parameterized by a neural network (NN) comprised of conditional linear layers. Inspired by the Transformer paper (Vaswani _et al._, 2017 [14]), we only employ one model for the entire denoising time-steps by incorporating the time embeddings into the model. After the diffusion model is trained to generate data samples out of noise, the receiver runs the reverse diffusion process algorithm, starting from the distorted received signals, to reconstruct the transmitted data. We demonstrate the _resilience_ of our DDPM-based approach in different scenarios, including low-SNR regimes, non-Gaussian noise, different HWI levels, and quantization error, which were not addressed in [13]. We also evaluate the mean-squared error (MSE) of our scheme, highlighting more than \(25\) dB performance improvement compared to the deep neural network (DNN)-based receiver of [7] as one of the promising benchmark designs in ML-based communications systems. In the following sections, we first introduce the concept of DDPMs together with the main formulas in Section II. Our system model is introduced in Section III, where we provide the generic formulations and the details of the neural architecture and algorithms. Then, we study numerical evaluations in Section IV, and conclude the paper in Section V.3 Footnote 3: _Notations:_ Vectors and matrices are represented, respectively, by bold lower-case and upper-case symbols. \(|\cdot|\) and \(||\cdot||\) respectively denote the absolute value of a scalar variable and the \(\ell_{2}\) norm of a vector. Notation \(\mathcal{N}(\mathbf{x};\mathbf{\mu},\mathbf{\Sigma})\) stands for the multivariate normal distribution with mean vector \(\mathbf{\mu}\) and covariance matrix \(\mathbf{\Sigma}\) for a random vector \(\mathbf{x}\). Similarly, complex normal distribution with the corresponding mean vector and covariance matrix is denoted by \(\mathcal{CN}(\mathbf{\mu},\mathbf{\Sigma})\). Moreover, the expected value of a random variable (RV) is denoted by \(\mathbb{E}\left[\cdot\right]\) Sets are denoted by calligraphic symbols. \(\mathbf{0}\) and \(\mathbf{I}\) respectively show all-zero vector and identity matrix. 
Moreover, \([N]\) (with \(N\) an integer) denotes the set of all integer values from \(1\) to \(N\), and \(\text{Unif}[N]\), \(N>1\), denotes the discrete uniform distribution with samples between \(1\) and \(N\).

## II Preliminaries on DDPMs

Diffusion models are a new class of generative models that are inspired by non-equilibrium thermodynamics [8]. They consist of two diffusion processes, i.e., the forward and the reverse process. During the forward diffusion steps, random perturbation noise is purposefully added to the original data. Then, in a reverse process, DDPMs learn to construct the desired data samples out of noise. Let \(\mathbf{x}_{0}\) be a data sample from some distribution \(q(\mathbf{x}_{0})\). For a finite number, \(T\), of time-steps, the forward diffusion process \(q(\mathbf{x}_{t}|\mathbf{x}_{t-1})\) is defined by adding Gaussian noise at each time-step \(t\in[T]\) according to a known "variance schedule" \(0<\beta_{1}<\beta_{2}<\cdots<\beta_{T}<1\). This can be formulated as \[q(\mathbf{x}_{t}|\mathbf{x}_{t-1}) \sim\mathcal{N}(\mathbf{x}_{t};\sqrt{1-\beta_{t}}\mathbf{x}_{t-1 },\beta_{t}\mathbf{I}), \tag{1}\] \[q(\mathbf{x}_{1:T}|\mathbf{x}_{0}) =\prod_{t=1}^{T}q(\mathbf{x}_{t}|\mathbf{x}_{t-1}). \tag{2}\] Invoking (2), the data sample gradually loses its distinguishable features as the time-step goes on, where with \(T\rightarrow\infty\), \(\mathbf{x}_{T}\) approaches an isotropic Gaussian distribution with covariance matrix \(\mathbf{\Sigma}=\sigma^{2}\mathbf{I}\) for some \(\sigma>0\)[8]. According to (1), each new sample at time-step \(t\) can be drawn from a conditional Gaussian distribution with mean vector \(\mathbf{\mu}_{t}=\sqrt{1-\beta_{t}}\mathbf{x}_{t-1}\) and covariance matrix \(\mathbf{\Sigma}_{t}=\beta_{t}\mathbf{I}\). Hence, the forward process is realized by sampling from a Gaussian noise \(\mathbf{\epsilon}_{t-1}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) and setting \[\mathbf{x}_{t}=\sqrt{1-\beta_{t}}\mathbf{x}_{t-1}+\sqrt{\beta_{t}}\mathbf{ \epsilon}_{t-1}. \tag{3}\] A useful property of the forward process in (3) is that we can sample \(\mathbf{x}_{t}\) at any arbitrary time-step \(t\) in closed form, by recursively applying the reparameterization trick from the ML literature [15]. This results in \[\mathbf{x}_{t} =\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}} \mathbf{\epsilon}_{0}, \tag{4}\] \[q(\mathbf{x}_{t}|\mathbf{x}_{0}) \sim\mathcal{N}\left(\mathbf{x}_{t};\sqrt{\bar{\alpha}_{t}} \mathbf{x}_{0},(1-\bar{\alpha}_{t})\mathbf{I}\right), \tag{5}\] where \(\bar{\alpha}_{t}\!=\!\prod_{i=1}^{t}\alpha_{i}\) and \(\alpha_{t}=1-\beta_{t}\)[15]. Now the problem is to reverse the process in (4) and sample from \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t})\), so that we regenerate the true samples from \(\mathbf{x}_{T}\). According to [8], for \(\beta_{t}\) small enough, \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) also follows a Gaussian distribution \(\forall t\in[T]\). However, we cannot estimate this distribution directly, since doing so requires knowing the distribution of all possible data samples (or, equivalently, exploiting the entire dataset). Hence, to approximate the conditional probabilities and run the reverse diffusion process, we need to learn a probabilistic model \(p_{\mathbf{\theta}}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) that is parameterized by \(\mathbf{\theta}\).
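For illustration, the closed-form sampling in (4) takes only a few lines of PyTorch; the linear schedule endpoints below follow common DDPM practice [8] and are an assumption, not values taken from this paper.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)          # variance schedule beta_1..beta_T
alpha_bar = torch.cumprod(1.0 - betas, dim=0)  # alpha_bar_t = prod_{i<=t} alpha_i

def forward_diffuse(x0, t):
    """Sample x_t ~ q(x_t | x_0) in closed form, Eqs. (4)-(5)."""
    eps = torch.randn_like(x0)                          # epsilon_0 ~ N(0, I)
    a = alpha_bar[t].view(-1, *([1] * (x0.dim() - 1)))  # broadcast over the batch
    return a.sqrt() * x0 + (1.0 - a).sqrt() * eps, eps
```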
Now the problem is to reverse the process in (4) and sample from \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t})\), so that we regenerate the true samples from \(\mathbf{x}_{T}\). According to [8], for \(\beta_{t}\) small enough, \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) also follows a Gaussian distribution \(\forall t\in[T]\). However, we cannot estimate this distribution directly, since doing so requires knowing the distribution of all possible data samples (or equivalently, exploiting the entire dataset). Hence, to approximate the conditional probabilities and run the reverse diffusion process, we need to learn a probabilistic model \(p_{\mathbf{\theta}}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) that is parameterized by \(\mathbf{\theta}\). According to the above explanations, the following expressions can be written: \[p_{\mathbf{\theta}}(\mathbf{x}_{t-1}|\mathbf{x}_{t}) \sim\mathcal{N}(\mathbf{x}_{t-1};\mathbf{\mu}_{\mathbf{\theta}}(\mathbf{x}_{t},t),\mathbf{\Sigma}_{\mathbf{\theta}}(\mathbf{x}_{t},t)), \tag{6}\] \[p_{\mathbf{\theta}}(\mathbf{x}_{0:T}) =p(\mathbf{x}_{T})\prod_{t=1}^{T}p_{\mathbf{\theta}}(\mathbf{x}_{t-1}|\mathbf{x}_{t}). \tag{7}\] Hence, the problem simplifies to learning the mean vector \(\mathbf{\mu}_{\mathbf{\theta}}(\mathbf{x}_{t},t)\) and the covariance matrix \(\mathbf{\Sigma}_{\mathbf{\theta}}(\mathbf{x}_{t},t)\) for the probabilistic model \(p_{\mathbf{\theta}}(\cdot)\), where an NN can be trained to approximate (learn) the reverse process. Before proceeding with the details of learning the reverse diffusion process, we note that if we condition the reverse process on \(\mathbf{x}_{0}\), this conditional probability becomes tractable. The intuition behind this could be explained as follows. _A painter (our generative model) requires a reference image \(\mathbf{x}_{0}\) to be able to gradually draw a picture._ Hence, when we have \(\mathbf{x}_{0}\) as a reference, we can take a small step backwards from noise to generate the data samples. Then, the reverse step is formulated as \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0})\). Mathematically speaking, we can derive \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0})\) using Bayes' rule as follows: \[q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0})\sim\mathcal{N}(\mathbf{x}_{t-1};\ \tilde{\mathbf{\mu}}(\mathbf{x}_{t},\mathbf{x}_{0},t),\tilde{\beta}_{t}\mathbf{I}), \tag{8}\] where \[\tilde{\mathbf{\mu}}(\mathbf{x}_{t},\mathbf{x}_{0},t) =\frac{\sqrt{\alpha_{t}}(1-\bar{\alpha}_{t-1})}{1-\bar{\alpha}_{t}}\mathbf{x}_{t}+\frac{\sqrt{\bar{\alpha}_{t-1}}\beta_{t}}{1-\bar{\alpha}_{t}}\mathbf{x}_{0},\ \ \text{and} \tag{9}\] \[\tilde{\beta}_{t} =\frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_{t}}\beta_{t}. \tag{10}\] Invoking (10), one can infer that the covariance matrix in (8) has no learnable parameter. Hence, we simply need to learn the mean vector \(\tilde{\mathbf{\mu}}(\mathbf{x}_{t},\mathbf{x}_{0},t)\). To further simplify (9), we note that thanks to the reparameterization trick and with a similar approach to (4), we can express \(\mathbf{x}_{0}\) as follows: \[\mathbf{x}_{0}=\frac{1}{\sqrt{\bar{\alpha}_{t}}}(\mathbf{x}_{t}-\sqrt{1-\bar{\alpha}_{t}}\mathbf{\epsilon}_{t}). \tag{11}\] Substituting \(\mathbf{x}_{0}\) in (9) by (11) results in \[\tilde{\mathbf{\mu}}(\mathbf{x}_{t},\mathbf{x}_{0},t)=\frac{1}{\sqrt{\alpha_{t}}}\Big{(}\mathbf{x}_{t}-\frac{1-\alpha_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\mathbf{\epsilon}_{t}\Big{)}. \tag{12}\] Now we can learn the conditional probability distribution \(p_{\mathbf{\theta}}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) of the reverse diffusion process by training an NN that approximates \(\tilde{\mathbf{\mu}}(\mathbf{x}_{t},\mathbf{x}_{0},t)\). Therefore, we simply need to set the approximated mean vector \(\mathbf{\mu}_{\mathbf{\theta}}(\mathbf{x}_{t},t)\) to have the same form as the target mean vector \(\tilde{\mathbf{\mu}}(\mathbf{x}_{t},\mathbf{x}_{0},t)\). Since \(\mathbf{x}_{t}\) is known at time-step \(t\), we can reparameterize the NN to make it approximate \(\mathbf{\epsilon}_{t}\) from the input \(\mathbf{x}_{t}\).
Compiling these facts results in the following expression for \(\mathbf{\mu}_{\mathbf{\theta}}(\mathbf{x}_{t},t)\): \[\mathbf{\mu}_{\mathbf{\theta}}(\mathbf{x}_{t},t)=\frac{1}{\sqrt{\alpha_{t}}}\Big{(}\mathbf{x}_{t}-\frac{1-\alpha_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\mathbf{\epsilon}_{\mathbf{\theta}}(\mathbf{x}_{t},t)\Big{)}, \tag{13}\] where \(\mathbf{\epsilon}_{\mathbf{\theta}}(\mathbf{x}_{t},t)\) denotes our NN. We now define the loss function \(\mathcal{L}_{t}\), aiming to minimize the difference between \(\mathbf{\mu}_{\mathbf{\theta}}(\mathbf{x}_{t},t)\) and \(\tilde{\mathbf{\mu}}(\mathbf{x}_{t},\mathbf{x}_{0},t)\): \[\mathcal{L}_{t} =\mathbb{E}_{\begin{subarray}{c}t\sim\text{Unif}[T]\\ \mathbf{x}_{0}\sim q(\mathbf{x}_{0})\\ \mathbf{\epsilon}_{t}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\end{subarray}}\Big{[}\|\mathbf{\epsilon}_{t}-\mathbf{\epsilon}_{\mathbf{\theta}}(\mathbf{x}_{t},t)\|^{2}\Big{]}\] \[=\mathbb{E}_{\begin{subarray}{c}t\sim\text{Unif}[T]\\ \mathbf{x}_{0}\sim q(\mathbf{x}_{0})\\ \mathbf{\epsilon}_{t}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\end{subarray}}\Big{[}\|\mathbf{\epsilon}_{t}-\mathbf{\epsilon}_{\mathbf{\theta}}(\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\mathbf{\epsilon}_{t},t)\|^{2}\Big{]}. \tag{14}\] Invoking (14), at each time-step \(t\), the DDPM model takes \(\mathbf{x}_{t}\) as input and returns the distortion components \(\mathbf{\epsilon}_{\mathbf{\theta}}(\mathbf{x}_{t},t)\). Moreover, \(\mathbf{\epsilon}_{t}\) denotes the diffused noise term at time-step \(t\).
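For concreteness, one stochastic training step minimizing (14) can be sketched in PyTorch as follows; the `model(x_t, t)` interface, the 0-indexed time-steps, and all names are illustrative assumptions rather than the exact implementation used here.

```python
import torch

def ddpm_training_step(model, x0, alpha_bars, optimizer):
    """One stochastic optimization step on the loss in (14)."""
    B, T = x0.shape[0], alpha_bars.shape[0]
    t = torch.randint(1, T + 1, (B,))                             # t ~ Unif[T]
    eps = torch.randn_like(x0)                                    # eps ~ N(0, I)
    a_bar = alpha_bars[t - 1].view(B, *([1] * (x0.dim() - 1)))
    x_t = torch.sqrt(a_bar) * x0 + torch.sqrt(1.0 - a_bar) * eps  # forward jump, cf. (4)
    loss = torch.mean((eps - model(x_t, t - 1)) ** 2)             # MSE between true and predicted noise
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```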
## III System Model and Proposed Scheme

### _Problem Formulation_

Consider a point-to-point communication system with non-ideal transmitter and receiver hardware. We denote by \(s_{k}\in\mathbb{C}\) the \(k\)-th element, \(k\in[K]\), in the batch (of size \(K\)) of unit-power data samples that are supposed to be transmitted over the air. The wireless channel between the communication entities follows a block-fading model and is represented by a complex-valued scalar \(h_{k}\in\mathbb{C},\forall k\in[K]\), taking independent realizations within each coherence block. The corresponding received signal \(y_{k}\) under non-linear HWIs is formulated as \[y_{k}=h_{k}(\sqrt{p}s_{k}+\eta_{k}^{t})+\eta_{k}^{r}+n_{k}, \tag{15}\] where \(p\) denotes the transmit power, and \(\eta_{k}^{t}\sim\mathcal{CN}(0,\kappa^{t}p)\) is the distortion noise caused by the transmitter hardware with the corresponding impairment level \(\kappa^{t}\). Moreover, \(\eta_{k}^{r}\) reflects the hardware distortion at the receiver, with \(\kappa^{r}\) showing the level of impairment at the receiver hardware. Notably, \(\eta_{k}^{r}\) is conditionally Gaussian, given the channel realization \(h_{k}\), with \(\eta_{k}^{r}\sim\mathcal{CN}(0,\kappa^{r}p|h_{k}|^{2})\) (see Footnote 4). To further explain (15), we emphasize that according to [16], the distortion noise caused at each radio frequency (RF) device is proportional to its signal power. In addition to the receiver noise \(n_{k}\) that models random fluctuations in the electronic circuits of the receiver, a fixed portion of the information signal is turned into _distortion noise_ due to inter-carrier interference induced by phase noise, leakage from the mirror subcarrier under I/Q imbalance, nonlinearities in power amplifiers, etc. [16].

Footnote 4: This is an experimentally-validated model for HWIs, which is widely adopted in the wireless communication literature [16].

**Remark 1**: _The power of distortion noise is proportional to the signal power \(p\) and the channel gain \(|h_{k}|^{2}\). According to [16], the HWI levels, \(\kappa^{t}\) and \(\kappa^{r}\), characterize the proportionality coefficients and are related to the error vector magnitude (EVM). Following [16], we consider \(\kappa\)-parameters in the range \([0,0.15^{2}]\) in our simulations, where smaller values imply less-impaired transceiver hardware._

**Remark 2**: _The additive noise term \(n_{k}\sim\mathcal{CN}(0,\sigma^{2})\) in (15) could be interpreted as the aggregation of independent receiver noise and interference from simultaneous transmissions of other nodes. In this paper, for the sake of notational brevity, we have used a common notation \(n_{k}\) for the aggregate effect of interference and noise. The investigation of interference signals will be addressed in our subsequent works._

Adhering to the "batch-processing" nature of AI/ML algorithms and AI/ML-based wireless communications simulators [17], we formulate the data signaling expressions in matrix format. We define a batch of data samples by \(\mathbf{s}\stackrel{{\Delta}}{{=}}[s_{1},\cdots,s_{K}]^{\mathsf{T}}\) with the underlying distribution \(\mathbf{s}\sim q(\mathbf{s})\). We also define the corresponding vector of channel realizations as \(\mathbf{h}\stackrel{{\Delta}}{{=}}[h_{1},\ldots,h_{K}]^{\mathsf{T}}\). Similarly, \(\boldsymbol{\eta}^{r}\stackrel{{\Delta}}{{=}}[\eta_{1}^{r},\ldots,\eta_{K}^{r}]^{\mathsf{T}}\in\mathbb{C}^{K}\), where \(\boldsymbol{\eta}^{r}\) is conditionally Gaussian given the channel realizations \(\{h_{k}\}_{k\in[K]}\), i.e., \(\boldsymbol{\eta}^{r}\sim\mathcal{CN}(\mathbf{0}_{K},\kappa^{r}p\,\mathbf{G})\), with \(\mathbf{G}=\text{diag}(\mathbf{g})\), where \(\mathbf{g}\stackrel{{\Delta}}{{=}}\left[|h_{1}|^{2},\ldots,|h_{K}|^{2}\right]^{\mathsf{T}}\). We also define \(\boldsymbol{\eta}^{t}\stackrel{{\Delta}}{{=}}[\eta_{1}^{t},\ldots,\eta_{K}^{t}]^{\mathsf{T}}\in\mathbb{C}^{K}\) with \(\boldsymbol{\eta}^{t}\sim\mathcal{CN}(\mathbf{0}_{K},\kappa^{t}p\,\mathbf{I}_{K})\), and \(\mathbf{n}\sim\mathcal{CN}(\mathbf{0}_{K},\sigma^{2}\mathbf{I}_{K})\). Then we can rewrite the batch \(\mathbf{y}\in\mathbb{C}^{K}\) of received samples as \[\mathbf{y}=\sqrt{p}\,\mathbf{H}\mathbf{s}+\boldsymbol{\zeta}, \tag{16}\] where \(\mathbf{H}=\text{diag}(\mathbf{h})\), and \(\boldsymbol{\zeta}\stackrel{{\Delta}}{{=}}\mathbf{H}\boldsymbol{\eta}^{t}+\boldsymbol{\eta}^{r}+\mathbf{n}\) represents the effective Gaussian noise-plus-distortion given the channel vector \(\mathbf{h}\), with the conditional distribution \(\boldsymbol{\zeta}|\mathbf{h}\sim\mathcal{CN}(\mathbf{0}_{K},\boldsymbol{\Sigma})\) and the covariance matrix \(\boldsymbol{\Sigma}\) given by \(\boldsymbol{\Sigma}=p(\kappa^{t}+\kappa^{r})\mathbf{G}+\sigma^{2}\mathbf{I}_{K}\).
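To make the signal model concrete, the received batch in (15) and (16) can be simulated as in the following Python sketch; the Rayleigh block-fading assumption and all function names are ours, chosen only for illustration.

```python
import numpy as np

def cn(shape, var, rng):
    """Draw circularly-symmetric complex Gaussian samples CN(0, var)."""
    return np.sqrt(var / 2) * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))

def received_batch(s, p, kappa_t, kappa_r, sigma2, rng=np.random.default_rng()):
    """Simulate y_k = h_k (sqrt(p) s_k + eta_t) + eta_r + n_k, cf. (15), over a batch."""
    K = s.shape[0]
    h = cn(K, 1.0, rng)                           # block-fading channel (Rayleigh, assumed)
    eta_t = cn(K, kappa_t * p, rng)               # transmitter distortion, CN(0, kappa_t p)
    eta_r = np.abs(h) * cn(K, kappa_r * p, rng)   # receiver distortion, var = kappa_r p |h|^2
    n = cn(K, sigma2, rng)                        # aggregate receiver noise
    return h * (np.sqrt(p) * s + eta_t) + eta_r + n, h
```

Scaling a unit-variance complex Gaussian draw by \(|h_{k}|\) yields exactly the conditional receiver-distortion variance \(\kappa^{r}p|h_{k}|^{2}\) stated above.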
Since NNs can only process real-valued inputs, we map complex-valued symbols to real-valued tensors and rewrite the received signals in (16) by stacking the real and imaginary components. This results in the following expression: \[\mathbf{y}_{\mathsf{r}}=\widetilde{\mathbf{H}}\mathbf{x}+\boldsymbol{\nu}, \tag{17}\] where \(\mathbf{y}_{\mathsf{r}}=\begin{bmatrix}\Re\{\mathbf{y}\}\\ \Im\{\mathbf{y}\}\end{bmatrix}\), \(\mathbf{x}=\begin{bmatrix}\Re\{\mathbf{s}\}\\ \Im\{\mathbf{s}\}\end{bmatrix}\), \(\widetilde{\mathbf{H}}=\begin{bmatrix}\Re\{\mathbf{H}\}&-\Im\{\mathbf{H}\}\\ \Im\{\mathbf{H}\}&\Re\{\mathbf{H}\}\end{bmatrix}\), and \(\boldsymbol{\nu}=\begin{bmatrix}\Re\{\boldsymbol{\zeta}\}\\ \Im\{\boldsymbol{\zeta}\}\end{bmatrix}\). For the completeness of our formulations, we note that the effective noise-plus-distortion term \(\boldsymbol{\zeta}\) in (16) is conditionally circularly symmetric given the channel realizations vector. Hence, the covariance matrix of \(\boldsymbol{\nu}\) can be written as \(\mathbf{C}=\frac{1}{2}\begin{bmatrix}\Re\{\boldsymbol{\Sigma}\}&-\Im\{\boldsymbol{\Sigma}\}\\ \Im\{\boldsymbol{\Sigma}\}&\Re\{\boldsymbol{\Sigma}\}\end{bmatrix}\in\mathbb{R}^{2K\times 2K}\).

### _DDPM Solution: Algorithms & Neural Architecture_

In this subsection, we exploit the DDPM framework for our hardware-impaired wireless system. First, an NN is trained with the aim of learning hardware and channel distortions. Then, exploiting the so-called "denoising" capability of our DDPM helps us reconstruct samples from the original data distribution by removing imperfections and distortions from the batch of distorted received signals, \(\mathbf{y}_{\mathsf{r}}\). Hence, the combined effect of HWI and the communication channel is taken into account. In brief, the idea here is to first make our system learn how to gradually remove the purposefully-injected noise from batches of noisy samples \(\mathbf{x}_{T}\). Then, in the inference (sampling) phase, we start from the batch of received signals in (17) and run the reverse diffusion process to remove the hardware and channel distortions and reconstruct the original samples \(\mathbf{x}\).

#### III-B1 Neural Network Architecture

Generally speaking, the implemented NN is supposed to take as input the tensor of noisy signals at a particular time-step and approximate the distortion-plus-noise as the output. This approximated distortion is a tensor with the same size as the input tensor. Hence, the NN should output tensors of the same shape as its input. Accordingly, our DDPM is parameterized by an NN, \(\boldsymbol{\epsilon}_{\boldsymbol{\theta}}(\cdot,t)\), with \(3\) conditional hidden layers (with softplus activation functions), each of which has \(128\) neurons conditioned on \(t\). The output layer is a simple linear layer with the same size as the input. To enable employing only one neural model for the entire set of denoising time-steps, the hidden layers are conditioned on \(t\) by multiplication with embeddings of the time-step. More specifically, inspired by the Transformer architecture (Vaswani et al., 2017 [14]), we share the parameters of the NN across time-steps by incorporating the embeddings of the time-step into the model. Intuitively, this makes the neural network "know" at which particular time-step it is operating for every sample in the batch. We further emphasize that the notion of time-step equivalently corresponds to the level of residual noise-plus-distortion in the batch of received signals. This will be verified in our simulation results in Section IV (Fig. 1), where data samples with different levels of "noisiness" can be observed at different time-steps \(t\in[T]\).
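The multiplicative time-step conditioning described above can be sketched in PyTorch as follows; the layer sizes follow the text, while the class names and the use of an `nn.Embedding` table are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConditionalLinear(nn.Module):
    """Linear layer whose activations are scaled by a learned time-step embedding."""
    def __init__(self, n_in, n_out, T):
        super().__init__()
        self.lin = nn.Linear(n_in, n_out)
        self.embed = nn.Embedding(T, n_out)   # one embedding vector per time-step

    def forward(self, x, t):
        return self.embed(t) * self.lin(x)    # multiplicative conditioning on t

class NoisePredictor(nn.Module):
    """Three conditional hidden layers (128 neurons, softplus), linear output layer."""
    def __init__(self, dim, T, hidden=128):
        super().__init__()
        self.l1 = ConditionalLinear(dim, hidden, T)
        self.l2 = ConditionalLinear(hidden, hidden, T)
        self.l3 = ConditionalLinear(hidden, hidden, T)
        self.out = nn.Linear(hidden, dim)     # output matches the input shape
        self.act = nn.Softplus()

    def forward(self, x, t):                  # t: 0-indexed time-step per batch element
        x = self.act(self.l1(x, t))
        x = self.act(self.l2(x, t))
        x = self.act(self.l3(x, t))
        return self.out(x)
```

Because the embedding is learned jointly with the layers, a single parameter set serves all \(t\in[T]\), which is exactly what allows one model to cover the whole denoising trajectory.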
#### III-B2 Training and Sampling Algorithms

Our diffusion model is trained based on the loss function given in (14), using the MSE between the true and the predicted distortion noise. In (14), \(\mathbf{x}_{0}\sim q(\mathbf{x}_{0})\) stands for data samples from a training set with an unknown and possibly complex underlying distribution \(q(\cdot)\), and \(\boldsymbol{\epsilon}_{\boldsymbol{\theta}}(\mathbf{x}_{t},t)\) denotes the approximated noise-plus-distortion at the output of the NN. We note that \(\bar{\alpha}_{t}\) in (14) is a function of the variance scheduling \(\beta_{t}\) that we design. Therefore, the \(\bar{\alpha}_{t}\)'s are known and can be calculated/designed beforehand. This leads to the fact that by properly designing the variance scheduling for our forward diffusion process, we can optimize a wide range of desired loss functions \(\mathcal{L}_{t}\) during training. Hence, intuitively speaking, the NN is able to "see" different structures of the distortion noise during its training, making it robust against a wide range of distortion levels for sampling. The training process of the proposed DDPM is summarized in Algorithm 1. We take a random sample \(\mathbf{x}_{0}\) from the training set \(\mathcal{S}\). A random time-step \(t\sim\text{Unif}[T]\) is embedded into the conditional model. We also sample a noise vector \(\boldsymbol{\epsilon}\) (with the same shape as the input) from the normal distribution, and distort the input by this noise according to the desired variance scheduling \(\beta_{t}\). The NN is then trained to estimate the noise vector in the distorted data \(\mathbf{x}_{t}\). The sampling process of our scheme is summarized in Algorithm 2. We start from the received batch of distorted signals \(\mathbf{y}_{\mathsf{r}}\), and then iteratively denoise it using the trained NN. More specifically, starting from \(\mathbf{y}_{\mathsf{r}}\), for each time-step \(t\in\{T,T-1,\ldots,1\}\), the NN outputs \(\boldsymbol{\epsilon}_{\boldsymbol{\theta}}(\mathbf{x}_{t},t)\) to approximate the residual noise-plus-distortion within the batch of received signals. A sampling step is then executed as expressed in step \(4\) of Algorithm 2, in order to sample \(\mathbf{x}_{t-1}\). The process is executed for \(T\) time-steps until \(\mathbf{x}_{0}\) is ultimately reconstructed.

```
1: \(\mathbf{x}_{T}=\mathbf{y}_{\mathsf{r}}\)
2: for \(t=T,\ldots,1\) do
3:   \(\mathbf{z}\sim\mathcal{N}(\mathbf{0}_{2K},\mathbf{I}_{2K})\) if \(t>1\), else \(\mathbf{z}=\mathbf{0}_{2K}\)
4:   \(\mathbf{x}_{t-1}=\frac{1}{\sqrt{\alpha_{t}}}\left(\mathbf{x}_{t}-\frac{1-\alpha_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\boldsymbol{\epsilon}_{\boldsymbol{\theta}}(\mathbf{x}_{t},t)\right)+\sqrt{1-\alpha_{t}}\,\mathbf{z}\)
5: end for
6: return \(\mathbf{x}_{0}\)
```
**Algorithm 2** Sampling algorithm of DDPM
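In Python, the reverse process of Algorithm 2 can be sketched as follows; the update in the loop is exactly step 4 above, while the tensor bookkeeping and names are illustrative assumptions.

```python
import torch

@torch.no_grad()
def ddpm_sample(model, y_r, alphas, alpha_bars):
    """Run the reverse diffusion of Algorithm 2, starting from the received batch y_r."""
    x = y_r.clone()
    T = alphas.shape[0]
    for t in range(T, 0, -1):
        z = torch.randn_like(x) if t > 1 else torch.zeros_like(x)
        a, a_bar = alphas[t - 1], alpha_bars[t - 1]
        t_batch = torch.full((x.shape[0],), t - 1, dtype=torch.long)
        eps = model(x, t_batch)                       # predicted residual noise-plus-distortion
        x = (x - (1 - a) / torch.sqrt(1 - a_bar) * eps) / torch.sqrt(a) \
            + torch.sqrt(1 - a) * z                   # step 4 of Algorithm 2
    return x
```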
## IV Evaluations

In this section, we provide numerical results to highlight the performance of the proposed scheme under AWGN and fading channels. We show that the DDPM method can provide _resilience_ for the communication system under low-SNR regimes, non-Gaussian additive noise, and quantization errors. We also compare the performance of our DDPM-based scheme to the DNN-based receiver of [7] as one of the seminal benchmarks for ML-based communications systems. For training the diffusion model, we use the adaptive moment estimation (Adam) optimizer with learning rate \(\lambda=10^{-3}\) over \(2000\) epochs, i.e., the stopping criterion in Algorithm 1 is reaching the maximum number of epochs [10, 11, 13]. The transmit SNR is defined as \(\Gamma=10\log_{10}(\frac{p}{\sigma^{2}})\) dB. Without loss of generality, we set the average signal power to \(p=1\) for all experiments, and vary the SNR by setting the standard deviation (std) of the noise, \(\sigma\)[18]. For Figs. 1 and 2, we set \(\kappa^{t}=0.05\) and \(\kappa^{r}=0.15\)[16]. We study the effect of different HWI levels in Fig. 3.

Fig. 1: Data visualization for the training process. The rows correspond to epochs \(400\), \(1200\), and \(2000\), respectively.

Fig. 1 visualizes the denoising and generative performance of the implemented diffusion model during training over the Swiss roll dataset with \(10000\) samples. For this figure, we took "snapshots" by saving the model's current state at specific checkpoints (every \(400\) epochs) during the training process, and the corresponding output of the DDPM is plotted over time-steps \(t=\{50,60,\ldots,100\}\). As can be seen from the figure, our DDPM gradually learns to denoise and generate samples out of an isotropic Gaussian distribution, where data samples with different levels of "noisiness" can be observed at different time-steps \(t\in[T]\) from left to right. Moreover, as we reach the maximum number of epochs, the model can generate samples sooner (i.e., in fewer time-steps). We now study the performance of our system under low-SNR regimes, non-Gaussian additive noise, and quantization errors.

Fig. 2: MSE between the original signal and the reconstructed one under AWGN channel and non-Gaussian additive noise.

Fig. 2 demonstrates the reconstruction performance of our DDPM-based scheme compared to the conventional DNN-based benchmark [7] over a wide range of SNR values. For this experiment, the original data samples are first quantized into bitstreams, and then mapped to QAM symbols (as a widely-adopted constellation format in wireless networks [2, 4]) for transmission. The MSE metric (averaged over \(10\) runs of sampling) between the original and the reconstructed data samples is considered for evaluation. Since we have employed our DDPM at the receiver, we only exploit the receiver DNN of [7] and fine-tune it for benchmarking, in order to have a fair comparison. For the DNN benchmark, three linear layers with \(64\) neurons at the hidden layers and rectified linear unit (ReLU) activation functions are considered. For the \(16\)- and \(64\)-QAM scenarios, we considered \(5000\) training iterations, while for \(256\)-QAM, the DNN was trained for \(30000\) iterations with the Adam optimizer and learning rate \(\lambda=0.01\). Notably, Fig. 2 highlights a significant improvement in reconstruction performance among different quantization resolutions and across a wide range of SNR values, especially in low-SNR regimes. For instance, more than \(25\) dB improvement is achieved at \(0\) dB SNR for the \(64\)-QAM scenario, compared to the well-known model of [7] as one of the promising benchmarks in ML-based communications systems. Fig. 2 also highlights the _resilience_ of our approach against non-Gaussian noise. In this experiment, we consider additive Laplacian noise with the same variance as that of the AWGN scenario [4]. Non-Gaussian noise can occur due to non-Gaussian interference in multi-user scenarios [4]. Remarkably, although we do not re-train our diffusion model under Laplacian noise, the performance of our approach does not change (or even becomes better) under this non-Gaussian assumption.
However, we can see from the figure that the DNN benchmark can experience significant performance degradation under the non-Gaussian assumption, although we have re-trained the DNN with Laplacian noise. This highlights the _robust out-of-distribution performance_ of our proposed scheme. Fig. 3 studies the effect of different HWI levels on the reconstruction performance of our scheme over Rayleigh fading channels. For this figure, we set \(\kappa^{t}=0.05\) and vary the impairment level of the receiver, \(\kappa^{r}\), over the typical ranges specified in [16]. The reconstruction results are obtained in terms of the MSE metric over \(20\) realizations of the system. The figure highlights an important characteristic of our proposed scheme. Our DDPM-based communication system is _resilient_ against hardware and channel distortions, as the reconstruction performance does not change with the increase in the impairment level. Notably, our DDPM-based scheme showcases a _near-invariant_ reconstruction performance with respect to channel noise and impairment levels, which also highlights the generalizability of the proposed approach. This is achieved due to the carefully-designed "variance scheduling" of our DDPM framework in (3), which allows the system to become robust against a wide range of distortions caused by channel and hardware impairments.

## V Conclusions

In this paper, we have studied the application of DDPMs in wireless communication systems under HWIs. After introducing the DDPM framework and formulating our system model, we have evaluated the performance of our scheme in terms of reconstruction quality. We have demonstrated the _resilience_ of our DDPM-based approach under low-SNR regimes, non-Gaussian noise, and different HWI levels. Our results have shown that more than \(25\) dB improvement in MSE is achieved compared to DNN-based receivers, as well as robust out-of-distribution performance.
2309.06545
Evaluating Homomorphic Operations on a Real-World Processing-In-Memory System
Computing on encrypted data is a promising approach to reduce data security and privacy risks, with homomorphic encryption serving as a facilitator in achieving this goal. In this work, we accelerate homomorphic operations using the Processing-in-Memory (PIM) paradigm to mitigate the large memory capacity and frequent data movement requirements. Using a real-world PIM system, we accelerate the Brakerski-Fan-Vercauteren (BFV) scheme for homomorphic addition and multiplication. We evaluate the PIM implementations of these homomorphic operations with statistical workloads (arithmetic mean, variance, linear regression) and compare to CPU and GPU implementations. Our results demonstrate 50-100x speedup with a real PIM system (UPMEM) over the CPU and 2-15x over the GPU in vector addition. For vector multiplication, the real PIM system outperforms the CPU by 40-50x. However, it lags 10-15x behind the GPU due to the lack of native sufficiently wide multiplication support in the evaluated first-generation real PIM system. For mean, variance, and linear regression, the real PIM system performance improvements vary between 30x and 300x over the CPU and between 10x and 30x over the GPU, uncovering real PIM system trade-offs in terms of scalability of homomorphic operations for varying amounts of data. We plan to make our implementation open-source in the future.
Harshita Gupta, Mayank Kabra, Juan Gómez-Luna, Konstantinos Kanellopoulos, Onur Mutlu
2023-09-12T19:39:15Z
http://arxiv.org/abs/2309.06545v2
# Evaluating Homomorphic Operations on a Real-World Processing-In-Memory System

###### Abstract

Computing on encrypted data is a promising approach to reduce data security and privacy risks, with homomorphic encryption serving as a facilitator in achieving this goal. In this work, we accelerate homomorphic operations using the Processing-In-Memory (PIM) paradigm to mitigate the large memory capacity and frequent data movement requirements. Using a real-world PIM system, we accelerate the Brakerski-Fan-Vercauteren (BFV) scheme for homomorphic addition and multiplication. We evaluate the PIM implementations of these homomorphic operations with statistical workloads (arithmetic mean, variance, linear regression) and compare to CPU and GPU implementations. Our results demonstrate \(50-100\times\) speedup with a real PIM system (UPMEM) over the CPU and \(2-15\times\) over the GPU in vector addition. For vector multiplication, the real PIM system outperforms the CPU by \(40-50\times\). However, it lags \(10-15\times\) behind the GPU due to the lack of native sufficiently wide multiplication support in the evaluated first-generation real PIM system. For mean, variance, and linear regression, the real PIM system performance improvements vary between \(30\times\) and \(300\times\) over the CPU and between \(10\times\) and \(30\times\) over the GPU, uncovering real PIM system trade-offs in terms of scalability of homomorphic operations for varying amounts of data. We plan to make our implementation open-source in the future.

## 1 Introduction

Traditional security measures that operate on plain (unencrypted) data often expose the actual data during processing, creating security and privacy vulnerabilities. Homomorphic Encryption (HE) [1, 2, 3, 4, 5, 6, 7, 8] addresses this by enabling calculations on encrypted data without revealing sensitive information. A user can (1) encrypt data, and (2) send it to the server. Then, (3) computing resources in the server operate on the data without decrypting it, using HE, and (4) the encrypted results are returned to the user, preserving data privacy [9, 10, 11, 5]. However, HE is very costly due to the use of large ciphertexts and computation-intensive operations [12, 13, 14, 15]. For example, performing homomorphic multiplication on two fully homomorphically encrypted (FHE) integers may require tens of millions of operations [16, 17, 18]. The complexity is further compounded by intricate mathematical operations, as each of these operations is executed on data that can be up to \(1000\times\) larger in size than the original plain data [2, 19]. Recent research proposes the implementation of homomorphic operations on CPUs [20, 21, 22, 17], GPUs [24, 23, 22, 25, 4], FPGAs [26, 27, 28, 29, 30, 31], and ASICs [32, 33, 34, 35, 36, 37], but these implementations do not fundamentally solve the _data movement bottleneck_ associated with homomorphic operations. _Processing-in-Memory (PIM)_, i.e., equipping memory with compute capabilities [38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 16], can effectively alleviate the data movement needs. Recent PIM-based HE solutions [11, 16, 62, 63] leverage high parallelism and memory bandwidth inside the memory chips for acceleration. However, there is no evaluation of homomorphic operations on real PIM systems, which have recently been introduced [38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 50]. To our knowledge, this study is the first to implement and evaluate homomorphic operations on a real PIM system.
Using a real PIM system (UPMEM) [38, 44, 64], we accelerate the Brakerski-Fan-Vercauteren (BFV) scheme [65, 66] for homomorphic addition and multiplication. Our evaluation shows that the real PIM system accelerates the homomorphic addition operation by \(50-100\times\) over a state-of-the-art CPU and by \(2-15\times\) over a state-of-the-art GPU. For the homomorphic multiplication operation, the real PIM system provides a speedup of \(30-50\times\) over the CPU, but lags \(10-15\times\) behind the GPU due to the lack of native sufficiently wide multiplication support on the evaluated first-generation UPMEM PIM system. We also evaluate our implementation of three statistical workloads (mean, variance, linear regression) using homomorphic addition and homomorphic multiplication. In our evaluation, the real PIM system achieves up to \(300\times\) speedup over the CPU for all workloads and up to \(30\times\) over the GPU for arithmetic mean. However, it lags by up to \(50\times\) compared to the GPU for variance and linear regression, due to the low performance of multiplication on the first real-world PIM system. Our work makes the following contributions:

* We develop the first implementation of homomorphic addition and multiplication on a real PIM system.
* We evaluate the performance of homomorphic addition and multiplication on a real PIM system for different security levels (27-109 bits). We use three real-world statistical workloads (arithmetic mean, variance, linear regression) for evaluation.
* Our findings demonstrate the capabilities and tradeoffs of real PIM systems for efficient cryptographic operations, providing a foundation for future developments in this direction.

## 2 Background and Motivation

Homomorphic encryption (HE) [1, 2, 3, 4, 5, 6, 7, 8] enables processing (e.g., addition, multiplication, rotation) on encrypted data while preserving privacy. We focus on the BFV (Brakerski-Fan-Vercauteren) scheme for HE [65, 66], but the implementation techniques that we propose are also applicable to other HE schemes (e.g., BGV [67] and CKKS [68]). HE types include Fully Homomorphic Encryption (FHE), Partially Homomorphic Encryption (PHE), and Somewhat Homomorphic Encryption (SHE) [69]. FHE enables unrestricted operations, PHE permits one type of operation, and SHE supports both addition and multiplication with constraints on multiplicative depth. FHE, SHE, and PHE offer different trade-offs between security and efficiency [70, 71, 72, 1, 7, 8, 73]. In this paper, we focus on SHE as it provides a balance between security and efficiency, allowing some computations (e.g., addition, multiplication) on encrypted data while still maintaining a high level of security. HE poses two main **challenges** that limit its use in real-world applications. **1) Large memory footprint:** HE schemes require very long vectors with wide elements to encode information [13]. Prior work [32] shows that multiplying 2MB ciphertexts requires 32MB of auxiliary data, and 25MB ciphertexts would require over 1.4GB of auxiliary data. This amount of auxiliary data is too large to fit on a processor-centric chip, which limits the scalability and performance of HE. **2) Frequent data movement:** The large amount of data that homomorphic algorithms need to operate on is moved back-and-forth between off-chip memory/storage units and compute units. Prior work [73] shows that homomorphic operations exhibit low arithmetic intensity (<1 operation/byte).
As a result, in processor-centric systems, such as CPUs and GPUs, it is challenging to efficiently offset the performance and energy expenses incurred when transferring large amounts of data. Several recent works [4, 13, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37] explore domain-specific architectures, such as GPUs, FPGAs, and ASICs, to accelerate homomorphic operations. These efforts have achieved significant speedups compared to CPUs. However, challenges remain in resource limitations, data movement, and practical implementation, especially for ASIC-based accelerators [35]. In this work, our **goal** is to evaluate the suitability of real-world general-purpose processing-in-memory architectures to compute homomorphic operations. To this end, we implement homomorphic addition and multiplication on the UPMEM PIM system [38, 39, 44], and evaluate them on real-world statistical and machine learning workloads. Processing-in-memory (PIM) [61, 38, 39, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50] systems can accelerate memory-intensive applications [74, 75, 46, 47, 48, 49, 64, 44, 45, 50] by equipping memory arrays with compute capabilities. These systems can potentially address the challenge of large ciphertexts in HE algorithms by reducing the overhead of data transfers between the memory and the CPU [77, 45]. In addition to reducing data movement, PIM also offers high levels of parallelism [38, 39], which are useful for performing costly homomorphic operations. Thus, by computing directly in memory, PIM can significantly improve the performance of HE. Various real-world PIM systems have recently been introduced [38, 39, 40, 41, 42, 43, 44, 50]. These real-world PIM systems have some common characteristics [64]: there is a central host processor connected to conventional main memory, alongside PIM-enabled memory chips with processing elements that access memory with high bandwidth and low latency. In this work, we use the UPMEM PIM system [38, 39, 44, 78], which consists of fine-grained multithreaded PIM cores near DRAM banks. For more details on the UPMEM PIM system, we refer the reader to [44, 48, 49, 46, 47, 50].

## 3 Implementation

We consider an environment where users offload computations on encrypted data to a PIM system. Users handle key generation, encryption, and decryption to guarantee their data privacy. Computation of homomorphic operations takes place in a PIM system. In this work, we implement addition and multiplication operations. The security level of HE relies on the polynomial modulus degree [79], affecting ciphertext length, vulnerability to attacks, and noise tolerance. For instance, for 27-bit security, we need a polynomial that has 1024 coefficients of 27 bits each, which corresponds to a relatively low security level in HE. Increasing the bit length enhances security. In this work, we also evaluate 54-bit (2048-coefficient polynomial) and 109-bit (4096-coefficient polynomial) security levels. To represent 27-, 54-, and 109-bit coefficients, we use integers of 32, 64, and 128 bits, respectively. The reason is that the UPMEM PIM system that we use in our evaluation has native support for 32-bit integers. **Homomorphic Addition.** We implement homomorphic addition using polynomial addition [80, 81] on the UPMEM PIM system. Each PIM thread running on a PIM core performs the element-wise addition of the coefficients of two polynomials. UPMEM PIM cores [44] support native 32-bit integer addition (add) and 32-bit integer addition with carry-in (addc), which we use to implement 64- and 128-bit addition (and can be extended to any multiple of 32 bits).
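As a host-side illustration (the actual kernels run in C on the PIM cores), coefficient-wise ciphertext-polynomial addition and the 32-bit-limb carry chain can be sketched in Python as follows; the coefficient modulus `q` and all names are assumptions for illustration only.

```python
MASK32 = (1 << 32) - 1

def poly_add(a, b, q):
    """Element-wise addition of two coefficient vectors modulo the coefficient modulus q."""
    return [(x + y) % q for x, y in zip(a, b)]

def add_128bit(x, y):
    """128-bit addition assembled from four 32-bit limbs with carry propagation,
    mirroring the add/addc instruction pairing described above."""
    carry, result = 0, 0
    for i in range(4):                                   # four 32-bit limbs
        s = ((x >> (32 * i)) & MASK32) + ((y >> (32 * i)) & MASK32) + carry
        result |= (s & MASK32) << (32 * i)
        carry = s >> 32
    return result                                        # wraps around modulo 2^128
```

Each PIM thread would apply `poly_add` to its own chunk of the coefficient vectors, which is what makes the operation embarrassingly parallel across the thousands of PIM cores.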
**Homomorphic Multiplication.** We implement homomorphic multiplication using polynomial multiplication and polynomial addition [82, 83, 84, 85]. Each PIM thread running on a PIM core performs the polynomial multiplication and polynomial addition of the coefficients of two polynomials to generate the desired result. For 32-bit coefficients, we rely on the compiler-generated 32-bit shift-and-add based multiplication.1 For 64- and 128-bit multiplications, we divide the bits into 32-bit chunks and apply the Karatsuba algorithm [86], which requires fewer operations than the traditional multiplication algorithm. We do not incorporate Number Theoretic Transform (NTT) [87, 88] techniques to optimize multiplication. We leave them for future work.

Footnote 1: The UPMEM PIM system performs 8-bit and 16-bit multiplications using the native 8-bit hardware multipliers, but employs a software-based shift-and-add algorithm for higher bit widths [38, 44, 64].
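The Karatsuba decomposition over 32-bit chunks can be sketched in Python as follows; this host-side illustration exploits Python's arbitrary-precision integers, whereas the actual PIM implementation operates on fixed 32-bit limbs.

```python
def karatsuba_mul(x, y, bits):
    """Multiply two `bits`-wide non-negative integers with recursive Karatsuba
    splitting: three half-width multiplications replace four."""
    if bits <= 32:
        return x * y                      # base case: 32-bit multiply (shift-and-add on PIM)
    half = bits // 2
    mask = (1 << half) - 1
    x_lo, x_hi = x & mask, x >> half
    y_lo, y_hi = y & mask, y >> half
    lo = karatsuba_mul(x_lo, y_lo, half)
    hi = karatsuba_mul(x_hi, y_hi, half)
    # (x_lo + x_hi) may exceed `half` bits by one; Python's exact integers absorb this.
    mid = karatsuba_mul(x_lo + x_hi, y_lo + y_hi, half) - lo - hi
    return (hi << bits) + (mid << half) + lo
```

For 128-bit operands, `karatsuba_mul(x, y, 128)` bottoms out in 32-bit multiplications, which is precisely where the lack of native wide multiplication on the first-generation UPMEM hardware becomes the performance bottleneck discussed in Section 4.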
**Statistical Workloads.** We implement three statistical workloads (arithmetic mean, variance, linear regression) using homomorphic addition and homomorphic multiplication techniques. The arithmetic mean [89, 90] workload employs polynomial addition performed on the UPMEM PIM cores and scalar division performed on the host processor. The variance [91, 92] workload uses polynomial multiplication, which is performed on the UPMEM PIM cores, and a final scalar division performed on the host processor. Similarly, the linear regression [93, 94] workload uses both polynomial addition and multiplication to perform the vector-matrix multiplication, which is executed on the UPMEM PIM cores.

## 4 Evaluation

### Methodology

We evaluate homomorphic addition and multiplication on a first-generation UPMEM PIM system [38, 39, 44, 78], a 4-core Intel i5-8250U CPU [95], and an NVIDIA A100 GPU [96]. The UPMEM system contains 2,524 PIM cores (running at 425 MHz) and 158GB of PIM-enabled memory with a total bandwidth of 2,145 GB/s. We compare our PIM implementations to our own custom CPU and GPU implementations. We also compare to an optimized CPU implementation, the SEAL CPU library [79], which leverages the Residue Number System (RNS) [97] and Number Theoretic Transform (NTT) [98] implementations for faster operations. We first evaluate microbenchmarks for vector addition and vector multiplication (Section 4.2). We experiment with different numbers of ciphertexts between 20,480 and 327,680 for addition, and between 5,120 and 81,920 for multiplication. We run experiments for integers of 32 bits (27-bit coefficients), 64 bits (54-bit coefficients), and 128 bits (109-bit coefficients). We then evaluate SHE implementations of three statistical workloads (arithmetic mean, variance, linear regression) that employ our PIM-based homomorphic encryption operations (Section 4.3). We plan to open-source all workloads.

### Vector Addition and Multiplication

Figure 1 shows the execution time of vector addition (1(a)) and multiplication (1(b)) on homomorphically encrypted ciphertexts for our real-world UPMEM PIM-based implementation (PIM), our custom CPU and GPU implementations, and the SEAL library (CPU-SEAL). The figure also shows the speedup of PIM over the custom CPU implementations. We make several observations about these experimental results.

Figure 1: Execution time (ms) of ciphertext vector addition (a) and vector multiplication (b) for 128-bit (109-bit) wide polynomial coefficients on CPU, PIM, CPU-SEAL and GPU.

First, the performance of PIM implementations saturates at 11 or more PIM threads (not shown in Figure 2). This is in line with the observations in prior works [45, 38, 64]. Second, the large number of PIM cores and the native support for 32-bit integer addition in PIM cores result in fast execution of vector addition on the PIM system. Figure 1(a) shows the results for 128-bit addition. The trends are the same for 32-bit and 64-bit addition. For 32-, 64-, and 128-bit addition, the PIM implementation outperforms CPU, CPU-SEAL, and GPU by \(20-150\times\), \(35-80\times\), and \(15-50\times\), respectively. **Key Takeaway 1**. _With native hardware support for 32-bit integer addition and a large number of PIM cores, the UPMEM PIM system outperforms CPU and GPU for homomorphic addition._ Third, vector multiplication on the UPMEM PIM system suffers from the lack of native 32-bit multiplication hardware, as multiplication wider than 16 bits is based on a compiler-generated shift-and-add algorithm. Figure 1(b) shows the results for 128-bit multiplication. We observe similar trends for 32-bit and 64-bit multiplication. For 32-, 64-, and 128-bit multiplication, the PIM implementation outperforms CPU by \(40-50\times\), and CPU-SEAL for 32 bits by \(2\times\). However, the PIM implementation is \(12-15\times\) slower than GPU, and \(2-4\times\) slower than CPU-SEAL for 64 and 128 bits. **Key Takeaway 2**. _The lack of native support for 32-bit integer multiplication hampers the performance of PIM for homomorphic multiplication. Future PIM systems with native 32-bit multiplication hardware could potentially outperform CPUs and GPUs._

### Statistical Workloads

We implement and evaluate the performance of three real-world statistical workloads (arithmetic mean, variance, linear regression) that utilize homomorphic addition and multiplication for the CPU, real-world PIM, CPU-SEAL, and GPU implementations. Figure 2 shows the execution times of the three workloads on CPU, PIM, CPU-SEAL, and GPU. For arithmetic mean and variance, we evaluate scenarios with 640, 1280, and 2560 users. For linear regression, we evaluate 640 users, and 32 and 64 ciphertexts per user (data samples with 3 features). We make several observations from Figure 2. First, arithmetic mean uses only homomorphic addition. As a result, PIM is significantly faster than CPU, CPU-SEAL, and GPU. Figure 2(a) shows PIM speedups of \(25-100\times\) over CPU, \(11-50\times\) over CPU-SEAL, and \(9-34\times\) over GPU for different numbers of users. Second, as variance uses the square operation (i.e., homomorphic multiplication of two equal numbers), the PIM implementation is heavily burdened by the slow multiplication. In Figure 2(b), we observe that PIM outperforms only the custom CPU implementation (by \(6-25\times\)) for different numbers of users. CPU-SEAL and GPU are, respectively, \(2-10\times\) and \(13-50\times\) faster than PIM. Third, for linear regression, the trends are the same as for variance, given that linear regression also uses multiplication heavily. Figure 2(c) shows that PIM is only faster than the custom CPU implementation (by \(7.5\times\)) for 32 ciphertexts. CPU-SEAL and GPU are, respectively, \(11.4\times\) and \(54.9\times\) faster than PIM for 64 ciphertexts. Fourth, we observe that PIM execution time remains constant for different numbers of users.
This is achieved by dynamically adjusting the utilization of PIM cores, which is particularly beneficial in our experiments as they involve a large number of users. This approach differs from CPUs and GPUs, which have a limited number of cores and must use them regardless of the number of users in our experiment. **Key Takeaway 3**. _The computational power of PIM scales with memory capacity [99, 100] via the addition of more memory banks and corresponding PIM cores. This memory-capacity-proportional performance scalability provided by PIM holds promise for accommodating expanding numbers of users and more parallel computations as memory capacity grows._

## 5 Related Work

Several recent works explore the suitability of real-world processing-in-memory (PIM) architectures [16, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61] to accelerate a variety of memory-intensive tasks [46, 47, 74, 75, 76]. To our knowledge, this is the first work to explore the use of a real PIM system to accelerate homomorphic operations. Acceleration of homomorphic operations on GPUs, FPGAs, or ASICs is the subject of various recent works. All these processor-centric techniques suffer from data movement bottlenecks between memory and compute units. GPUs can accelerate HE schemes [4, 22, 23, 24, 25]. However, GPUs suffer from high power consumption for homomorphic operations [101, 102]. FPGAs can also accelerate homomorphic operations [26, 27, 28, 29, 30, 31], but they are limited in hardware resources and suffer from data movement bottlenecks [103, 104]. Several recent works propose ASIC designs [13, 32, 33, 34, 35, 36, 37] for CKKS algorithms, but they are only evaluated in simulation. Similarly, PIM-based solutions [11, 16, 105] for accelerating homomorphic operations are also limited to simulation.

## 6 Conclusion

We presented initial results on the use of a real-world general-purpose PIM architecture (i.e., the UPMEM PIM system [38, 50]) to accelerate homomorphic operations. Our PIM implementations of homomorphic addition, multiplication, and statistical workloads (mean, variance, linear regression) show great promise when compared to CPU and GPU implementations, as long as the necessary integer operations are natively supported by the PIM hardware. We aim to implement more homomorphic operations and optimizations in future work.

## Acknowledgments

We acknowledge support from the SAFARI Research Group's industrial partners, especially Google, Huawei, Intel, Microsoft, VMware, and the Semiconductor Research Corporation. This research was partially supported by the ETH Future Computing Laboratory and the European Union's Horizon programme for research and innovation under grant agreement No. 101047160, project BioPIM (Processing-in-memory architectures and programming libraries for bioinformatics algorithms). This research was also partially supported by ACCESS - AI Chip Center for Emerging Smart Systems, sponsored by InnoHK funding, Hong Kong SAR.
2309.09676
Conditioning Latent-Space Clusters for Real-World Anomaly Classification
Anomalies in the domain of autonomous driving are a major hindrance to the large-scale deployment of autonomous vehicles. In this work, we focus on high-resolution camera data from urban scenes that include anomalies of various types and sizes. Based on a Variational Autoencoder, we condition its latent space to classify samples as either normal data or anomalies. In order to emphasize especially small anomalies, we perform experiments where we provide the VAE with a discrepancy map as an additional input, evaluating its impact on the detection performance. Our method separates normal data and anomalies into isolated clusters while still reconstructing high-quality images, leading to meaningful latent representations.
Daniel Bogdoll, Svetlana Pavlitska, Simon Klaus, J. Marius Zöllner
2023-09-18T11:26:48Z
http://arxiv.org/abs/2309.09676v1
# Conditioning Latent-Space Clusters for Real-World Anomaly Classification

###### Abstract

Anomalies in the domain of autonomous driving are a major hindrance to the large-scale deployment of autonomous vehicles. In this work, we focus on high-resolution camera data from urban scenes that include anomalies of various types and sizes. Based on a Variational Autoencoder, we condition its latent space to classify samples as either normal data or anomalies. In order to emphasize especially small anomalies, we perform experiments where we provide the VAE with a discrepancy map as an additional input, evaluating its impact on the detection performance. Our method separates normal data and anomalies into isolated clusters while still reconstructing high-quality images, leading to meaningful latent representations.

anomaly, corner case, vision, autonomous driving, VAE, latent space, cluster

## I Introduction

Developing autonomous vehicles and deploying them to large Operational Design Domains (ODD) poses a significant challenge, especially with respect to the long tail of unexpected or unfamiliar objects. Although perception systems of autonomous vehicles are able to detect known classes reasonably well nowadays, they still need to be aware of situations where they encounter the unknown. As deep neural networks (DNN) tend to predict false positives with high uncertainty, both false negatives and false positives are of interest. While lidar sensors are often used in production systems, here we focus on a purely camera-based setup, as such setups introduce their own set of issues. We focus on detecting anomalies in high-resolution road images from mostly urban scenes based on the latent space of a Variational Autoencoder (VAE). Utilizing primarily normal but also abnormal data during training, the data is fit to two prior distributions. This way, the VAE is conditioned to build two separate clusters in the latent space, one for normal samples and one for anomalies. During test time, distance measures can be used to detect anomalies. We used multiple datasets to define normality and anomalies during training and evaluation. The work is structured as follows: In Section II, we introduce related work from the field of anomaly detection, with a focus on Variational Autoencoders. In Section III, we introduce our approach, including our VAE architecture. In Section IV, we highlight our experimental setup and demonstrate our results. Finally, we conclude this work in Section V. More information can be found in [1].

## II Related Work

In autonomous driving, detecting anomalies is of utmost importance to scale existing systems, which operate in small Operational Design Domains, as infrequent events occur more often for a growing vehicle fleet that utilizes the same software system [2]. Based on common corner case systematizations [3, 4, 5], we are especially interested in the _object_ and _scene_ levels, which describe unknown objects or, more generally, unexpected patterns in an input sample. In this section, we introduce current approaches for such detections, also known as outliers or out-of-distribution (OOD) samples. As one of the early approaches, Ruff et al. [6] trained a deep neural network in order to compute a hypersphere representing an approximation of normality. A binary classification can be computed based on the distance of new samples to the hypersphere. However, in the real world, normality is represented in the form of many "heterogeneous semantic labels" [7], which leads to a weak decision boundary. Hendrycks et al.
proposed an "outlier exposure" objective [8], utilizing curated anomaly data for training, leading to a more uniform softmax distribution for anomalies, which was later adopted by Papadopoulos et al. [9]. Some approaches utilize the softmax confidence in order to detect anomalies [10, 11]. However, this can lead to false positives since the softmax activation function is sensitive to changes in the input.

Fig. 1: Our VAE-based method for real-world anomaly classification, which separates normal and abnormal data in its latent space. Discrepancy images as additional inputs also emphasize small unknown objects, here _a cat_.

Thus, Liu et al. proposed an energy-based approach [12], showing an improved alignment to the density of the input samples. Based on a GAN, Nitsch et al. generated virtual anomalies for an improved decision boundary [13]. Similarly, Grcic et al. used normalizing flows for the same task [14], outperforming most other methods [15]. Going one step further, Du et al. introduced Virtual Outlier Synthesis (VOS), which generates synthetic anomalies in the latent space [16]. In the field of semantic segmentation, Cen et al. included unknown objects in their class list, leading to an open-world segmentation approach based on Euclidean distances between feature vectors [17]. Di Biase et al. [18] integrated model uncertainty into their system in order to reduce wrong classifications. As we have shown, there exist many different methods to detect anomalies. Since our work is based on VAEs, we now provide a detailed overview of methods based on encodings.

### _Encoding-based Anomaly Detection_

Breitenstein et al. have categorized anomaly detection in the domain of autonomous driving into five different categories: Reconstruction, Prediction, Generative, Confidence Score, and Feature Extraction [19]. In this work, we examine the properties of Variational Autoencoders in order to classify image samples as anomalies. VAEs can be utilized for dense anomaly detections, which fall under either the _Reconstruction_ or _Generative_ category, and anomaly classifications, which can be categorized as _Feature Extraction_. Methods from the first category are based on the assumption that the utilized training data defines normality, leading to failed reconstructions given samples that include parts that were not included in the training data. Methods from the second category, which take a look at the latent space, assume that latent representations of normal and OOD samples differ sufficiently.

**Reconstructive and Generative.** A well-trained VAE will reconstruct unseen anomalies [20], which is why specialized methods were developed for anomaly detection. Utilizing a Generative Adversarial Network (GAN), in which both the generator and the discriminator are implemented as AEs, Vu et al. designed a network where the discriminator learns to reconstruct normal data while failing to do so when presented with OOD data [21]. Utilizing the reconstruction probability, An et al. go beyond the direct reconstruction error, which is incapable of incorporating high-level structures [22]. Among others, Munjal et al. introduced an adversarial loss in order to address this issue [23]. However, this method is not effective for high-complexity scenes. On a similar note, Somepalli et al. proposed to minimize the Wasserstein distance and include a latent space regularization, which led to better reconstructions of normal data and worse ones for anomalies. On the other hand, Bolte et al.
[24] proposed a multi-stage approach, where an Autoencoder was used for image prediction. Based on this, an engineered approach followed, which included prediction errors, pixel classification, and distance weighting. Similarly, Amini et al. [25] proposed a pixel-wise uncertainty for reconstructions with a VAE. It is also possible to combine reconstruction-based and feature-extraction-based methods. Abati et al. [26] utilize the reconstruction error and latent features with a low log-likelihood in order to detect anomalies. Similarly, Wang et al. [27] use a discrete latent space, where a model learns the distribution. Reconstructions are then based upon a re-sampled latent space and used for anomaly detection. Park et al. [28] proposed a memory module where prototypes of normality are stored. Later, a reconstruction-based approach, using these memory items, is used for anomaly detection.

**Feature Extraction.** As some previous techniques already utilized feature extraction partly, this section highlights works focusing on this technique. Wurst et al. [33] utilized a triplet-based Autoencoder, enforcing similarity in the latent space, to detect unusual traffic scenes. Similarly, Harmenting et al. [34] use clusters in the latent space generated by an Autoencoder to detect novel scenarios. For more complex data, Sundar et al. [35] developed a method to divide datasets into smaller subsets. Based on these, multiple VAEs are trained to generate the latent space. For detection, they utilize all trained VAEs and detect anomalies based on high sensitivity. Akcay et al. [36] compared latent representations of image reconstructions and the original input to detect anomalies. Chalapathy et al. [37] utilize a one-class classification objective based on features learned by a VAE, where they focus on generating features that are designed for the task of anomaly detection [38]. Park et al. [39] utilize rate-distortion theory in order to compute anomaly scores, using only the encoding part of a VAE. The work of Liu et al. [40] is based on attention maps for every element of the latent vector, where they compute differences to the learned normality, leading to an attention map that highlights anomalies in an image. Finally, Dilokthanakul et al. [41] proposed a VAE which uses a mixture of Gaussians as prior, assuming multiple distributions in the training data, which led to a better separation of classes in the latent space. While many of the presented approaches work with simple datasets, in our work, we are interested in high-resolution images [42, 43] with anomalies in urban road scenarios [44]. Here, the challenge arises that anomalies often only occupy small regions of an image, which makes classification harder, as normality is represented by highly complex training data. Our approach evaluates whether an auxiliary input that highlights even small anomalies in the image space, combined with a conditioned latent space, allows for the classification of anomalies in such datasets.

Fig. 2: We used the Cityscapes [29] and Fishyscapes [30] (normal) datasets as normality (left) and the RoadAnomaly21 [31], Fishyscapes (anomalies), and Lost and Found [32] datasets with anomalies (right). Reprinted from [1].

## III Method

In the following, we describe our anomaly detection method, which conditions the latent space of a VAE to enforce the separation of clusters corresponding to anomalous and normal data.

### _VAE Architecture_

We use a conditioned latent space variational autoencoder (CL-VAE) [45].
Our architecture roughly follows Hou et al. [46]. However, we utilize residual blocks, as shown in Figure 2(a), as those are easier to train.1 Furthermore, we inserted an additional pooling layer and adjusted the residual block by replacing the exponential linear unit (ELU) activation function with randomized leaky rectified linear units (RReLU) as proposed by Xu et al. [47], see Figure 2(b). The VAE was trained with the CL-VAE ELBO loss. We performed experiments with two auxiliary losses to enforce the separation of normal and anomalous data. First, the distance loss \(\mathcal{L}_{distance}\) aims at maximizing the distance between the two means to push the clusters away from each other: \[\mathcal{L}_{distance}=-\|\mu_{1}-\mu_{2}\|_{1}=-|\mu_{1}-\mu_{2}| \tag{1}\] Second, we use \(\mathcal{L}_{i}\) proposed by Yang et al. [48] to minimize the distance between each data point and the cluster mean given by the k-means algorithm: \(\mathcal{L}_{i}=(\mu_{i}-z_{i})^{2}\). The overall _cluster loss_ is then given by: \[\mathcal{L}_{cluster}=\frac{1}{n}\sum_{i=1}^{n}\mathcal{L}_{i} \tag{2}\] We also incorporate the feature perceptual loss [46] using a pre-trained backbone to enforce reconstruction quality. This concept ensures meaningful latent representations of the input samples, which can be used for downstream tasks. Discrepancy images passed as the fourth channel were not considered in this loss, as the backbone was pre-trained on RGB images.

Footnote 1: Implementation inspired by LukeDitria/CNN-VAE
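A minimal PyTorch sketch of the two auxiliary losses in (1) and (2) is given below; the tensor shapes and the k-means bookkeeping are illustrative assumptions.

```python
import torch

def distance_loss(mu1, mu2):
    """Loss (1): maximize the L1 distance between the two cluster means
    (returned negated, so minimizing it pushes the clusters apart)."""
    return -torch.sum(torch.abs(mu1 - mu2))

def cluster_loss(z, assignments, centroids):
    """Loss (2): mean squared distance of each latent code z_i to its
    k-means centroid mu_i, selected via the per-sample `assignments`."""
    return torch.mean((centroids[assignments] - z) ** 2)
```

Both terms act purely on the latent codes and can be added to the CL-VAE ELBO with weighting factors, leaving the reconstruction objective untouched.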
## IV Evaluation

In the following, we describe the evaluation of our anomaly detection method. First, we provide details on our experimental setup, followed by several analyses.

### _Experimental Setting_

**Training Data:** We utilized three datasets to train the VAE. _Cityscapes_ [29] was used to represent normal data, and both _LostAndFound_ [32] and _RoadAnomaly21_ from the _SegmentMeIfYouCan_ benchmark [31] were used to represent anomalous data. Samples from these datasets can be found in Figure 2. For Cityscapes, we used the pre-defined train-val-test split. The LostAndFound dataset was filtered as follows: we deleted images with fewer than 3,000 anomalous pixels per image and images containing children, as those are considered normal in Cityscapes. We selected only a few images with different anomalies from each scene to avoid overfitting. The resulting filtered dataset thus contained 172 train, 99 validation, and 64 test images. Finally, all 110 images from the RoadAnomaly21 dataset were split according to the 70:20:10 rule. For training, all images were downsampled to \(256\times 256\) pixels.

Fig. 4: Discrepancy images for a Cityscapes image containing an object of the rare but normal class _bus_. The original approach by Lis et al. [49] (middle) leads to higher anomaly scores. The proposed frequency-based approach (right) leads to lower anomaly scores. Reprinted from [1].

Fig. 3: Overall architecture of the deployed VAE (left) and the components of the ResBlock (right). Adapted from [1].

**Test Data:** For evaluation, the test data from the LostAndFound and RoadAnomaly21 datasets were used, split as described above. We also used the _FS Static_ images from the _Fishyscapes_ dataset [30]. Because of the small dataset size, it was only used at the test stage. Just 30 images are publicly available, 10 with normal and 20 with anomalous data.

**Models and Training:** Following the approach proposed by Lis et al. [49], we used a pre-trained PSPNet [50] with a pre-trained ResNet backbone [51] to predict semantic segmentation masks for input images and a pre-trained pix2pixHD model [52] for image resynthesis. The discrepancy module included a pre-trained VGG [53] for feature extraction. The VAE was trained for 100 epochs using the ADAM optimizer [54] with a learning rate of \(1\)e-\(4\) and a batch size of 12. The learning rate decreased linearly during training. All trainings were performed on an Nvidia GeForce RTX 3090.

### _Impact of Frequency-based Label Replacement_

In our discrepancy module, we used Cityscapes as the normal dataset, in which no anomalies should appear. We analyzed the average pixel-wise anomaly score in the generated grayscale discrepancy images, where 0 corresponds to normal and 1 to anomalous data. Ideally, all discrepancy scores should be zero, as no anomalies exist in the data. As Figure 5 demonstrates, the average pixel value in the discrepancy maps is lower for the proposed frequency-based label replacement than for the random-class approach of Lis et al. [49]. Furthermore, a visual comparison of the resulting discrepancy images, as shown in Figure 4, confirms that our frequency-based class selection results in lower anomaly scores for normal classes. We also evaluated the impact of the frequency-based class selection on LostAndFound and RoadAnomaly using the anomaly detector from Lis et al. [49]. Figure 6 shows that our approach leads to improved classifications for the RoadAnomaly dataset but worse results for LostAndFound.

### _VAE Reconstruction Performance_

We evaluated the impact of two hyperparameters on the reconstruction performance: the size of the latent space and the \(\beta\) parameter weighting the KL divergence. We used the Fréchet Inception Distance (FID) [55] to measure the quality of the reconstructions.
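The paper does not state which FID implementation was used; a minimal sketch with the torchmetrics implementation, comparing test images to their VAE reconstructions, could look as follows:

```python
from torchmetrics.image.fid import FrechetInceptionDistance

# FID over Inception-v3 pool features; expects uint8 images of shape (N, 3, H, W)
fid = FrechetInceptionDistance(feature=2048)

def fid_score(real_images, reconstructions) -> float:
    """Lower FID means the reconstructions are statistically closer to the data."""
    fid.reset()
    fid.update(real_images, real=True)
    fid.update(reconstructions, real=False)
    return fid.compute().item()
```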
Figure 8 shows that both a larger latent space and a smaller \(\beta\) lead to more accurate reconstructions. Finally, we evaluated the impact of the feature perceptual loss on the reconstruction quality. Figure 7 shows that using the feature perceptual loss results in less blurry images.

Fig. 5: Distribution of mean anomaly scores in the discrepancy maps generated for the Cityscapes test set, comparing the original approach by Lis et al. [49] (blue) to our frequency-based label replacement (orange). Reprinted from [1].

Fig. 6: ROC curves for anomaly detection using the discrepancy maps generated with the proposed frequency-based label replacement: LostAndFound (left) and RoadAnomaly (right) test data. Reprinted from [1].

Fig. 7: Image reconstruction by a VAE with a latent space size of \(512\times 4\times 4\) and \(\beta=0.01\), trained with (right) and without (left) the feature perceptual loss. Reprinted from [1].

### _Impact of Discrepancy Images_

To evaluate the effect of the discrepancy maps, we first calculated the mean pixel scores for both normal and abnormal data. We found that the score is much higher for images including anomalies than for those without. However, an ablation study without this input revealed that the discrepancy map had little effect on the structure of the latent space, especially for high-dimensional latent spaces.

### _Anomaly Classification via Clustering_

To classify an image as normal or anomalous during evaluation, k-means clustering of the latent space of the trained VAE is performed. We used PCA to visualize the distribution of inputs in the latent space. Our experiments have shown that a larger latent space improves not only the reconstruction strength of the VAE, as shown above, but also the clustering in the latent space: a large latent space size of \(512\times 4\times 4\) led to better results than small ones such as \(64\times 4\times 4\) (see Figure 9). A quantitative analysis of the cluster assignments for different \(\beta\) values, as shown in Table I, revealed that smaller \(\beta\) values lead to lower false positive rates. On the right side of Figure 9, it can be seen that for \(\beta=0.01\), the proposed approach can detect most anomalous data, i.e., data points corresponding to the three anomaly datasets. Adding the previously described cluster loss from Equation 2 did not help to reduce the number of false positives. Furthermore, the distance loss from Equation 1 significantly increased the distance between the clusters; however, the resulting latent space structure is unsuitable for separating normal and anomalous data [1].
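A minimal sketch of this clustering-based classification, assuming the VAE's latent means have already been extracted and flattened to vectors (names and the majority-vote cluster assignment are our assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_anomaly_clusters(latents_normal, latents_anomalous):
    """latents_*: arrays of flattened latent means, shape (N, D)."""
    z = np.concatenate([latents_normal, latents_anomalous])
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(z)
    # the cluster containing most anomalous training latents becomes the
    # "anomaly" cluster used at inference time
    anomaly_cluster = np.bincount(km.predict(latents_anomalous)).argmax()
    return km, anomaly_cluster

def is_anomalous(km, anomaly_cluster, latent) -> bool:
    """Classify a single (flattened) latent vector during evaluation."""
    return km.predict(latent.reshape(1, -1))[0] == anomaly_cluster
```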
## V Conclusion

In this work, we have presented an approach to detect image samples containing anomalies based on the latent space of a Variational Autoencoder. The latent space was conditioned to create individual clusters for the normal and anomalous categories, which allows for the detection of anomalies during inference. We could show that our model is even able to detect small anomalies in datasets without a domain shift compared to the training data. However, similar to other anomaly detection approaches [16], our method still produces many false positives. We have performed experiments with different components, such as a distance loss, a cluster loss, and an additional discrepancy map as input, evaluating their impact on the performance of the model. While high false-positive rates are not suitable for production systems, our approach can be utilized in an active learning system, where a human oracle chooses relevant frames from a pre-selection based on the detection results of our method.

## VI Acknowledgment

This work results partly from the project KI Data Tooling (19A20001J), funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK).

\begin{table} \begin{tabular}{l l l l l} \hline \hline \(\beta\) & 1 & 0.1 & 0.01 & 0.001 \\ \hline FPR & 0.4332 & 0.3231 & 0.3557 & 0.4065 \\ TPR & 0.9894 & 0.9681 & 1 & 1 \\ \hline \hline \end{tabular} \end{table} TABLE I: False positive and true positive rates for anomaly classification in the VAE latent space for different \(\beta\) values.

Fig. 8: Image reconstructions for different \(\beta\) values (top) and latent map sizes (bottom) of a VAE with a latent feature map of size \(z\times 4\times 4\). Average FID and MSE values were measured on the Cityscapes test dataset. Adapted from [1].

Fig. 9: Impact of the latent space dimensionality (left) and of \(\beta=0.01\) (right) on clustering the latent space of a VAE. Adapted from [1].
2302.14233
Goal Driven Discovery of Distributional Differences via Language Descriptions
Mining large corpora can generate useful discoveries but is time-consuming for humans. We formulate a new task, D5, that automatically discovers differences between two large corpora in a goal-driven way. The task input is a problem comprising a research goal "$\textit{comparing the side effects of drug A and drug B}$" and a corpus pair (two large collections of patients' self-reported reactions after taking each drug). The output is a language description (discovery) of how these corpora differ (patients taking drug A "$\textit{mention feelings of paranoia}$" more often). We build a D5 system, and to quantitatively measure its performance, we 1) contribute a meta-dataset, OpenD5, aggregating 675 open-ended problems ranging across business, social sciences, humanities, machine learning, and health, and 2) propose a set of unified evaluation metrics: validity, relevance, novelty, and significance. With the dataset and the unified metrics, we confirm that language models can use the goals to propose more relevant, novel, and significant candidate discoveries. Finally, our system produces discoveries previously unknown to the authors on a wide range of applications in OpenD5, including temporal and demographic differences in discussion topics, political stances and stereotypes in speech, insights in commercial reviews, and error patterns in NLP models.
Ruiqi Zhong, Peter Zhang, Steve Li, Jinwoo Ahn, Dan Klein, Jacob Steinhardt
2023-02-28T01:32:32Z
http://arxiv.org/abs/2302.14233v2
# Goal Driven Discovery of Distributional Differences via Language Descriptions

###### Abstract

Mining large corpora can generate useful discoveries but is time-consuming for humans. We formulate a new task, D5, that automatically discovers differences between two large corpora in a goal-driven way. The task input is a problem comprising a research goal ("_comparing the side effects of drug A and drug B_") and a corpus pair (two large collections of patients' self-reported reactions after taking each drug). The output is a language description (discovery) of how these corpora differ (patients taking drug A "_mention feelings of paranoia_" more often). We build a D5 system, and to quantitatively measure its performance, we 1) contribute a meta-dataset, OpenD5, aggregating 675 open-ended problems ranging across business, social sciences, humanities, machine learning, and health, and 2) propose a set of unified evaluation metrics: validity, relevance, novelty, and significance. With the dataset and the unified metrics, we confirm that language models can use the goals to propose more relevant, novel, and significant candidate discoveries. Finally, our system produces discoveries previously unknown to the authors on a wide range of applications in OpenD5, including temporal and demographic differences in discussion topics, political stances and stereotypes in speech, insights in commercial reviews, and error patterns in NLP models.
Compared to the baseline system from Zhong et al. (2022), incorporating the goal produces relevant hypotheses 31% more often (21% more often for novelty and 28% for significance). Besides quantitative evaluation, a repository of open problems like OpenD5 enables the following operations:

**Automate discoveries.** Every time we build a better D5 system, we can apply it to a repository of open problems and send the discoveries to the researchers who posed them. We show this paradigm is plausible by using our system to automatically produce useful discoveries on OpenD5 (Section 5), including insights from commercial reviews, temporal and demographic differences in discussion topics, political stances and stereotypes in speeches, differences in lyric styles, and error patterns in NLP systems. We anticipate future systems to produce more discoveries.

**Train better D5 systems.** Like other machine learning tasks, we can train a system once we have a dataset. We describe a self-supervised learning algorithm that uses a repository of problems (without solutions) to train LMs to propose more valid hypotheses (Section 6). As a proof of concept, we show that it can make LMs better at describing the differences between groups of text samples.

**Analyze the limitations of our evaluation.** Using concrete examples from OpenD5, we show that our current evaluation metrics do not encourage diverse findings, do not always produce causal conclusions, and cannot evaluate discoveries involving heavy expert knowledge (Section 7). These analyses inform areas for future improvement.

To conclude, by collecting OpenD5, we show that D5 can be benchmarked, automated, analyzed, and learned, just like any other machine learning task. Since the authors are not domain experts in most of the open problems we have collected, we hope future research can improve by gathering feedback from domain experts and a more authentic meta-dataset, potentially accelerating discoveries.1

Footnote 1: We share the code at [https://github.com/ruiqi-zhong/D5](https://github.com/ruiqi-zhong/D5), and the dataset at [https://doi.org/10.5281/zenodo.7683302](https://doi.org/10.5281/zenodo.7683302). The license information is in Appendix G.

## 2 OpenD5

We contribute a new meta-dataset, OpenD5, which contains 675 open-ended D5 problems. We describe how the problems are represented, how we collected them, and their open-ended nature.

**Representation.** Each problem in OpenD5 is represented by 1) a corpus pair, Corpus A and Corpus B, with on average 17K samples, and 2) a description of the research goal. In this task, the input is a research goal and a corpus pair, while the outputs are valid and meaningful discoveries in the form of natural language predicates (Figure 1). For example, Corpus A/B can be self-reported reactions after taking drug A/B, and the research goal is to understand the side effects of drug A; one discovery can be that Corpus A has more samples that "_mention feelings of paranoia_". We use 50% of each corpus as the "research" split and 50% as the "validation" split. The system can only access the research split, while the validation split is reserved for the evaluators to validate the discovery. A validation split prevents overfitting the discoveries to the given corpora and is analogous to the train-test split in machine learning.

Figure 1: Each problem in OpenD5 contains 1) a corpus pair, which has \(\sim\)17K samples on average and is partitioned into two halves called “research split” and “validation split”, and 2) a natural language description of the research goal, which also contains information about how the corpus pair was collected. A D5 system takes the goal and the research split as inputs and generates valid and meaningful discoveries in natural language as outputs. The underlined texts in the research goal vary across problems, while the rest are templates.
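To make this representation concrete, a problem and its research/validation split can be sketched as follows (a minimal illustration; the field and function names are ours, not the released dataset's schema):

```python
from dataclasses import dataclass
import random

@dataclass
class D5Problem:
    goal: str             # natural language research goal
    corpus_a: list[str]   # text samples for Corpus A
    corpus_b: list[str]   # text samples for Corpus B

def research_validation_split(corpus: list[str], seed: int = 0):
    """50/50 split; only the research half is ever shown to the D5 system,
    while the validation half is reserved for evaluating its discoveries."""
    samples = corpus[:]
    random.Random(seed).shuffle(samples)
    half = len(samples) // 2
    return samples[:half], samples[half:]  # (research, validation)
```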
**Collection Process.** We collected 675 problems in total, ranging across business, social sciences, humanities, health, and machine learning; see Figure 2 for a few examples. To build OpenD5, two of the authors performed an extensive literature review of problems that could potentially benefit from our system, e.g., reading survey papers (Nguyen et al., 2020) and courses2 on computational social sciences, and skimming through the ACL proceedings from the past decade3 and datasets from Kaggle4 that have an NLP tag; we then brainstormed the research goals, scraped/generated the corresponding corpora, and post-processed them over nine months, resulting in 675 problems. Appendix G includes the complete list of citations for the datasets we aggregated.

Footnote 2: [http://www1.cs.columbia.edu/~smara/teaching/S18/](http://www1.cs.columbia.edu/~smara/teaching/S18/)

Footnote 3: [https://aclanthology.org](https://aclanthology.org)

Footnote 4: [https://www.kaggle.com](https://www.kaggle.com)

**Open-Endedness.** Since we hope to build systems that can tackle challenging research problems, we did not avoid cases where we do not know the ground truth answer. On the contrary, we favored problems for which we do not have an answer. This means that, for some problems, it might be infeasible to produce any meaningful discovery. This is different from standard benchmarking practices, where humans can provide a reference solution to evaluate an AI system. However, even though we do not know the ground truth, once a system produces a discovery, we can still evaluate it. We present our evaluation metrics in the next section.

## 3 Evaluation

For the research goal of comparing the side effects of drug A and drug B, how do we evaluate a system-generated discovery that Corpus A "_mentions feelings of paranoia_" more often? First, it needs to be **valid**, such that indeed more samples from Corpus A satisfy this predicate, which can be evaluated (approximately) objectively. Second, it needs to be **meaningful** to the research goal of understanding side effects, which depends on the researcher's subjective judgement. We define validity and meaningfulness below.

### Validity

Similar to Zhong et al. (2022), we require an output discovery \(h\) to be a truth predicate on a text sample. For example, if \(h\) = "_mentions about family and children_", then \(h\) is true on the string \(x_{1}\) = "_My daughter loves me._" and false on the string \(x_{2}\) = "_I'm going to school_". Define \(T(h,x)\in[0,1]\) as "the certainty that \(h\) is true on \(x\)", e.g., \(T(h,x_{1})\approx 1\) and \(T(h,x_{2})\approx 0\). We approximate \(T(h,x)\) by asking three Turkers how certain they are and averaging their responses (see Appendix A for more details). Let \(\mathcal{D}_{A}^{\text{val}}\) and \(\mathcal{D}_{B}^{\text{val}}\) denote the validation sets for Corpus A and B, respectively.

Figure 2: OpenD5 contains 675 problems, and we show some examples here by row. Appendix G includes the citations.
Then we define the validity \(V\) as

\[V(h):=\mathbb{E}_{x\sim\mathcal{D}^{\text{val}}_{A}}[T(h,x)]-\mathbb{E}_{x\sim\mathcal{D}^{\text{val}}_{B}}[T(h,x)]. \tag{1}\]

In practice, we do not have the budget to compute \(V(h)\), since it requires asking humans to read the entire validation split just to evaluate one single discovery \(h\); therefore, we approximate this quantity by choosing a subset of samples from Corpus \(A\) and Corpus \(B\) to estimate \(V\). We compute a \(p\)-value for the null hypothesis that \(V\leq 0\) by conducting a t-test that compares the mean of \(T(h,x)\) on the subset from Corpus \(A\) to that on the subset from Corpus \(B\). A discovery should ideally have a large \(V\) value and a small \(p\)-value.

### Meaningfulness

Not every valid discovery is meaningful. For example, if the goal is to understand the topical differences between news from 2008 (Corpus A) and news from 2007 (Corpus B), the discovery that Corpus A "_contains news from 2008_" is completely valid by definition but meaningless, since it provides only trivial information and is irrelevant to the goal of understanding topical differences. McGarry (2005) surveyed a list of desirable properties for a discovery, and we condensed them into three submetrics that rate how meaningful a discovery is based on the research goal: 1) relevance, 2) novelty, and 3) significance. We evaluate these independently of validity and assume that the discovery is already valid. For example, the discovery that "something can travel faster than light" is meaningful if true, even though it is highly implausible. We rate each submetric with 0, 1, or 2, where higher is better. The evaluation instructions are below.

**Relevance.** How relevant the discovery is to the goal. For example, suppose we were a student comparing essays rated as convincing vs. not convincing to figure out what writing style is convincing. Then:

* The discovery "_write in first person_" is directly related to the writing style, so we rate it 2.
* The discovery "_use the word "I"_" is not exactly a writing style, but can still inform the relevant underlying principle of "_write in first person_", so we rate it 1.
* The discovery "_argue for abortion_" does not tell us about the underlying writing style, so we rate it 0.

**Novelty.** The difficulty of generating the discovery, e.g., can we think of the discovery in 5 minutes with the goal but without looking at the corpora? For example, suppose we were an airline manager trying to find improvements to the flight experience, and we were comparing negative reviews vs. positive reviews. Then:

* The discovery "_contain more negative language_" is almost certain for negative reviews, so we rate it 0.
* The discovery "_complain about the crew members_" is not entirely novel, but is not tautologically true and hence requires confirmation, so we rate it 1.
* The discovery "_mention a language barrier with the crew members_" is specific and hard to think of without looking at the data, so we rate it 2.

Note that our evaluation is "blinded to the samples": we still consider a discovery novel as long as it is hard to think of before looking at the corpora, even if it might be easy to think of after looking at the corpora. For example, the physical law that \(F=ma\) is easy to observe if we have collected and plotted the data on acceleration, mass, and force; however, it might be difficult to think of before we see any such data, so we consider it novel.
**Significance.** Given the research goal, how beneficial is it to learn the discovery for the first time? For example, suppose we were an Amazon retailer trying to figure out what customers like and dislike about our product based on negative and positive reviews. Then:

* The discovery "_accuses the team of pushing out a bad product_" is not significant, since it cannot direct the retailer to improve the product, so we rate it 0.
* The discovery "_asks for a more durable product_" gives some hints about how to improve the product, but is not sufficiently helpful on its own, so we rate it 1.
* The discovery "_says the wrench is missing_" can lead to concrete actions for improvement, so we rate it 2.

To conclude, an ideal discovery would have a high \(V\) value with a small \(p\)-value and achieve ratings of 2 across all of relevance, novelty, and significance. The latter three submetrics are inherently subjective; however, the next section shows that we can still use them to compare hypothesis proposers and draw robust conclusions.

## 4 Method

We describe our D5 system, which maps a corpus pair and a research goal to a set of natural language predicates. Our system is inspired by a two-stage model of how humans discover patterns in data: creatively brainstorming hypotheses and then rigorously validating them on the data (Ludwig and Mullainathan, 2022). Analogously, we first 1) propose hypotheses conditioned on the research goal and a subset of samples from the corpus pair (Section 4.1), and then 2) validate whether each hypothesis is more often true on one corpus than on the other, outputting the valid ones as the final discoveries (Section 4.2); see Appendix Figure 5 for a more illustrative overview. Our system closely mirrors that of Zhong et al. (2022), except that we leverage the research goal to propose more meaningful hypotheses. Using OpenD5, we quantitatively show in Section 4.3 that GPT-3 text-davinci-003 (abbreviated as GPT-3) can use the research goal to propose more meaningful hypotheses. As indicated in Section 2, our system only accesses the research split of each corpus, which we denote as \(\mathcal{D}^{\text{res}}_{A}\)/\(\mathcal{D}^{\text{res}}_{B}\).

### Hypothesis Proposer

We prompt GPT-3 (Ouyang et al., 2022) to propose hypotheses. We construct the prompt by concatenating a few random samples from \(\mathcal{D}^{\text{res}}_{A}\) and \(\mathcal{D}^{\text{res}}_{B}\), the research goal, and an instruction to output a list of hypotheses. Different from Zhong et al. (2022), we include the research goal to elicit meaningful hypotheses. We continue sampling hypotheses from GPT-3 until we obtain a set of 60 distinct hypotheses, which we call \(H_{\text{init}}\). See Figure 3 left for an example prompt, and Appendix C for additional details.

### Hypothesis Validator

Many hypotheses in \(H_{\text{init}}\) are invalid: they are not more often true on \(\mathcal{D}_{A}\) than on \(\mathcal{D}_{B}\) (i.e., \(V(h)<0\)). To automatically filter them out, we use a language model \(T^{\prime}\) to simulate the Turkers' judgement \(T\) and hence approximate the validity score \(V\) of a hypothesis \(h\) with \(V^{\prime}(h)\), where

\[V^{\prime}(h):=\mathbb{E}_{x\sim\mathcal{D}^{\text{res}}_{A}}[T^{\prime}(h,x)]-\mathbb{E}_{x\sim\mathcal{D}^{\text{res}}_{B}}[T^{\prime}(h,x)]. \tag{2}\]

To compute \(T^{\prime}\), we ask FLAN-T5 (Chung et al., 2022) whether \(x\) satisfies \(h\) with the prompt shown in Figure 3 right; a sketch of this filtering step is given below.
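Putting Equation (2) and the significance filter together, the validator stage can be sketched as follows; here `t_prime` stands in for the (fine-tuned) FLAN-T5 scorer, and the names are ours:

```python
import numpy as np
from scipy.stats import ttest_ind

def validate(hypotheses, res_a, res_b, t_prime, alpha=1e-3):
    """Keep hypotheses that are true on Corpus A significantly more often than on B.

    t_prime(h, x) -> [0, 1]: an LM-based approximation of the Turker judgment T,
    e.g., the probability the validator model assigns to "Yes" under the prompt
    shown in Figure 3 right.
    """
    discoveries = []
    for h in hypotheses:
        scores_a = np.array([t_prime(h, x) for x in res_a])
        scores_b = np.array([t_prime(h, x) for x in res_b])
        v_prime = scores_a.mean() - scores_b.mean()          # Equation (2)
        _, p = ttest_ind(scores_a, scores_b, alternative="greater")
        if v_prime > 0 and p < alpha:                        # rule out p > 0.001
            discoveries.append((h, v_prime, p))
    return discoveries
```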
To better simulate the Turkers' judgment, we collected additional Turker annotations to fine-tune FLAN-T5 (see Appendix D for details). We then perform a t-test comparing the mean value of \(T^{\prime}(h,x)\) on the research split of Corpus \(A\) to the mean value on Corpus \(B\), rule out the hypotheses with a \(p\)-value greater than 0.001, and output the remaining ones as a set of discoveries. Finally, we obtain additional discoveries by repeating the same process but asking our system to propose and validate hypotheses about Corpus \(B\) rather than Corpus \(A\).

### Goal Leads to More Meaningful Hypotheses

Compared to Zhong et al. (2022), we added the research goal to our prompt when generating hypotheses. Does this improve the quality of the proposed hypotheses? To investigate this, we sampled 100 problems from OpenD5 with distinct research goals and randomly sampled 2 hypotheses from GPT-3 with and without using the research goal (see Figure 3), resulting in 400 hypotheses to evaluate. Three authors then rated their meaningfulness based on the three metrics defined in Section 3, while being blinded to which hypotheses were generated with the research goal.

The results are shown in Table 1. We found that, when prompted with the research goal, GPT-3 on average proposes more relevant, novel, and significant hypotheses; additionally, it proposes hypotheses with ratings higher than 1 (i.e., rated 2) 31%/21%/28% more often in terms of relevance/novelty/significance. Since this is a subjective evaluation, the Kappa inter-annotator agreement is only moderate, ranging from 0.37 to 0.56. However, we can still robustly conclude that the model can propose more meaningful hypotheses when conditioned on the goal: we calculate the \(p\)-values for the null hypothesis that with-goal and no-goal have equal performance, and we find the \(p\)-values to be highly significant and robust across evaluators for all three submetrics. We provide additional analyses in Appendix B.5

Footnote 5: The experiments in this paper were run at different iterations of the data collection process; since they require intense manual effort and no automatic metric is available, it is expensive to re-run them on our final polished version. The differences between iterations are mainly due to 1) noticing data sharing constraints due to licenses, 2) increasing diversity by including new problems or removing similar problems, or 3) improving the research goal description. For reproducibility, we include the set of research goals for each experiment in our github repo.

Figure 3: All underlined content in the prompt differs across problems, while the other content in the prompt is templated. **Left**: proposer prompt. The generated hypotheses are in blue. All content with colored background is excluded from the figure for brevity. For the baseline of not using the research goal, we removed the “research goal” block from the prompt. **Right**: the validator prompt.

## 5 Application

Every time we build a better D5 system in the future, we may use it to automatically generate useful discoveries on an existing aggregation of open problems like OpenD5 and send the discoveries to the researchers who posed the problems. In this section, we use our current system to automatically generate discoveries on OpenD5.

### Automatically Generating Discoveries on OpenD5

We ran our system on all problems in OpenD5, obtaining in total 3296 discoveries across 402 problems. However, we do not have the budget to validate every finding, since estimating \(V\) is expensive (Section 3.1).
Therefore, from the remaining 3296 discoveries, we manually selected 21 discoveries that 1) the authors think are meaningful enough, 2) are representative of potential use cases, 3) do not require expert knowledge for Turkers to judge, and 4) are likely to achieve a small \(p\)-value with fewer than 200 samples from \(\mathcal{D}^{\text{val}}_{A}\) and \(\mathcal{D}^{\text{val}}_{B}\). We then estimated their validity based on the procedure described in Section 3.1, using fewer than 200 samples from the validation split, and calculated the \(p\)-values.6 Since we are testing multiple discoveries and each of them can be statistically significant merely due to chance, we keep the 15 discoveries whose \(V\) is significantly non-zero with a \(p\)-value below 7%, a threshold determined by the Benjamini-Hochberg procedure with a false discovery rate of 10%. In other words, fewer than 10% of the discoveries below are false discoveries in expectation.

Footnote 6: We determined the number of samples s.t. \(V^{\prime}\) can achieve a \(p\)-value of \(0.005\). Estimating \(V\) for these discoveries costs \(\sim\)$1500.

### Example Discoveries on OpenD5

For each example discovery, we also report the estimated \(V\) based on the Turkers' ratings and the AUCROC score of using the discovery to discriminate samples from \(\mathcal{D}^{\text{val}}_{A}\) and \(\mathcal{D}^{\text{val}}_{B}\). All italicized quotes in this section are literal copies of what our system generated.

**Comparing lyrics from different eras.** Compared to lyrics from the 70s, those from the 80s more often "_references violence or aggression_" (\(V\approx 0.06\), AUCROC \(\approx\) 0.58).

**Analyzing gender differences in self-reported happy moments.** Compared to self-reported happy moments written by males, those written by females "_mentions children or family_" more often (\(V\approx 0.08\), AUCROC \(\approx\) 0.56).

**Analyzing errors in NLP systems.** We considered the task of perspectrum classification (Chen et al., 2019), which has the following instruction: "given a perspective and a claim, classify whether the given perspective supports or undermines the claim. If the perspective could possibly convince someone with different view, it is supporting, otherwise it is undermining." We considered two few-shot learning systems: GPT-3 Instruct Curie (Ouyang et al., 2022) and Tk-Instruct-11B (Wang et al., 2022). We focused on the perspectives where the ground truth label is undermining, and compared the following two corpora: Corpus A - the set of perspectives where Curie correctly classifies the input as undermining but Tk-11B is wrong, and Corpus B - the set where Tk-11B is correct while Curie is wrong.

\begin{table} \begin{tabular}{l c c|c c|c c} \hline \hline & with-goal & no-goal & kappa & spearmanr & \(p\) of avg & worst \(p\) of ind \\ \hline Relevance & 1.68 & 1.20 & 0.56 & 0.71 & 1 \(\times\) 10\({}^{-10}\) & 1 \(\times\) 10\({}^{-8}\) \\ Novelty & 1.24 & 0.97 & 0.37 & 0.50 & 5 \(\times\) 10\({}^{-6}\) & 4 \(\times\) 10\({}^{-2}\) \\ Significance & 1.56 & 1.05 & 0.46 & 0.64 & 2 \(\times\) 10\({}^{-10}\) & 2 \(\times\) 10\({}^{-7}\) \\ \hline \hline \end{tabular} \end{table} Table 1: **Left.** For each metric, we report the average rating on hypotheses generated with or without using the research goal, and find that the former performs better. **Middle.** The inter-annotator agreement rate averaged across pairs of author evaluators, measured by Kappa and the Spearman rank coefficient; we find substantial correlations between evaluators across all these subjective metrics, with relevance \(>\) significance \(>\) novelty. **Right.** We compute the \(p\)-values for the null hypothesis that “with-goal and no-goal result in the same performance”. The “\(p\) of avg” column reports the \(p\)-values after we average the ratings from all evaluators, while the “worst \(p\) of ind” column takes the max of all \(p\)-values based on ratings of individual evaluators. Overall, the conclusions are statistically significant and can be robustly reproduced across individual evaluators.
We found that Corpus B more often "_Uses language that is positive or uplifting_" (\(V\approx 0.12\), AUCROC \(\approx\) 0.67). One possible explanation is that Curie made many mistakes by misinterpreting undermining as a label for negative sentiment rather than a logical relation between the claim and the perspective.

For another example, we compared two natural language inference models, one trained on MNLI and the other trained on SNLI, and tested them on MNLI. We compared two corpora: Corpus A - the set of inputs where the model trained with in-distribution data (MNLI) is wrong but the model trained with out-of-distribution data is correct, and Corpus B - vice versa. We found that the latter more often "_has an informal tone, such as slang or colloquial speech_" (\(V\approx 0.08\), AUCROC \(\approx\) 0.62). One possible explanation is that MNLI contains more diverse genres and hence more informal speech, causing the former model to perform better on these examples.

**Understanding political stances and stereotypes in speeches.** When comparing presidential speeches on immigrants from Obama to those from Trump, the former "_argues for a path forward to promote the fair and just treatment of immigrants_" (\(V\approx 0.16\), AUCROC \(\approx\) 0.73), while the latter more frequently "_refers to illegal immigrants as criminals_" (\(V\approx 0.09\), AUCROC \(\approx\) 0.62).

**Analyzing airline customer reviews.** We compared the concerns in reviews of the airline Air Canada vs. its subsidiary Air Canada Rouge, which is considered the low-price wing of Air Canada. The latter more often "_mentions lack of legroom_" (\(V\approx 0.16\), AUCROC \(\approx\) 0.68).

**Identifying temporal differences in news headlines.** We compared headlines published by ABC News across different years. Compared to the year 2014, headlines from the year 2010 "_mention disasters and crimes, such as plane accidents and assaults_" more often (\(V\approx 0.03\), AUCROC \(\approx\) 0.53). Compared to the year 2019, the year 2020 more often "_discusses coronavirus-related topics_" (\(V\approx 0.21\), AUCROC \(\approx\) 0.65).

**Describing distribution shift.** We compared the premises from the SNLI dataset and the MNLI dataset, and the former more often "_involves physical activity, such as walking, playing, climbing, or biking_" (\(V\approx 0.13\), AUCROC \(\approx\) 0.64). One possible explanation is that SNLI is based on image captions.

**Comparing discussion topics between bots and human users.** We compared the topical differences between tweets identified as written by bots vs. human users on Twitter, and our system finds that the bots more often "_contains keywords related to business, finance or trading_" (\(V\approx 0.08\), AUCROC \(\approx\) 0.61). One possible explanation is that bots are frequently used to generate finance-related scams.
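For reference, the \(V\) estimates and AUCROC scores quoted above can be computed from per-sample Turker certainties \(T(h,x)\) roughly as follows (a sketch; variable names are ours):

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.metrics import roc_auc_score

def report(ratings_a, ratings_b):
    """ratings_*: averaged Turker certainties T(h, x) on validation samples
    from Corpus A and Corpus B, respectively."""
    ratings_a, ratings_b = np.array(ratings_a), np.array(ratings_b)
    v_hat = ratings_a.mean() - ratings_b.mean()            # estimate of V (Eq. 1)
    _, p = ttest_ind(ratings_a, ratings_b, alternative="greater")
    # AUCROC of using the rating to tell Corpus A samples from Corpus B samples
    labels = np.r_[np.ones_like(ratings_a), np.zeros_like(ratings_b)]
    auc = roc_auc_score(labels, np.r_[ratings_a, ratings_b])
    return v_hat, p, auc
```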
**Describing text clusters.** We present two example descriptions for text clusters. One from Wikipedia: "_references pop culture, such as movies, books, and television shows_" (\(V\approx 0.21\), AUCROC \(\approx\) 0.73); one from PoetryFoundation.com: "_uses vivid imagery and metaphors to convey a feeling_" (\(V\approx 0.09\), AUCROC \(\approx\) 0.65).

We hope future works can collect more open problems, allowing D5 systems to produce more impactful discoveries.

## 6 Self-Supervised Learning

Since the problems in OpenD5 are open-ended, our system could potentially produce discoveries with higher validity scores. Therefore, we design a self-supervised learning algorithm to improve a language model's ability to propose more valid hypotheses, using the principle that **it is easier to validate a discovery than to generate one**.

**Algorithm.** Suppose we are given a set of problems for training and an initial language model \(m_{\text{init}}\). Our goal is to automatically generate a set of _prompt-completion_ pairs to fine-tune \(m_{\text{init}}\) so that it can propose hypotheses that are more valid. To generate a _prompt_, we randomly sample a problem and create a proposer prompt following the procedure in Section 4.1. To generate the desired _completion_ given a prompt, we sample multiple hypotheses from \(m_{\text{init}}\), approximate their \(V^{\prime}\) scores on the samples in the proposer prompt with the same language model \(m_{\text{init}}\) (Section 4.2), and select the highest-scoring hypothesis. Finally, we use the prompt-completion pairs to fine-tune \(m_{\text{init}}\); a sketch of this data-generation loop is given after the next paragraph.

However, since we cannot fine-tune instruction-tuned GPT-3, we can only experiment with Flan-T5 (Chung et al., 2022), an open-sourced instruction-tuned model that might only work well for easier "mini-problems". Therefore, as a proof of concept, we test our algorithm on the task of describing groups of four samples, where each group comes from a text cluster. As an overly simplified example, we give the LM the prompt "_Group A: 1. dog 2. cat 3. pig 4. cow. Group B: 1. phone 2. laptop 3. desk 4. cup_" as input, and the LM can output "_mentions an animal_" as a hypothesis of how Group A differs from Group B.

**Data.** We created 33 corpora by merging all corpora in OpenD5 from the same domain, and automatically generated 4503 text clusters using RoBERTa embeddings (Aharoni and Goldberg, 2020). We focused on clustering because it can automatically generate a large number of semantically coherent groups of samples. To create a pair of four samples, we randomly sampled a corpus, sampled two clusters within that corpus, and took four random samples from each cluster. To test cross-corpus generalization, we reserved 28 of the 33 corpora to create mini-problems for evaluation, using the rest for training. We used Flan-T5 (Chung et al., 2022) as \(m_{\text{init}}\) and sampled hypotheses with a temperature of 0.8. For training, we sampled 30,000 mini-problems and selected the best of eight hypotheses generated by \(m_{\text{init}}\) as the target completion; for evaluation, we sampled 200 mini-problems to calculate \(V\) with Turkers and 1500 mini-problems to calculate \(V^{\prime}\) automatically.
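A sketch of the best-of-\(n\) data-generation loop described above; `make_proposer_prompt`, `v_prime`, and `m_init.sample` are assumed helpers corresponding to Sections 4.1 and 4.2:

```python
def build_finetuning_pairs(mini_problems, m_init, n_candidates=8):
    """Self-supervised data generation: for each mini-problem, sample several
    hypotheses from the initial model and keep the one its own validity score
    V' rates highest as the fine-tuning target."""
    pairs = []
    for problem in mini_problems:
        prompt = make_proposer_prompt(problem)   # concatenates Group A/B samples
        candidates = [m_init.sample(prompt, temperature=0.8)
                      for _ in range(n_candidates)]
        # score each candidate with the same model acting as the validator (Eq. 2)
        best = max(candidates,
                   key=lambda h: v_prime(h, problem.group_a, problem.group_b))
        pairs.append((prompt, best))
    return pairs  # used to fine-tune m_init
```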
**Results.** We evaluated randomly sampled hypotheses from the language model before and after self-supervised training. The automated "self-evaluation" validity score \(V^{\prime}\) improves substantially from 0.22 to 0.37, and the "true" validity score \(V\) according to Turker evaluation improves from 0.07 to 0.10, with a \(p\)-value of 0.02. This result provides preliminary evidence that our algorithm (or similar variants) could be applied to a large set of problems to improve the validity of the hypotheses; we expect future validators to simulate human judgments better, hence shrinking the gap between the measured improvements in \(V\) and \(V^{\prime}\).

## 7 Analysis

We use OpenD5 to analyze the limitations of our metrics, and we discuss more limitations and future work in Appendix E.

**Hypotheses about the corpora might not be appropriate predicates on individual samples.** When comparing highly rated definitions from UrbanDictionary.com to others, our system generates the hypothesis that the former "_is more likely to include slang or colloquial terms_". This is a statement about a collection of text samples, but the validator requires the hypothesis \(h\) to be a predicate on individual text samples \(x\). To address this problem, we use GPT-3 to automatically detect and remove comparatives from the hypotheses, e.g., rewriting the hypothesis above to "_includes slang or colloquial terms_". However, some versions of this problem were harder to remove. For example, when comparing reviews from American Airlines (AA) flights and Delta Airlines flights to understand which aspects of each airline are doing better/worse, the proposer generated the hypothesis "_mentions American Airlines' staff being unfriendly and unhelpful_". Interpreted literally, this hypothesis can only be true on the corpus of AA reviews, since it presupposes the review to be about AA. The correct predicate for use on individual samples should instead be "_mentions staff being unfriendly and unhelpful_" (without the words "_American Airlines_"). Therefore, future systems should explicitly convert corpus-level statements to their corresponding correct predicates, and the metrics should evaluate whether the validity of the predicates implies the corpus-level statements.
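The comparative-removal step can be as simple as a single rewriting call; the prompt wording and the `call_llm` wrapper below are illustrative stand-ins rather than the exact prompt used with GPT-3.

```python
REWRITE_TEMPLATE = """A hypothesis must be a predicate checkable on a single
text sample. Rewrite the hypothesis below to remove comparatives such as
"is more likely to", keeping the rest of its meaning unchanged.

Hypothesis: {hypothesis}
Rewritten:"""

def strip_comparatives(hypothesis, call_llm):
    """E.g. 'is more likely to include slang' -> 'includes slang'.
    `call_llm(prompt) -> str` is a hypothetical LLM wrapper."""
    return call_llm(REWRITE_TEMPLATE.format(hypothesis=hypothesis)).strip()
```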
**Our metrics do not evaluate diversity.** There are often multiple valid and meaningful discoveries, and our system should ideally generate all of them. For example, when comparing low-rating and high-rating reviews to understand what stands out to customers, both "_mentions the hidden fees and poor customer service at the airport_" and "_mentions the airline charging extra for carry-on items_" could be valid discoveries. However, our current system sometimes repeats a discovery using similar paraphrases, e.g., "_mentions the rude and unprofessional attitude of the staff_" and "_mentions the staff being rude and unhelpful_". Future evaluation metrics can take diversity into account.

**Interpreting discoveries requires domain experts.** We used Turkers' judgment when computing \(T(h,x)\) to judge the validity of a discovery. However, many discoveries require expert knowledge to interpret properly. For example, it requires medical training to reliably judge whether a self-reported drug-use experience satisfies "_mentions psychedelics, such as LSD and shrooms_".

**Correlation \(\neq\) causation.** Our metrics currently do not evaluate whether the discovery is causally related to how the corpus pair was generated. For example, when comparing self-reported happy moments from females and males, even if the former corpus has more samples that "_mention children and family_", it does not necessarily imply that family plays a more important role in inter-personal relations for females; an alternative hypothesis is that females might mention people in general more often than males, hence leading to the observation that they mention family more often. Spurious correlations could also sneak into our validity evaluation: for example, if the Turkers implicitly associate female activities with family (Greenwald and Banaji, 1995), we might falsely make this discovery due to evaluator biases. Future metrics should also consider plausible alternative hypotheses to evaluate causality and control for potential biases from the human evaluators. We should also treat discoveries from D5 with caution to prevent automating and amplifying societal biases.

## 8 Related Work

**Inductive Reasoning with NLP Models.** Recent works show that language models are capable of inductive reasoning under restricted settings, discovering patterns from a set of text data points and describing them with language (Honovich et al., 2022). Zhou et al. (2022) and Ye et al. (2022) use this capability to improve zero/few-shot accuracy by inferring the most likely instruction from input-output example(s) of the target task. Zhong et al. (2022) and Singh et al. (2022) use this capability to discover patterns in datasets; we improve on them by building a meta-dataset of open-ended problems and requiring the discoveries to be meaningful. ML models can also perform inductive reasoning in other modalities, such as vision. For example, Hernandez et al. (2021) describe visual features that activate a neuron; Zhu et al. (2022) describe distribution shifts between the training distribution and the test distribution for images; and Eyuboglu et al. (2022) describe errors made by vision models. We hope future models can perform inductive reasoning in additional modalities, such as sound (Aghajanyan et al., 2023) or physical senses (Thomason et al., 2016).

**Automated Discovery.** It is not new to automatically discover patterns by learning from empirical data. To list a few classical methods: linear regression analyzes the effect of each real-valued feature by interpreting the learned weights (Draper and Smith, 1998); n-gram models can extract discriminative phrases, thus yielding insights about corpus-level differences (Manning and Schutze, 1999); small decision trees can extract interpretable if-then statements (Letham et al., 2015); and an entity embedding model learned on existing relations between entities can predict unseen relations (Socher et al., 2013). In comparison, D5 produces discoveries in the form of natural language predicates, which are flexible and interpretable; additionally, it is more directed at the research goal, while machine learning classifiers like linear regression will pick up any discriminative features.

**Epistemology.** While the process of validating a hypothesis is well-formulated, it is much less well understood how to automatically generate hypotheses and decide what discoveries are meaningful (Shapere, 1964; Heckman and Singer, 2017). Related works in this area have been sparse; among them, McGarry (2005) sketches high-level principles for evaluating knowledge discoveries, and Ludwig and Mullainathan (2022) propose to crowd-source hypotheses from MTurk workers. We concur with the perspective of Polanyi et al.
(2000) that the meaningfulness of a hypothesis cannot be explicitly verbalized with simple logic but depends on implicit community norms; therefore, the process of proposing hypotheses should be learned from empirical data (e.g., pre-training) rather than deduced from an a priori analysis of concepts (Quine, 1969). We hope contributions from other domains can provide more empirical data on what discoveries are meaningful, hence guiding our system to produce more important discoveries.

## 9 Conclusion

We formalized the task of D5, which discovers corpus-level differences via language descriptions in a goal-driven way. We defined its evaluation metrics - validity, relevance, novelty, and significance - and collected a meta-dataset, OpenD5, to evaluate D5 systems. We presented 10 use cases of D5, proposed a self-supervised learning algorithm, and analyzed the limitations of the current evaluation metrics. To conclude, like any other traditional machine learning task, D5 can be automated, benchmarked, learned, and analyzed. We hope future research can improve on this by gathering feedback from domain experts and building a more authentic meta-dataset, potentially accelerating future discoveries.

## Acknowledgement

We thank Xiaochuang Han and Sam Bowman for their early discussions on this project. We thank Cathy Chen, Erik Jones, Jessy Lin, Alex Pan, Chenglei Si, Xi Ye, and Tianyi Zhang for their helpful feedback on the paper draft. We thank OpenAI and Anthropic for providing model access.

## Individual Contributions

Ruiqi Zhong proposed the D5 task, drew the conceptual connection to naturalistic epistemology, and proposed to treat it as a standardized machine learning task by collecting a dataset; co-designed the evaluation metrics; collected most of the machine learning problems in OpenD5; conducted all the experiments; drafted the entire paper.

Peter Zhang collected all the non-machine learning problems and the text clustering problems in OpenD5; co-designed the evaluation metrics, organized the hypothesis evaluation, and contributed directions for future work; left feedback on the paper draft.

Steve Li led the development of the annotation interface described in Appendix F; provided feedback on the evaluation metrics and participated in the evaluation; left feedback on the paper draft.

Jinwoo Ahn provided feedback on the annotation interface; provided feedback on the evaluation metrics and participated in the evaluation; left feedback on the paper draft.

Dan Klein left feedback on the title, abstract, and intro.

Jacob Steinhardt provided guidance throughout the project and left detailed feedback on the entire paper.
2310.20387
Overview of LiLAS 2020 -- Living Labs for Academic Search
Academic Search is a timeless challenge that the field of Information Retrieval has been dealing with for many years. Even today, the search for academic material is a broad field of research that recently started working on problems like the COVID-19 pandemic. However, test collections and specialized data sets like CORD-19 only allow for system-oriented experiments, while the evaluation of algorithms in real-world environments is only available to researchers from industry. In LiLAS, we open up two academic search platforms to allow participating researchers to evaluate their systems in a Docker-based research environment. This overview paper describes the motivation, the infrastructure, and the two systems LIVIVO and GESIS Search that are part of this CLEF lab.
Philipp Schaer, Johann Schaible, Leyla Jael Garcia Castro
2023-10-31T11:57:54Z
http://arxiv.org/abs/2310.20387v1
# Overview of LiLAS 2020 - Living Labs for Academic Search

###### Abstract

Academic Search is a timeless challenge that the field of Information Retrieval has been dealing with for many years. Even today, the search for academic material is a broad field of research that recently started working on problems like the COVID-19 pandemic. However, test collections and specialized data sets like CORD-19 only allow for system-oriented experiments, while the evaluation of algorithms in real-world environments is only available to researchers from industry. In LiLAS, we open up two academic search platforms to allow participating researchers to evaluate their systems in a Docker-based research environment. This overview paper describes the motivation, the infrastructure, and the two systems LIVIVO and GESIS Search that are part of this CLEF lab.

Keywords: Evaluation, living labs, academic search, reproducibility

## 1 Introduction

The field of Information Retrieval (IR) originated in the domain of scientific/academic information and documentation. Back in the 1960s, the original Cranfield studies dealt with the indexation and the retrieval of scientific documents. Cleverdon et al. established their whole evaluation methodology around the use case of scientific and academic retrieval requirements. Today, the search for relevant scientific documents is still an open endeavor, and although retrieval systems show substantial performance gains, it is not a solved problem yet. The current COVID-19 pandemic, for example, showed once again that even old problems like the search for scientific documents are not solved. Therefore, current efforts like the CORD-19 collection4 and the TREC-COVID retrieval campaign5 attract much attention and are in the spotlight of the IR community, even though at their core, they deal with the same - timeless - problem set as Cleverdon more than 50 years ago.

Footnote 4: [https://www.semanticscholar.org/cord19](https://www.semanticscholar.org/cord19)

Footnote 5: [https://ir.nist.gov/covidSubmit/index.html](https://ir.nist.gov/covidSubmit/index.html)

Besides these timeless retrieval issues, the need for innovation in academic search is shown by the stagnating system performance in controlled evaluation campaigns, as demonstrated in TREC and CLEF meta-evaluation studies [15, 1]. User studies in real systems of scientific information and digital libraries show similar conditions. Although massive data collections of scientific documents are available in platforms like arXiv, PubMed, or other digital libraries, central user needs and requirements remain unsatisfied. The central mission is to find both relevant and high-quality documents - if possible, directly on the first result page. Besides this ad-hoc retrieval problem, other tasks such as the recommendation of relevant cross-modality content, including research data sets, or specialized tasks like expert finding are not even considered here. On top of that, relevance in academic search is multi-layered [5] and a topic that drives research communities like the Bibliometrics-enhanced Information Retrieval (BIR) workshops [10].

The Living Labs for Academic Search (LiLAS) workshop fosters the discussion, research, and evaluation of academic search systems, applying the concept of living labs to the domain of academic search [13]. The goal is to expand the knowledge on improving the search for academic resources like literature and research data, and on the interlinking between these resources.
To support this goal, LiLAS introduces an online evaluation infrastructure that directly connects to real-world academic search systems [12]. LiLAS cooperates with two academic search system providers from the life sciences and the social sciences. Both providers support LiLAS by allowing participants of the lab to deploy experimental search components in their production online systems. We will have access to the click logs of these systems and use them to run A/B tests or more complex interleaving experiments. Our living lab platform STELLA makes this possible by bringing platform operators and researchers together and providing a methodological and technical framework for online experiments [3].

## 2 Related Work from CLEF and TREC

CLEF and TREC hosted the Living Labs for Information Retrieval (LL4IR) and Open Search (TREC-OS) initiatives that are the predecessors of LiLAS. Both initiatives shared a common evaluation infrastructure that was released as an API1. This API allows academic researchers to access the search systems of other platforms. Participants of LL4IR and TREC-OS had access to the search systems' head queries and document sets. They had to precompute ranked result lists for a given set of candidate documents for the given head queries. Therefore, it was a typical ad-hoc search task. Another task was run during the CLEF NewsREEL campaign, where participants had to recommend news articles. This was possible either offline, by employing a test collection, or in real time via the Open Recommendation Platform (ORP) used by PLISTA.

Footnote 1: [https://bitbucket.org/living-labs/ll-api](https://bitbucket.org/living-labs/ll-api)

All these labs can be considered living labs and represent a user-centric study methodology for researchers to evaluate retrieval systems' performance within real-world applications. Thus, they aim to offer a more realistic experiment and evaluation environment than offline test collections and should therefore be further investigated to raise IR evaluation to a more holistic level. Within TREC and CLEF, only very few tracks or labs focused on the evaluation of academic search systems. Some used scientific documents or use cases to generate test collections but did not necessarily focus on the unique requirements of the academic domain. Within CLEF, the Domain-specific track [8] compiled a collection of bibliographic records and research project descriptions from the social sciences to test the needs of scientific retrieval tasks. This test collection was created to contrast the then-usual "general-purpose news documents" and to employ "different search criteria than those used for reference retrieval in databases of scientific literature items, and also offer no possibility for comparable test runs with domain-specific terminology". More recently, the TREC Precision Medicine / Clinical Decision Support Track released a large test collection in 2016 based on open access full-text documents from PubMed Central. TREC-COVID is the latest retrieval campaign aiming at academic search, with a particular focus on the rapidly growing corpus of scientific work on the current COVID-19 pandemic. The LiLAS workshop is a blend of the most successful parts of these previous evaluation campaigns. The Domain-specific track had a strong focus on scientific search, thesauri, and multilingual search.
NewsREEL had an active technological component, and LL4IR/TREC-OS turned from product search to academic search but was not able to implement the scientific focus in its last iteration. Much potential thus remains untapped in the question of how to evaluate academic search platforms online.

## 3 STELLA - Evaluation Infrastructure for LiLAS

Nowadays, testing approaches are commonly used to try out and evaluate how users interact when presented with new or modified features on a website. Whenever the new or modified features differ from what has been done before, when multiple features change at once, or when the user interaction is to be gathered in a systematic way, A/B testing comes into play. A/B testing, a form of controlled online experiment, exposes a percentage of real users to new or modified features and live-tests them [2, 9], offering website designers and developers a living lab in which to better assess user reactions and feature usage, and allowing more accurate tuning based on data collected from production systems.

For LiLAS, we use STELLA as our living lab evaluation infrastructure. STELLA aims to make it easier to evaluate academic information retrieval and recommendation systems [3]. Figure 1 shows an overview of how the steps flow from a researcher's or developer's idea to the evaluation feedback, so the changes can be tuned and improved. It all starts with an idea, for instance adding synonyms to the keywords used by an end-user when searching for information. Developers will work on a modified version of the production system, including the change they want to analyze. Whenever an end-user goes to the system, everything will look as usual. Once the search keywords are introduced, STELLA will show the end-user some results from the experimental system and some results from the regular production system. End-users will continue their regular interaction with the system. Based on the retrieved documents and the following interaction, STELLA will create an evaluation profile together with some statistics. Researchers and developers will then analyze STELLA's feedback and react accordingly to reach the usage level they are aiming at.

STELLA's infrastructure relies on the container virtualization environment Docker [11], making it easy for STELLA to run multiple experimental systems, i.e., a multi-container environment, and compare them to each other as well as to the production system. The core component in STELLA is a central Application Programming Interface (API) connecting data and content providers with experimental systems, aka participant systems or participants, encapsulated as Docker containers. Further information can be found at the project website7, including some technical details via a series of regularly published blog posts.

Footnote 7: [https://stella-project.org/](https://stella-project.org/)

Figure 1: **STELLA workflow, an online living lab supporting testing from ideas to evaluation**: Participants package their systems with the help of Docker containers that are deployed in the backend of academic information retrieval and recommendation systems. Users interact directly with the system, with a percentage diverted to the experimental features. Researchers and developers retrieve results and deliver feedback to tune and improve changes.

Currently, STELLA supports two main tasks: ad-hoc retrieval and recommendation.
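To make the experimental setup more concrete, the following sketch illustrates team-draft interleaving, one of the interleaving experiment types mentioned above; it is a hypothetical illustration, not code from the STELLA framework. The production and experimental rankings are merged, and clicks are credited to the system that contributed each result; aggregated over many impressions, the win counts give a preference signal between the two systems.

```python
import random

def team_draft_interleave(prod, exp):
    """In each round the two rankers, in random order, contribute their
    highest-ranked document not yet shown; the contributing system
    ("team") of each result slot is remembered."""
    prod, exp = list(prod), list(exp)  # do not mutate the caller's lists
    merged, team_of = [], {}
    while prod or exp:
        for team, ranking in random.sample([("prod", prod), ("exp", exp)], 2):
            while ranking and ranking[0] in team_of:
                ranking.pop(0)  # skip documents already placed
            if ranking:
                doc = ranking.pop(0)
                merged.append(doc)
                team_of[doc] = team
    return merged, team_of

def credit_clicks(clicked, team_of):
    """Credit each click to the system that contributed the clicked result."""
    wins = {"prod": 0, "exp": 0}
    for doc in clicked:
        if doc in team_of:
            wins[team_of[doc]] += 1
    return wins

merged, team_of = team_draft_interleave(["d1", "d2", "d3"], ["d2", "d4", "d1"])
print(credit_clicks(["d4"], team_of))  # e.g. {'prod': 0, 'exp': 1}
```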
In the following subsections, we introduce the two systems used during the STELLA development phase to better understand, learn, and test these two tasks. Although a fully functional version is already available, there is still room for improvement, particularly regarding the event logging, statistics analysis, and overall evaluation. LiLAS will promote an early discussion with future adopters and participants that will benefit not only STELLA but living labs in general.

### LIVIVO

LIVIVO8 is a retrieval platform provided by ZB MED - Information Centre for Life Sciences. It serves the life sciences domain with a focus on medicine, health, nutrition, environment, and agriculture. LIVIVO includes unique features tailored to the German public and the national inter-library loan system, making it easier for researchers, practitioners, students, and the general public to access material licensed and hosted at different German libraries. LIVIVO brings together publications from 30 different sources, e.g., MEDLINE, AGRICOLA, and AGRIS, including more than 58 million publications in different languages, including English, German, Spanish, French, and Portuguese. It uses automatic and semantic links to well-known vocabularies, for instance the Medical Subject Headings (MeSH) [14] for the medical sciences, UMTHES [6] for the environmental sciences, and AGROVOC [4] for the agricultural sciences. The result set is ranked by relevance and can be narrowed down using filters such as the ZB MED subject fields. We include a sample query and the corresponding results in figure 2.

Figure 2: LIVIVO, the ZB MED retrieval platform. Users can search by keywords, title, author, or year and obtain a result set sorted by relevance, together with additional features such as filters, publication details, access links, other publications like the one on display, and library stock.

Since March 2020, there is a dedicated portal serving COVID-19-related information. Regarding the integration with STELLA, a test instance of LIVIVO has been set up with a twofold purpose: introducing those elements needed in LIVIVO to integrate it into the STELLA framework, e.g., calling the STELLA API whenever a search is triggered in the production system, and evaluating the STELLA framework itself, i.e., how the containerization, the communication via the API, and the central STELLA server work with real production systems. We are also working on a LIVIVO dataset suitable for participant systems, mainly targeting MEDLINE articles written in English: about 25 million abstracts with their corresponding metadata, including title, authors, affiliations, and MeSH terms, among others.

### GESIS Search

The internal GESIS academic search, GESIS Search1, aims to aid its users in finding appropriate scholarly information on the broad topic of the social sciences [7]. To this end, it provides different types of information from the social sciences, comprising literature (95k publications), research data (84k), questions and variables (12.7k), as well as instruments and tools (370). The publications are mostly in English and German and are annotated with further textual metadata like title, abstract, topic, persons, and others. With the Social Science Open Access Repository (SSOAR)2, GESIS Search also provides access to nearly 60k open access publications.

Figure 3: GESIS Search, social science information portal.
It provides different types of information from the social sciences, like literature, research data, questions and variables, as well as instruments and tools.

Metadata on research data comprises (among others) a title, topics, datatype, abstract, collection method, primary investigators, and contributors, in English and/or German. Regarding STELLA, the number of different data types allows not only for typical recommendations, such as from publications to publications, but also for _cross-domain recommendations_, i.e., recommendations across different types, such as from publications to research data. While this is still work in progress, the GESIS Search data and possible relevance indicators, such as click paths, can be obtained. The data can be used to train a recommender, report lessons learned, and file issue requests on how to improve the training data.

## 4 Conclusion and Outlook

We presented the artifacts we would like to use for the actual evaluation tasks at CLEF 2021. These artifacts are: (1) the STELLA living lab evaluation infrastructure, and (2) the two academic search systems LIVIVO and GESIS Search. These systems are from the two distinct scientific domains of the life sciences and the social sciences and include different metadata on research articles, data sets, and many other entities. Together with the CLEF 2020 workshop participants, we will derive the evaluation tasks for CLEF 2021 from these artifacts. Promising task candidates are:

* ad-hoc retrieval for life science documents
* dataset recommendation

These tasks allow us to use the different data types available in the platforms.

## Acknowledgements

This work was partially funded by the German Research Foundation (DFG) under project no. 407518790.
2304.00123
Piecewise flat approximations of local extrinsic curvature for non-Euclidean embeddings
Discrete forms of the mean and directed curvature are constructed on piecewise flat manifolds, providing local curvature approximations for smooth manifolds embedded in both Euclidean and non-Euclidean spaces. The resulting expressions take the particularly simple form of a weighted scalar sum of hinge angles, the angles between the normals of neighbouring piecewise flat segments, with the weights depending only on the intrinsic piecewise flat geometry and a choice of dual tessellation. The constructions are based on a new piecewise flat analogue of the curvature integral along and tangent to a geodesic segment, with integrals of these analogues then taken over carefully defined regions to give spatial averages of the curvature. Computations for surfaces in both Euclidean and non-Euclidean spaces indicate a clear convergence to the corresponding smooth curvature values as the piecewise flat mesh is refined, with the former comparing favourably with other discrete curvature approaches.
Rory Conboye
2023-03-31T20:47:32Z
http://arxiv.org/abs/2304.00123v1
# Piecewise flat approximations of local extrinsic curvature for non-Euclidean embeddings

###### Abstract

Discrete forms of the mean and directed curvature are constructed on piecewise flat manifolds, providing local curvature approximations for smooth manifolds embedded in both Euclidean and non-Euclidean spaces. The resulting expressions take the particularly simple form of a weighted scalar sum of hinge angles, the angles between the normals of neighbouring piecewise flat segments, with the weights depending only on the intrinsic piecewise flat geometry and a choice of dual tessellation. The constructions are based on a new piecewise flat analogue of the curvature integral along and tangent to a geodesic segment, with integrals of these analogues then taken over carefully defined regions to give spatial averages of the curvature. Computations for surfaces in both Euclidean and non-Euclidean spaces indicate a clear convergence to the corresponding smooth curvature values as the piecewise flat mesh is refined, with the former comparing favourably with other discrete curvature approaches.

_Dedicated to the memory of Niall O Murchadha_

Keywords: Discrete differential geometry, Regge calculus, piecewise linear.

+ Footnote †: Department of Mathematics and Statistics, American University, 4400 Massachusetts Avenue, NW, Washington, DC 20016, USA. [email protected]

## 1 Introduction

Our current concept of curvature first appeared as far back as the 1300's, when Nicole Oresme defined the curvature of a circle as the inverse of its radius [1]. This was later extended to general planar curves as the inverse radius of the osculating (kissing) circle, the circle that best fits the curve at a point. With the advent of Calculus, a number of equivalent definitions were found, such as the change in normal or tangent vectors with respect to arc-length, or the change in arc-length along a one-parameter family of offset curves. When a curve is discretized using piecewise linear segments, discrete curvature measures naturally arise from these definitions by taking small finite changes instead of differential limits.

Generalizing to higher dimensional manifolds in Euclidean space, the curvature definitions follow through with little alteration, though a direction must now be specified, and piecewise linear curves can be generalized to simplicial piecewise flat manifolds, formed by joining flat simplices (triangles, tetrahedra, etc.) along their edges or faces, known as hinges. Directly discretizing the smooth curvature definitions often leads to ambiguities here, so many approaches also use some form of spatial averaging. A particularly rich variety of such approaches has been developed in the field of Discrete Differential Geometry [2, 3, 4], mostly with an interest in image processing, architecture, and 3D object modelling.

The curvature of _non_-Euclidean embeddings is particularly important in General Relativity, not least in determining the evolution of the spatial geometry in numerical formulations [5, 6, 7]. However, there are a number of challenges associated with discrete approximations for non-Euclidean embeddings. For a start, piecewise flat segments will generally not be embeddable in a non-Euclidean space. Instead, piecewise flat approximations must be made _intrinsically_, with edge-lengths defined by the lengths of geodesic segments on the manifold. Some of the embedding information can then be recovered by extending the piecewise flat approximation to the embedding space around it.
In particular, hinge angles, the angles between the normals of neighbouring simplices, can be determined in this way. While a piecewise flat approach to General Relativity has existed since the 1960's, known as Regge Calculus [8], local curvature approaches have only involved single hinge angles [9, 10, 11], which fail to approximate smooth extrinsic curvature at a local level, even for highly symmetric models. Unfortunately, the approaches from Discrete Differential Geometry cannot easily be adapted to non-Euclidean embeddings either, since they rely on a Euclidean embedding for the addition of vectors and tensors or the defining of normal vectors at vertices (see section 2.4 for more details).

Here, some of the spatial averaging concepts common in Discrete Differential Geometry are used to provide local approximations of smooth extrinsic curvature, but using only the hinge angles and the intrinsic geometry of a piecewise flat manifold, so that they can be applied to non-Euclidean embeddings. The foundation for this approach is a new piecewise flat analogue of a curvature path integral, found by expressing the finite change of a unit tangent vector in terms of the hinge angles. Integrals of this analogue are then taken over carefully defined regions, resulting in curvature averages as weighted scalar sums of the hinge angles. Similar approaches have already been used for both the scalar and Ricci curvature [12], also giving an effective piecewise flat Ricci flow [13]. The new curvature constructions are computed for a series of piecewise flat approximations of two irregular surfaces in Euclidean three-space, and for two different grid types for a surface embedded in a non-Euclidean Gowdy space [14]. The resulting piecewise flat curvatures give reasonable approximations for low resolutions, and indicate a clear convergence to the corresponding smooth curvature values as the resolutions are increased. The results for the Euclidean-embedded surfaces also closely match results from approaches in Discrete Differential Geometry.

The rest of the paper starts with some background material in section 2, defining smooth curvature, piecewise flat manifolds, and hinge angles, and outlining other discrete curvature approaches from Regge Calculus and Discrete Differential Geometry. Section 3 begins with a motivation and general procedure for the current approach, then extends curvature integrals to piecewise flat manifolds, and proves the piecewise flat analogue of the curvature path integral based on these. The new curvature constructions are developed in section 4, with the choice of spatial regions motivated and the resulting expressions proved, given the curvature integral extensions. Details and results of the computations are provided in section 5.

## 2 Background

### Smooth differential curvature

The curvature \(\kappa\) of a curve in two dimensions can be defined as the rate of change of the unit tangent vector \(\hat{T}\), or the angular change \(\psi\) of \(\hat{T}\), with respect to the arc-length parameter \(s\),

\[\kappa:=\left|\frac{d\hat{T}}{ds}\right|=\left|\frac{d\psi}{ds}\right|. \tag{2.1}\]

For an \(n\)-dimensional manifold \(M^{n}\), embedded in some ambient space \(N^{n+1}\), this approach can be generalized by using the rate of change of a unit tangent vector \(\hat{v}\), in the direction of \(\hat{v}\) itself.
Taking the normal component of the result ensures that changes of \(\hat{v}\) within \(M^{n}\) do not contribute, giving a directional curvature

\[\kappa(\hat{v})=\langle\nabla_{\hat{v}}\hat{v},\hat{n}\rangle\,, \tag{2.2}\]

where \(\hat{n}\) is the unit normal vector to \(M^{n}\), \(\nabla\) is the covariant derivative associated with the Levi-Civita connection on \(N^{n+1}\), and \(\langle\cdot,\cdot\rangle\) is the inner product on the tangent space of \(N^{n+1}\) at any given point. When the ambient space is Euclidean, \(\kappa(\hat{v})\) is equivalent to the inverse radius of the circle that best fits \(M^{n}\) in the direction of \(\hat{v}\) at a given point.

It is common to measure the complete embedding curvature by generalizing \(\kappa(\hat{v})\) to include changes of a different unit tangent vector \(\hat{u}\), in the direction of \(\hat{v}\). Different fields of study have different names for this curvature, including the second fundamental form \(\alpha\), the extrinsic curvature tensor \(K_{ab}\), and the shape operator \(Q\). There are slight technical differences between the three but they are closely related, with

\[\alpha(\hat{u},\hat{v})=K_{ab}\,\hat{u}^{a}\hat{v}^{b}=\langle Q(\hat{v}),\hat{u}\rangle=\langle\nabla_{\hat{v}}\hat{u},\hat{n}\rangle\,. \tag{2.3}\]

Here, it will be more convenient to consider only \(\kappa(\hat{v})\), which can still give the complete embedding curvature at each point \(x\in M^{n}\). This can be done by giving the values of \(\kappa(\hat{v})\) in the \(n\) principal curvature directions, or for any set of \(n\) linearly independent vector fields tangent to \(M^{n}\) in a neighbourhood of \(x\), along with the \(n(n-1)/2\) vector fields given by the difference between each pair of these. The mixed-argument values of the second fundamental form can be given in terms of these by using its bilinear and symmetry properties,

\[\alpha(\hat{u}-\hat{v},\hat{u}-\hat{v})=\alpha(\hat{u},\hat{u})-2\alpha(\hat{u},\hat{v})+\alpha(\hat{v},\hat{v})\quad\Leftrightarrow\quad\alpha(\hat{u},\hat{v})=\frac{1}{2}\big(\kappa(\hat{u})+\kappa(\hat{v})-\kappa(\hat{u}-\hat{v})\big). \tag{2.4}\]

Many applications also depend on the average of the directional curvature at each point, known as the mean curvature. This can be found by adding the directional curvatures for an orthonormal set of vector fields \(\{\hat{e}_{i}\}\) tangent to \(M^{n}\),

\[H=\frac{1}{n}\sum_{i=1}^{n}\kappa(\hat{e}_{i}). \tag{2.5}\]

Note that in much of the physics literature the mean curvature refers to the trace of the extrinsic curvature tensor and is denoted \(K\), with \(K=nH\). Also, for surfaces in \(\mathbb{E}^{3}\), the Laplace-Beltrami operator applied to the embedding gives twice the mean curvature vector \(H\hat{n}\). Integrating the mean curvature over all of \(M^{n}\) gives the _total_ curvature, \(H_{total}=\int_{M^{n}}H\,dV\).
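As a concrete numerical illustration of the smooth quantities above (separate from the piecewise flat constructions that follow), the mean curvature (2.5) of a parametrized surface in \(\mathbb{E}^{3}\) can be computed from the first and second fundamental forms. The sketch below uses central finite differences and checks the value \(|H|=1/R\) for a sphere of radius \(R\).

```python
import numpy as np

def mean_curvature(r, u, v, h=1e-4):
    """Mean curvature of a parametrized surface r(u, v) in E^3, from the
    first and second fundamental forms:
        H = (E*N + G*L - 2*F*M) / (2*(E*G - F**2)),
    with derivatives taken by central finite differences of step h."""
    ru = (r(u + h, v) - r(u - h, v)) / (2 * h)
    rv = (r(u, v + h) - r(u, v - h)) / (2 * h)
    ruu = (r(u + h, v) - 2 * r(u, v) + r(u - h, v)) / h**2
    rvv = (r(u, v + h) - 2 * r(u, v) + r(u, v - h)) / h**2
    ruv = (r(u + h, v + h) - r(u + h, v - h)
           - r(u - h, v + h) + r(u - h, v - h)) / (4 * h**2)
    n = np.cross(ru, rv)
    n /= np.linalg.norm(n)
    E, F, G = ru @ ru, ru @ rv, rv @ rv
    L, M, N = ruu @ n, ruv @ n, rvv @ n
    return (E * N + G * L - 2 * F * M) / (2 * (E * G - F**2))

# A sphere of radius R has |H| = 1/R everywhere; the sign depends on the
# orientation of the normal n = ru x rv for the chosen parametrization.
R = 2.0
sphere = lambda u, v: R * np.array(
    [np.sin(u) * np.cos(v), np.sin(u) * np.sin(v), np.cos(u)])
print(mean_curvature(sphere, 0.7, 1.3))  # approximately -1/R = -0.5
```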
### Piecewise flat submanifolds

Piecewise flat manifolds are formed by joining flat Euclidean segments together. The most simple of these segments are \(n\)-simplices (line segments, triangles, tetrahedra, etc.), since their shape is entirely fixed by the lengths of their edges. Here, all piecewise flat manifolds are assumed to be simplicial, with their topology determined by the simplicial graph, and their intrinsic geometry by the set of edge-lengths. This is a major advantage of piecewise flat manifolds, with the topology isolated from the geometry, and the intrinsic geometry specified without any need for coordinates.

The specific embedding of an orientable piecewise flat manifold in \(\mathbb{E}^{n+1}\) can be determined by the relative orientations of neighbouring simplices, with pairs of adjacent \(n\)-simplices _hinging_ along the co-dimension-one simplex that they share. These \((n-1)\)-simplices are known as hinges, with the angle between the normals on either side of a hinge \(h\) known as a hinge angle and denoted \(\phi_{h}\). A hinge angle can also be defined as the angle between vectors that are tangent to neighbouring \(n\)-simplices and orthogonal to the hinge joining them, see figure 1. Here, a hinge angle is considered positive if the \(n\)-simplices are concave, and negative if convex, relative to a given orientation.

As with piecewise linear approximations of a curve, there are many ways to form a piecewise flat approximation, or triangulation, of a smooth manifold \(M^{n}\subset\mathbb{E}^{n+1}\). The most common method is to take a set of points on \(M^{n}\) and then create a piecewise flat manifold \(S^{n}\) with these points as vertices, possibly with a global rescaling of \(S^{n}\) to give an equivalent \(n\)-volume to \(M^{n}\). In general, a good approximation will have uniformly small hinge angles.

Figure 1: Triangulations and hinge angles for a circle in \(\mathbb{E}^{2}\), and a sphere and cylinder in \(\mathbb{E}^{3}\).

### Non-Euclidean embeddings

Since flat \(n\)-simplices are generally not embeddable in a non-Euclidean space \(N^{n+1}\), a piecewise flat approximation of a smooth manifold \(M^{n}\subset N^{n+1}\) can only be made intrinsically. This is done by constructing a simplicial graph on \(M^{n}\), using geodesic segments as edges, and defining a piecewise flat manifold \(S^{n}\) to have the same graph, with its edge-lengths determined by the lengths of the corresponding geodesic segments in \(M^{n}\). Information about the embedding of \(M^{n}\) can be found by extending \(S^{n}\) to form an \((n+1)\)-dimensional piecewise flat approximation of \(N^{n+1}\) in the region around \(M^{n}\), see figure 2. The space around a hinge in \(S^{n}\) will generally not be Euclidean, but it will be locally Euclidean on each side of \(S^{n}\), consisting of a series of flat connected \((n+1)\)-simplices. Hinge angles can then be determined in the usual way on each side, and an average of the two taken.

Figure 2: For a surface embedded in a non-Euclidean three-manifold, a pair of three-dimensional piecewise flat layers can be constructed on either side to specify the hinge angles.

### Other discrete curvature approaches

The core of most piecewise flat curvature approaches is formed by a discrete analogue of a smooth curvature concept. Some examples include:

* the hinge angles, the change in unit normal vector between neighbouring \(n\)-simplices;
* the change in vertex-based unit normal vectors along the length of each edge;
* the half-edge curvature, the inverse radius of a circular arc parallel to the midpoint and orthogonal to a normal vector at one end (see [2], Fig. 6 for a useful diagram).

These generally appear in weighted scalar, vector, or tensor sums, often resulting from a generalization of a smooth curvature measure. Some of the more prominent of these are briefly described below.

_Steiner and Regge calculus:_ Piecewise flat curvature analogues have existed since at least 1840, when Steiner [15] proposed an analogue of the total mean curvature for a convex polyhedron as the sum of its hinge angles, weighted by the corresponding edge lengths. This curvature analogue appeared as a coefficient in Steiner's formula for the volume enclosed by a surface offset from a convex polyhedron in \(\mathbb{E}^{3}\).
In the 1980's, Hartle and Sorkin [16] gave a generalization of this formula, possibly unaware of Steiner's result, while introducing boundary terms to the Regge Calculus action. For the boundary of a piecewise flat four-manifold, the hinge angles were now weighted by the area of the hinge faces \(|h_{i}|\),

\[H_{total}=\sum_{i}|h_{i}|\phi_{i}. \tag{2.6}\]

Local curvatures have been suggested by taking a single hinge angle divided by a characteristic length [10], or by dividing \(|h|\phi_{h}\) by a hinge-based volume [11]. Unfortunately, these single-hinge approaches can fail to approximate smooth curvature values for even highly symmetric models, such as the cylinder triangulation in figure 1, where the diagonal edges have a zero hinge angle, despite the corresponding smooth curvature being non-zero.

_Cotan formula:_ The cotan formula, first introduced by Pinkall and Polthier in [17], is probably the most well-developed method for locally approximating the mean curvature of smooth surfaces in \(\mathbb{E}^{3}\). The formula is a weighted vector-sum of the edge-vectors around a vertex \(v_{i}\), producing a mean curvature vector at the vertex. The contribution from each edge is equivalent to the inverse radius of the edge-based circle arc mentioned above, weighted by half the circumcentric area associated with the edge. The circumcentric areas can be expressed in terms of the cotangents of the two angles \(\alpha\) and \(\beta\) opposite the given edge, giving the formula

\[H\hat{n}_{i}=\frac{1}{4A_{i}}\sum_{v_{j}\in\star(v_{i})}(\cot\alpha+\cot\beta)(v_{i}-v_{j}), \tag{2.7}\]

with \(v_{i}\) representing the vector in \(\mathbb{E}^{3}\) for the given vertex, \(v_{j}\) the vectors for the vertices joined to it by an edge, and \(A_{i}\) the Voronoi region dual to the vertex \(v_{i}\). The formula can easily be extended to higher dimensions and co-dimensions [18], and some convergence properties have been studied in [19, 20]. While the interpretation given here relies on a triangulation being Delaunay, the approach appears to be more robust in practice, with the formula even defined for a mixed Voronoi-barycentric dual tessellation in [18].

_Tensor sum of hinge angles:_ In order to extend the Steiner formula to non-convex polyhedra, Cohen-Steiner and Morvan [21, 22] used the geometric measure theory concept of a normal cycle, a generalization of the normal bundle, essentially allowing for an overlapping of the offset for non-convex parts. Over a region \(B\) of a piecewise flat submanifold of \(\mathbb{E}^{3}\), the result is almost the same as a tensor sum of the hinge angles \(\phi_{i}\), weighted by the length of the corresponding hinge lying within the region \(B\). The expression provided in [21] for the anisotropic curvature measure over \(B\) is

\[\bar{H}(B)=\sum_{i}\frac{|h_{i}\cap B|}{2}\left[(\phi_{i}-\sin\phi_{i})e_{i}^{+}\otimes e_{i}^{+}+(\phi_{i}+\sin\phi_{i})e_{i}^{-}\otimes e_{i}^{-}\right], \tag{2.8}\]

with \(e_{i}^{+}\) and \(e_{i}^{-}\) representing the normalized sum and difference of the unit normal vectors to the triangles joined along \(h_{i}\). This expression is also shown to converge to the corresponding smooth curvature tensor under certain sampling conditions.
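Before moving on, note that the cotan formula (2.7) is compact enough to state in code. The sketch below computes the mean curvature normals of a triangle mesh in \(\mathbb{E}^{3}\); for simplicity it uses barycentric (one-third) vertex areas in place of the Voronoi regions \(A_{i}\), a common variant of the formula. For a mesh sampled from a sphere of radius \(R\), the returned vectors have magnitudes close to \(1/R\).

```python
import numpy as np

def cotan_mean_curvature_normals(verts, tris):
    """Mean curvature normals H*n_i at the vertices of a triangle mesh,
    via the cotan formula (2.7) with barycentric vertex areas."""
    Hn = np.zeros_like(verts, dtype=float)
    area = np.zeros(len(verts))
    for tri in tris:
        a, b, c = (verts[i] for i in tri)
        double_area = np.linalg.norm(np.cross(b - a, c - a))
        area[list(tri)] += double_area / 6.0  # one third of the triangle area
        # The corner k contributes cot(angle at k) to its opposite edge (i, j).
        for i, j, k in [(tri[0], tri[1], tri[2]),
                        (tri[1], tri[2], tri[0]),
                        (tri[2], tri[0], tri[1])]:
            cot_k = ((verts[i] - verts[k]) @ (verts[j] - verts[k])) / double_area
            Hn[i] += cot_k * (verts[i] - verts[j])
            Hn[j] += cot_k * (verts[j] - verts[i])
    return Hn / (4.0 * area[:, None])
```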
_Other approaches:_ Taubin [23] uses a weighted tensor sum of the same half-edge curvatures used to derive the cotan formula, but in contrast to Cohen-Steiner and Morvan, the sum is used to approximate an integral of the directed curvature over all directions at a point, rather than a spatial integral of the curvature tensor itself. The principal directions are still eigenvectors of the resulting tensor, but the principal curvatures need to be found from a pair of linear equations in the eigenvalues of the tensor. Instead of a tensor sum, Rusinkiewicz [24] uses a least-squares fit of the curvatures along the edges of a triangle, with the curvature measure coming from the change in vertex normal vectors along an edge.

## 3 Preliminaries

### General Approach

The aim of this paper is to use piecewise flat approximations of a smooth manifold to do two things: (i) to give effective local approximations of the extrinsic curvature; and (ii) to do so when the manifold is embedded in a non-Euclidean space. The first of these is achieved by a number of approaches in Discrete Differential Geometry, but all rely on a Euclidean embedding. This occurs either explicitly, such as adding vectors or tensors in \(\mathbb{E}^{3}\), or implicitly, by requiring the construction of normal vectors at vertices, for example. In Regge Calculus, non-Euclidean embeddings are dealt with effectively for approximations of the total mean curvature, but attempts at localizing this can be seen to deviate from the smooth curvature even for highly symmetric models. While the approaches in both fields are unsuited to the aim of this paper, they have inspired a set of guiding principles outlined below.

_Guiding principles:_

* Integrals of smooth differential properties should be seen as entities in their own right, and it is these that should provide correlations between the piecewise flat and smooth. This follows an understanding from both Steiner and Regge, and has also been elucidated by Schröder in [25].
* Averages over spatial \(n\)-volumes should be used to properly sample geometric properties on piecewise flat manifolds. This follows many of the Discrete Differential Geometry approaches, with an explicit argument for this given in [18].
* Geometric approximations should be associated with an appropriate type of graph element rather than setting up coordinate bases at vertices or within \(n\)-simplices. This embraces the coordinate-independent nature of piecewise flat manifolds that served as a motivation for Regge's original work [8].

Following the first principle, the smooth curvature integral along and tangent to a geodesic segment can be extended to piecewise flat manifolds by recognizing its equivalence to the total angular change of a tangent vector (shown in section 3.2), and re-defining the curvature integral accordingly. While this can be used to approximate the smooth curvature directly, an average can also be taken over a set of shorter geodesic segments, as suggested by the second principle, allowing for better approximations while intersecting fewer piecewise flat segments. The orientation of these geodesic segments relative to a particular graph element can then be used to relate the curvature to a direction in the corresponding smooth manifold, fitting with the third principle. This leads to a general procedure for constructing piecewise flat curvature approximations.

_General procedure:_

1. Select a graph element to associate with a particular curvature.
2. Define an \(n\)-volume region around this graph element, intercepting an appropriate sampling of hinges and capable of being intrinsically foliated into parallel line segments.
3. Give the line integral of the curvature along each line segment, using the piecewise flat analogue of the curvature integral along and tangent to a geodesic segment.
4. Integrate these curvature line integrals over the foliation to give an \(n\)-volume curvature integral.
5. Divide by the \(n\)-volume of the region to give an average of the curvature over this region.

Before this procedure can be applied, in section 3.2, certain curvature integrals are re-defined so they can be used on piecewise flat manifolds, and in section 3.3, the piecewise flat analogue of the curvature integral along and tangent to a geodesic segment is given. The procedure above is then used to construct the piecewise flat mean and directed curvatures in section 4.

### Extending curvature integrals to piecewise flat manifolds

The smooth concept of curvature at a point clearly does not apply to piecewise flat manifolds, giving zero values within each \(n\)-simplex and infinite values at the hinges. However, lemma 1 below relates a _path integral_ of the curvature on a smooth manifold to the finite angular change in a tangent vector, a concept that _is_ well-defined on piecewise flat manifolds. In fact, the hinge angles themselves can be viewed as the finite change in a tangent vector along a path intercepting the corresponding hinge. In definition 1, this curvature path integral is then re-defined _as_ the finite angular change in the tangent vector, so that it can be applied to piecewise flat manifolds. This forms the basis for the extensions to two spatial curvature integrals that follow.

**Lemma 1** (Smooth curvature integral along a geodesic segment).: _On a smooth manifold \(M^{n}\subset N^{n+1}\), the integral of the curvature along and tangent to a geodesic segment \(\gamma\) is given by the total angular change in the unit tangent vector \(\hat{v}\) to \(\gamma\),_

\[\int_{\gamma}\kappa(\hat{v})ds=\int_{\gamma}d\psi, \tag{3.1}\]

_where \(\psi\) measures the angular change in \(\hat{v}\) along \(\gamma\)._

Proof.: From the definition of the directional curvature in (2.2),

\[\kappa(\hat{v})=\left\langle\nabla_{\hat{v}}\hat{v},\,\hat{n}\right\rangle. \tag{3.2}\]

Since \(\hat{v}\) is tangent to a geodesic in \(M^{n}\), the geodesic equation states that \(\nabla_{\hat{v}}^{M}\hat{v}=0\), where \(\nabla^{M}\) is the restriction of \(\nabla\) to \(M^{n}\). As a result, \(\nabla_{\hat{v}}\hat{v}\) must be in a direction normal to \(M^{n}\) at each point of \(\gamma\). The covariant derivative \(\nabla_{\hat{v}}\hat{v}\) is the rate of change of \(\hat{v}\) with respect to the arc-length parameter \(s\) along \(\gamma\), so

\[\nabla_{\hat{v}}\hat{v}=\frac{d\hat{v}}{ds}=\lim_{\Delta s\to 0}\frac{\Delta\psi\,\hat{n}}{\Delta s}=\frac{d\psi}{ds}\hat{n}, \tag{3.3}\]

where the third term above assumes that \(\Delta\psi\) is the angular difference between the tangent vector at a point, and the parallel transport of the tangent vector a distance of \(\Delta s\) along \(\gamma\). The resulting angular change in the tangent vector also works in the same way as for a curve in \(\mathbb{E}^{2}\), as shown in equation (2.1). The sign of the derivative is considered positive if the change is in the positive \(\hat{n}\) direction, and negative otherwise.
The integral of the curvature along \(\gamma\) then becomes

\[\int_{\gamma}\kappa(\hat{v})\,ds=\int_{\gamma}\left\langle\nabla_{\hat{v}}\hat{v},\,\hat{n}\right\rangle\,ds=\int_{\gamma}\left\langle\frac{d\psi}{ds}\,\hat{n},\,\hat{n}\right\rangle\,ds=\int_{\gamma}\frac{d\psi}{ds}\,ds=\int_{\gamma}d\psi, \tag{3.4}\]

giving the total angular change in the tangent vector \(\hat{v}\) along the geodesic segment \(\gamma\).

**Definition 1** (Geodesic-tangent curvature integral).: For both a smooth manifold \(M^{n}\subset N^{n+1}\), and a piecewise flat approximation \(S^{n}\) of \(M^{n}\), the geodesic-tangent curvature integral associated with a geodesic segment \(\gamma\) is defined as the total angular change in the tangent vector to \(\gamma\) along its length, and denoted \(\mathbb{K}_{\gamma}\).

If these are to be used to determine the average curvature value over an \(n\)-volume region, the corresponding \(n\)-volume integrals must be defined in terms of curvature path integrals instead of the curvature at a point.

**Lemma 2** (Smooth directed curvature integrals).: _For an \(n\)-dimensional region \(B\) of a smooth manifold \(M^{n}\subset N^{n+1}\), with a vector field \(\hat{v}\) and integral curves \(c\) to \(\hat{v}\) that foliate \(B\), the integral of the curvature in the direction of \(\hat{v}\) over \(B\),_

\[\int_{B}\kappa(\hat{v})\ dV^{n}\ =\ \int_{B/c}\left[\int_{c}\kappa(\hat{v})ds\right]dV^{n-1}.\]

Proof.: Since \(B\) is foliated by the curves \(c\), the \(n\)-volume integral of \(\kappa(\hat{v})\) over \(B\) can be performed iteratively, with the integrals along the curves found first, and the results of these then integrated over the space \(B/c\) orthogonal to \(c\) in \(B\).

**Definition 2** (\(n\)-volume directed curvature integral).: Take an \(n\)-dimensional region \(B\) of either a smooth manifold \(M^{n}\subset N^{n+1}\), or a piecewise flat approximation \(S^{n}\) of \(M^{n}\), with a vector field \(\hat{v}\) and integral curves \(c\) to \(\hat{v}\) that foliate \(B\). The directed curvature integral over \(B\), in the direction of \(\hat{v}\), is defined as the \((n-1)\)-dimensional integral of the curvature path integrals along and tangent to the curves \(c\), denoted \(\mathbb{K}_{c}(\hat{v})\), where they are defined,

\[\mathbb{K}_{B}(\hat{v})=\int_{B/c}\mathbb{K}_{c}(\hat{v})dV^{n-1},\]

with \(B/c\) representing the space orthogonal to the curves \(c\) in \(B\).

On a smooth manifold, an orthonormal basis of vector fields can be constructed on a neighbourhood of any differential manifold up to dimension three [26]. On such a neighbourhood, an \(n\)-volume integral of the mean curvature can be given as a sum of directed curvature integrals, as shown in Lemma 3 below.
**Lemma 3** (Smooth mean curvature integrals).: _For a region \(B\subset M^{n}\subset N^{n+1}\), which can be decomposed into subregions \(D_{j}\) on which orthonormal sets of basis vector fields \(\{\hat{e}_{i}\}\) can be defined, the integral of the mean curvature over \(B\),_

\[\int_{B}H\ dV\ =\ \frac{1}{n}\sum_{j}\sum_{i}\int_{D_{j}}\kappa(\hat{e}_{i})dV^{n}=\ \frac{1}{n}\sum_{j}\sum_{i=1}^{n}\mathbb{K}_{D_{j}}(\hat{e}_{i}).\]

Proof.: On each subregion \(D_{j}\), since the vector fields \(\hat{e}_{i}\) are all orthogonal to each other over \(D_{j}\), the sum over the basis vector fields and the integral over \(D_{j}\) commute, so

\[\int_{D_{j}}HdV^{n}=\int_{D_{j}}\frac{1}{n}\sum_{i=1}^{n}\kappa(\hat{e}_{i})dV^{n}=\frac{1}{n}\sum_{i=1}^{n}\int_{D_{j}}\kappa(\hat{e}_{i})dV^{n}=\frac{1}{n}\sum_{i=1}^{n}\mathbb{K}_{D_{j}}(\hat{e}_{i}),\]

with the final term coming from definition 2, since \(D_{j}\) is foliated by the integral curves of each basis vector field. As a scalar field, the mean curvature is invariant to the choice of basis vector fields, so the total mean curvature over \(B\) is just the sum of the integrals for each subregion \(D_{j}\).

**Definition 3** (\(n\)-volume mean curvature integral).: For an \(n\)-dimensional region \(B\) of either a smooth manifold \(M^{n}\subset N^{n+1}\), or a piecewise flat approximation \(S^{n}\) of \(M^{n}\), which can be decomposed into regions \(D_{j}\) on which orthonormal sets of basis vector fields \(\{\hat{e}_{i}\}\) can be defined, the integral of the mean curvature over \(B\),

\[\mathbb{H}_{B}=\ \frac{1}{n}\sum_{j}\sum_{i=1}^{n}\mathbb{K}_{D_{j}}(\hat{e}_{i}). \tag{3.5}\]

### Piecewise flat geodesic-tangent curvature integral

While definition 1 above can be used to give the hinge angle \(\phi_{h}\) on a piecewise flat manifold when the path \(\gamma\) is orthogonal to the hinge \(h\), it will prove more useful to find the curvature integral for geodesic paths that intercept hinges at different angles.

**Theorem 1** (Geodesic-tangent curvature integral on a piecewise flat manifold).: _For a path \(\gamma\) that is intrinsically straight within a piecewise flat approximation \(S^{n}\) of a smooth manifold \(M^{n}\subset N^{n+1}\), the geodesic-tangent curvature integral_

\[\mathbb{K}_{\gamma}=\sum_{h}\cos\theta_{h}\ \phi_{h}+O(\phi_{h}^{3}), \tag{3.6}\]

_for all hinges \(h\) that intersect \(\gamma\), with \(\phi_{h}\) giving the corresponding hinge angle, and \(\theta_{h}\) the angle that \(\gamma\) makes with the normal vector to the hinge \(h\) within \(S^{n}\)._

Proof.: For a piecewise flat manifold \(S^{n}\subset\mathbb{E}^{n+1}\), the unit tangent vector \(\hat{v}\) to \(\gamma\) will remain constant within each \(n\)-simplex, so any changes will occur at the hinges. For each \(n\)-simplex \(\sigma^{n}\) containing a hinge \(h\) that intersects \(\gamma\), a set of coordinate vectors \(\{\hat{w}_{i}\}\) can be defined so that \(\hat{w}_{1}\) is orthogonal to \(h\) and tangent to \(\sigma^{n}\), and \(\hat{w}_{2}\) is aligned with the component of \(\hat{v}\) tangent to \(h\), so that

\[\hat{v}=\cos\theta_{h}\ \hat{w}_{1}+\sin\theta_{h}\ \hat{w}_{2}. \tag{3.7}\]

Taking two \(n\)-simplices \(\sigma^{n}_{A}\) and \(\sigma^{n}_{B}\) joined by the hinge \(h\), their corresponding vectors \(\hat{w}_{1}^{A}\) and \(\hat{w}_{1}^{B}\) will differ by the hinge angle \(\phi_{h}\).
To compare the tangent vector \(\hat{v}\) on either side of \(h\), a new set of coordinate vectors \(\{\hat{e}_{i}\}\) can be defined, with \[\hat{e}_{1} =\left(\hat{w}_{1}^{B}-\hat{w}_{1}^{A}\right)/\left|\hat{w}_{1}^{B}-\hat{w}_{1 }^{A}\right|,\] \[\hat{e}_{2} =\left(\hat{w}_{1}^{A}+\hat{w}_{1}^{B}\right)/\left|\hat{w}_{1}^{A}+\hat{w}_{1 }^{B}\right|,\] \[\hat{e}_{3} =\hat{w}_{2}. \tag{3.8}\] In this basis, the unit tangent vectors to \(\gamma\) within \(\sigma^{n}_{A}\) and \(\sigma^{n}_{B}\) are \[\hat{v}_{A} =-\sin\frac{\phi_{h}}{2}\,\cos\theta_{h}\ \hat{e}_{1}+\cos\frac{\phi_{h} }{2}\,\cos\theta_{h}\ \hat{e}_{2}+\sin\theta_{h}\ \hat{e}_{3},\] \[\hat{v}_{B} =\ \ \sin\frac{\phi_{h}}{2}\,\cos\theta_{h}\ \hat{e}_{1}+\cos\frac{\phi_{h} }{2}\,\cos\theta_{h}\ \hat{e}_{2}+\sin\theta_{h}\ \hat{e}_{3}. \tag{3.9}\] The angular change \(\psi_{h}\) from \(\hat{v}_{A}\) to \(\hat{v}_{B}\) can now be found using the cross product of these, \[\sin\psi_{h} =|\hat{v}_{A}\times\hat{v}_{B}|\] \[=\left|2\sin\frac{\phi_{h}}{2}\,\cos\theta_{h}\,\sin\theta_{h}\ \hat{e}_{2}-2\sin\frac{\phi_{h}}{2}\,\cos\frac{\phi_{h}}{2}\,\cos^{2}\theta_{h }\ \hat{e}_{3}\right|\] \[=2\sin\frac{\phi_{h}}{2}\,\cos\theta_{h}\ \sqrt{\sin^{2}\theta_{h}+\cos^{2} \frac{\phi_{h}}{2}\,\cos^{2}\theta_{h}}. \tag{3.10}\] Solving for \(\psi_{h}\), and Taylor expanding the result in terms of \(\phi_{h}\) about zero, \[\psi_{h} =\cos\theta_{h}\ \phi_{h}-\frac{1}{24}\,\cos\theta_{h}\,\sin^{2} \theta_{h}\ \phi_{h}^{3}+O(\phi_{h}^{5})\] \[=\cos\theta_{h}\ \phi_{h}+O(\phi_{h}^{3}). \tag{3.11}\] Figure 3 also shows that \(\sin\left(\frac{1}{2}\psi_{h}\right)=\cos\theta_{h}\,\sin\left(\frac{1}{2}\phi _{h}\right)\), which requires nothing more than the small angle approximation for sine to give the leading term above. When \(S^{n}\) is a piecewise flat approximation of a manifold \(M^{n}\subset N^{n+1}\), and the hinge angles are determined by extending \(S^{n}\) to produce an \((n+1)\)-dimensional piecewise flat approximation for the region around \(M^{n}\), as described in section 2.3, the same procedure can be used for the flat spaces on either side of \(S^{n}\). Averaging the resulting angles also gives (3.11) in terms of the averaged hinge angle \(\phi_{h}\). Summing the angular change \(\psi_{h}\) for each hinge \(h\) intersecting \(\gamma\) gives the total angular change in \(\hat{v}\) along the length of \(\gamma\), \[\mathbb{K}_{\gamma}=\sum_{h}\psi_{h}=\sum_{h}\cos\theta_{h}\ \phi_{h}+O(\phi_{h}^{3}), \tag{3.12}\] showing the geodesic-tangent curvature integral as a weighted sum of hinge angles. Figure 3: The two triangles on either side of a hinge are shown on the left, with a geodesic path \(\gamma\) and the unit tangent vectors \(\hat{v}_{A}\) and \(\hat{v}_{B}\) to \(\gamma\) for each triangle. The angle \(\psi\) between the two tangent vectors can then be related to the hinge angle \(\phi\) and the angle \(\theta\). ## 4 New curvature constructions ### Piecewise flat mean curvature The mean curvature has a single value at each point of a smooth manifold \(M^{n}\subset N^{n+1}\), and can be seen as an average of the directional curvature over _all_ directions on \(M^{n}\).
To construct an approximation of the smooth mean curvature on a piecewise flat manifold \(S^{n}\), the \(n\)-volume regions should have the following characteristics: * the regions should tessellate \(S^{n}\); * each region should intercept hinges with a wide distribution of orientations; * the resulting curvature constructions should change continuously for small variations in the triangulation \(S^{n}\). A tessellation ensures that each point in \(S^{n}\) has a single mean curvature associated with it, a wide distribution of hinges can help to approximate an average curvature over all directions, and the final characteristic ensures that there are no jumps in a curvature value for small changes in the triangulation. The graph elements associated with the most general hinge orientations are the \(n\)-simplices and vertices. However, while the \(n\)-simplices provide a natural tessellation of \(S^{n}\), they only contain hinges at their boundaries and will generally have a smaller sampling of hinges than the vertices. Thus, a dual tessellation, which decomposes \(S^{n}\) into \(n\)-volume regions surrounding each vertex, will be used to construct approximations of the mean curvature. **Definition 4** (Vertex regions).: A decomposition of a piecewise flat manifold \(S^{n}\) into \(n\)-dimensional regions \(V_{v}\) dual to each vertex \(v\) is defined so that: 1. the vertex \(v\in V_{v}\), but no other vertices are contained within \(V_{v}\); 2. the regions \(V_{v}\) form a complete tessellation of \(S^{n}\), \[|S^{n}|=\sum_{v\in S^{n}}|V_{v}|,\qquad V_{v_{i}}\cap V_{v_{j}}=\emptyset\quad \forall\ i\neq j,\] (4.1) with \(|S^{n}|\) and \(|V_{v}|\) representing the \(n\)-volumes of \(S^{n}\) and \(V_{v}\) respectively; 3. only hinges \(h_{v}\) in the star of \(v\) (containing \(v\) in their boundary) intersect \(V_{v}\). This gives the most general definition of a dual tessellation for the constructions that follow. The third property is not strictly necessary but is deemed a reasonable requirement, with Voronoi tessellations satisfying this condition only where \(S^{n}\) gives a Delaunay triangulation, for example. It also seems reasonable to require an additional property which distributes the total \(n\)-volume of \(S^{n}\) in some consistent manner over the subregions \(V_{v}\). However, there are many different methods for doing this, most of which are incompatible with one another, such as the Voronoi and barycentric tessellations. Such a property will therefore not be imposed here. **Theorem 2** (Piecewise flat mean curvature).: _Assuming the curvature integrals in definitions 1, 2 and 3, the average mean curvature over the region \(V_{v}\) dual to a vertex \(v\) on a piecewise flat manifold \(S^{n}\),_ \[H_{v}:=\frac{1}{|V_{v}|}\,\mathbb{H}_{V_{v}}=\frac{1}{n\,|V_{v}|}\sum_{h\subset \operatorname{star}(v)}|h\cap V_{v}|\phi_{h}, \tag{4.2}\] _with \(|h\cap V_{v}|\) representing the \((n-1)\)-volume of \(h\cap V_{v}\), and \(\operatorname{star}(v)\) representing all of the hinges in the star of \(v\), or containing \(v\) in their boundary._ Proof.: The vertex region \(V_{v}\) can be decomposed into subregions \(D_{h}\), each enclosing the intersection of a single hinge \(h\) with \(V_{v}\). Since each of these regions is composed of a pair of \(n\)-simplices, the subregions \(D_{h}\) must be intrinsically flat within \(S^{n}\).
A Cartesian coordinate basis \(\{\hat{e}_{i}\}\) can therefore be defined on each subregion, with integral curves \(\gamma_{i}\) given by line-segments within \(S^{n}\). By definitions 3 and 2, the mean curvature integral over each subregion, \[\mathbb{H}_{D_{h}}=\frac{1}{n}\sum_{i=1}^{n}\mathbb{K}_{D_{h}}(\hat{e}_{i})= \frac{1}{n}\sum_{i=1}^{n}\int_{D_{h}/\gamma_{i}}\mathbb{K}_{\gamma_{i}}\ dV^{n-1}. \tag{4.3}\] Choosing the coordinate vector \(\hat{e}_{1}\) to be orthogonal to the hinge \(h\), all of the line-segments except \(\gamma_{1}\) will remain within a single \(n\)-simplex, and only the geodesic-tangent curvature integral along \(\gamma_{1}\) will be non-zero, so by theorem 1, \[\mathbb{H}_{D_{h}}=\frac{1}{n}\ \int_{D_{h}/\gamma_{1}}\mathbb{K}_{\gamma_{1}}\ dV^{n-1}= \frac{1}{n}\ \int_{D_{h}/\gamma_{1}}\phi_{h}\ dV^{n-1}=\frac{1}{n}\ |h\cap V_{v}|\phi_{h}, \tag{4.4}\] with the final integral resulting in the \((n-1)\)-volume of the intersection of \(h\) with \(D_{h}\), since this is the space orthogonal to the line-segments \(\gamma_{1}\) within \(D_{h}\), and the hinge angle \(\phi_{h}\) is invariant over \(h\). The total integrated curvature over \(V_{v}\) is given by summing over all of the subregions \(D_{h}\), \[\mathbb{H}_{V_{v}}=\sum_{h\subset\operatorname{star}(v)}\mathbb{H}_{D_{h}}= \sum_{h\subset\operatorname{star}(v)}\frac{1}{n}|h\cap V_{v}|\phi_{h}, \tag{4.5}\] and the average mean curvature found by dividing this by the volume of \(V_{v}\). For continuous variations in the triangulation \(S^{n}\) of a smooth manifold \(M^{n}\), where the simplicial graph remains fixed and the triangulation well-defined, the hinge angles and edge lengths will change continuously. For a barycentric dual tessellation, the \(n\)-volumes \(|V_{v}|\) will also change continuously, and therefore so will the mean curvature \(H_{v}\). For a Voronoi dual, the triangulation must also remain Delaunay. The construction also gives the same total mean curvature expression as that of Steiner [15] and Hartle and Sorkin [16], as shown below. **Corollary** (Total mean curvature).: _The total mean curvature over a piecewise flat manifold \(S^{n}\) is_ \[\mathbb{H}_{S^{n}}=\frac{1}{n}\sum_{h\subset S^{n}}|h|\phi_{h}. \tag{4.6}\] Proof.: The total mean curvature over \(S^{n}\) is equal to the sum of the mean curvature integrals for each vertex region \(V_{v}\). Since these regions form a tessellation of \(S^{n}\), each part of a given hinge \(h\) must be contained in a single region. With the hinge angles \(\phi_{h}\) fixed over \(h\), the overall contribution from each hinge is \(\frac{1}{n}|h|\phi_{h}\). The total integral is then given by the sum of these terms for all hinges \(h\subset S^{n}\). ### Piecewise flat directed curvature There are two types of piecewise flat graph elements that have a direction associated with them, the edges and the \((n-1)\)-simplices, or hinges. For a curvature derived from the hinge angles, the hinges provide a natural choice for the construction of a piecewise flat directed curvature. An \(n\)-volume region defined around a hinge should then have the following characteristics: * the regions should intersect more than one hinge; * they should not tessellate \(S^{n}\); * they should be intrinsically flat; * the resulting curvature constructions should change continuously for small variations in the triangulation \(S^{n}\). 
A sampling of more than one hinge is required, as piecewise flat approximations can often lead to flat quadrilaterals, where a zero hinge angle on the diagonal results more from the graph structure than the curvature of the smooth manifold. See the regular triangulation of the cylinder in figure 1, for example. The hinge regions should not form a tessellation of \(S^{n}\), as each point on a smooth manifold has \(n\) principal curvatures, so points on \(S^{n}\) should also be contained in more than one hinge region. The intrinsic flatness ensures that regions can be foliated into parallel line segments orthogonal to the hinge in a unique way, and as with the mean curvature, the final characteristic ensures that there are no jumps in the curvature approximations due to small changes in the triangulation. To give a large enough sampling of hinges, a union of the vertex regions \(V_{v}\) intersecting the hinge \(h\) is used. However, since these unions will not generally be intrinsically flat, boundaries are included within each vertex region that are orthogonal to the hinge \(h\). **Definition 5** (Hinge regions).: The \(n\)-dimensional region \(V_{h}\) associated with the hinge \(h\) is defined as the union of the vertex-dual regions \(V_{v}\), for vertices \(v\) in the closure of \(h\), intersecting the geodesic extensions \(\gamma_{h}\) of the lines orthogonal to \(h\) in \(S^{n}\), \[V_{h}:=(\cup_{v}V_{v})\cap\int_{h}\gamma_{h}\,\mathrm{d}V^{n-1}, \tag{4.7}\] for the vertices \(v\) that are contained in the closure, or boundary, of the hinge \(h\). Each region \(V_{h}\) contains all of the hinge \(h\), and parts of other hinges, and will overlap with the regions associated with these other hinges. The average curvature orthogonal to each hinge \(h\) is given by integrating over these hinge regions. **Theorem 3** (Piecewise flat directed curvature).: _Assuming the curvature integrals in definitions 1 and 2, the average curvature over the region \(V_{h}\), in the direction \(\hat{v}\) orthogonal to \(h\) within \(V_{h}\),_ \[\kappa_{h}:=\frac{1}{|V_{h}|}\,\mathbb{K}_{V_{h}}(\hat{v})=\frac{1}{|V_{h}|}\sum _{i}|h_{i}\cap V_{h}|\,\cos^{2}\theta_{i}\,\phi_{i}+O(\phi_{i}^{3}), \tag{4.8}\] _with the sum taken over all of the hinges \(h_{i}\) that intersect \(V_{h}\), where \(\phi_{i}\) represents the hinge angle at \(h_{i}\), and \(\theta_{i}\) the angle between \(h_{i}\) and \(h\) within \(S^{n}\)._ Proof.: Since the region \(V_{h}\) is intrinsically flat within \(S^{n}\), it can be foliated into line-segments \(\gamma\) orthogonal to the hinge \(h\). By definition 2 and theorem 1, \[\mathbb{K}_{V_{h}}(\hat{v})=\int_{V_{h}/\gamma}\mathbb{K}_{\gamma}dV^{n-1}= \int_{V_{h}/\gamma}\left[\sum_{i}\cos\theta_{i}\,\phi_{i}+O(\phi_{i}^{3}) \right]dV^{n-1} \tag{4.9}\] with the sum taken over all hinges \(h_{i}\) that each line-segment \(\gamma\) intersects. Swapping the sum and integral, so that each hinge \(h_{i}\) can be treated separately, \[\mathbb{K}_{V_{h}}(\hat{v})=\sum_{i}\int_{V_{h}/\gamma}\left[\cos\theta_{i}\, \phi_{i}+O(\phi_{i}^{3})\right]dV^{n-1}=\sum_{i}\left(\cos\theta_{i}\,\phi_{i }+O(\phi_{i}^{3})\right)\int_{V_{h}/\gamma}dV^{n-1}, \tag{4.10}\] with \((\cos\theta_{i}\,\phi_{i}+O(\phi_{i}^{3}))\) factored outside the integral since both \(\theta_{i}\) and \(\phi_{i}\) are invariant along the hinge \(h_{i}\).
The orthogonal space \(V_{h}/\gamma\) is the space of the hinge \(h\) itself, and for each hinge \(h_{i}\), the integrals above are taken over the portion of \(h\) that is intersected by line segments \(\gamma\) that also intersect \(h_{i}\) within \(V_{h}\). This is equivalent to the orthogonal projection of \(h_{i}\cap V_{h}\) onto \(h\), as shown on the right of figure 4, so \[\int_{V_{h}/\gamma}dV^{n-1}=|h_{i}\cap V_{h}|\,\cos\theta_{i}. \tag{4.11}\] Substituting this into (4.10) above, \[\mathbb{K}_{V_{h}}(\hat{v})=\sum_{i}|h_{i}\cap V_{h}|\,\cos^{2}\theta_{i}\, \phi_{i}+O(\phi_{i}^{3}), \tag{4.12}\] which can be divided by the \(n\)-volume of the region \(V_{h}\) to give an average value for the curvature orthogonal to \(h\) in \(V_{h}\). Figure 4: The hinge region for an icosahedron in \(\mathbb{E}^{3}\), with the orthogonal projection of \(h_{i}\cap V_{h}\) onto \(h\) shown on the right. As with the mean curvature, the piecewise flat directed curvature \(\kappa_{h}\) will change continuously for variations in the triangulation \(S^{n}\) of a smooth manifold \(M^{n}\), as long as the triangulation remains well-defined for a barycentric dual, and remains Delaunay for a Voronoi dual. In particular, the \(\cos^{2}\theta_{i}\) part of each term ensures that the contribution from a hinge \(h_{i}\) goes to zero as the hinge gets closer to the boundaries on the interior of a vertex region. The \(\cos^{2}\theta\) coefficient also transforms in the exact way expected of a quadratic form for changes in the angle \(\theta\). In fact, for a vertex region \(V_{v}\) that can be cut so that it is intrinsically flat, a sum of expressions of the same form as equation (4.8) for a set of orthogonal directions can be used to give the mean curvature expression \(H_{v}\) from theorem 2. In two dimensions, the piecewise flat directed curvature at the hinges (edges) completely determines the curvature within each triangle. For a triangle with edges \(\ell_{1}\), \(\ell_{2}\) and \(\ell_{3}\), \[\left|\ell_{1}\right|\hat{v}_{1}+\left|\ell_{2}\right|\hat{v}_{2}+\left|\ell_{ 3}\right|\hat{v}_{3}=0, \tag{4.13}\] for vectors \(\hat{v}_{i}\) orthogonal to \(\ell_{i}\) and tangent to the given triangle. Taking \(\hat{v}_{1}\) and \(\hat{v}_{2}\) as basis vectors, the relation above can be used to give the second fundamental form for mixed arguments. Using the bilinear nature of the second fundamental form, \[\alpha(\hat{v}_{1},\hat{v}_{2}) = \frac{\alpha(\left|\ell_{1}\right|\hat{v}_{1},\left|\ell_{2} \right|\hat{v}_{2})}{\left|\ell_{1}\right|\left|\ell_{2}\right|} \tag{4.14}\] \[= \frac{1}{2\left|\ell_{1}\right|\left|\ell_{2}\right|}\big{(}- \kappa(\left|\ell_{1}\right|\hat{v}_{1})-\kappa(\left|\ell_{2}\right|\hat{v}_{ 2})+\kappa(\left|\ell_{1}\right|\hat{v}_{1}+\left|\ell_{2}\right|\hat{v}_{2}) \big{)}\] \[= \frac{1}{2\left|\ell_{1}\right|\left|\ell_{2}\right|}\big{(}- \left|\ell_{1}\right|^{2}\kappa_{\ell_{1}}-\left|\ell_{2}\right|^{2}\kappa_{ \ell_{2}}+\left|\ell_{3}\right|^{2}\kappa_{\ell_{3}}\big{)},\] with equation (2.4) used in the second line, and (4.13) for the final expression. Using this, a complete curvature tensor can be formed for the interior of each triangle. ## 5 Computations ### Surfaces in Euclidean space To compare with existing approaches, computations have been carried out for triangulations of a modified sphere and a peanut-shaped surface in \(\mathbb{E}^{3}\), the latter having both positive and negative curvatures.
The surfaces are defined in spherical-polar coordinates by the radial functions \[r_{S}(\theta,\phi)=\sqrt{\left(1+\frac{1}{4}\sin^{2}\theta\right) \left(1+\frac{1}{4}\sin^{2}\theta\cos^{2}\phi\right)}\, \tag{5.1}\] \[r_{P}(\theta,\phi)=\sqrt{\left(\frac{1}{2}+\cos^{2}\theta\right) \left(1+\frac{1}{4}\sin^{2}\theta\cos^{2}\phi\right)}. \tag{5.2}\] In order to indicate any convergence to smooth curvature values, triangulations of both surfaces were created using progressively more layers of triangles between the two poles. The resulting numbers of vertices, edges and triangles, and the mean and largest absolute values of the hinge angles are given in table 1. Along with the new curvature constructions, the cotan formula and Cohen-Steiner and Morvan's anisotropic curvature have been computed for each triangulation. Voronoi, or circumcentric dual regions were used to compute both the new piecewise flat mean curvature and the cotan formula. The new piecewise flat directed curvature was computed at each edge, and the anisotropic curvature measure from Cohen-Steiner and Morvan was computed over each Voronoi dual region, and divided by the region area to give an average tensor at each vertex. Visual representations of the triangulations and their dual regions are shown in figure 5, with a colour-scale used to display the piecewise flat mean curvature of each region. To measure the effectiveness of each approximation type, the differences between the approximations and their corresponding smooth curvature values are computed. For the new piecewise flat mean curvature and the cotan formula, the value at a given vertex is compared with the smooth curvature at the surface point corresponding with the vertex. For the new piecewise flat directed curvature, the value at an edge is compared with the smooth curvature at the midpoint of the corresponding geodesic segment, in a direction tangent to the surface and orthogonal to the geodesic segment at that point. For the Cohen-Steiner and Morvan curvature, rather than comparing tensors directly, the principal curvatures were computed for the average tensor at each vertex, and compared with the smooth principal curvatures at the corresponding smooth points. \begin{table} \begin{tabular}{c c c c|c c|c c} & \multicolumn{3}{c|}{No. simplices} & \multicolumn{2}{c|}{Modified sphere} & \multicolumn{2}{c}{Peanut-shape} \\ \cline{2-8} & Triangles & Edges & Vertices & Mean & Largest & Mean & Largest \\ \cline{2-8} \(6\) Layers & \(96\) & \(144\) & \(50\) & \(18^{\circ}\) & \(32^{\circ}\) & \(20^{\circ}\) & \(37^{\circ}\) \\ \(10\) Layers & \(252\) & \(378\) & \(128\) & \(11^{\circ}\) & \(18^{\circ}\) & \(13^{\circ}\) & \(24^{\circ}\) \\ \(14\) Layers & \(480\) & \(720\) & \(242\) & \(8.2^{\circ}\) & \(13^{\circ}\) & \(9.7^{\circ}\) & \(18^{\circ}\) \\ \(18\) Layers & \(780\) & \(1170\) & \(392\) & \(6.4^{\circ}\) & \(9.8^{\circ}\) & \(7.6^{\circ}\) & \(14^{\circ}\) \\ \(22\) Layers & \(1152\) & \(1728\) & \(578\) & \(5.2^{\circ}\) & \(8.1^{\circ}\) & \(6.3^{\circ}\) & \(12^{\circ}\) \\ \end{tabular} \end{table} Table 1: The number of triangles, edges and vertices in each simplicial graph, and the mean and largest absolute value of the hinge angles, in degrees, for each triangulation. Figure 5: Contour plots showing the piecewise flat mean curvature over each dual region for the \(6\), \(14\) and \(22\) layer triangulations of the modified sphere on top and peanut-shaped surface on the bottom. The smooth mean curvature for each is shown on the far right.
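To make the pipeline behind such computations concrete, the following minimal sketch (not the repository code linked in the conclusion, and with the icosahedron chosen purely for illustration) evaluates the total mean curvature formula (4.6) for the coarsest possible spherical test case: an icosahedron inscribed in the unit sphere, whose face list is supplied by scipy's convex hull and whose hinge angles are recovered as angles between outward face normals.

```python
import numpy as np
from scipy.spatial import ConvexHull
from itertools import combinations
from collections import defaultdict

# Build the 12 icosahedron vertices on the unit sphere.
g = (1 + np.sqrt(5)) / 2
base = [(0, 1, g), (0, 1, -g), (0, -1, g), (0, -1, -g)]
pts = []
for (x, y, z) in base:
    pts += [(x, y, z), (y, z, x), (z, x, y)]   # cyclic permutations
pts = np.array(pts, dtype=float)
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

hull = ConvexHull(pts)                 # 20 triangular faces, 30 edges
normals = hull.equations[:, :3]        # outward unit normal of each face

# Map each edge (pair of vertex indices) to the two faces containing it.
edge_faces = defaultdict(list)
for f, tri in enumerate(hull.simplices):
    for e in combinations(sorted(tri), 2):
        edge_faces[e].append(f)

# Total mean curvature, equation (4.6) with n = 2:  (1/2) sum |h| phi_h,
# with the hinge angle phi_h the angle between adjacent outward normals.
total = 0.0
for (i, j), (fa, fb) in edge_faces.items():
    length = np.linalg.norm(pts[i] - pts[j])
    cosang = np.clip(np.dot(normals[fa], normals[fb]), -1.0, 1.0)
    total += 0.5 * length * np.arccos(cosang)

print(total, 4 * np.pi)   # ~11.5 vs ~12.57: within ~8% at this coarse resolution
```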
In figure 6, the mean absolute values of these differences are graphed against the mean absolute value of the hinge angles for each triangulation, with error bars denoting the standard deviations of each. To relate the errors for different curvature types, and in a scale-invariant way, numeric values for the errors are presented in table 2 as percentages of the mean absolute value of the principal curvatures for each surface. The results of the contour plots, error graphs and the table of percentage errors strongly support the effectiveness of the new curvature constructions. All show reasonable approximations for the lowest resolutions, especially when the new mean curvature is viewed as a spatial average in the contour plots. The error graphs and table also show a clear convergence to zero as the resolution is increased, with the contour plots showing the emergence of more and more of the finer details visible in the smooth mean curvature plots. Finally, the new constructions compare very favourably with the other approaches presented. The errors for the piecewise flat mean curvature are extremely close to those of the cotan formula for all triangulations, and the errors for the piecewise flat directed curvature are on a similar scale to those of the principal curvatures from Cohen-Steiner and Morvan's approach. \begin{table} \begin{tabular}{c c c c c|c c c c} & \multicolumn{4}{c|}{Modified sphere} & \multicolumn{4}{c}{Peanut-shape} \\ \cline{2-9} & Mean & Cotan & Directed & CSM & Mean & Cotan & Directed & CSM \\ \cline{2-9} 6 Layers & 1.6\% & 1.4\% & 2.2\% & 2.5\% & 12\% & 12\% & 11\% & 14\% \\ 10 Layers & 0.62\% & 0.53\% & 1.1\% & 1.8\% & 4.1\% & 3.8\% & 5.2\% & 5.6\% \\ 14 Layers & 0.32\% & 0.28\% & 0.71\% & 1.7\% & 1.9\% & 1.8\% & 3.0\% & 3.1\% \\ 18 Layers & 0.19\% & 0.18\% & 0.51\% & 1.5\% & 1.3\% & 1.4\% & 2.1\% & 2.4\% \\ 22 Layers & 0.12\% & 0.13\% & 0.40\% & 1.4\% & 0.9\% & 1.0\% & 1.6\% & 1.9\% \\ \end{tabular} \end{table} Table 2: The mean errors, as percentages of the mean absolute value of the principal curvatures for each surface. As with the graphs, the values above indicate a clear convergence to zero for the new approaches, and give comparable values to the other approaches. Figure 6: The mean absolute errors are plotted against the mean absolute hinge angles, in degrees, with the bars representing the standard deviations. The errors for the new approaches indicate a clear convergence to zero, and have similar values to the other approaches. ### Non-Euclidean embedding For a non-Euclidean embedding space, the spatial part of a Gowdy space-time model [14] is used, with metric \[ds^{2}=e^{0.1\sin(z)}dx^{2}+e^{-0.1\sin(z)}dy^{2}+dz^{2}. \tag{5.3}\] This space is homogeneous in the \(x\) and \(y\)-directions, periodic in the \(z\)-direction, and can be viewed as a plane gravitational wave, stretching and shrinking alternately in the \(x\) and \(y\)-directions as the \(z\)-value is changed. The embedded surface is defined by the tangent vectors \[\left\langle 0,\,1,\,0\right\rangle,\ \frac{1}{3}\left\langle-1,\,0,\,\pi \right\rangle, \tag{5.4}\] giving a coordinate plane tilted away from the \(z\)-axis, which was deemed to have a simple-enough form but with an interesting variation in extrinsic curvature. The positive orientation of the surface is defined as the side with the positive \(x\)-direction.
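The only non-Euclidean ingredient such computations need is the ability to evaluate the metric (5.3) on tangent vectors. The short sketch below (an illustrative fragment, not the code used for the paper) does this for the tangent vectors (5.4), showing how vector norms inherit the plane-wave oscillation of the embedding space; for a short edge, the tangent-vector norm at a point is a first-order stand-in for the true edge length, which is an integral of \(\sqrt{g}\) along the segment.

```python
import numpy as np

# Evaluate the metric (5.3) on a pair of tangent vectors at height z.
def g(z, u, v):
    return (np.exp(0.1 * np.sin(z)) * u[0] * v[0]
            + np.exp(-0.1 * np.sin(z)) * u[1] * v[1]
            + u[2] * v[2])

# The tangent vectors (5.4) defining the embedded surface.
va = np.array([0.0, 1.0, 0.0])
vb = np.array([-1.0, 0.0, np.pi]) / 3.0

# Their norms oscillate with z as the wave stretches and shrinks the
# x- and y-directions, so the triangulation's geometry varies with z.
for z in np.linspace(0.0, 2 * np.pi, 5):
    print(round(z, 3), np.sqrt(g(z, va, va)), np.sqrt(g(z, vb, vb)))
```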
The surface is triangulated using both a rectangular grid, aligned with the vectors in (5.4), and a grid that is skewed to ensure the triangulation remains Delaunay, as shown in figure 7. The vectors tangent to the graph segments on the surface are given below, \[\text{Rectangular:}\quad v_{a}^{R}=\left\langle 0,\,1,\,0 \right\rangle, v_{b}^{R}=\frac{1}{3}\left\langle-1,\,0,\,\pi\right\rangle, v_{c}^{R}=v_{a}^{R}+v_{b}^{R},\] \[\text{Skewed:}\quad v_{a}^{S}=\left\langle 0,\,1,\,0\right\rangle, v_{b}^{S}=\frac{1}{3}\left\langle-1,\,-\frac{2}{3},\,\pi \right\rangle, v_{c}^{S}=v_{a}^{S}+v_{b}^{S}. \tag{5.5}\] Since changes in both the surface and embedding space only occur as the \(z\)-value is changed, grids with 6, 12, 24 and 48 blocks over a full period in the \(z\) direction are used to give different resolution triangulations, with the grid widths re-scaled accordingly. As explained in section 2.3, the hinge angles are defined by constructing a three-dimensional triangulation around the surface, as shown in figure 7. Details of these triangulations can be found in [12] and [13], where they were used to give piecewise flat approximations of the scalar and Ricci curvature in the first, and the Ricci flow of the Gowdy three-space in the latter. Figure 7: The rectangular grid is shown on the left and the skewed grid on the right, along with the three-dimensional triangulations that are used to determine the hinge angles. The mean curvature was computed over barycentric duals for the rectangular grid, since the triangulations are not Delaunay, and Voronoi or circumcentric duals for the skewed grid. As the surface and embedding space are homogeneous in the \(x\) and \(y\) directions, the curvatures can be plotted as functions of the \(z\) coordinate, with graphs of the mean curvature shown in figure 8 and the directed curvature for each edge-type in figure 9. The approximations are graphed as piecewise linear curves, with the curvature approximations represented by the points where segments join. For the mean curvature, the approximations are located at the \(z\)-value of the smooth point corresponding with the piecewise flat vertex, and for the directed curvature, the \(z\)-values represent the midpoint of the geodesic segment corresponding with the given edge. The smooth mean curvature is also graphed as a function of \(z\) in figure 8, and for each edge-type in figure 9, the smooth curvature at the midpoint of the corresponding geodesic segments, tangent to the surface and orthogonal to the corresponding vectors in (5.5), are also graphed. Figure 8: Piecewise linear curves are used to represent the piecewise flat mean curvature approximations as functions of the \(z\)-coordinate, showing a convergence to the smooth mean curvature as the resolution is increased. Figure 9: The piecewise flat directed curvatures orthogonal to each edge-type, and tangent to the surface, are shown as piecewise linear functions of the \(z\)-coordinate, clearly approaching the corresponding smooth directed curvatures as the resolution is increased. As in the previous section, numeric values for the errors have also been computed. The mean curvature approximation at each vertex was compared with the smooth mean curvature at the corresponding \(z\)-value, and the piecewise flat directed curvature at each edge was compared with the smooth curvature at the midpoint of the corresponding geodesic segment, in the direction tangent to the surface and orthogonal to the geodesic segment at the midpoint.
In order to compare the different curvature types in a scale-invariant way, table 3 presents the mean of the absolute values of the errors, as percentages of the mean absolute value of the smooth principal curvatures. To compare the effectiveness of each triangulation, the mean of the absolute values of the hinge angles, in degrees, is also given. The results strongly support the effectiveness of the new constructions, and their use for surfaces embedded in non-Euclidean spaces. For the two lower resolutions, involving only 12 and 24 triangles, the graphs closely model the behaviour of the smooth curvature, with reasonable percentage errors. The graphs then show a clear convergence to the smooth curvature, and the percentage errors decrease to about a quarter of their value each time the number of triangles is doubled, the square of the approximate factor by which the hinge angles reduce. The two triangulation types also agree closely with each other, with exceptionally similar graphs for the mean curvature, and for the directed curvature at the \(\ell_{a}\) edges, which are common to both triangulation types. Interestingly, while the hinge angles are lower for the rectangular grid, the curvature approximations have lower mean errors for the skewed grid, on the order of 10% lower in most cases. This is likely due to the triangles being more strongly Delaunay, and the resulting use of Voronoi duals, properties that seem to improve other approaches as well. However, it is reassuring to see that the errors do not increase dramatically for non-Delaunay triangulations. To demonstrate the effectiveness of the higher resolution approximations, the directed curvature at the diagonal \(\ell_{c}\) edges is graphed for the 24 and 48 block triangulations in figure 10, with the 48 block approximations almost indistinguishable from the smooth curvature. These are particularly impressive approximations, with scales that are an order of magnitude lower than the curvature values for the \(\ell_{a}\) edges, and a more complicated behaviour. The computations for these curvatures are also dominated by the hinge angles at the \(\ell_{a}\) and \(\ell_{b}\) edges, making this behaviour impossible to achieve with a single hinge approach. For the rectangular grid, the hinge angles at \(\ell_{c}\) are multiple orders of magnitude less than at \(\ell_{a}\) (4 orders less for the 48-block triangulation), and for the skewed grid, the hinge angles at \(\ell_{c}\) approximate a cosine curve with a period of \(2\pi\), quite different to the behaviour displayed on the right of figure 10. \begin{table} \begin{tabular}{c c c c|c c c} & \multicolumn{3}{c|}{Rectangular Grid} & \multicolumn{3}{c}{Skew Grid} \\ \cline{2-7} & Hinge Ang. & Mean & Directed & Hinge Ang. & Mean & Directed \\ \cline{2-7} \(6\) Blocks & \(0.48^{\circ}\) & \(9.0\%\) & \(16\%\) & \(0.52^{\circ}\) & \(7.6\%\) & \(13\%\) \\ \(12\) Blocks & \(0.27^{\circ}\) & \(2.2\%\) & \(4.4\%\) & \(0.29^{\circ}\) & \(1.8\%\) & \(3.5\%\) \\ \(24\) Blocks & \(0.14^{\circ}\) & \(0.57\%\) & \(1.1\%\) & \(0.15^{\circ}\) & \(0.47\%\) & \(0.9\%\) \\ \(48\) Blocks & \(0.07^{\circ}\) & \(0.14\%\) & \(0.28\%\) & \(0.08^{\circ}\) & \(0.12\%\) & \(0.22\%\) \\ \end{tabular} \end{table} Table 3: The mean absolute values of the hinge angles, and the mean errors, as percentages of the mean absolute value of the smooth principal curvatures for the surface.
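The quadratic convergence pattern claimed above is easy to verify directly from the rectangular-grid column of table 3: each successive error ratio should roughly match the square of the corresponding hinge-angle ratio, as in this small check.

```python
import numpy as np

# Convergence check using the rectangular-grid values from table 3:
# successive error ratios vs squared hinge-angle ratios as the
# resolution doubles.
hinge = np.array([0.48, 0.27, 0.14, 0.07])   # mean |hinge angle|, degrees
mean_err = np.array([9.0, 2.2, 0.57, 0.14])  # mean-curvature errors, %

print(mean_err[1:] / mean_err[:-1])      # ~[0.24, 0.26, 0.25]
print((hinge[1:] / hinge[:-1]) ** 2)     # ~[0.32, 0.27, 0.25]
```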
## 6 Conclusion Discrete forms of the mean and directed curvature have been constructed on piecewise flat approximations of smooth manifolds in a manner that aims to approximate the local smooth curvature and be applicable to manifolds embedded in non-Euclidean spaces. When dual regions \(V_{v}\) are chosen to be Voronoi or barycentric, the resulting mean and directed curvature expressions, \(H_{v}\) and \(\kappa_{h}\) respectively, take the slightly simplified form \[H_{v} = \frac{1}{n\,|V_{v}|}\sum_{i}\,\frac{1}{2}|h_{i}|\,\phi_{i}, \tag{6.1}\] \[\kappa_{h} = \frac{1}{|V_{h}|}\sum_{i}\,\frac{1}{2}|h_{i}|\,\phi_{i}\,\cos^{2} \theta_{i}, \tag{6.2}\] where \(h_{i}\) are the hinges that intersect the regions \(V_{v}\) or \(V_{h}\), \(\phi_{i}\) are the corresponding hinge angles, and \(\theta_{i}\) are the angles between the hinges \(h_{i}\) and \(h\). The hinge region \(V_{h}\) is formed by the union of dual regions \(V_{v}\), for vertices \(v\) in the closure of the hinge \(h\), bounded on their interior by regions orthogonal to \(h\) within \(S^{n}\), see figure 4 for example. The piecewise flat directed curvature \(\kappa_{h}\) approximates the smooth curvature in a direction orthogonal to the hinge \(h\) and tangent to the manifold. Computations for a pair of surfaces in Euclidean space, and two different triangulation types for a surface in a non-Euclidean space, show these new curvatures to: * reasonably approximate the smooth curvature for low resolution triangulations and converge to the corresponding smooth curvature values as the resolution is increased; * give consistent values for different triangulations of the same surface; * apply to surfaces in both Euclidean and non-Euclidean spaces; * closely match the results of other approaches for Euclidean embeddings. The convergence of the new mean curvature is clearly demonstrated by the contour plots in figure 5, and the convergence of the new directed curvature for a non-Euclidean embedding is remarkably clear for the two different triangulation types in figure 10 above. Figure 10: Piecewise flat directed curvature at the diagonal \(\ell_{c}\) edges, similar to the graphs on the right of figure 9, but re-scaled. The higher resolutions give remarkably close approximations, particularly considering the more complicated behaviour and smaller scale. For a piecewise flat approximation of a two-dimensional surface, a complete curvature tensor can be formed within each triangle, using only the directed curvature at the edges. Though there are not enough hinges on the boundary of an \(n\)-simplex in higher dimensions, the edges are of the ideal number and orientation. Unfortunately, constructing an edge-tangent curvature on a region similar to the hinge volume has not been successful. This is mostly due to the \(\cos^{2}\theta_{i}\) term having the opposite effect here, with hinges on the boundary having the highest weighting, leading to potential jumps in the curvature for slight variations in the triangulation. Work is currently underway to test different types of edge-regions. The discrete _intrinsic_ curvatures in [12] were developed following a similar approach to this paper, and the structures of the two fit together very closely, particularly when the piecewise flat manifold \(S^{n}\) acts as the boundary of an \((n+1)\)-dimensional piecewise flat manifold \(R^{n+1}\).
In this situation, the hinges of \(S^{n}\) coincide with the co-dimension-two simplices of \(R^{n+1}\), where the intrinsic deficit angles are defined, a connection that has already been used by Hartle and Sorkin [16] to provide boundary terms for the Regge Calculus action using the hinge angles. On top of this, the vertex and hinge regions \(V_{v}\) and \(V_{h}\) act as boundaries for the regions used to construct the scalar and sectional curvature in [12]. This makes the new piecewise flat extrinsic curvatures well suited to extend the piecewise flat Ricci flow of [12] and [13] to manifolds with boundary. **Data Availability:** The code used to perform the computations in section 5, and the data generated, are available in the Zenodo repository, [https://doi.org/10.5281/zenodo.7787062](https://doi.org/10.5281/zenodo.7787062). ## Acknowledgment This paper is dedicated to the memory of Niall O Murchadha. Niall was a patient teacher, a supportive mentor and a good friend. I will always remember your insightful perspectives and contagious enthusiasm. You are sadly missed.
2308.16494
Locally Tomographic Shadows (Extended Abstract)
Given a monoidal probabilistic theory -- a symmetric monoidal category $\mathcal{C}$ of systems and processes, together with a functor $\mathbf{V}$ assigning concrete probabilistic models to objects of $\mathcal{C}$ -- we construct a locally tomographic probabilistic theory LT$(\mathcal{C},\mathbf{V})$ -- the locally tomographic shadow of $(\mathcal{C},\mathbf{V})$ -- describing phenomena observable by local agents controlling systems in $\mathcal{C}$, and able to pool information about joint measurements made on those systems. Some globally distinct states become locally indistinguishable in LT$(\mathcal{C},\mathbf{V})$, and we restrict the set of processes to those that respect this indistinguishability. This construction is investigated in some detail for real quantum theory.
Howard Barnum, Matthew A. Graydon, Alex Wilce
2023-08-31T06:57:20Z
http://arxiv.org/abs/2308.16494v1
# Locally Tomographic Shadows (Extended Abstract) Howard Barnum (Institute for Quantum Computing, University of Waterloo) · Matthew A. Graydon (Institute for Quantum Computing, University of Waterloo) · Alex Wilce (Susquehanna University) ###### Abstract Given a monoidal probabilistic theory -- a symmetric monoidal category \(\mathcal{C}\) of systems and processes, together with a functor \(\mathbb{V}\) assigning concrete probabilistic models to objects of \(\mathcal{C}\) -- we construct a locally tomographic probabilistic theory \(\mathrm{LT}(\mathcal{C},\mathbb{V})\) -- the _locally tomographic shadow_ of \((\mathcal{C},\mathbb{V})\) -- describing phenomena observable by local agents controlling systems in \(\mathcal{C}\), and able to pool information about joint measurements made on those systems. Some globally distinct states become locally indistinguishable in \(\mathrm{LT}(\mathcal{C},\mathbb{V})\), and we restrict the set of processes to those that respect this indistinguishability. This construction is investigated in some detail for real quantum theory. ## 1 Introduction As is well known, complex quantum theory is distinguished from its real counterpart by a property called _tomographic locality_ or _local tomography_ [4, 10]: states of a bipartite system are entirely determined by the joint probabilities they assign to measurements performed separately on the two component systems. That this fails for finite-dimensional real quantum theory is evident on dimensional grounds, but it's more instructive to note that if \(\mathbf{H}\) and \(\mathbf{K}\) are real Hilbert spaces, the space \(\mathcal{L}_{s}(\mathbf{H}\otimes\mathbf{K})\) of bounded self-adjoint operators on \(\mathbf{H}\otimes\mathbf{K}\) decomposes as \(\mathcal{L}_{ss}\oplus\mathcal{L}_{aa}\), where \(\mathcal{L}_{ss}:=\mathcal{L}_{s}(\mathbf{H})\otimes\mathcal{L}_{s}(\mathbf{K})\) and \(\mathcal{L}_{aa}:=\mathcal{L}_{a}(\mathbf{H})\otimes\mathcal{L}_{a}(\mathbf{K})\), with \(\mathcal{L}_{a}(\mathbf{H})\) the space of bounded anti-self-adjoint operators on \(\mathbf{H}\), and similarly for \(\mathbf{K}\). This is an orthogonal decomposition with respect to the trace inner product. Thus, a density operator \(\rho\) on \(\mathbf{H}\otimes\mathbf{K}\) has a unique decomposition \(\rho_{ss}+\rho_{aa}\), with \(\rho_{ss}\in\mathcal{L}_{ss}\) and \(\rho_{aa}\in\mathcal{L}_{aa}\). Given effects \(a\in\mathcal{L}_{s}(\mathbf{H})\) and \(b\in\mathcal{L}_{s}(\mathbf{K})\), \(a\otimes b\) lives in \(\mathcal{L}_{ss}\), so \(\mathrm{Tr}((a\otimes b)\rho_{aa})=0\), and thus \(\mathrm{Tr}((a\otimes b)\rho)=\mathrm{Tr}((a\otimes b)\rho_{ss})\). In other words, states with the same \(\mathcal{L}_{ss}\) component but distinct \(\mathcal{L}_{aa}\) components are _locally indistinguishable_.1 Footnote 1: In contrast, in CQM, any anti-self-adjoint operator \(a\) has the form \(ib\) where \(b\) is self-adjoint. Hence, if \(a=ib\) and \(a^{\prime}=ib^{\prime}\) are anti-self-adjoint, \(a\otimes a^{\prime}\) (as an element of \(\mathcal{L}_{s}(\mathbf{H}\otimes\mathbf{K})\)) coincides with \(-(b\otimes b^{\prime})\in\mathcal{L}_{s}(\mathbf{H})\otimes\mathcal{L}_{s}( \mathbf{K})\). That is: \(\mathcal{L}_{ss}=\mathcal{L}_{aa}=\mathcal{L}_{s}(\mathbf{H}\otimes\mathbf{K})\). This suggests a construction: simply "factor out" the non-locally observable \(\mathcal{L}_{aa}\) degrees of freedom to obtain a locally tomographic theory.
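As a minimal numerical illustration of this local indistinguishability (our own numpy sketch, using two "rebits" \(\mathbf{H}=\mathbf{K}=\mathbb{R}^{2}\)): the anti-self-adjoint operators on \(\mathbb{R}^{2}\) are spanned by a single generator \(J\), and \(J\otimes J\) is symmetric and traceless, so perturbing a state by a small multiple of \(J\otimes J\in\mathcal{L}_{aa}\) leaves every product-effect probability unchanged.

```python
import numpy as np

# Two distinct two-rebit states that no product effect can tell apart.
J = np.array([[0.0, -1.0], [1.0, 0.0]])     # spans L_a(R^2); J^T = -J
rho = np.eye(4) / 4.0                        # maximally mixed state
sigma = rho + 0.1 * np.kron(J, J)            # eigenvalues 1/4 +/- 1/10 > 0

print(np.linalg.eigvalsh(sigma))             # all positive, trace 1

rng = np.random.default_rng(0)
for _ in range(5):
    a = rng.standard_normal((2, 2)); a = (a + a.T) / 2   # random symmetric
    b = rng.standard_normal((2, 2)); b = (b + b.T) / 2   # (= self-adjoint)
    ab = np.kron(a, b)
    # Tr(a J) = 0 for symmetric a, so the J (x) J term contributes nothing:
    print(np.trace(ab @ rho) - np.trace(ab @ sigma))     # ~0 every time
```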
Here, we not only show that this is feasible, but go much further. By a _monoidal probabilistic theory_ we mean a pair \((\mathcal{C},\mathbb{V})\) where \(\mathcal{C}\) is a symmetric monoidal category and \(\mathbb{V}:\mathcal{C}\rightarrow\mathbf{Prob}\) is a functor to a suitable category of concrete probabilistic models, taking monoidal products in \(\mathcal{C}\) to (non-signaling) composites in \(\mathbf{Prob}\). We outline this framework in Section 2. Given such a theory \((\mathcal{C},\mathbb{V})\), we construct a new, locally tomographic theory, \(\mathrm{LT}(\mathcal{C},\mathbb{V})\), that describes what the world "looks like" to local agents, at least if one restricts attention to processes that respect local indistinguishability. Taking a cue from Plato, we call this the _locally tomographic shadow_ of the original theory. This is described in Section 3. In Section 4, we return to the case of real QM, and show that our construction leads to a non-trivial theory that differs from both real and complex QM, but still allows for entangled states and effects. If one lifts the restriction on processes mentioned above, it is still possible to construct a "shadow" theory, at the price of accepting an additional layer of non-determinism on top of that built into the probabilistic structure of the original model. This is briefly discussed in the concluding section 5, along with a number of other directions for further work. This is a preliminary sketch of a longer work in progress. ## 2 Generalized Probabilistic Theories For our purposes, a _probabilistic model_ [4, 6] is a pair \((\mathbb{V},u)\) where \(\mathbb{V}\) is an ordered real vector space and \(u\) is a strictly positive linear functional thereon, referred to as the _unit effect_ of the model. Elements \(\alpha\) of \(\mathbb{V}_{+}\) with \(u(\alpha)=1\) are the (normalized) _states_ of the system. _Effects_ (measurement results) are positive functionals \(a\) on \(\mathbb{V}\) with \(a\leq u\); \(a(\alpha)\) is the probability of \(a\) occurring in state \(\alpha\). Where we wish to keep track of several models, we label them \(A\), \(B\), etc., writing, e.g., \(\mathbb{V}(A)\), \(\mathbb{V}^{*}(A)\) for the associated ordered vector spaces and their duals, and \(u_{A}\) for the unit effects. A _process_ from a probabilistic model \((\mathbb{V}(A),u_{A})\) to a probabilistic model \((\mathbb{V}(B),u_{B})\) is a positive linear mapping \(\phi:\mathbb{V}(A)\to\mathbb{V}(B)\) with \(u_{B}(\phi(\alpha))\leq u_{A}(\alpha)\) for every \(\alpha\in\mathbb{V}(A)_{+}\). We write **Prob** for the category of probabilistic models and processes. In the broadest sense, a probabilistic theory is simply a functor \(\mathbb{V}\) from some category \(\mathcal{C}\) into **Prob**.3 \(\mathcal{C}\) can be understood to consist of "actual" physical systems and processes, or of any mathematical proxies for these (classical phase spaces, Hilbert spaces, open regions of spacetime, spin networks, or what-have-you). \(\mathbb{V}\) attaches probabilistic apparatus to these systems and processes in such a way as, e.g., to describe the possible experiences of agents. Footnote 3: Note that such a functor \(\mathbb{V}\) comes with a designated unit functional \(u_{A}\in\mathbb{V}(A)^{*}\) for every object \(A\in\mathcal{C}\) with \(u_{A}\circ\mathbb{V}(\phi)\leq u_{B}\) for all \(\phi\in\mathcal{C}(A,B)\). In what follows, we denote such a probabilistic theory as a pair \((\mathcal{C},\mathbb{V})\).
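For concreteness, here is a minimal sketch of these definitions for the simplest example, a classical bit; the cone, unit effect, and process shown are illustrative choices of ours, not anything fixed by the text.

```python
import numpy as np

# A classical bit: V = R^2 ordered by the nonnegative orthant, with unit
# effect u(alpha) = alpha_0 + alpha_1. States are probability vectors and
# effects are functionals 0 <= a <= u, i.e., vectors with entries in [0, 1].
u = np.array([1.0, 1.0])

def is_state(alpha):
    return bool(np.all(alpha >= 0) and np.isclose(u @ alpha, 1.0))

def is_effect(a):
    return bool(np.all(a >= 0) and np.all(a <= u))

alpha = np.array([0.3, 0.7])       # a state
a = np.array([1.0, 0.0])           # the effect "outcome 0"
print(is_state(alpha), is_effect(a), a @ alpha)   # True True 0.3

# A process: a positive map that does not increase u, here a stochastic
# matrix composed with a 10% discard probability.
phi = 0.9 * np.array([[0.8, 0.5], [0.2, 0.5]])
print(u @ (phi @ alpha) <= u @ alpha)             # True
```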
We assume that \(\mathbb{V}\) is _injective on objects_, which makes \(\mathbb{V}(\mathcal{C})\) a subcategory of **Prob**. This assumption holds for all of the examples we wish to consider. Where \(\mathbb{V}\) is also injective on morphisms, we say that \((\mathcal{C},\mathbb{V})\) is _process tomographic_[8], in which case, we can simply identify \(\mathcal{C}\) with the corresponding subcategory of **Prob**. Note that the latter -- more exactly, the probabilistic theory \((\mathbb{V}(\mathcal{C}),I)\), where \(I:\mathbb{V}(\mathcal{C})\to\textbf{Prob}\) is the inclusion functor -- is automatically process tomographic, regardless of whether or not \((\mathcal{C},\mathbb{V})\) is so. The example of primary interest here takes \(\mathcal{C}=\textbf{CPM}_{\mathbb{R}}\), the category of finite-dimensional real Hilbert spaces with morphisms \(\textbf{H}\to\textbf{K}\) given by completely positive linear mappings \(\mathcal{L}(\textbf{H})\to\mathcal{L}(\textbf{K})\). In contrast to the complex case, there exist linear mappings \(\mathcal{L}_{s}(\textbf{H})\to\mathcal{L}_{s}(\textbf{K})\) that preserve positivity but not adjoints [10]. We reserve the term positive for those linear mappings that preserve both. Equivalently, \(\phi:\mathcal{L}(\textbf{H})\to\mathcal{L}(\textbf{K})\) is positive iff \(\phi\) maps positive operators to positive operators, hence mapping \(\mathcal{L}_{s}(\textbf{H})\) to \(\mathcal{L}_{s}(\mathbf{K})\), and also maps \(\mathcal{L}_{a}(\mathbf{H})\) to \(\mathcal{L}_{a}(\mathbf{K})\). The probabilistic theory we have in mind when we speak of real QM is then \((\mathbf{CPM}_{\mathbb{R}},\mathbb{V})\) where \(\mathbb{V}(\mathbf{H})=\mathcal{L}_{s}(\mathbf{H})\), the latter understood as an order unit space with \(u_{\mathbf{H}}=\operatorname{Tr}(\cdot)\). In contrast to the complex case, the restriction of \(\phi\) to the self-adjoint part, \(\mathcal{L}_{s}(\mathbf{H})\), of \(\mathcal{L}(\mathbf{H})\) does not determine \(\phi\), so \(\mathbb{R}\mathbf{Q}\mathbf{M}\) is not process tomographic. **Composite Models** A (non-signaling) _composite_ of models \(\mathbb{V}(A)\) and \(\mathbb{V}(B)\) is a model \(\mathbb{V}(AB)\), together with a pair of bilinear mappings \[m:\mathbb{V}(A)\times\mathbb{V}(B)\to\mathbb{V}(AB)\text{ and }\pi:\mathbb{V}(A)^{*} \times\mathbb{V}(B)^{*}\to\mathbb{V}(AB)^{*}\] such that (i) \(\omega\circ\pi\) is a joint state on \(A\) and \(B\) for all \(\omega\in\mathbb{V}(AB)\), and (ii) \(\pi(a,b)m(\alpha,\beta)=a(\alpha)b(\beta)\) for all \(a\in\mathbb{V}^{*}(A),b\in\mathbb{V}^{*}(B)\) and all \(\alpha\in\mathbb{V}(A)\), \(\beta\in\mathbb{V}(B)\). We think of \(\pi(a,b)\) as representing the joint outcome \((a,b)\) of a pair of experiments performed on \(A\) and \(B\), and \(m(\alpha,\beta)\) as the result of preparing states \(\alpha\) and \(\beta\) independently on \(A\) and \(B\). We refer to \(\pi(a,b)\) and \(m(\alpha,\beta)\) as _product_ effects and states, respectively. If \(\mathbf{H}_{A}\) and \(\mathbf{H}_{B}\) are two complex (finite-dimensional) Hilbert spaces, the obvious choice for a composite model is \(\mathbb{V}(AB)=\mathbb{V}(\mathbf{H}_{A}\otimes\mathbf{H}_{B})=\mathcal{L}_{s }(\mathbf{H}_{A}\otimes\mathbf{H}_{B})\), with \(\pi(a,b)(\omega)=\operatorname{Tr}((a\otimes b)\omega)\) and \(m(\alpha,\beta)=\alpha\otimes\beta\). If every state \(\omega\in\mathbb{V}(AB)\) is determined by the joint probabilities \(\omega(\pi(a,b))\), we say that \(AB\) is _locally tomographic_. 
Given arbitrary models \(\mathbb{V}(A)\), \(\mathbb{V}(B)\), there are two extremal locally tomographic composites, obtained by endowing \(\mathbb{V}(A)\otimes\mathbb{V}(B)\) with the _minimal_ and _maximal_ tensor cones [5, 12, 14]. The former is generated by _separable_ states, i.e., convex combinations of product states. The latter consists of all bilinear forms that are positive on products of effects. We write \(\mathbb{V}(A)\otimes_{\mbox{min}}\mathbb{V}(B)\) and \(\mathbb{V}(A)\otimes_{\mbox{max}}\mathbb{V}(B)\) for \(\mathbb{V}(A)\otimes\mathbb{V}(B)\) as ordered by these minimal and maximal cones, respectively. A composite \(\mathbb{V}(AB)\) is locally tomographic iff \(m:\mathbb{V}(A)\otimes\mathbb{V}(B)\to\mathbb{V}(AB)\) is a linear isomorphism, and in this case it's usual simply to identify the two spaces. One then has \[(\mathbb{V}(A)\otimes_{\mbox{min}}\mathbb{V}(B))_{+}\leq\mathbb{V}(AB)_{+} \leq(\mathbb{V}(A)\otimes_{\mbox{max}}\mathbb{V}(B))_{+}.\] If \(A\) and \(B\) are quantum models, the inclusions are proper: \((\mathbb{V}(A)\otimes_{\mbox{min}}\mathbb{V}(B))_{+}\) contains only separable states, while \((\mathbb{V}(A)\otimes_{\mbox{max}}\mathbb{V}(B))_{+}\) contains states corresponding to non-positive operators [11, 12]. More generally, \(\mathbb{V}(A)\otimes_{\mbox{min}}\mathbb{V}(B)\) allows only separable states, but arbitrarily entangled effects; the maximal tensor product \(\mathbb{V}(A)\otimes_{\mbox{max}}\mathbb{V}(B)\) allows the reverse. Both tensor products are naturally associative, and extending straightforwardly to tensor products of more than two factors. **Definition:** A _monoidal probabilistic theory_ is a structure \((\mathcal{C},\mathbb{V},m,\pi)\) where \(\mathcal{C}\) is a symmetric monoidal category, \(\mathbb{V}\) is a functor \(\mathcal{C}\to\mathbf{Prob}\), and \(m\) and \(\pi\) are assignments, to every pair of objects \(A,B\in\mathcal{C}\), of bilinear mappings \[m_{A,B}:\mathbb{V}(A)\times\mathbb{V}(B)\to\mathbb{V}(AB)\text{ and }\pi_{A,B}: \mathbb{V}^{*}(A)\times\mathbb{V}^{*}(B)\to\mathbb{V}^{*}(AB)^{4}\] such that 1. \(m_{A,B},\pi_{A,B}\) make \(\mathbb{V}(AB)\) a composite of \(\mathbb{V}(A)\) and \(\mathbb{V}(B)\), 2. \(\mathbb{V}(I)=\mathbb{R}\) with \[m_{I,A}:\mathbb{V}(I)\times\mathbb{V}(A)\to\mathbb{V}(A),\ \pi_{I,A}:\mathbb{V}^{*}(I)\times\mathbb{V}^{*}(A)\to\mathbb{V}^{*}(A)\] the bilinear mappings uniquely defined by \(m(1,\alpha)=\alpha\) and \(\pi_{I,A}(1,a)=a\), and similarly for \(m_{I,A}\) and \(\pi_{A,I}\); 3. \(\mathbb{V}(\sigma_{A,B})\circ\pi_{A,B}=\pi_{B,A}\), and similarly \(m_{A,B}\circ\mathbb{V}(\sigma_{A,B})=m_{B,A}\); and 4. for all morphisms \(\phi:A\to A^{\prime}\) and \(\psi:B\to B^{\prime}\) in \(\mathcal{C}\), the diagram commutes. If \(\alpha\in\mathcal{C}(I,A)\), then by condition (ii), \(\mathbb{V}(\alpha):\mathbb{R}\to\mathbb{V}(A)\) defines an element of \(\mathbb{V}(A)\), namely \(\mathbb{V}(\alpha)(1)\). We make it a standing assumption in what follows that every normalized state in \(\mathbb{V}(A)\) arises in this way. _Remark:_ On the left side of (1), \(\mathbb{V}(\phi)\otimes\mathbb{V}(\psi)\) is the usual tensor product of linear maps, while on the right, \(\phi\otimes\psi\) is the monoidal composite of the morphisms \(\phi\) and \(\psi\) in \(\mathcal{C}\). 
An equivalent statement is that, for all states \(\omega\in\mathbb{V}(AB)\) and all effects \(a^{\prime}\in\mathbb{V}^{*}(A^{\prime})\) and \(b^{\prime}\in\mathbb{V}^{*}(B^{\prime})\), we have \[\omega(\mathbb{V}(\phi)^{*}(a^{\prime}),\mathbb{V}(\psi)^{*}(b^{\prime}))= \mathbb{V}(\phi\otimes\psi)(\omega)(a^{\prime},b^{\prime}).\] More compactly: \(m\) and \(\pi^{*}\) are natural transformations from the functor \(\mathbb{V}\otimes_{\mbox{min}}\mathbb{V}\) to the functor \(\mathbb{V}\circ\otimes\), and from \(\mathbb{V}\circ\otimes\) to \(\mathbb{V}\otimes_{\mbox{max}}\mathbb{V}\), respectively. When no confusion seems likely, we will henceforth write \((\mathcal{C},\mathbb{V})\) for a monoidal probabilistic theory \((\mathcal{C},\mathbb{V},m,\pi)\). Evidently, \(\mathbb{R}\mathbf{Q}\mathbf{M}\), with its standard compositional structure, is an example of such a monoidal probabilistic theory. All of the foregoing applies to composites of more than two models. One can show that \(\otimes_{\mbox{min}}\) and \(\otimes_{\mbox{max}}\) are both naturally associative. If one has an \(n\)-partite composite \(A=A_{1}\cdots A_{n}\), built up recursively so that \(A=(A_{1}\cdots A_{n-1})A_{n}\), then arguing inductively one has canonical positive mappings \(\bigotimes_{\mbox{min}}\mathbb{V}(A_{i})\xrightarrow{m}\mathbb{V}(A) \xrightarrow{\pi^{*}}\bigotimes_{\mbox{max}}\mathbb{V}(A_{i})\). We call a monoidal probabilistic theory \((\mathcal{C},\mathbb{V})\)_locally tomographic_ iff, for every pair of objects \(A,B\in\mathcal{C}\), the composite \(\mathbb{V}(AB)\) is locally tomographic. As pointed out above, as long as \(\mathbb{V}\) is injective on objects, as we are assuming here, process-tomography can be enforced by passing to the probabilistic theory \((\mathbb{V}(\mathcal{C}),I)\). For local tomography, the situation is not so simple. Still, as we show in the next section, it is possible to construct, from a given probabilistic theory, a kind of locally tomographic quotient theory. ## 3 Locally Tomographic Shadows If a monoidal probabilistic theory \((\mathcal{C},\mathbb{V})\) is not locally tomographic, one can still ask what the world it describes would "look like" to agents having access only to local measurements. As a first step, we need to assume that objects in \(\mathcal{C}\) can be carved up in a preferred way into local pieces. To ensure this, we replace \(\mathcal{C}\) with its strictification, the category \(\mathcal{C}^{*}\) having as objects finite lists \(\vec{A}=(A_{1},...,A_{n})\) of nontrivial (non-identity) objects \(A_{i}\in\mathcal{C}\), with the understanding that this represents the composite \(\Pi_{i=1}^{n}A_{i}\) in \(\mathcal{C}\), but with the indicated monoidal decomposition. Morphisms from \(\vec{A}=(A_{1},...,A_{n})\) to \(\vec{B}=(B_{1},...,B_{k})\) are simply morphisms \(\Pi_{i=1}^{n}A_{i}\to\Pi_{i=1}^{k}B_{i}\) in \(\mathcal{C}\). This is a strict symmetric monoidal category, with \((A_{1},...,A_{n})(B_{1},...,B_{k})=(A_{1},...,A_{n},B_{1},...,B_{k})\), the empty sequence \(()\) serving as the monoidal unit. There is a (strong, but not strict) monoidal functor \(\Pi:\mathcal{C}^{*}\to\mathcal{C}\) taking \(\vec{A}=(A_{1},...,A_{n})\) to \(\Pi\vec{A}:=\Pi_{i=1}^{n}A_{i}\), with \(\Pi(\,)=I_{\mathcal{C}}\), and acting as the identity on morphisms. This is not generally injective on objects (for instance, \(\Pi(A,BC)=\Pi(A,B,C)\)).
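The bookkeeping behind \(\mathcal{C}^{*}\) and \(\Pi\) is easy to mimic with nested tuples; in this toy sketch of ours (labels and helpers are purely illustrative, with a composite object of \(\mathcal{C}\) modeled as a nested tuple of its factors), the monoidal product is concatenation and \(\Pi\) flattens a decomposition, making the failure of injectivity visible.

```python
# Objects of C* as tuples of system labels; the monoidal product is
# concatenation, and the empty tuple plays the role of the monoidal unit.
def tensor(seq1, seq2):
    return seq1 + seq2

def flatten(obj):
    """Pi on objects: collapse a nested decomposition to one composite."""
    if isinstance(obj, tuple):
        return tuple(x for part in obj for x in flatten(part))
    return (obj,)

unit = ()
print(tensor(("A",), ("B", "C")))                               # ('A', 'B', 'C')
print(flatten(("A", ("B", "C"))) == flatten(("A", "B", "C")))   # True:
# Pi(A, BC) = Pi(A, B, C), so Pi is not injective on objects.
```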
Composing \(\Pi\) with \(\mathbb{V}\) now gives us a probabilistic theory \((\mathcal{C}^{*},\mathbb{V}\circ\Pi)\) in which we have the desired canonical decompositions, but in which we've lost the desirable injectivity-on-objects feature. This feature will be recovered when we pass to the locally tomographic shadow, which we will now construct. ### The shadow of a composite If \(AB\) is a composite of probabilistic models \(A\) and \(B\), then a state \(\omega\in\Omega(AB)\) of the composite system determines a joint state \(\widetilde{\omega}:\mathbb{V}(A)^{*}\times\mathbb{V}(B)^{*}\to[0,1]\), given by \(\widetilde{\omega}(a,b):=\pi_{A,B}(a,b)(\omega)\). This describes the joint probabilities of measurement outcomes carried out on \(A\) and \(B\) separately. We may call \(\widetilde{\omega}\) the _local shadow_ of \(\omega\). More generally, suppose that \(\vec{A}:=(A_{1},...,A_{n})\) is a sequence in \(\mathcal{C}^{*}\) with composite \(\Pi\vec{A}:=A\) in \(\mathcal{C}\): as we've seen, there is a positive linear mapping \[\pi_{\vec{A}}^{*}:\mathbb{V}(A)\to\mathcal{L}^{n}(\mathbb{V}^{*}(A_{1}),..., \mathbb{V}^{*}(A_{n}))\] taking \(\omega\in\Omega(A)\) to the corresponding joint state \(\widetilde{\omega}\) on \(A_{1},...,A_{n}\) -- its local shadow -- given by \[\widetilde{\omega}(a_{1},...,a_{n}):=\pi_{\vec{A}}^{*}(\omega)(a_{1},...,a_{n})=(a_ {1}\otimes\cdots\otimes a_{n})(\omega)\] for all \((a_{1},...,a_{n})\in\mathbb{V}^{*}(A_{1})\times\cdots\times\mathbb{V}^{*}(A_{ n})\). **Definition:** For \(\vec{A}=(A_{1},...,A_{n})\in\mathcal{C}^{*}\), let \(\widetilde{\mathbb{V}}(\vec{A})\) be the space \(\bigotimes_{i}\mathbb{V}(A_{i})\), ordered by the cone \(\pi_{\vec{A}}^{*}(\mathbb{V}(\Pi\vec{A})_{+})\) consisting of local shadows of elements of \(\mathbb{V}(\Pi\vec{A})_{+}\), and let \(\widetilde{u}_{\vec{A}}=u_{A_{1}}\otimes\cdots\otimes u_{A_{n}}\). We call the probabilistic model \((\widetilde{\mathbb{V}}(\vec{A}),\widetilde{u}_{\vec{A}})\) the _locally tomographic shadow_ of \(A=\Pi\vec{A}\) with respect to the given decomposition (that is, with respect to \(\vec{A}\)). Going forward, it will be convenient to use the abbreviations \(\mathrm{LT}_{\vec{A}}\), or even just LT, for the mapping \(\pi_{\vec{A}}^{*}:\mathbb{V}(A)\to\widetilde{\mathbb{V}}(\vec{A})\), whenever context makes it convenient and relatively unambiguous to do so. We have canonical mappings \[\pi:\mathbb{V}^{*}(A_{1})\times\cdots\times\mathbb{V}^{*}(A_{k})\to\widetilde{ \mathbb{V}}(A_{1},...,A_{k})^{*},\text{ and }\ m:\mathbb{V}(A_{1})\times\cdots\times\mathbb{V}(A_{k})\to\widetilde{ \mathbb{V}}(A_{1},...,A_{k})\] given by \[\pi(a_{1},...,a_{n})(\mu)=\mu(a_{1},...,a_{n})\ \ \text{and}\ \ m(\alpha_{1},..., \alpha_{n})(a_{1},...,a_{n})=\Pi_{i=1}^{n}a_{i}(\alpha_{i}).\] It is straightforward that \((\widetilde{\mathbb{V}}(A_{1},...,A_{n}),\pi,m)\) is a composite of \(\mathbb{V}(A_{1}),...,\mathbb{V}(A_{n})\). For any vector spaces \(\mathbb{V}_{1},...,\mathbb{V}_{k}\), let \(\mathscr{L}^{k}(\mathbb{V}_{1},...,\mathbb{V}_{k})\) denote the space of \(k\)-linear forms on \(\mathbb{V}_{1}\times\cdots\times\mathbb{V}_{k}\).
Using the canonical isomorphism \(\mathscr{L}^{k}(\mathbb{V}_{1}^{*},...,\mathbb{V}_{k}^{*})\simeq\bigotimes_{i =1}^{k}\mathbb{V}_{i}\), we can identify \(\widetilde{\mathbb{V}}(A_{1},...,A_{k})\), as a vector space, with \(\mathbb{V}(A_{1})\otimes\cdots\otimes\mathbb{V}(A_{k})\), now ordered by a cone \(\widetilde{\mathbb{V}}(A_{1},...,A_{n})_{+}\) with \[\left(\bigotimes_{\min}\mathbb{V}(A_{i})\right)_{+}\subseteq\widetilde{ \mathbb{V}}(A_{1},...,A_{n})_{+}\subseteq\left(\bigotimes_{\max}\mathbb{V}(A _{i})\right)_{+}.\] We also have an injective linear mapping from \(\widetilde{\mathbb{V}}(A_{1},...,A_{k})\simeq\bigotimes_{i}\mathbb{V}(A_{i})\) into \(\mathbb{V}(A_{1},...,A_{k})\), extending the map \(m\) taking \((\alpha_{1},...,\alpha_{n})\in\Pi_{i}\mathbb{V}(A_{i})\) to \(\alpha_{1}\otimes\cdots\otimes\alpha_{n}\in\mathbb{V}(\Pi_{i}A_{i})\). However, this mapping is, as a rule, not positive. This will be illustrated below in the case of real quantum theory. This shows that \(\widetilde{\mathbb{V}}(A_{1},...,A_{n})_{+}\) is generally larger than the minimal tensor cone. As we'll also see in the next section, it is generally strictly smaller than the maximal tensor cone. _Further Notation:_ In the case of a bipartite system, we will sometimes find it convenient to use the tensor-like notation \(\mathbb{V}(A)\boxtimes\mathbb{V}(B)\) for \(\widetilde{\mathbb{V}}(A,B)\). Thus, \(\mathbb{V}(A)\boxtimes\mathbb{V}(B)\) is the vector space \(\mathbb{V}(A)\otimes\mathbb{V}(B)\), but ordered by the cone \(\widetilde{\mathbb{V}}(A,B)\) generated by local shadows \(\widetilde{\omega}\) of states \(\omega\in\mathbb{V}(AB)\). The following identifies the effect cone of \(\widetilde{\mathbb{V}}(A_{1},...,A_{n})\). We omit the straightforward proof. **Lemma 1:**\(\widetilde{\mathbb{V}}(A_{1},...,A_{n})_{+}^{*}\simeq\mathbb{V}^{*}(\Pi_{i}A_ {i})_{+}\cap(\bigotimes_{i}\mathbb{V}^{*}(A_{i}))\)_. In the bipartite case:_ \[(\mathbb{V}(A)\boxtimes\mathbb{V}(B))_{+}^{*}\simeq\mathbb{V}(AB)^{*}\cap( \mathbb{V}(A)^{*}\otimes\mathbb{V}(B)^{*}).\] ### Shadows of Processes Given a probabilistic theory \((\mathscr{C},\mathbb{V})\), we would now like to construct a locally tomographic "shadow" theory by applying the LT construction to the objects of \(\mathbb{V}(\mathscr{C})\). In order to do this, we first need to decide what the morphisms should be in this putative "shadow" theory. In what follows, \(A=\Pi\vec{A}\) and \(B=\Pi\vec{B}\) are composite systems, with specified decompositions \(\vec{A}=(A_{1},...,A_{n})\) and \(\vec{B}=(B_{1},...,B_{k})\). The proof of the following is routine. **Lemma 2:**_Let \(\Phi:\mathbb{V}(A)\to\mathbb{V}(B)\) be a positive linear mapping. The following are equivalent:_ (a) \(\Phi\) _maps_ \(Ker(LT_{\vec{A}})\) _into_ \(Ker(LT_{\vec{B}})\)_._ (b) _If_ \(\omega,\omega^{\prime}\in\mathbb{V}(A)\) _are locally indistinguishable, so are_ \(\Phi(\omega),\Phi(\omega^{\prime})\) _in_ \(\mathbb{V}(B)\)_._ (c) _There exists a linear mapping_ \(\phi:\bigotimes_{i}\mathbb{V}(A_{i})\to\bigotimes_{j}\mathbb{V}(B_{j})\) _such that_ \(LT_{\vec{B}}\circ\Phi=\phi\circ LT_{\vec{A}}\)_._ **Definition:** With notation as above, call a positive linear mapping \(\Phi:\mathbb{V}(A)\to\mathbb{V}(B)\) satisfying any, hence all, of these conditions _locally positive_ (with respect to the specified decompositions). The linear mapping \(\phi\) in part (c) is then uniquely determined. We call it the _shadow_ of \(\Phi\), writing \(\phi=\text{LT}(\Phi)\).
As an immediate consequence, we have

**Lemma 3:** _If \(\Phi:\mathbb{V}(A)\rightarrow\mathbb{V}(B)\) is locally positive, then \(\phi=\text{LT}(\Phi)\) is positive as a mapping from \(\widetilde{\mathbb{V}}(A_{1},...,A_{n})\) to \(\widetilde{\mathbb{V}}(B_{1},...,B_{k})\)._

Locally positive maps are reasonably abundant, but do exclude some important morphisms in \(\mathbb{R}\mathbf{QM}\). For instance, if \(\sigma\) and \(\alpha\) are swap and associator morphisms in \(\mathcal{C}\), \(\mathbb{V}(\sigma)\) is locally positive, but \(\mathbb{V}(\alpha)\) need not be. For another example, if \(\omega\) is a state on \(A=A_{1}\otimes\cdots\otimes A_{n}\), then the corresponding mapping \(\omega:\mathbb{R}=\mathbb{V}(I)\rightarrow\mathbb{V}(A)\) given by \(\omega(1)=\omega\) is trivially locally positive (the kernel of \(\text{LT}_{I}\) is trivial). On the other hand, there is generally no guarantee that an effect \(f:\mathbb{V}(A)\rightarrow\mathbb{R}\) will be locally positive. Indeed, if \(\omega,\omega^{\prime}\in\mathbb{V}(A)_{+}\) are distinct, there will certainly exist some positive linear functional \(f\) with \(f(\omega)\neq f(\omega^{\prime})\), and by re-scaling if necessary, we can take \(f\) to be an effect. If \(\text{LT}(\omega)=\text{LT}(\omega^{\prime})\), then \(f\) is not locally positive. Thus, the passage from \(\mathbb{V}(A)\) to \(\text{LT}(\mathbb{V}(A))\) not only identifies previously distinct states, but also jettisons some effects.

It follows from part (c) of Lemma 2 that if \(\Phi:\Pi_{i}A_{i}\rightarrow\Pi_{j}B_{j}\) and \(\Psi:\Pi_{j}B_{j}\rightarrow\Pi_{k}C_{k}\) are locally positive with shadows \(\phi=\text{LT}(\Phi)\) and \(\psi=\text{LT}(\Psi)\), then \(\Psi\circ\Phi\) is locally positive with shadow \(\psi\circ\phi\).

**Definition:** Let \((\mathcal{C},\mathbb{V})\) be a probabilistic theory. For objects \(\vec{A}=(A_{1},...,A_{n})\) and \(\vec{B}=(B_{1},...,B_{k})\in\mathcal{C}^{*}\), call a morphism \(\Pi\vec{A}\overset{f}{\longrightarrow}\Pi\vec{B}\) _local_ iff \(\mathbb{V}(f):\mathbb{V}(\Pi\vec{A})\rightarrow\mathbb{V}(\Pi\vec{B})\) is locally positive (relative to the preferred factorizations of \(A\) and \(B\)). We write \(\text{Loc}(\mathcal{C},\mathbb{V})\) for the category having the same objects as \(\mathcal{C}^{*}\) -- finite lists of non-identity objects in \(\mathcal{C}\) -- but only local morphisms.

By the remarks above, \(\text{Loc}(\mathcal{C},\mathbb{V})\) is a monoidal sub-category of \(\mathcal{C}^{*}\). By construction, the functor \(\mathbb{V}\circ\Pi:\mathcal{C}^{*}\rightarrow\mathbf{Prob}\) descends to a functor \(\text{Loc}(\mathcal{C},\mathbb{V})\rightarrow\mathbf{Prob}\). However, because \(\Pi\) is not injective on objects, neither will \(\mathbb{V}\circ\Pi\) be. To remedy this, we make the following

**Definition:** For all objects \(\vec{A},\vec{B}\in\mathcal{C}^{*}\) and local morphisms \(f:\vec{A}\rightarrow\vec{B}\), let \(\widetilde{\mathbb{V}}(\vec{A}):=\text{LT}_{\vec{A}}(\mathbb{V}(\Pi\vec{A}))\) and \(\widetilde{\mathbb{V}}(f):=\text{LT}(\mathbb{V}(f))\).

**Lemma 4:** \(\widetilde{\mathbb{V}}\) _is a functor \(\text{Loc}(\mathcal{C},\mathbb{V})\rightarrow\mathbf{Prob}\), and is injective on objects._

_Proof:_ It is straightforward that \(\widetilde{\mathbb{V}}(g\circ f)=\widetilde{\mathbb{V}}(g)\circ\widetilde{\mathbb{V}}(f)\) where \(f\in\mathcal{C}^{*}(A,B)\) and \(g\in\mathcal{C}^{*}(B,C)\).
That \(\widetilde{\mathbb{V}}\) is injective on objects follows from the fact that there is a canonical isomorphism between \(\bigotimes_{i}\mathbb{V}(A_{i})\) and \(\mathscr{L}^{n}(\mathbb{V}(A_{1})^{*},...,\mathbb{V}(A_{n})^{*})\); the latter is literally a space of functions on \(\mathbb{V}(A_{1})^{*}\times\cdots\times\mathbb{V}(A_{n})^{*}\), from which one can read off the spaces \(\mathbb{V}(A_{1})\), ..., \(\mathbb{V}(A_{n})\). As \(\mathbb{V}\) is injective on objects, these in turn determine \((A_{1},...,A_{n})\). \(\square\)

**Definition:** The _locally tomographic shadow_ of a monoidal probabilistic theory \((\mathcal{C},\mathbb{V})\) is the probabilistic theory \(\text{LT}(\mathcal{C},\mathbb{V}):=(\text{Loc}(\mathcal{C},\mathbb{V}),\widetilde{\mathbb{V}})\).

By construction, \(\text{LT}(\mathcal{C},\mathbb{V})=(\text{Loc}(\mathcal{C},\mathbb{V}),\widetilde{\mathbb{V}})\) is locally tomographic. We have the following picture: there are two functors (probabilistic models) associated with \(\text{Loc}(\mathcal{C},\mathbb{V})\): one is given by \(\mathbb{V}\circ\Pi\), and the other by \(\widetilde{\mathbb{V}}:=\text{LT}(\mathbb{V}\circ\Pi)\), i.e., for each object \(\vec{A}=(A_{1},...,A_{n})\) in \(\text{Loc}(\mathcal{C},\mathbb{V})\), we have a positive linear mapping \(\text{LT}_{\vec{A}}:\mathbb{V}(\Pi(\vec{A}))\rightarrow\widetilde{\mathbb{V}}(\vec{A})\). By construction, LT then defines a natural transformation \(\mathbb{V}\circ\Pi\Rightarrow\widetilde{\mathbb{V}}\).

## 4 The Shadow of Real Quantum Theory

We'll now consider the case of finite-dimensional real quantum theory in some detail, concentrating on the bipartite case.

### The LT map

Suppose \({\bf H}\) and \({\bf K}\) are two finite-dimensional real Hilbert spaces. In what follows, we identify states with the corresponding density operators, so that if \(W\) is a density operator on \({\bf H}\otimes{\bf K}\), \({\rm LT}(W)\) is the unique operator in \({\cal L}_{s}({\bf H})\otimes{\cal L}_{s}({\bf K})\) satisfying \({\rm Tr}({\rm LT}(W)a\otimes b)={\rm Tr}(Wa\otimes b)\) for all \(a\in{\cal L}_{s}({\bf H})\) and \(b\in{\cal L}_{s}({\bf K})\). As discussed in the Introduction, \({\cal L}_{s}({\bf H}\otimes{\bf K})\) has a natural orthogonal direct sum decomposition as \[{\cal L}_{s}({\bf H}\otimes{\bf K})\ =\ ({\cal L}_{s}({\bf H})\otimes{\cal L}_{s}({\bf K}))\oplus({\cal L}_{a}({\bf H})\otimes{\cal L}_{a}({\bf K}))\] Since \({\rm Ker}({\rm LT})\) contains \({\cal L}_{a}({\bf H})\otimes{\cal L}_{a}({\bf K})\) and \({\rm Ran}({\rm LT})\) equals \({\cal L}_{s}({\bf H})\otimes{\cal L}_{s}({\bf K})\), we have \({\rm Ker}({\rm LT})={\cal L}_{a}({\bf H})\otimes{\cal L}_{a}({\bf K})\). Let \({\rm Sym}:{\cal L}({\bf H})\to{\cal L}_{s}({\bf H})\) be the symmetrization mapping \({\rm Sym}(a)=\frac{1}{2}(a+a^{\prime})\). Note that this is the orthogonal projection on \({\cal L}({\bf H})\) (with respect to the trace inner product) with range \({\cal L}_{s}({\bf H})\).
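In finite dimension these claims are easy to check numerically. The following Python/numpy snippet is an illustrative aside (the names `sym` and `LT` are ours, not part of the formal development); for \(\mathbf{H}=\mathbf{K}=\mathbb{R}^{2}\) it verifies that \(\mathrm{Sym}\otimes\mathrm{Sym}\) fixes \({\cal L}_{s}({\bf H})\otimes{\cal L}_{s}({\bf K})\), annihilates \({\cal L}_{a}({\bf H})\otimes{\cal L}_{a}({\bf K})\), and that these two summands are trace-orthogonal, as asserted above.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2  # H = K = R^2

def sym(a):
    """Symmetrization Sym(a) = (a + a') / 2 on L(H)."""
    return 0.5 * (a + a.T)

# Random elements of L_s(H), L_s(K) and L_a(H), L_a(K).
a_s = sym(rng.normal(size=(d, d)))
b_s = sym(rng.normal(size=(d, d)))
m1 = rng.normal(size=(d, d)); a_a = 0.5 * (m1 - m1.T)
m2 = rng.normal(size=(d, d)); b_a = 0.5 * (m2 - m2.T)

def LT(u, v):
    """(Sym (x) Sym) on a pure tensor u (x) v, i.e. Sym(u) (x) Sym(v)."""
    return np.kron(sym(u), sym(v))

# Sym (x) Sym fixes L_s(H) (x) L_s(K) ...
assert np.allclose(LT(a_s, b_s), np.kron(a_s, b_s))
# ... and annihilates L_a(H) (x) L_a(K), so Ker(LT) contains it:
assert np.allclose(LT(a_a, b_a), 0.0)

# The two summands are orthogonal in the trace inner product,
# consistent with the direct-sum decomposition of L_s(H (x) K):
assert np.isclose(np.trace(np.kron(a_s, b_s).T @ np.kron(a_a, b_a)), 0.0)
print("decomposition and kernel checks passed")
```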
A straightforward computation gives us

**Lemma 5:** \(LT(W)=(Sym\otimes Sym)(W)\) _for all \(W\in{\cal L}_{s}({\bf H}\otimes{\bf K})\)._

### The locally tomographic cone

Using the notation introduced in Section 3, \({\cal L}_{s}({\bf H})\boxtimes{\cal L}_{s}({\bf K})\) stands for \({\cal L}_{s}({\bf H})\otimes{\cal L}_{s}({\bf K})\) as ordered by the _locally tomographic cone_ \[({\cal L}_{s}({\bf H})\boxtimes{\cal L}_{s}({\bf K}))_{+}\ :=\ {\rm LT}({\cal L}_{s}({\bf H}\otimes{\bf K})_{+}).\] Let \(({\cal L}_{s}({\bf H})\otimes{\cal L}_{s}({\bf K}))_{+}\) stand for the cone of positive operators belonging to \({\cal L}_{s}({\bf H})\otimes{\cal L}_{s}({\bf K})\). That is, \[({\cal L}_{s}({\bf H})\otimes{\cal L}_{s}({\bf K}))_{+}\ :=\ ({\cal L}_{s}({\bf H})\otimes{\cal L}_{s}({\bf K}))\cap{\cal L}_{s}({\bf H}\otimes{\bf K})_{+}.\] A priori, we have \[({\cal L}_{s}({\bf H})\otimes_{\mbox{min}}{\cal L}_{s}({\bf K}))_{+}\subseteq({\cal L}_{s}({\bf H})\otimes{\cal L}_{s}({\bf K}))_{+}\subseteq({\cal L}_{s}({\bf H})\boxtimes{\cal L}_{s}({\bf K}))_{+}\subseteq({\cal L}_{s}({\bf H})\otimes_{\mbox{max}}{\cal L}_{s}({\bf K}))_{+}.\] We'll presently see that if \({\bf H}\) and \({\bf K}\) are of dimension two or greater, all three of these inclusions are proper.

If \(x,y\in{\bf H}\), we use the standard operator-theoretic notation \(x\odot y\) (rather than \(|x\rangle\langle y|\) as in Dirac notation) for the operator on \({\bf H}\) given by \((x\odot y)z\ =\ \langle z,y\rangle x\) for all \(z\in{\bf H}\). If \(\|x\|=1\), then \(x\odot x=P_{x}\), the rank-one projection onto the span of \(x\).

**Example 1:** Let \(\mathbf{H}=\mathbb{R}^{2}\) and let \(\{x,y\}\) be any orthonormal basis. Let \(z\) be the real EPR state \(z=\frac{1}{\sqrt{2}}(x\otimes y+y\otimes x)\). Direct computation shows that \[z\odot z\ =\ \tfrac{1}{2}(P_{x}\otimes P_{y}+P_{y}\otimes P_{x}+(x\odot y)\otimes(y\odot x)+(y\odot x)\otimes(x\odot y))\] Now, \(\operatorname{Sym}(x\odot y)=\tfrac{1}{2}(x\odot y+y\odot x)=\operatorname{Sym}(y\odot x)=:S\), where \(Sx=\tfrac{1}{2}y\) and \(Sy=\tfrac{1}{2}x\). So \(W:=\operatorname{LT}(z\odot z)\ =\ \tfrac{1}{2}(P_{x}\otimes P_{y}+P_{y}\otimes P_{x})+S\otimes S\). This is not a positive operator. For instance, if \(v=x\otimes x-y\otimes y\), then \(Wv=-\tfrac{1}{4}v\). This shows that the embedding \(\mathbb{V}(A\boxtimes B)\to\mathbb{V}(AB)\) is in general not positive, as mentioned earlier. Consequently, \((\mathscr{L}_{s}\otimes\mathscr{L}_{s})_{+}\) is strictly smaller than \((\mathscr{L}_{s}(\mathbf{H})\boxtimes\mathscr{L}_{s}(\mathbf{K}))_{+}\).

**Theorem 1:** _Let \(\dim\mathbf{H}\) and \(\dim\mathbf{K}\) be \(3\) or greater. The cone \((\mathscr{L}_{s}(\mathbf{H})\otimes\mathscr{L}_{s}(\mathbf{K}))_{+}\) is strictly larger than the cone \((\mathscr{L}_{s}(\mathbf{H})\otimes_{\mbox{min}}\mathscr{L}_{s}(\mathbf{K}))_{+}\), and the cone \((\mathscr{L}_{s}(\mathbf{H})\boxtimes\mathscr{L}_{s}(\mathbf{K}))_{+}\) is strictly smaller than the cone \((\mathscr{L}_{s}(\mathbf{H})\otimes_{\mbox{max}}\mathscr{L}_{s}(\mathbf{K}))_{+}\)._

_Proof (sketch):_ To simplify notation a bit, let \(\mathbf{H}=\mathbf{K}\), and write \(\mathscr{L}_{s}(\mathbf{H})\) and \(\mathscr{L}_{a}(\mathbf{H})\) as \(\mathscr{L}_{s}\) and \(\mathscr{L}_{a}\). Lemma 1 tells us that \((\mathscr{L}_{s}\boxtimes\mathscr{L}_{s})_{+}^{*}\simeq(\mathscr{L}_{s}\otimes\mathscr{L}_{s})_{+}\).
There exist well-known examples of entangled states in \(\mathscr{L}_{s}\otimes\mathscr{L}_{s}\), namely those arising from unextendible product bases; see [7].5 Hence, \((\mathscr{L}_{s}\otimes\mathscr{L}_{s})\cap\mathscr{L}_{s}(\mathbf{H}\otimes\mathbf{K})_{+}\) is strictly larger than the minimal tensor cone, which contains only unentangled states. Dualizing, we see that \((\mathscr{L}_{s}(\mathbf{H})\boxtimes\mathscr{L}_{s}(\mathbf{K}))_{+}\) must be strictly smaller than the maximal tensor cone. \(\square\)

Footnote 5: We thank Giulio Chiribella for drawing our attention to these.

_Remark:_ The geometry of the locally tomographic cone \((\mathscr{L}_{s}(\mathbf{H})\boxtimes\mathscr{L}_{s}(\mathbf{K}))_{+}\) appears to be quite interesting. Although the mapping \(\operatorname{LT}:\mathscr{L}_{s}(\mathbf{H}\otimes\mathbf{K})\to\mathscr{L}_{s}(\mathbf{H})\boxtimes\mathscr{L}_{s}(\mathbf{K})\) is in general many-to-one, remarkably, this is not the case for pure states: as shown by Lemma 17 of [9], any pure state of \(\mathbf{H}\otimes\mathbf{K}\) can be distinguished from any other state (pure or mixed) by product effects. Thus, if \(\omega\) is a pure state in \(\mathscr{L}_{s}(\mathbf{H}\otimes\mathbf{K})\), it is the _only_ state with local shadow \(\operatorname{LT}(\omega)\).6

Footnote 6: If \(\omega\) is an interior point in the cone \(\mathscr{L}_{s}(\mathbf{H}\otimes\mathbf{K})_{+}\), then the affine space \(\omega+\mathscr{L}_{a}(\mathbf{H})\otimes\mathscr{L}_{a}(\mathbf{K})\), whose elements \(\mu\) all satisfy \(\operatorname{Tr}(\mu(a\otimes b))=\operatorname{Tr}(\omega(a\otimes b))\) for all product effects \(a\otimes b\), will certainly intersect the boundary of the positive cone. However, this intersection will not contain pure states, but only points interior to higher-dimensional faces of the cone.

### Processes

Let \(\Phi:\mathscr{L}_{s}(\mathbf{H}\otimes\mathbf{K})\to\mathscr{L}_{s}(\mathbf{H}\otimes\mathbf{K})\) be a positive linear mapping. For simplicity of notation, let's write \(\mathscr{L}_{s,s}\) for \(\mathscr{L}_{s}(\mathbf{H})\otimes\mathscr{L}_{s}(\mathbf{K})\), \(\mathscr{L}_{a,a}\) for \(\mathscr{L}_{a}(\mathbf{H})\otimes\mathscr{L}_{a}(\mathbf{K})\), and so on. We then have the orthogonal decomposition \[\mathscr{L}(\mathbf{H}\otimes\mathbf{K})=\mathscr{L}_{s,s}\oplus\mathscr{L}_{s,a}\oplus\mathscr{L}_{a,s}\oplus\mathscr{L}_{a,a},\] in which the symmetric part is \(\mathscr{L}_{s}(\mathbf{H}\otimes\mathbf{K})=\mathscr{L}_{s,s}\oplus\mathscr{L}_{a,a}\). With respect to the latter decomposition, \(\Phi\) has an operator matrix \(\left[\begin{array}{cc}\Phi_{s,s}&\Phi_{s,a}\\ \Phi_{a,s}&\Phi_{a,a}\end{array}\right]\), where, e.g., \(\Phi_{s,s}:\mathscr{L}_{s,s}\to\mathscr{L}_{s,s}\), \(\Phi_{a,s}:\mathscr{L}_{s,s}\to\mathscr{L}_{a,a}\), etc. A straightforward translation of Lemma 2 gives us

**Lemma 6:** _Let \(\Phi\) be as above. Then \(\Phi\) is locally positive iff \(\Phi_{s,a}=0\), and in this case, \(\operatorname{LT}(\Phi)=\Phi_{s,s}\)._

This provides another way to see that effects on \(\mathbf{H}\otimes\mathbf{K}\), as processes \[\mathscr{L}_{s}(\mathbf{H}\otimes\mathbf{K})\to\mathscr{L}_{s}(\mathbb{R}),\] are generally not locally positive. The following example is particularly noteworthy:

**Example 2:** Consider the functional \(\epsilon:\mathcal{L}(\mathbb{R}^{2}\otimes\mathbb{R}^{2})=\mathcal{L}(\mathbb{R}^{2})\otimes\mathcal{L}(\mathbb{R}^{2})\rightarrow\mathbb{R}\) corresponding to the trace inner product, i.e., the functional uniquely defined on pure tensors by \(\epsilon(a\otimes b)=\mathrm{Tr}(ab^{\prime})\).
The subspace \(\mathcal{L}_{a,a}\leq\mathcal{L}(\mathbb{R}^{2}\otimes\mathbb{R}^{2})\) is one-dimensional, spanned by the operator \(J\otimes J=\mathbb{J}\), where \(J(x,y)=(-y,x)\). Note that \(J^{2}=-\mathbf{1}\). Hence \(\epsilon(\mathbb{J})=\mathrm{Tr}(JJ^{\prime})=-\mathrm{Tr}(J^{2})=\mathrm{Tr}(\mathbf{1})=2\). Since \(\epsilon\) does not vanish on \(\mathcal{L}_{a,a}\), \(\epsilon\) is not locally positive.

## 5 Conclusions and questions

At a minimum, \(\mathrm{LT}(\mathbb{R}\mathbf{QM})\) provides us with an interesting "foil" GPT, related to but distinct from both complex and real finite-dimensional quantum theory, and from their Jordan-algebraic relatives [3] (which, like \(\mathbb{R}\mathbf{QM}\), are not locally tomographic). Among the many questions that suggest themselves about this theory, and about the LT construction more generally, the following stand out to us as particularly interesting.

_Compact Closure_ Example 2 shows that \(\mathrm{LT}(\mathbb{R}\mathbf{QM})\) does not inherit the usual compact structure from \(\mathbb{R}\mathbf{QM}\). Given a monoidal probabilistic theory \((\mathscr{C},\mathbb{V})\) with \(\mathscr{C}\) compact closed, when _is_ \(\mathrm{LT}(\mathscr{C},\mathbb{V})\) compact closed?

_LT and Complex QM_ The functor \(R:\mathbb{C}\mathbf{QM}\rightarrow\mathbb{R}\mathbf{QM}\) given by restriction of scalars does not preserve tensor products. It would be of interest to understand the functor \(\mathrm{LT}\circ R\) from \(\mathbb{C}\mathbf{QM}\) to \(\mathrm{LT}(\mathbb{R}\mathbf{QM})\). One can ask a parallel question about complexification.

_The Shadow of InvQM_ In [3], we constructed a non-locally-tomographic theory we called **InvQM**, which contains finite-dimensional real and quaternionic QM as sub-theories, and also a relative of complex QM in which the composite of two complex quantum systems comes with an extra binary superselection rule. As we will discuss elsewhere, much of what was done above for \(\mathbb{R}\mathbf{QM}\) generalizes readily to **InvQM**, but the resulting theory -- like \(\mathrm{LT}(\mathbb{R}\mathbf{QM})\) -- remains largely unexplored.

_Non-deterministic shadows_ If local agents Alice and Bob agree that their joint state is \(\omega\), this is consistent with the actual, global state being any element \(\mu\in\mathrm{LT}_{A,B}^{-1}(\omega)\). If the (unknown) actual state evolves under a (global) process \(\phi:\mathbb{V}(AB)\rightarrow\mathbb{V}(CD)\), the result will be one of the states in \(\phi(\mathrm{LT}_{A,B}^{-1}(\omega))\). Unless \(\phi\) is local, these will not be confined to a single fibre of \(\mathrm{LT}_{C,D}\); parties \(C\) and \(D\) might observe any of the different states in \(\mathrm{LT}_{C,D}(\phi(\mathrm{LT}_{A,B}^{-1}(\omega)))\), giving the impression that \(\phi\) acts indeterministically. How ought this uncertainty to be quantified? Note that in this situation, the states of \(AB\) act as hidden variables "explaining" this apparent lack of determinism.
2309.07868
Cosmological constraints from low redshift 21 cm intensity mapping with machine learning
The future 21 cm intensity mapping observations constitute a promising way to trace the matter distribution of the Universe and probe cosmology. Here we assess its capability for cosmological constraints using as a case study the BINGO radio telescope, that will survey the Universe at low redshifts ($0.13 < z < 0.45$). We use neural networks (NNs) to map summary statistics, namely, the angular power spectrum (APS) and the Minkowski functionals (MFs), calculated from simulations into cosmological parameters. Our simulations span a wide grid of cosmologies, sampled under the $\Lambda$CDM scenario, {$\Omega_c, h$}, and under an extension assuming the Chevallier-Polarski-Linder (CPL) parameterization, {$\Omega_c, h, w_0, w_a$}. In general, NNs trained over APS outperform those using MFs, while their combination provides 27% (5%) tighter error ellipse in the $\Omega_c-h$ plane under the $\Lambda$CDM scenario (CPL parameterization) compared to the individual use of the APS. Their combination allows predicting $\Omega_c$ and $h$ with 4.9% and 1.6% fractional errors, respectively, which increases to 6.4% and 3.7% under CPL parameterization. Although we find large bias on $w_a$ estimates, we still predict $w_0$ with 24.3% error. We also confirm our results to be robust to foreground contamination, besides finding the instrumental noise to cause the greater impact on the predictions. Still, our results illustrate the capability of future low redshift 21 cm observations in providing competitive cosmological constraints using NNs, showing the ease of combining different summary statistics.
Camila P. Novaes, Eduardo J. de Mericia, Filipe B. Abdalla, Carlos A. Wuensche, Larissa Santos, Jacques Delabrouille, Mathieu Remazeilles, Vincenzo Liccardo
2023-09-14T17:16:13Z
http://arxiv.org/abs/2309.07868v1
# Cosmological constraints from low redshift 21 cm intensity mapping with machine learning

###### Abstract

The future 21 cm intensity mapping observations constitute a promising way to trace the matter distribution of the Universe and probe cosmology. Here we assess its capability for cosmological constraints using as a case study the BINGO radio telescope, that will survey the Universe at low redshifts (\(0.13<z<0.45\)). We use neural networks (NNs) to map summary statistics, namely, the angular power spectrum (APS) and the Minkowski functionals (MFs), calculated from simulations into cosmological parameters. Our simulations span a wide grid of cosmologies, sampled under the \(\Lambda\)CDM scenario, \(\{\Omega_{c},h\}\), and under an extension assuming the Chevallier-Polarski-Linder (CPL) parameterization, \(\{\Omega_{c},h,w_{0},w_{a}\}\). In general, NNs trained over APS outperform those using MFs, while their combination provides 27% (5%) tighter error ellipse in the \(\Omega_{c}-h\) plane under the \(\Lambda\)CDM scenario (CPL parameterization) compared to the individual use of the APS. Their combination allows predicting \(\Omega_{c}\) and \(h\) with 4.9% and 1.6% fractional errors, respectively, which increases to 6.4% and 3.7% under CPL parameterization. Although we find large bias on \(w_{a}\) estimates, we still predict \(w_{0}\) with 24.3% error. We also confirm our results to be robust to foreground contamination, besides finding the instrumental noise to cause the greater impact on the predictions. Still, our results illustrate the capability of future low redshift 21 cm observations in providing competitive cosmological constraints using NNs, showing the ease of combining different summary statistics.

keywords: large-scale structure of Universe - cosmological parameters - methods: statistical

## 1 Introduction

The production of catalogues of galaxies and clusters of galaxies through large redshift surveys is of fundamental importance to cosmology, in particular for robust measurements of the large scale density fluctuations of the Universe. In fact, the galaxy distribution is mainly driven by the dark matter distribution, and their evolution since the initial gravitational collapse is influenced by the amount of dark matter and dark energy in the Universe, making the measurements of the clustering of galaxies one of the main observational probes in cosmology. As an alternative to the detection of individual galaxies, the low resolution measurements of the redshifted 21 cm line emission of the neutral hydrogen (Hi), using intensity mapping (IM; see, e.g., Peterson et al., 2006, 2009), constitutes a new observational technique for tracing the large scale structure of the Universe and constraining cosmological parameters (Pritchard and Loeb, 2012). Most of the Hi in the late Universe is located inside the galaxies, and, as opposed to using the 21 cm radio emission to resolve the galaxies, the IM allows measuring its overall brightness temperature fluctuations, similar to cosmic microwave background (CMB) observations, but as a function of the redshift. The low resolution of such a technique makes it relatively cheap and allows a quick survey of large volumes of the Universe (Battye et al., 2013). Aiming to explore this new observable, several radio instruments, in operation and under construction, are expected to map the large scale Universe using the Hi IM.
Among them are the Square Kilometre Array1 (SKA; SKA Cosmology SWG et al., 2020), the Canadian Hydrogen Intensity Mapping Experiment2 (CHIME; Bandura et al., 2014), the Five-Hundred-Meter Aperture Spherical Radio Telescope (FAST; Nan et al., 2011), and the Baryon Acoustic Oscillations from Integrated Neutral Gas Observations3 (BINGO; Battye et al., 2013; Bigot-Sazy et al., 2015; Abdalla et al., 2022a). In this paper, exploring the BINGO telescope as a case study, we assess the constraining power of future Hi IM measurements at low redshifts (\(0.13<z<0.45\)), evaluating the performance of the joint use of summary statistics and neural networks for this task.

Footnote 3: [https://www.bingotelescope.org/en/](https://www.bingotelescope.org/en/)

Inspired by the human brain, made up of connected networks of neurons, a neural network (NN) processes the information through a set of algorithms so that it can learn from examples and observation, mimicking the human learning process. These neurons are distributed into layers; a NN is classified as a deep learning algorithm according to the number of layers (or how dense it is), usually more than three (for a review on deep learning, see Schmidhuber, 2015). A sub-field of machine learning, the NNs are pattern recognition algorithms and have been widely employed for cosmological analyses over the last years following diverse approaches. Commonly employed are the convolutional neural networks (CNNs), able to extract information directly from the cosmological field, e.g., as done by Gupta et al. (2018); Fluri et al. (2019); Ribli et al. (2019); Matilla et al. (2020); Lu et al. (2022) using weak lensing maps, and by Lazanu (2021); Villaescusa-Navarro et al. (2022) using density fields. The NNs are also commonly used taking summary statistics as input data, as done, e.g., by Novaes et al. (2014, 2015) using Minkowski functionals from CMB maps, by Jennings et al. (2020) using the three-point correlation function from the 21 cm signal distribution, and by Perez et al. (2022) using three different estimators, the two-point correlation function, the count-in-cells, and the void probability function; a hybrid approach combining this simple type of NN and a CNN is employed by Ntampaka et al. (2020) using the power spectrum. In addition, deep learning has been employed for data compression, aiming to extract optimal summary statistics, shown to be an efficient way of performing cosmological parameter inference (Alsing et al., 2018; Alsing and Wandelt, 2019; Jeffrey et al., 2021). We note that these are only a few examples, among several other machine learning and deep learning applications for cosmological analyses (for an example of a recent usage of machine learning algorithms, see von Martens et al., 2022). Moreover, NNs can be an alternative to likelihood-based analyses, performing cosmological parameter inference directly from simulations. The motivation for such an approach is the difficulty in theoretically modelling physical aspects, such as the signature of non-linear evolution of the structures on small scales, as well as instrumental characteristics, such as noise and systematic errors, which are more easily reproduced by simulations. Another advantage of such an approach is that one can avoid making assumptions, commonly required and sometimes not completely correct, for an analytical expression for the likelihood, such as Gaussianity (Jeffrey et al., 2021).
In fact, the highly non-Gaussian distribution of the structures, a result of the non-linear evolution of the Universe, requires the usage of higher order statistics, which, in general, have no analytical expression and, consequently, no likelihood function. In this context, parameter inference from simulations, or from their summary statistics, through NN algorithms, without an explicit likelihood, seems to be very advantageous for cosmological constraints. Indeed, machine learning techniques, or in particular NNs, are very versatile tools and can be employed in countless ways and for countless purposes. Among the several possibilities of usage, we chose to explore how much cosmological information one can extract from using a simple fully connected NN trained over two summary statistics. Our summary statistics consist of the angular power spectrum (APS), commonly used to extract cosmological information from different data sets, and the Minkowski functionals (MFs; Minkowski, 1903; Novikov et al., 1999; Komatsu et al., 2003), sensitive to higher order correlations (for an example of the MFs used as input to NNs, see Novaes et al., 2014, 2015). The MFs are widely used to explore statistical properties of the two-dimensional CMB temperature field (Komatsu et al., 2003; Modest et al., 2013; Akrami et al., 2020; Novaes et al., 2015, 2016) and the two- and three-dimensional distribution of galaxies in the Universe (Saar et al., 2007; Kerscher and Tikhonov, 2010; Choi et al., 2013; Novaes et al., 2018). Beyond the cosmological information contained in the power spectrum, non-Gaussian information extracted through MFs can provide additional tools to differentiate between cosmological models, constraining cosmological parameters (Shirasaki and Yoshida, 2014; Petri et al., 2015) and probing modifications of gravity (Fang et al., 2017; Shirasaki et al., 2017). Differently from APS and bispectrum calculations, the MFs have the advantage of the additive property (Novikov et al., 1999), allowing them to be efficiently applied to small regions of the sphere, particularly useful for partial sky surveys. Given that the MFs have no analytical expression, modelling them and their correlation with the APS efficiently is not an easy task, in particular at low redshifts, where the matter distribution is highly non-linear. Then, the complexity in constructing a likelihood of the data also motivates our usage of NNs in constraining cosmological parameters from the APS and MFs. In this work, we generate a large set of lognormal simulations of the 21 cm signal spanning a wide grid of cosmologies under two scenarios, namely, the \(\Lambda\) Cold Dark Matter (\(\Lambda\)CDM) and an extension including dark energy (DE) equation of state (EoS) parameters. We also account for foreground contamination and instrumental effects, such as noise and beam size, considering the specifications for the BINGO telescope. We train the NN algorithm over the APS and MFs calculated from these simulations, in such a way as to map these summary statistics in terms of the input cosmological parameters. Using each summary statistic individually and their combination, we evaluate the performance of our method in constraining cosmological parameters from each model and how the contaminant signals affect these results. We also investigate the sensitivity of each summary statistic to the sky coverage, evaluating our results over a larger field of view, a coverage similar to what the SKA instrument will survey.
This paper is organised as follows: In Section 2 we describe the 21 cm mocks and the parameter space over which they are generated and how the simulations are constructed, accounting for the instrumental noise, the foreground components, and the foreground cleaning process. The summary statistics and the procedure used to obtain an optimised NN algorithm for each case investigated here, as well as the metrics used to evaluate the accuracy of our predictions, are presented in Section 3. We discuss our results in Section 4 and summarise the main conclusions in Section 5.

## 2 Simulations

In this section, we describe the simulated sky maps used in our analyses to produce the training and test data sets. All the simulations are produced in the HEALPix pixelization scheme (Gorski et al., 2005) with \(N_{\text{side}}=256\). Together with the cosmological 21 cm signal, we account for the expected foreground contamination, thermal noise, and sky coverage, produced according to the prescriptions provided by Fornazier et al. (2022); de Mericia et al. (2023); Abdalla et al. (2022b), which we briefly describe below. We note that these effects are accounted for in our simulations following the procedure described in Novaes et al. (2022), to which we refer the reader for detailed information.

### Cosmological signal

We use the publicly available Full-sky Lognormal Astro-fields Simulation Kit (FLASK; Xavier et al., 2016) code to generate all the 21 cm IM simulations employed here. As input to the FLASK code we use the angular auto- and cross-power spectrum \(C_{\ell}^{ij}\), for \(i\) and \(j\) redshift bins (z-bins), calculated using the Unified Cosmological Library for \(C_{\ell}\)'s (UCLCL) code (Loureiro et al., 2019; McLeod et al., 2017), taking into account the redshift space distortion effect. A brief discussion about the theoretical \(C_{\ell}^{ij}\) can be found in Novaes et al. (2022), while a detailed description of how they are calculated appears in Loureiro et al. (2019). The input \(C_{\ell}^{ij}\) are calculated for a grid of cosmological parameters under two scenarios:

* (\(i\)) the standard flat \(\Lambda\)CDM model, varying the parameters \(\{\Omega_{c},h,\Omega_{b},n_{s},A_{s}\}\), and
* (\(ii\)) the extension given by the Chevallier-Polarski-Linder (CPL) parameterization (Chevallier and Polarski, 2001; Linder, 2003), \(w_{\rm cpl}(z)=w_{0}+w_{a}(z/(1+z))\), where the \(\Lambda\)CDM model is recovered for \(w_{0}=-1\) and \(w_{a}=0\), for which we vary two more parameters, \(\{\Omega_{c},h,\Omega_{b},n_{s},A_{s},w_{0},w_{a}\}\).

In each of the cases, we calculate the \(C_{\ell}^{ij}\) for 800 different combinations of cosmological parameters. For an efficient sampling of the parameter space, we employ the Latin Hypercube approach (McKay et al., 1979). Although we vary all the parameters in each case, we keep \(\Omega_{b}\), \(n_{s}\), and \(A_{s}\) values within the \(1\sigma\) error bars given by Planck 2018 (Aghanim et al., 2020), while the other parameters are sampled in a broader range of values. This choice is because we are interested in constraining only the \(\{\Omega_{c},h\}\) and \(\{\Omega_{c},h,w_{0},w_{a}\}\) parameters in cases _(i)_ and _(ii)_, respectively. We consider the interval of values for these parameters and their fiducial values as given in Table 1; the parameter space of interest is also shown in Figure 1 (hereafter, a given combination of parameters, i.e., a point in the grid, is simply referred to as a cosmology).
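To make the sampling step concrete, a Latin hypercube of 800 points over a four-parameter box can be drawn with scipy as sketched below. The parameter bounds shown are placeholders chosen for illustration only; the actual sampling intervals are those of Table 1.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical bounds for {Omega_c, h, w0, wa}; the actual sampling
# intervals are those of Table 1 and are not reproduced here.
lower = np.array([0.20, 0.60, -1.50, -1.40])
upper = np.array([0.33, 0.75, -0.50,  1.40])

sampler = qmc.LatinHypercube(d=4, seed=42)
unit = sampler.random(n=800)               # 800 points in [0, 1]^4
cosmologies = qmc.scale(unit, lower, upper)

# Each row is one "cosmology" of the grid, for which the C_ell^{ij}
# and the corresponding 12 lognormal mocks are then generated.
print(cosmologies.shape)                   # (800, 4)
```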
For each cosmology we calculate \(C_{\ell}^{ij}\) and from them generate 12 realisations of 21 cm mocks. Here, one mock refers to 30 tomographic maps, one for each z-bin in the total range of \(0.127<z<0.449\). It is worth mentioning that the FLASK code, by construction, follows predefined statistical properties, such as the power spectrum and multivariate log-normal model, as stated by Xavier et al. (2016). That is to say that, although it has the great advantage of allowing the production of a large set of mocks in a short time, these simulations may not realistically reproduce the three-point information of the 21 cm cosmological signal. In this sense, the FLASK mocks are likely enough to evaluate our methodology using the APS, but may prevent us from exploring the full potential of the MFs, as will be pointed out in Section 4.

### Observational characteristics

#### (\(i\)) Foreground contamination:

As in de Mericia et al. (2023); Fornazier et al. (2022), the foreground components contributing in the BINGO frequency range, 980 - 1260 MHz, are generated by the Planck Sky Model software (PSM; Delabrouille et al., 2013). These include the Galactic synchrotron4 and free-free emissions, the main contaminants in this frequency band, as well as the thermal dust and anomalous microwave emission, besides the extragalactic contribution from thermal and kinetic Sunyaev-Zel'dovich effects, and unresolved point sources. We refer the reader to Novaes et al. (2022, Section 4.1.2) and de Mericia et al. (2023), as well as references therein, for information on the specific configuration of the PSM code for simulating each foreground component.

Footnote 4: Here produced by a power law with non-uniform spectral index as given by the Giardino et al. (2002) model. In Section 4.4 we consider an alternative model to evaluate the robustness of our results to foreground contamination.

#### (\(ii\)) Sky coverage:

The mask defining the BINGO observed sky region is apodized with a cosine transition of 5 deg, in order to avoid the impact of sharp edges. The final mask, after apodization and Galactic cut, has an effective sky fraction of \(f_{\rm sky}\sim 0.09\).

#### (\(iii\)) Instrument:

In its Phase 1, BINGO is expected to operate with 28 horns and a system temperature of 70 K (Wuensche et al., 2022). With 5 years of observation, the horn positions are slightly shifted each year in order to perform a more homogeneous scan of the innermost BINGO area (Abdalla et al., 2022). As detailed in Fornazier et al. (2022), such specifications allow estimating the per-pixel amplitude (root-mean-square, RMS) of the BINGO thermal (white) noise, the same for all frequency bins. Then, multiplying the RMS map by Gaussian distributions of zero mean and unit variance, we generate as many realisations of noise maps as the number of simulations employed here. Before adding the thermal noise, we account for the instrumental beam, assumed to be approximately Gaussian with the same full width at half maximum \(\theta_{\rm FWHM}=40\) arcmin for all frequency bins.

### Foreground cleaning process

An essential procedure before using the 21 cm observations for cosmological analyses is the removal of the foreground contamination, whose amplitude is \(\sim 10^{4}\) times larger than the 21 cm signal in BINGO frequencies (and even larger at lower frequencies, such as those of SKA).
For this, we employ the generalized needlet internal linear combination (GNILC) method (Remazeilles et al., 2011), a non-parametric component separation technique, which has been shown to be efficient when applied to 21 cm IM simulations (Olivari et al., 2016; Liccardo et al., 2022; Fornazier et al., 2022; de Mericia et al., 2023). We note that GNILC recovers the cosmological signal plus noise maps at each frequency. For a detailed explanation of GNILC, we refer the reader to Remazeilles et al. (2011); Olivari et al. (2016). Technical details about this foreground cleaning procedure implemented on BINGO simulations, similar to those employed here, are provided by Liccardo et al. (2022); Fornazier et al. (2022); de Mericia et al. (2023). As in Novaes et al. (2022), the large number of simulations required here makes it unfeasible to apply GNILC to each of them. For this reason, we estimate the residual foreground signal expected to remain after the component separation process and add it to our simulations. We apply GNILC to ten complete simulations, including all observational characteristics as discussed in the previous subsections, and from each of them we estimate the foreground residual maps. This way, our final simulations are constructed by adding the realistic foreground residual contribution to each of the 21 cm mocks (with the beam effect already accounted for), along with the thermal noise. The foreground residual maps are repeated every ten mock realisations. Unless stated otherwise, our analyses are applied to these simulations.

## 3 Methods

### Summary statistics

**Angular power spectrum**

Recognised as a powerful statistic commonly used to constrain cosmology, the APS is calculated as the average of the squared harmonic coefficients, \(\hat{C}_{\ell}=\langle a_{\ell m}a_{\ell m}^{*}\rangle\), of the decomposition of a temperature fluctuation (or projected density fluctuation) field into spherical harmonics. Here we employ the pseudo-\(C_{\ell}\) method as implemented in the NaMaster code (Alonso et al., 2019) to estimate the APS. This formalism relates the observed APS, \(\hat{C}_{\ell}\), to the true spectrum \(C_{\ell}\) as \[\hat{C}_{\ell}=\sum_{\ell^{\prime}}\mathcal{M}_{\ell\ell^{\prime}}C_{\ell^{\prime}}, \tag{1}\] where the mode-coupling matrix \(\mathcal{M}_{\ell\ell^{\prime}}\) is determined by the mask geometry (Hivon et al., 2002; Brown et al., 2005). This matrix is analytically calculated, and the APS is estimated by inverting Equation 1. Given the sky fraction assumed here (\(f_{\rm sky}=0.13\)), we calculate the \(C_{\ell}\) in bins of width \(\Delta\ell=10\) (Novaes et al., 2022), which makes the mode-coupling matrix invertible. It means that we use a total of 76 data points (multipole bands) linearly spaced in \(\ell\), with \(2\leq\ell\leq 761\) (with maximum multipole as given by the pixelization of the simulations, \(N_{\text{side}}=256\), and the multipole binning, \(\Delta\ell=10\)). Here we consider only the auto-APS; the constraining power of the cross-APS between different z-bins will be assessed in future work.

**Minkowski functionals**

Unlike the APS, besides informing about the spatial correlation of a random field, the MFs can also provide morphological information and map the shape of structures. The morphological properties of a given field in an \(N\)-dimensional space can be completely described using \(N+1\) MFs (Minkowski, 1903).
Then, a two-dimensional 21 cm temperature fluctuations field, \(\delta T\), with variance \(\sigma_{0}^{2}\), would be completely described by three MFs, namely, the Area (\(V_{0}\)), Perimeter (\(V_{1}\)), and Genus (\(V_{2}\)), given by (Novikov et al., 1999; Komatsu et al., 2003; Ducout et al., 2013; Novaes et al., 2018) \[V_{0}(\nu) =\frac{1}{4\pi}\int_{\Sigma}d\Omega\,, \tag{2}\] \[V_{1}(\nu) =\frac{1}{4\pi}\frac{1}{4}\int_{\partial\Sigma}dl\,, \tag{3}\] \[V_{2}(\nu) =\frac{1}{4\pi}\frac{1}{2\pi}\int_{\partial\Sigma}\kappa\ dl\,, \tag{4}\] where \(d\Omega\) and \(dl\) are the elements of solid angle and line, respectively, and \(\kappa\) is the geodesic curvature (for details see, e.g., Ducout et al., 2013). These MFs are calculated for an excursion set defined as \(\Sigma\equiv\{\delta T>\nu\sigma_{0}\}\), i.e., the set of connected pixels exceeding a given \(\nu\) threshold, whose boundary is \(\partial\Sigma\equiv\{\delta T=\nu\sigma_{0}\}\). The first two MFs measure the area and the contour length of the excursion set, and the last one gives the difference between connected areas above the threshold \(\nu\) and below it in the excursion. It is worth mentioning that analytical expressions for the MFs are known only for Gaussian and weakly non-Gaussian cases. They can be written as \(V_{k}=A_{k}\nu_{k}\), where, for Gaussian fields, \(\nu_{k}=\nu_{k}^{G}=H_{k-1}(\nu)\); \(H_{k}(\nu)\) is the \(k\)-th Hermite polynomial. The amplitude \(A_{k}\) depends on the shape of the APS. For non-Gaussian fields, \(\nu_{k}\) can be expanded in a Taylor series, \(\nu_{k}=\nu_{k}^{G}+\nu_{k}^{NG}\), where the non-Gaussian correction terms, represented by \(\nu_{k}^{NG}\), depend on the higher order moments of the field (Petri et al., 2015; Ducout et al., 2013; Matsubara, 2010). Although the first order corrections to the MFs can be analytically obtained, the perturbative solution is not enough for large non-Gaussianity, as is the case of the 21 cm signal at low redshifts, and the series does not converge. Here we numerically calculate the MFs using the code provided by Ducout et al. (2013) and Gay et al. (2012). The number of \(\nu\) thresholds is fixed at 31, defined by dividing the range \([\nu_{min},\nu_{max}]=[-2.5,6.0]\) into equal parts. As an illustration, Figure 2 shows, for the fiducial cosmology, the average MFs from 1000 realizations of clean 21 cm simulations, as well as the theoretical APS, showing their dependence on redshift.

### Neural network

Working as a regression technique, a NN is composed of processing units, the neurons, organised in layers: one input layer, fed with the input (feature) information, one or more hidden layers, and one output layer. In a fully connected NN, each neuron in a layer is connected to those in the next layer, allowing the information to be processed and to propagate until the output layer. Following a supervised learning approach, the outputs of this process are compared to the target values (true information), estimating an error (or loss) function to measure the performance. The errors are propagated back towards the input layer to adjust the parameters of the NN algorithm so that the error function is minimised, in an iterative process repeated until the error reaches a given threshold, following a backpropagation training process. A detailed description about the NNs is provided by, e.g., Choudhury et al. (2020); Jennings et al. (2020).
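For concreteness, a minimal fully connected regression network of the kind just described, trained by backpropagation on the MSE loss, could be set up as in the following PyTorch sketch. This is a generic illustration with arbitrary default sizes; the architectures actually used in this work are those selected by the hyperparameter optimisation of Section 3.2.2.

```python
import torch
import torch.nn as nn

class FullyConnectedNN(nn.Module):
    """Fully connected NN mapping summary statistics to parameters."""
    def __init__(self, n_features, n_targets, n_hidden=128, n_layers=2):
        super().__init__()
        layers, width = [], n_features
        for _ in range(n_layers):
            layers += [nn.Linear(width, n_hidden), nn.ReLU()]
            width = n_hidden
        layers.append(nn.Linear(width, n_targets))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

def train(model, X, y, epochs=200, lr=1e-3):
    """Backpropagation training loop minimising the MSE loss.

    X, y: float tensors holding the features (summary statistics)
    and targets (cosmological parameters) of the training set.
    """
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimiser.zero_grad()
        loss = loss_fn(model(X), y)   # error between outputs and targets
        loss.backward()               # propagate the errors back
        optimiser.step()              # adjust the NN parameters
    return loss.item()
```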
Next subsections describe how the summary statistics are organised to feed the NN, as well as the architecture and how the hyperparameters (number of layers, number of neurons, among others) are chosen. We also present the accuracy metrics used to evaluate the performance of the NNs in constraining the cosmological parameters.

#### 3.2.1 Training and test data sets

For each of the scenarios we investigate here, cases _(i)_ \(\{\Omega_{c},h\}\) and _(ii)_ \(\{\Omega_{c},h,w_{0},w_{a}\}\), we generate 12 realisations of 21 cm mocks for each of the 800 cosmologies. We find that a set of \(n=12\) maps for each cosmology seems to be enough for the NN to account for the cosmic variance (CV; tests have shown that for \(n\geq 4\) the accuracy of the NN plateaus and oscillates around an average value). We calculate the APS and the MFs statistics for each of the simulations, at each z-bin. The summary statistics are the features feeding our NN algorithms and the corresponding cosmological parameter values are the targets, or outputs, of the NNs, i.e., the quantities we want to predict using the trained NN. These data are split so that 64%, 16% and 20% of the 800 cosmologies are used for training, validation and testing of the NN, respectively. It means that the APS and MFs calculated over mocks from a given cosmology used for training or validation are not employed for testing the performance of the NN. To guarantee that our results do not depend on a particular (random) choice of train and validation sets, we use the cross-validation procedure. For this, we split the train+validation data set into 5 smaller data sets, training the NN on 4 of them and validating on the remaining set. These training and validation steps are repeated 5 times, each of them excluding one of the 5 smaller sets, and the performance value is given by the average of the values measured at each of the 5 training processes. We note that, before feeding the NN, the features are re-scaled in such a way that all the values lie in the range [0,1], a procedure commonly employed to improve the efficiency during the training process (Jennings et al., 2020). The training+validation set is defined as \(T(X,y)=T(X_{j},y_{j})\), allowing us to map the features, i.e., the summary statistics, \(X\), in terms of the targets, given by the corresponding cosmological parameters, \(y\). For the \(j\)th realisation the features appear as \[\begin{split} X_{j}&=\{(C_{\ell}^{i}),(V_{k}^{i})\}|_{j}\\ &=\{(C_{\ell}^{1},C_{\ell}^{2},...,C_{\ell}^{m}),(V_{k}^{1},V_{k}^{2},...,V_{k}^{m})\}|_{j},\end{split} \tag{5}\] where \(V_{k}(\nu)\equiv(V_{0}(\nu),V_{1}(\nu),V_{2}(\nu))\), representing the Area, Perimeter, and Genus estimators, respectively, and the index \(i=1,2,...,m\) runs over the \(m=30\) z-bins. This means that, for each realisation, \(X_{j}\) is a vector with \(m\times[76\) multipole bands + 3 MFs \(\times\) 31 \(\nu\) thresholds] elements. Hereafter, we refer to the three MFs combined as \(V_{k}\). The features are associated to the targets, \[y_{j}=\{\theta_{p}\}|_{j}, \tag{6}\] where, for case _(i)_, \(p=0,1\) with \(\{\theta_{p}\}=\{\Omega_{c},h\}\) and, for case _(ii)_, \(p=0\) with the target given by one of the parameters at a time6, \(\{\theta_{p}\}=\{\Omega_{c}\}\), \(\{h\}\), \(\{w_{0}\}\), or \(\{w_{a}\}\), i.e., the NN algorithm is trained over individual cosmological parameters.
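Schematically, the construction of the feature vectors of equation (5), their rescaling to [0,1], and the 5-fold cross-validation split could be implemented as below. The array names and shapes are hypothetical and, unlike this simplified sketch, in our analysis the train/validation/test split is performed at the level of cosmologies, so that all \(n=12\) realisations of a given cosmology fall in the same subset.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import KFold

# Hypothetical inputs: 800 cosmologies x 12 realisations, m z-intervals;
# cls has shape (N, m, 76) and mfs has shape (N, m, 3, 31).
N, m = 800 * 12, 5
cls = np.random.rand(N, m, 76)
mfs = np.random.rand(N, m, 3, 31)
params = np.random.rand(N, 2)          # targets, e.g. {Omega_c, h}

# Feature vectors of equation (5): m * (76 + 3 * 31) = 845 entries each.
X = np.hstack([cls.reshape(N, -1), mfs.reshape(N, -1)])

# 80% / 20% split into training+validation and test sets.
n_tv = int(0.8 * N)
X_tv, X_test = X[:n_tv], X[n_tv:]
y_tv, y_test = params[:n_tv], params[n_tv:]

# Re-scale all features to the range [0, 1].
scaler = MinMaxScaler().fit(X_tv)
X_tv, X_test = scaler.transform(X_tv), scaler.transform(X_test)

# 5-fold cross-validation: train on 4 subsets, validate on the fifth;
# the performance is the average of the 5 validation scores.
for train_idx, val_idx in KFold(n_splits=5, shuffle=True).split(X_tv):
    X_train, y_train = X_tv[train_idx], y_tv[train_idx]
    X_val, y_val = X_tv[val_idx], y_tv[val_idx]
    # ... train the NN here and record the validation loss ...
```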
We test the efficiency of each estimator, \(C_{\ell}\) and \(V_{k}\), individually, \(X=\{(C_{\ell}^{i})\}\) and \(X=\{(V_{k}^{i})\}\), as well as their combination, \(X=\{(C_{\ell}^{i}),(V_{k}^{i})\}\), changing equation 5 accordingly. Our main analysis considers a compression of all the 30 z-bins into 5 redshift ranges, given by a simple average of the summary statistics over every 6 consecutive z-bins (the motivation for this choice is discussed in Section 4.6). Therefore, the \(i\) index runs over \(m=5\) redshift intervals, or subsets of the z-bins.

Figure 2: Average MFs (the three upper panels, from top to bottom, are the Area, Perimeter and Genus) from 1000 clean 21 cm mocks, accounting for the BINGO sky fraction and beam, and theoretical APS (bottom panel), for the fiducial cosmology (Table 1). The lines correspond to the 30 tomographic bins, coloured according to the average redshift of each z-bin.

#### 3.2.2 Architecture

For each of the tests reported here, we define a particular architecture by searching for an optimised set of hyperparameters. For this we use the Optuna package (Akiba et al., 2019), able to automatically search the values of the hyperparameters, from a predefined search space, by minimising/maximising a given objective function. Here we use a loss function, chosen to be the commonly used mean square error (MSE), given by \[\mathcal{L}=\frac{1}{N}\sum_{j=1}^{N}(y_{j}^{\text{Pred}}-y_{j}^{\text{True}})^{2}, \tag{7}\] for a total of \(N\) simulations in the validation data set, with the 'True' and 'Pred' superscripts referring to the real (input) values of the cosmological parameters, used for generating the simulations, and the values predicted by the NN, respectively. Then, at each trial, i.e., after training the NN with a given set of values for the hyperparameters, a validation loss (a score) \(\mathcal{L}\) is returned, a procedure repeated in order to search for the set of hyperparameters minimising it. For all our tests, the number of trials is limited to 500, since cases _(i)_ and _(ii)_ usually do not need more than 300 and 400 trials, respectively. Our NN algorithms are constructed optimising the following main hyperparameters, considering the search spaces as defined below:

* number of neurons in the hidden layers: [1,500];
* number of layers: [1,3];
* activation function: ReLU, tanh;
* optimiser: Adam, SGD;
* learning rate: \([10^{-4},10^{-2}]\) and \([10^{-5},10^{-1}]\) for Adam and SGD optimisers, respectively;
* momentum (only for SGD): \([10^{-4},10^{-2}]\);
* batch size: \([50,500]\);
* number of epochs: \([50,500]\);
* weight decay: \([10^{-4},10^{-2}]\).

In the case of the Adam optimiser, the beta parameters are fixed to the default values \(\beta_{1},\beta_{2}=0.9,0.999\). The optimisation process usually finds 1 to 3 hidden layers with no more than 300 neurons, taking nine to fifteen hours to perform the 500 trials on 56 cores of an Intel Xeon Gold 5120 2.20 GHz processor with 512 GB of RAM.

#### 3.2.3 Accuracy metrics

To evaluate the performance of the NNs constructed from the training process, we quantify how accurate the predicted values, \(y^{\text{Pred}}\), are with respect to the true, or input, values, \(y^{\text{True}}\), from the test set, \(t\{X\}\). For this we use four different metrics to characterise the accuracy of the predictions, for each cosmological parameter, evaluating different aspects of the results.
The first metric is the root-mean-squared error, defined by the square root of Equation 7, RMSE = \(\sqrt{\mathcal{L}}\), for the \(N\) simulations in the test set. It quantifies the overall accuracy (total error) of the predictions, regardless of the origin, i.e., whether it is inherent to the NN algorithm or it is introduced by the CV of the cosmological signal. The second metric is the standard deviation of the predicted values averaged over all the \(n_{c}=160\) different cosmologies, \[\langle\sigma\rangle=\frac{1}{n_{c}}\sum_{c=1}^{n_{c}}\left[\sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_{i}^{c}-\langle y^{c}\rangle)^{2}}\right], \tag{8}\] for a total of \(n\) simulations from each \(c\) cosmology; this metric quantifies the uncertainty associated with the CV. The third metric is the slope of the straight line fitted over the data points, relating predictions and real values, \(y^{\text{Pred}}\times y^{\text{True}}\). It measures the bias appearing in the predictions when the NN is not able to effectively learn the relation between the summary statistics and the cosmological parameters. A slope near 1 indicates a small bias and accurate predictions, while a slope near 0 reflects a large bias and indicates that the NN is predicting values close to the mean of the parameter ranges. Such large bias is a common behaviour of the NN when it is not able to efficiently map the features in terms of the targets (see, e.g., Lu et al., 2022); the NN predicts values near the mean of the parameter range because, in doing so, it artificially reduces the loss function. The last metric also quantifies the overall accuracy of our results, but considering the relationship between pairs of cosmological parameters. This metric is the area of the \(1\sigma\) error ellipses obtained from the distribution of \(\Delta y=y^{\text{True}}-y^{\text{Pred}}\), the difference between true and predicted values, in the \(\Delta y_{p}-\Delta y_{p^{\prime}}\) plane, where \(p\) and \(p^{\prime}\) indicate two different cosmological parameters.

## 4 Results and discussions

Here we present the results obtained evaluating the performance of the method in constraining the \(\{\Omega_{c},h\}\) and \(\{\Omega_{c},h,w_{0},w_{a}\}\) sets of parameters using the respective NNs trained over our simulations (cosmological 21 cm signal + thermal noise + foreground residual). We evaluate the performance of the method using different features, namely, the summary statistics \(V_{k}\) and \(C_{\ell}\) individually and their combination, \(V_{k}+C_{\ell}\). We also investigate how our results are affected by each contaminant signal, thermal noise and foregrounds, and their robustness to the foreground characteristics. For the clean 21 cm simulations, we investigate the sensitivity of the results to the sky coverage and to specific redshift ranges. We employ an optimised set of hyperparameters for each of these tests.

### Parameters prediction under the \(\Lambda\)CDM model

For each of the 800 cosmologies sampling the two-parameter space, case _(i)_ \(\{\Omega_{c},h\}\), we generate 12 realisations of the cosmological 21 cm signal, then include the effect of the instrumental beam and add the contribution of thermal noise and foreground residual. We use the APS and MFs calculated from these simulations to train and validate the NN. Applying the trained NNs to the respective test data sets, we obtain the predicted values of the \(\{\Omega_{c},h\}\) cosmological parameters.
These predictions are compared to the true values in Figure 3, for each summary statistic individually and combined, showing also the best linear fit over the data points, whose slope values are summarised in Table 2. The standard deviation over the \(n=12\) predicted values within each cosmology, represented by the error bars in Figure 3 and averaged over the cosmologies from the test sets, \(\langle\sigma\rangle\), is also presented in Table 2, as well as the RMSE values. The RMSE, a measurement of the overall error, is also calculated for the training data set (appearing in parentheses in the tables) so that we can assess overfitting. A significantly larger RMSE value from test data compared to that from training data would indicate overfitting. For all our analyses these values seem to be in reasonable concordance, so we consider that no overfitting occurs. Although omitted here, the other metrics obtained from training data sets also lead to the same conclusion. From the third part of Table 2, one can see that, regardless of the features, \(h\) is always better estimated than the \(\Omega_{c}\) parameter. For both parameters, Figure 3 shows that, given the error bars, the predictions are consistent with the true values, showing that the NNs are able to learn the relations between the summary statistics and cosmological parameters, especially when using \(C_{\ell}\) or \(V_{k}\)+\(C_{\ell}\). The results from the accuracy metrics \(\langle\sigma\rangle\), slope, and RMSE show that the APS outperforms the MFs in estimating both the \(\Omega_{c}\) and \(h\) parameters. Their combination, however, does not seem to provide a significant improvement on the results, showing slightly smaller (larger) RMSE values for the \(h\) (\(\Omega_{c}\)) parameter with respect to that from \(C_{\ell}\), while the slope values are slightly smaller for both cosmological parameters. Still, for \(V_{k}\)+\(C_{\ell}\), the RMSE values for the \(\Omega_{c}\) and \(h\) parameters are 0.013 and 0.011, respectively, representing 4.9% and 1.6% fractional errors about the fiducial values \(\Omega_{c}=0.265\) and \(h=0.6727\). As pointed out by Perez et al. (2022), for \(\Omega_{c}\) and \(h\), a RMSE value close to 0.025 and 0.030, respectively, would indicate inaccuracy of the method in predicting these parameters, since the error would amount to half of the sampling interval (Table 1). In Costa et al. (2022), the BINGO collaboration employs the Fisher matrix formalism in a cosmological forecast using the APS and finds a 1\(\sigma\) constraint of 19% fractional error for \(h\), under the \(\Lambda\)CDM scenario, which is reduced to 0.9% when adding Planck measurements. Although the authors explore a different set of cosmological parameters and prior ranges, making the comparison with our results not straightforward7, the significantly smaller error found here for the Hubble parameter indicates how competitive our constraints can be and suggests that the error bars can be reduced by combining different data sets to feed NN algorithms. Also, although one can say that our analyses combine BINGO and Planck results, given that we use Planck constraints to define the sampling interval for \(\Omega_{b}\), \(n_{s}\), and \(A_{s}\), the parameters of interest in case _(i)_ are sampled in a significantly broader range.

Footnote 7: For examples comparing the two types of analyses and error estimates, see Villaescusa-Navarro et al. (2022).
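For reference, the accuracy figures quoted throughout this section follow the metrics of Section 3.2.3 and can be computed from the test-set predictions as in the sketch below (illustrative code; the array ordering assumed here is our own convention).

```python
import numpy as np

def accuracy_metrics(y_true, y_pred, n=12):
    """RMSE, <sigma> (equation 8) and linear-fit slope for one parameter.

    y_true, y_pred: arrays of length n_c * n, ordered so that the n
    realisations of each of the n_c test cosmologies are contiguous.
    """
    # Overall error, regardless of its origin (NN or cosmic variance).
    rmse = np.sqrt(np.mean((y_pred - y_true) ** 2))

    # Scatter of the n predictions within each cosmology, averaged
    # over the cosmologies: the uncertainty associated with the CV.
    sigma_cv = np.mean(np.std(y_pred.reshape(-1, n), axis=1))

    # Slope of the y_pred versus y_true linear fit: values near 1
    # indicate small bias; values near 0, predictions pulled towards
    # the mean of the sampled parameter range.
    slope = np.polyfit(y_true, y_pred, deg=1)[0]
    return rmse, sigma_cv, slope
```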
To help understand our results, in Appendix A we investigate how each of the summary statistics is modified by varying each cosmological parameter. Figure A1, illustrating the effect of changing the cosmological parameter amplitudes by 1%, shows that all three MFs and the APS seem to be more sensitive to the Hubble parameter \(h\) than to the dark matter density parameter \(\Omega_{c}\), given the amplitude of the curves. This explains the better accuracy metrics for \(h\) compared to \(\Omega_{c}\). Also, the higher sensitivity of the MFs to the Hubble parameter helps explain why the combination \(V_{k}\)+\(C_{\ell}\) improves, even if just slightly, only the \(h\) parameter predictions. However, the reason why the combination of the summary statistics does not provide more significant improvements with respect to their individual usage remains to be better understood. To shed light on this finding, one should remember the dependency between the two summary statistics (see Subsection 3.1). This relationship is given by an integration over several scales of the \(C_{\ell}\) (Novikov et al., 1999), possibly losing information, in such a way that, even when applied to Gaussian fields, the two statistics may not carry the same information. For the case of the non-Gaussian 21 cm field, although our simulations may not reproduce realistic three-point information (see discussion in Subsection 2.1), the MFs still describe their log-normal characteristic, unlike the APS. In this sense, even if the log-normal information is not enough to significantly improve the cosmological predictions from MFs with respect to APS, one cannot guarantee that the two summary statistics would have similar performances or that their combination would lead to significant improvements. In terms of the \(\Delta\Omega_{c}\)-\(\Delta h\) plane (or just \(\Omega_{c}\)-\(h\) plane, for simplicity), presented in Figure 4, where \(\Delta\) represents the difference between true and predicted parameters, the error ellipses for the \(V_{k}\), \(C_{\ell}\), and \(V_{k}\)+\(C_{\ell}\) statistics show that the last one seems to provide the smallest contour area. This is confirmed by the results summarised in Table 3, showing that the area of the error ellipse obtained using the \(C_{\ell}\) is reduced by \(\sim 27\%\) when using the combination \(V_{k}\)+\(C_{\ell}\), while \(V_{k}\) is again the least restrictive statistic. However, Table 3 also shows that, differently from the BINGO simulations, when analysing the (unrealistic) clean 21 cm realisations the MFs provide better constraints than the APS; their combination provides a reduction of \(\sim 19\%\) in the contour area with respect to the MFs only. This result has two main implications. First, the combination of the summary statistics seems to be even more advantageous in the presence of contaminant signals.

Figure 3: Predicted versus the true cosmological parameters values, showing results from case _(i)_, for \(\{\Omega_{c},\,h\}\) parameters. Each point and error bar correspond to the average and standard deviation of the \(n=12\) predictions within the same cosmology. The different colours, as labelled in the first panel of each row, represent a type of simulation, the clean 21 cm mocks, these mocks contaminated by thermal noise (+WN), and contaminated by foreground residual, along with the noise (+WN+FG). The dot-dashed, dotted and dashed lines are the linear fitting of the corresponding coloured data points. From top to bottom, each row shows results using the MFs, the \(C_{\ell}\), and their combination, \(V_{k}\)+\(C_{\ell}\).
Second, a possible imprecision of the three-point information of our 21 cm log-normal realisations is not the only explanation for the best performance of the \(C_{\ell}\) over BINGO simulations, since the constraints from clean 21 cm simulations rely on the same log-normal realisations. This second point suggests that the better constraining power of the APS for BINGO simulations could also be a consequence of a more severe impact of the contaminants on the MFs predictions compared to the APS. The impact of each contaminant on the summary statistics is assessed later in this section, where we get back to this discussion.

Figure 3: Predicted versus the true cosmological parameters values, showing results from case _(i)_, for \(\{\Omega_{c},\,h\}\) parameters. Each point and error bar correspond to the average and standard deviation of the \(n=12\) predictions within the same cosmology. The different colours, as labelled in the first panel of each row, represent a type of simulation, the clean 21 cm mocks, these mocks contaminated by thermal noise (+WN), and contaminated by foreground residual, along with the noise (+WN+FG). The dot-dashed, dotted and dashed lines are the linear fitting of the corresponding coloured data points. From top to bottom, each row shows results using the MFs, the \(C_{\ell}\), and their combination, \(V_{k}\)+\(C_{\ell}\).

### Parameters prediction under the CPL parameterization

Reproducing the same analyses as discussed in the previous subsection (same number of cosmologies and realisations), now considering the four-parameter space, case _(ii)_ \(\{\Omega_{c},h,w_{0},w_{a}\}\), we find the predicted values as shown in Figure 5.
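For reference, since the parameterization itself is not restated here, the CPL (Chevallier-Polarski-Linder) dark energy equation of state is conventionally written in terms of the scale factor \(a=1/(1+z)\) as

```latex
% CPL dark energy equation of state: w_0 is the present-day value
% and w_a controls its evolution with the scale factor a.
w(a) = w_0 + w_a\,(1-a) = w_0 + w_a\,\frac{z}{1+z}
```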
These results show a greater difficulty of the NN in mapping the summary statistics into the cosmological parameters, which is expected given the larger parameter space. Also, we can see that the constraining power of the method in estimating the \(h\) parameter is the most affected by the inclusion of the CPL parameters, \(w_{0}\) and \(w_{a}\), likely due to the degeneracy among them. Still, the \(1\sigma\) error bars show a reasonable concordance of the predictions with the true values, apart from the \(w_{a}\) parameter. In fact, from Figure A1 in the appendix, we can see that \(w_{a}\) is the parameter whose variation least modifies the summary statistics. The predictions for this parameter from both the APS and MFs are clearly shifted to the mean of the respective sampling interval, artificially reducing the loss function. This behaviour is quantified by the slope values presented in the third part of Table 4. The same table also summarises the \(\langle\sigma\rangle\) and RMSE metrics resulting from the four-parameter constraints. Similarly to the two-parameter constraints, the metrics show that the APS outperforms the MFs in predicting all four parameters, while their combination allows only slight improvements over using the APS alone. The RMSE values for \(\Omega_{c}\), \(h\), and \(w_{a}\) obtained using \(V_{k}+C_{\ell}\) decrease with respect to those obtained using only \(C_{\ell}\), but slightly increase for \(w_{0}\). The slope values indicate a smaller (larger) bias for the \(\Omega_{c}\) and \(h\) (\(w_{0}\) and \(w_{a}\)) parameters for the \(V_{k}+C_{\ell}\) statistics. Again, these results can be interpreted taking into account the relationship between the two summary statistics, as we discussed in the previous section. Still, we should also account for the fact that the NN training and the hyperparameter optimisation are stochastic processes. This means that training a NN will lead to different outputs for multiple evaluations, even fixing the set of hyperparameters, because of the random choice of some parameters of the NN algorithm and the random split of the training and validation sets. Also, the hyperparameter selection can lead to better optimised architectures for specific cases. For such reasons, a slight change in the accuracy metrics may simply indicate that there has been neither improvement nor degradation of the predictions, which would explain the few cases with slightly worse metrics from the combination \(V_{k}+C_{\ell}\). The NN fed with the combination of the summary statistics provides predictions for the \(\Omega_{c}\), \(h\), \(w_{0}\) and \(w_{a}\) parameters with accuracy of RMSE \(\sim 0.017,0.025\), \(0.243\), and \(1.096\), respectively, which gives, for the first three, \(6.4\%\), \(3.7\%\), and \(24.3\%\) fractional error about their fiducial values (Table 1). We note that a RMSE value close to half of the sampling interval, namely, \(0.5\) and \(1.375\) for the \(w_{0}\) and \(w_{a}\) parameters, respectively, would indicate inaccuracy of the method in predicting these parameters; \(w_{a}\) predictions have the highest RMSE value relative to the sampling interval. In fact, under the CPL parameterization, the Fisher matrix analysis by Costa et al. (2022) also finds the \(w_{a}\) parameter to be the most difficult to constrain, obtaining an error amplitude of \(2.8\) (\(1.2\)) for BINGO (BINGO+Planck). For the \(h\) and \(w_{0}\) parameters8 they find fractional errors of \(20\%\) (\(2.9\%\)) and \(55\%\) (\(30\%\)), respectively. Again, although a comparison between these results and ours is not straightforward, the significantly lower error estimates found here still indicate that our methodology is very promising, in particular to improve the Planck constraint on \(w_{0}\). Footnote 8: The fiducial values employed in Costa et al. (2022) are \(h=0.6732\), \(w_{0}=-1\), and \(w_{a}=0\). From the error ellipses shown in Figure 6, we find that, for the \(\Omega_{c}-h\) plane, the area of the \(1\sigma\) contour from the predictions obtained using the APS is reduced by \(\sim 5\%\) when the summary statistics are combined, \(V_{k}+C_{\ell}\), as shown in Table 3. For the \(\Omega_{c}-w_{0}\) and \(h-w_{0}\) planes, the \(1\sigma\) error contours shrink by \(\sim 0.1\%\) and \(\sim 2\%\), respectively. The combination \(V_{k}+C_{\ell}\) shrinks the error contours on the \(\Omega_{c}-w_{a}\), \(h-w_{a}\), and \(w_{0}-w_{a}\) planes, obtained using the APS alone, by \(\sim 10\%\), \(\sim 12\%\), and \(\sim 15\%\), respectively. Moreover, similarly to the findings from the \(\Lambda\)CDM scenario, when analysing the clean 21 cm simulations the individual results from the two summary statistics (Table 4) are quite consistent for the \(h\) and \(\Omega_{c}\) parameters, while their error ellipses show the MFs with better constraining power than the APS. For the other parameters and ellipse errors, the APS seems to outperform the MFs also when no contaminant signal is accounted for.

### Impact of individual systematic effects

Here we evaluate how the \(\{\Omega_{c},h\}\) and \(\{\Omega_{c},h,w_{0},w_{a}\}\) parameters predictions are impacted by the presence of noise and foreground residual individually, by adding one at a time to the cosmological signal.

Figure 4: Error ellipses for the \(\{\Omega_{c},h\}\) parameter space predictions, showing the distribution of the difference between true and predicted cosmological parameter, \(\Delta h=h^{\rm{True}}-h^{\rm{Pred}}\) and \(\Delta\Omega_{c}=\Omega_{c}^{\rm{True}}-\Omega_{c}^{\rm{Pred}}\). In all cases, the solid and dashed lines represent the \(1\sigma\) and \(2\sigma\) contours, respectively. The corresponding projected 1D distributions are also shown. The different colours show results from different summary statistics and their combination.
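The \(1\sigma\) contour areas compared throughout this section can be estimated directly from the \((\Delta\Omega_{c},\Delta h)\) samples. A minimal sketch follows; it assumes the \(\Delta\chi^{2}=1\) convention for the \(1\sigma\) ellipse, whose area is \(\pi\sqrt{\det C}\) for sample covariance \(C\), and any fixed convention cancels in the area ratios reported in the tables.

```python
import numpy as np

def one_sigma_area(dx, dy):
    """Area of the 1-sigma error ellipse of samples (dx, dy), e.g.
    (Delta_Omega_c, Delta_h). Under the Delta-chi^2 = 1 convention the
    ellipse x^T C^{-1} x <= 1 has area pi * sqrt(det C)."""
    cov = np.cov(np.vstack([dx, dy]))
    return np.pi * np.sqrt(np.linalg.det(cov))

# Relative areas as in Table 3, e.g. area(C_l) / area(V_k + C_l):
# ratio = one_sigma_area(dx_cl, dy_cl) / one_sigma_area(dx_comb, dy_comb)
```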
The different colours appearing in each panel of Figures 3 and 5 represent a type of simulation, namely, the 21 cm signal only (with the BINGO beam), including the thermal noise, and, along with it, adding the foreground residual. The metrics obtained from all these types of simulations and summary statistics are summarised in Tables 2 and 4. These results indicate that the thermal noise causes the greatest impact on the predictions, while the presence of foreground residual, although not negligible, leads to a much milder degradation of the predictions. For the RMSE metric, for example, considering results for \(V_{k}+C_{\ell}\), in case _(i)_ we find a 0.009 error for both the \(\Omega_{c}\) and \(h\) parameters for clean 21 cm mocks, which increases by 30% and 18%, respectively, when adding noise, and another 8% and 4% when including also the foreground residual. For case _(ii)_, the presence of thermal noise leads to RMSE values 21%, 9%, 44%, and 9% greater for the \(\Omega_{c}\), \(h\), \(w_{0}\), and \(w_{a}\) parameters, with respect to those obtained from predictions over clean 21 cm mocks, while including the foreground residual has negligible impact (less than 1%). As can be seen from these values, the \(w_{0}\) parameter is the most affected by the noise contamination. The comparison of the error ellipses from each type of simulation, using the \(V_{k}+C_{\ell}\) statistics, is shown in Figure 7, for case _(i)_ \(\{\Omega_{c},h\}\), and Figure 8, for case _(ii)_ \(\{\Omega_{c},h,w_{0},w_{a}\}\). Table 5 shows the impact on the \(1\sigma\) area when accounting for the thermal noise and, along with it, the foreground residual (BINGO simulations). For all planes of parameter pairs, the significant impact of the thermal noise and the almost negligible effect introduced by the foreground residual are evident. Evaluating the summary statistics individually, we also observe the thermal noise as the most important contaminant, while the MFs seem to be the most impacted by it. All metrics from predictions for case _(i)_ show this same behaviour for both cosmological parameters. In this case, the RMSE calculated for the MFs increases by 42 and 40% for the \(\Omega_{c}\) and \(h\) parameters, respectively, due to the inclusion of thermal noise, while these percentages are 20 and 10% using the \(C_{\ell}\). For case _(ii)_ the increases in the RMSE due to thermal noise are 40, 8, 53, and 11% for \(\Omega_{c}\), \(h\), \(w_{0}\), and \(w_{a}\), respectively, using the MFs, which, using the \(C_{\ell}\), are 46% for \(w_{0}\) and 13% for the other three parameters. Therefore, regardless of the summary statistic or their combination, \(w_{0}\) is the parameter most affected when taking the thermal noise into account.
As pointed out earlier in this section, the more severe impact of the contaminants on the MFs predictions compared to the APS may be one of the reasons why the constraints on the \(h-\Omega_{c}\) plane by the APS are more restrictive than those by the MFs when analysing the BINGO simulations, while the opposite is observed for clean 21 cm simulations. It is also worth recalling that both foregrounds and thermal noise have a non-Gaussian distribution over the sky, which could also contribute to the MFs having a weaker constraining power than the APS for BINGO simulations. Also, the distinct angular scales (or multipoles) at which the APS is calculated could be helping the NN to discriminate between the cosmological signal, foreground residual and thermal noise, since each of them has a characteristic dependence on the angular scale.

\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline Parameter & \multicolumn{3}{c}{\(V_{k}\)} & \multicolumn{3}{c}{\(C_{\ell}\)} & \multicolumn{3}{c}{\(V_{k}+C_{\ell}\)} \\ \cline{2-10} & \(\langle\sigma\rangle\) & slope & RMSE & \(\langle\sigma\rangle\) & slope & RMSE & \(\langle\sigma\rangle\) & slope & RMSE \\ \hline \multicolumn{10}{c}{21 cm} \\ \hline \(\Omega_{c}\) & 0.007 & 0.874 & 0.014 (0.009) & 0.006 & 0.890 & 0.010 (0.009) & 0.007 & 0.910 & 0.009 (0.008) \\ \hline \(h\) & 0.006 & 0.964 & 0.010 (0.008) & 0.006 & 0.954 & 0.010 (0.008) & 0.006 & 0.943 & 0.009 (0.007) \\ \hline \multicolumn{10}{c}{21 cm + WN} \\ \hline \(\Omega_{c}\) & 0.010 & 0.524 & 0.020 (0.019) & 0.009 & 0.812 & 0.012 (0.012) & 0.008 & 0.808 & 0.012 (0.011) \\ \hline \(h\) & 0.007 & 0.833 & 0.014 (0.013) & 0.007 & 0.901 & 0.011 (0.010) & 0.007 & 0.922 & 0.011 (0.009) \\ \hline \multicolumn{10}{c}{21 cm + WN + FG (BINGO simulations)} \\ \hline \(\Omega_{c}\) & 0.010 & 0.533 & 0.020 (0.019) & 0.009 & 0.827 & 0.013 (0.011) & 0.009 & 0.784 & 0.013 (0.012) \\ \hline \(h\) & 0.008 & 0.854 & 0.015 (0.014) & 0.008 & 0.898 & 0.011 (0.010) & 0.007 & 0.890 & 0.011 (0.010) \\ \hline \multicolumn{10}{c}{21 cm @ \(f_{\rm sky}=0.52\)} \\ \hline \(\Omega_{c}\) & 0.003 & 0.967 & 0.004 (0.004) & 0.004 & 0.878 & 0.007 (0.007) & 0.004 & 0.968 & 0.004 (0.004) \\ \hline \(h\) & 0.003 & 0.968 & 0.004 (0.004) & 0.003 & 0.929 & 0.006 (0.006) & 0.003 & 0.968 & 0.004 (0.004) \\ \hline \hline \end{tabular} \end{table} Table 2: Results of the accuracy metrics evaluating the performance of the NNs trained over the MFs, the APS, and their combination, \(V_{k}+C_{\ell}\), in predicting \(\{\Omega_{c},h\}\) parameters. All the metrics are calculated from the test data sets; the RMSE is also estimated from the training data sets, appearing in parentheses. Given the sampling interval of the parameters (Table 1), we recall that RMSE values around 0.025 and 0.030 for \(\Omega_{c}\) and \(h\), respectively, indicate inaccuracy of the predictions. The first three parts of the table show the resulting metrics from analysing the clean cosmological signal (with the beam effect; 21 cm), these same mocks contaminated with thermal noise (21 cm+WN), and the BINGO simulations (21 cm+WN+FG). The last part of the table corresponds to analyses of the same clean 21 cm realisations, but considering a larger sky coverage, \(f_{\rm sky}=0.52\). See text for details.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline Parameter & \multicolumn{3}{c}{21 cm} & \multicolumn{3}{c}{21 cm+WN+FG} \\ \cline{2-7} & \(V_{k}+C_{\ell}\) & \(C_{\ell}\) & \(V_{k}\) & \(V_{k}+C_{\ell}\) & \(C_{\ell}\) & \(V_{k}\) \\ \hline \multicolumn{7}{c}{2 parameters constraint} \\ \hline \(\Omega_{c}-h\) & 1 & 1.194 & 1.188 & 1 & 1.267 & 1.686 \\ \hline \multicolumn{7}{c}{4 parameters constraint} \\ \hline \(\Omega_{c}-h\) & 1. & 1.337 & 1.137 & 1. & 1.053 & 1.385 \\ \hline \(\Omega_{c}-w_{0}\) & 1. & 1.224 & 1.329 & 1. & 1.001 & 1.909 \\ \hline \(h-w_{0}\) & 1. & 1.016 & 1.315 & 1. & 1.017 & 1.352 \\ \hline \(\Omega_{c}-w_{a}\) & 1. & 1.084 & 1.138 & 1. & 1.100 & 1.301 \\ \hline \(h-w_{a}\) & 1. & 0.970 & 1.052 & 1. & 1.124 & 1.062 \\ \hline \(w_{0}-w_{a}\) & 1. & 0.978 & 1.300 & 1. & 1.148 & 1.352 \\ \hline \hline \end{tabular} \end{table} Table 3: Area of the 1\(\sigma\) error ellipse for each summary statistic relative to that from the combination \(V_{k}+C_{\ell}\), considering the different parameters planes for \(\{\Omega_{c},h\}\) and \(\{\Omega_{c},h,w_{0},w_{a}\}\) constraints. Columns 2 to 4 show results from analysing the clean 21 cm realisations, and columns 5 to 7 are obtained from BINGO simulations.

Figure 5: Same as Figure 3, but for case _(ii)_, showing predicted values for \(\{\Omega_{c},h,w_{0},w_{a}\}\) parameters.

### Robustness to foregrounds

We also evaluate the robustness of our results from case _(ii)_ \(\{\Omega_{c},h,w_{0},w_{a}\}\) to the foreground contamination. For this we test the previously trained NNs over three different test data sets, namely, the summary statistics obtained from simulations accounting for: (1) 50% and (2) 100% higher amplitude foreground contamination, obtained by multiplying the same foreground residual maps by factors of 1.5 and 2.0 before adding them to the cosmological signal (mimicking a non-optimal usage of GNILC), and (3) using the foreground residual obtained as explained in Section 2.3, but with a different synchrotron emission model. In this last case, the synchrotron component is produced considering a power law with a non-uniform spectral index over the sky as given by the Miville-Deschênes et al. (2008) model (indicated by MD in Figure 9), replacing the Giardino et al. (2002) model (GD in Figure 9) considered for the training data set (see de Mericia et al., 2023, for details). We emphasize that, to obtain the last test data set, we again apply GNILC over ten complete simulations, now accounting for the synchrotron MD model, so that the foreground residual maps are different from those used to produce the training data set. The results for parameter constraints using each of these three test data sets are shown in Figure 9, along with the previous results (indicated by +WN+FG in the last row of Figure 5). Comparing them, we find no significant degradation of the results. From all three test data sets, the \(\langle\sigma\rangle\) and slope metrics are highly consistent with the previous predictions, presented in Table 4 (21 cm + WN + FG), regardless of the cosmological parameter. Similarly, the RMSE increases only up to 2% for \(\Omega_{c}\), \(h\) and \(w_{a}\). The larger impact is observed for \(w_{0}\), with the RMSE increasing by 11% and 19% for test data sets (2) and (3), respectively, which corresponds to a fractional error increasing from 24.3% to 27.1% and 29.1%. For test data set (1), no degradation is observed from any metric or cosmological parameter.
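A minimal sketch of how test sets (1) and (2) could be assembled from map arrays; all names and shapes here are hypothetical, and set (3) is not reproduced since it requires rerunning GNILC with the MD synchrotron model rather than rescaling.

```python
import numpy as np

rng = np.random.default_rng(0)
npix = 12 * 16**2                       # toy HEALPix size (nside = 16)
cosmo = rng.normal(size=(10, npix))     # hypothetical 21 cm mocks
noise = rng.normal(size=(10, npix))     # hypothetical thermal noise maps
fg_res = rng.normal(size=(10, npix))    # hypothetical GNILC residual maps

def contaminated_test_set(fg_scale):
    # Rescale the *same* residual maps before adding them to the signal,
    # mimicking a non-optimal foreground cleaning.
    return cosmo + noise + fg_scale * fg_res

test_set_1 = contaminated_test_set(1.5)   # 50% higher foreground residual
test_set_2 = contaminated_test_set(2.0)   # 100% higher foreground residual
```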
Therefore, even with a larger impact on \(w_{0}\) when using a different synchrotron model, our results still corroborate the applicability of the NNs outside the training data set, showing no significant additional bias on the cosmological parameter constraints. This suggests that our predictions are robust against foreground contamination and that our method is promising for application to future data.

### Sensitivity to survey area

In order to evaluate how the sky coverage influences our results, we tested our methodology over a larger sky fraction, \(f_{\rm sky}=0.52\). We chose to assume approximately the same region that SKA is intended to observe. We emphasise that we are not reproducing the SKA observational specifications, but only increasing the sky fraction to that expected to be covered by it, i.e., still using the same 21 cm mocks generated in the BINGO frequency range. The cut sky mask in this case is constructed assuming the observed region in the declination range \((-75^{\circ},28^{\circ})\). In addition to this cut, we remove the region where Galactic foreground emission would contribute and apodize the mask, following the same procedure described in Subsection 2.2. Although we apply a Galactic cut, this test is performed only over the clean 21 cm simulations, including the 40 arcmin beam effect. The last part of Tables 2 and 4 shows the accuracy metrics, for cases _(i)_ \(\{\Omega_{\rm c},h\}\) and _(ii)_ \(\{\Omega_{\rm c},h,w_{0},w_{a}\}\) parameters predictions, respectively, evaluating the performance of the method over a larger sky area. Compared to the results shown in the first part of these tables, one can clearly see the improvement in all the metrics when using a larger area, as expected. Increasing the \(f_{\rm sky}\) from 0.09 to 0.52, the RMSE from case _(i)_ can decrease by a factor of 2.25, for both parameters, when using \(V_{k}+C_{\ell}\), with an even more significant improvement when using only the MFs. When constraining two cosmological parameters, the improvement on the predictions from the MFs is high enough to make them outperform the \(C_{\ell}\). In fact, this is an advantageous aspect, since the MFs are the summary statistic most affected by thermal noise, and it indicates that using a larger fraction of the sky may allow the combination \(V_{k}+C_{\ell}\) to provide better results compared to the individual statistics when accounting for the contaminant signals. Similarly, although the improvements are not so expressive for case _(ii)_, all the metrics improve when using a larger fraction of the sky (for both summary statistics and their combination), with smaller RMSE values by a factor of at most 1.3 for \(\Omega_{\rm c}\) and \(w_{0}\) using \(V_{k}+C_{\ell}\). The error ellipses for \(f_{\rm sky}=0.09\) (BINGO coverage) and 0.52 are compared in Figures 10 and 11 for cases _(i)_ and _(ii)_, respectively. The effect of the sky fraction on the area of the \(1\sigma\) regions is summarised in Table 6, showing a significant reduction of this area for all planes when using the combination \(V_{k}+C_{\ell}\) over a larger \(f_{\rm sky}\), mainly favouring the \(\Omega_{\rm c}-h\) plane from case _(i)_ and \(\Omega_{\rm c}-w_{0}\) from case _(ii)_. Besides, the results confirm the MFs outperforming the \(C_{\ell}\) for the \(\Omega_{\rm c}-h\) plane in clean 21 cm maps, for both sky fractions (see columns 2 to 4 in Table 3) and cases _(i)_ and _(ii)_.

\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline \multirow{2}{*}{Parameter} & \multicolumn{3}{c}{\(V_{k}\)} & \multicolumn{3}{c}{\(C_{\ell}\)} & \multicolumn{3}{c}{\(V_{k}\)+\(C_{\ell}\)} \\ \cline{2-10} & \(\langle\sigma\rangle\) & slope & RMSE & \(\langle\sigma\rangle\) & slope & RMSE & \(\langle\sigma\rangle\) & slope & RMSE \\ \hline \multicolumn{10}{c}{21 cm} \\ \hline \(\Omega_{c}\) & 0.006 & 0.692 & 0.015 (0.016) & 0.006 & 0.713 & 0.015 (0.015) & 0.007 & 0.761 & 0.014 (0.014) \\ \hline \(h\) & 0.004 & 0.528 & 0.024 (0.024) & 0.004 & 0.524 & 0.023 (0.023) & 0.003 & 0.524 & 0.023 (0.024) \\ \hline \(w_{0}\) & 0.125 & 0.872 & 0.214 (0.210) & 0.098 & 0.913 & 0.168 (0.159) & 0.103 & 0.919 & 0.166 (0.151) \\ \hline \(w_{a}\) & 0.337 & 0.303 & 1.023 (1.036) & 0.313 & 0.380 & 0.967 (0.960) & 0.311 & 0.335 & 1.001 (1.005) \\ \hline \multicolumn{10}{c}{21 cm + WN} \\ \hline \(\Omega_{c}\) & 0.010 & 0.464 & 0.021 (0.021) & 0.007 & 0.633 & 0.017 (0.016) & 0.007 & 0.626 & 0.017 (0.017) \\ \hline \(h\) & 0.005 & 0.412 & 0.026 (0.026) & 0.009 & 0.514 & 0.026 (0.025) & 0.005 & 0.443 & 0.025 (0.024) \\ \hline \(w_{0}\) & 0.153 & 0.679 & 0.328 (0.341) & 0.137 & 0.813 & 0.246 (0.237) & 0.133 & 0.803 & 0.240 (0.221) \\ \hline \(w_{a}\) & 0.159 & 0.142 & 1.138 (1.168) & 0.436 & 0.265 & 1.102 (0.931) & 0.336 & 0.232 & 1.092 (1.065) \\ \hline \multicolumn{10}{c}{21 cm + WN + FG (BINGO simulations)} \\ \hline \(\Omega_{c}\) & 0.010 & 0.444 & 0.022 (0.021) & 0.008 & 0.638 & 0.017 (0.017) & 0.008 & 0.632 & 0.017 (0.016) \\ \hline \(h\) & 0.002 & 0.419 & 0.026 (0.027) & 0.009 & 0.486 & 0.026 (0.025) & 0.006 & 0.454 & 0.025 (0.024) \\ \hline \(w_{0}\) & 0.155 & 0.656 & 0.329 (0.339) & 0.142 & 0.827 & 0.242 (0.230) & 0.143 & 0.843 & 0.243 (0.221) \\ \hline \(w_{a}\) & 0.167 & 0.143 & 1.134 (1.163) & 0.609 & 0.221 & 1.214 (1.181) & 0.286 & 0.203 & 1.096 (1.096) \\ \hline \multicolumn{10}{c}{21 cm @ \(f_{\rm sky}=0.52\)} \\ \hline \(\Omega_{c}\) & 0.005 & 0.805 & 0.012 (0.013) & 0.005 & 0.797 & 0.012 (0.011) & 0.004 & 0.866 & 0.011 (0.012) \\ \hline \(h\) & 0.003 & 0.494 & 0.022 (0.023) & 0.003 & 0.587 & 0.022 (0.022) & 0.005 & 0.612 & 0.020 (0.020) \\ \hline \(w_{0}\) & 0.087 & 0.935 & 0.145 (0.140) & 0.060 & 0.954 & 0.119 (0.115) & 0.069 & 0.963 & 0.129 (0.124) \\ \hline \(w_{a}\) & 0.264 & 0.458 & 0.893 (0.917) & 0.285 & 0.517 & 0.840 (0.816) & 0.240 & 0.464 & 0.895 (0.903) \\ \hline \hline \end{tabular} \end{table} Table 4: Same as Table 2, but for \(\{\Omega_{c},h,w_{0},w_{a}\}\) parameters predictions. We remind that RMSE values around 0.025, 0.030, 0.5, and 1.375 for \(\Omega_{c}\), \(h\), \(w_{0}\), and \(w_{a}\), respectively, indicate inaccuracy of the predictions, since the \(1\sigma\) error would amount to half of the sampling interval (Table 1).

\begin{table} \begin{tabular}{c c c c} \hline \hline Parameter space & 21 cm & + WN & + WN + FG (BINGO simulations) \\ \hline \multicolumn{4}{c}{2 parameters constraint} \\ \hline \(\Omega_{c}-h\) & 1 & 1.522 & 1.702 \\ \hline \multicolumn{4}{c}{4 parameters constraint} \\ \hline \(\Omega_{c}-h\) & 1. & 1.625 & 1.648 \\ \hline \(\Omega_{c}-w_{0}\) & 1. & 1.999 & 2.053 \\ \hline \(h-w_{0}\) & 1. & 1.560 & 1.610 \\ \hline \(\Omega_{c}-w_{a}\) & 1. & 1.400 & 1.402 \\ \hline \(h-w_{a}\) & 1. & 1.195 & 1.234 \\ \hline \(w_{0}-w_{a}\) & 1. & 1.547 & 1.574 \\ \hline \hline \end{tabular} \end{table} Table 5: Same as Table 3 but using only the combination \(V_{k}\)+\(C_{\ell}\) and showing the area of the error ellipses obtained analysing noisy 21 cm simulations (+ WN) and BINGO simulations (+ WN + FG) relative to those from clean 21 cm simulations.
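The declination strip of the enlarged mask described above can be sketched in a few lines of healpy. This is a rough sketch under stated assumptions: the map is taken to be in equatorial coordinates, the nside is illustrative, and the Galactic cut and apodization of Subsection 2.2 (which bring the retained fraction down to \(f_{\rm sky}=0.52\)) are omitted.

```python
import numpy as np
import healpy as hp

nside = 256                                    # illustrative resolution
theta, _ = hp.pix2ang(nside, np.arange(hp.nside2npix(nside)))
dec = 90.0 - np.degrees(theta)                 # declination from colatitude
mask = ((dec > -75.0) & (dec < 28.0)).astype(float)
print("strip f_sky =", mask.mean())            # ~0.72 before the Galactic cut
# The Galactic cut and apodization (omitted here) reduce this
# to the f_sky ~ 0.52 used in the analysis.
```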
For case _(ii)_, the larger sky fraction allows the \(1\sigma\) area for the \(\Omega_{\rm c}-h\) plane obtained using the MFs to be reduced by \(\sim\)22% when combining the summary statistics, \(V_{k}+C_{\ell}\). A similar reduction is also found in the \(\Omega_{\rm c}-w_{0}\) plane. Also, using \(f_{\rm sky}=0.52\) leads to an almost 5% reduction of the \(1\sigma\) area from both the \(\Omega_{\rm c}-w_{a}\) and \(h-w_{a}\) planes using \(V_{k}+C_{\ell}\) with respect to their individual usage, while the \(w_{0}-w_{a}\) plane is slightly better constrained by the \(C_{\ell}\) alone. For \(f_{\rm sky}=0.09\), as shown in Table 3, only \(\Omega_{\rm c}-w_{a}\) has the ellipse area reduced by the combination \(V_{k}+C_{\ell}\). Such results show that increasing the sky coverage improves the predictions of all cosmological parameters, as expected, and also allows the combination of summary statistics to provide a more significant improvement of the predictions compared to their individual usage.

### Sensitivity to redshift ranges

We also investigate the sensitivity of each cosmological parameter to the redshift. For this, we perform the same training procedure described before, using as features the summary statistics from a given range of redshift. In addition to the whole set of 30 redshift bins, we also test using three subsets, the lowest, intermediate and highest ten redshift bins, which correspond to \(z_{1-10}\in[0.127,0.234]\), \(z_{11-20}\in[0.234,0.342]\), and \(z_{21-30}\in[0.342,0.449]\), respectively. The RMSE metric for each of these tests, for cases _(i)_ and _(ii)_, is presented in Figure 12, showing results for each cosmological parameter and summary statistic. Since using two different summary statistics (or 5, given that the MFs include the Area, Perimeter, and Genus) from 30 redshift bins constitutes a very large set of features, we also evaluate the effect of compressing the summary statistics. This compression corresponds to a simple average of the statistics from each 6 consecutive bins, so that we have, effectively, 5 redshift bins instead of 30; a minimal sketch of this step is shown below.

Figure 7: Same as Figure 4, but showing results from using different simulations: the clean 21 cm simulations (21 cm), after including thermal noise to them (21 cm + WN), and adding foreground residual along with the noise (21 cm + WN + FG). All predictions shown here result from the combination \(V_{k}+C_{\ell}\).

Figure 8: Same as Figure 7, but for case _(ii)_, showing predicted values for \(\{\Omega_{\rm c},h,w_{0},w_{a}\}\) parameters.
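The compression mentioned above reduces to a reshape-and-average; a minimal sketch with hypothetical array shapes:

```python
import numpy as np

def compress_redshift_bins(stats, n_groups=5):
    """Average each summary statistic over consecutive redshift bins,
    e.g. 30 bins -> 5 effective bins (6 consecutive bins per group).
    `stats` has the hypothetical shape (n_samples, n_bins, n_features)."""
    n_samples, n_bins, n_features = stats.shape
    return stats.reshape(n_samples, n_groups, n_bins // n_groups, n_features).mean(axis=2)

stats = np.random.default_rng(1).normal(size=(4, 30, 16))
print(compress_redshift_bins(stats).shape)  # -> (4, 5, 16)
```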
This behaviour confirms that when sampling also the CPL parameters, although still able to reasonably predict the \(h\), as we show before, the mapping between the summary statistics and cosmological parameters is not able to keep the same efficiency as in case _(i)_. Finally, for \(w_{0}\) parameter we find a similar behaviour for both summary statistics, with increasing RMSE metric with redshift. For \(w_{a}\), in contrast, we see no clear dependence with the redshift range. In fact, apart from \(h\) and \(w_{a}\) parameters in case _(ii)_, all other results shown in Figure 12 show that the better predictions are obtained using all the 30 redshift bins, or their compression. This motivated us to employ the compressed summary statistics in the analyses presented in previous sections. In addition, we can again use Figure 13 to interpret these results. This figure shows that, although the differences observed in the MFs due to the 1% variation of the cosmological parameter have large Figure 11: Same as Figure 10, but for case _(ii)_, showing predicted values for \(\{\Omega_{c},h,w_{0},w_{a}\}\) parameters. Figure 10: Error ellipses for the \(\{\Omega_{c},h\}\) parameter space, showing the distribution of the difference of predicted and true values, as in Figure 4, but comparing two different sky coverages, \(f_{sky}=0.09\) (BINGO coverage) and \(0.52\). Both analyses use clean \(21\,\mathrm{cm}\) simulations and the combination of the summary statistics, \(V_{k}\)+\(C_{\ell}\). Figure 9: Same as last row of Figure 5, but comparing results obtained using different training data sets. The red line and dots repeat +WN+FG results from Figure 5, indicated here by GD (synchrotron model from Giardino et al., 2002). The lines and dots in orange and pink correspond to the cases using 50% and 100% higher residual foreground, respectively. The violet line and dots correspond to the results using a different synchrotron model (Miville-Deschênes et al., 2008, MD model). The error bars are not shown to help visualize the results. See text for details. amplitudes at smaller redshifts, the 1\(\sigma\) regions significantly decrease at higher redshifts, no longer overlapping, for the first two MFs, Area and Perimeter, and the APS. This means that, at higher redshifts, the cosmic variance would have a smaller impact on the predictions. This can help explain the improvements of \(h\) parameter predictions with redshift, regardless of the summary statistic, and of \(\Omega_{c}\) for the \(C_{\ell}\). The opposite is observed from \(\Omega_{c}\) predictions using the MFs, as well as for \(w_{0}\) with both statistics, whose RMSE metrics increase with redshift, which indicate that the prediction of these two parameters may rely more significantly on other aspect of the summary statistics. ## 5 Conclusions We evaluate the performance of NNs in learning the relationship among summary statistics (APS and MFs) from the redshifted Hi emission line and cosmological parameters, preventing the use of analytical likelihoods and assumptions for their constructions. For this we employ a large set of 21 cm signal simulations, spanning a grid of 800 cosmologies, generated using the FLASK code, largely sampling the parameter space, using as a case study the BINGO telescope; our BINGO simulations account for the sky coverage and instrumental beam, besides the contamination by thermal noise and foreground residual. 
Such data set is split so that 64%, 16%, and 20% of the cosmologies are used for training, validation and test, respectively. We evaluate the constraining power of our method under two scenarios, the \(\Lambda\)CDM model, predicting the dark matter and Hubble parameters (\(\Omega_{c}\), \(h\)) (case (_i_)), and the CPL parameterization, accounting also for the DE EoS parameters, i.e., predicting \(\{\Omega_{c},h,w_{0},w_{a}\}\) (case (_ii_)). Although we are interested in predicting, in each scenario, the aforementioned parameters, our grid of cosmologies is defined varying also \(\Omega_{b}\), \(n_{s}\), and \(A_{s}\) parameters in a more restrictive range of values, given by the 1\(\sigma\) confidence region from Planck 2018. As far as we know, no other work from the literature accounts for the uncertainty of parameters other than those of interest. In fact, since the exact values of the cosmological parameters are not known, but only their most likely values and respective error bars, sampling other parameters of the cosmological model seems more appropriate, avoiding underestimating errors. The relatively fast production of our lognormal simulations allows us to better explore the parameter space. We find that, under the \(\Lambda\)CDM scenario, both \(\Omega_{c}\) and \(h\) parameters can be quite well predicted by the trained NN, obtaining 4.9% and 1.6% fractional errors, respectively, using the combination of APS and MFs statistics. Such constrains, in particular for \(h\), are significantly impacted by the inclusion of the CPL parameters; we find predictions for \(\Omega_{c}\), \(h\), and \(w_{0}\) with 6.4%, 3.7%, and 24.3% errors. The Hubble parameter is still reasonably well constrained by our method. Predictions of \(w_{a}\), in contrast, have a large bias, being poorly constrained. Investigating the constraining power of each summary statistic individually, we find the APS outperforming the MFs in most of our tests over BINGO simulations, which can be partially explained by a possible imprecision of the three-point information of our lognormal simulations. Comparing such results to those obtained analysing clean 21 cm simulation, we find that the MFs are the most impacted by the presence of these contaminants, which may help explain the less effective constraining power of the MFs. We note that the greater impact of the presence of contaminants over the MFs predictions could possibly be associated to their non-Gaussian characteristic or even because the cosmological signal, foregrounds and thermal noise are \begin{table} \begin{tabular}{c c c c c} \hline \hline Parameter & \multicolumn{2}{c}{\(f_{sky}=0.52\)} & \multicolumn{2}{c}{BINGO coverage} \\ \cline{2-5} space & \(V_{k}\)+\(C_{\ell}\) & \(C_{\ell}\) & \(V_{k}\) & \(V_{k}\)+\(C_{\ell}\) \\ \hline \multicolumn{5}{c}{2 parameters constraint} \\ \hline \(\Omega_{c}-h\) & 1 & 2.902 & 1.010 & 3.383 \\ \hline \multicolumn{5}{c}{4 parameters constraint} \\ \hline \(\Omega_{c}-h\) & 1. & 1.605 & 1.223 & 1.615 \\ \hline \(\Omega_{c}-w_{0}\) & 1. & 1.218 & 1.254 & 1.659 \\ \hline \(h-w_{0}\) & 1. & 1.012 & 1.233 & 1.479 \\ \hline \(\Omega_{c}-w_{a}\) & 1. & 1.048 & 1.101 & 1.404 \\ \hline \(h-w_{a}\) & 1. & 1.046 & 1.084 & 1.296 \\ \hline \(w_{0}-w_{a}\) & 1. 
& 0.831 & 1.132 & 1.362 \\ \hline \hline \end{tabular} \end{table} Table 6: Area of the 1\(\sigma\) error ellipse for individual summary statistics relative to that from the combination \(V_{k}\)+\(C_{\ell}\), but analysing a larger fraction of the sky, \(f_{sky}=0.52\) (columns 2 and 3). The last column shows the relative area of the ellipse when analysing BINGO coverage (\(f_{sky}=0.09\)) using \(V_{k}\)+\(C_{\ell}\). These results correspond to clean 21 cm simulations. Figure 12: Sensitivity of the parameters predictions with the redshift. The upper and lower panels show the RMSE metric obtained from cases (_i_) and (_ii_), respectively, as a function of the range of redshift bins from which the summary statistics are calculated and used to feed the NNs. The RMSE values for \(w_{a}\) parameter are re-scaled by a factor of 0.5. In both panels, the right most data points show the RMSE resulting from predictions performed using the summary statistics from all the 30 redshift bins averaged into 5. These results are obtained using BINGO simulations. See text for details. more easily distinguishable in terms of angular scales, as given by the APS, than \(\nu\) thresholds. We assess the contribution of each contaminant signal individually to the results and find that the accuracy of the predictions is mainly affected by thermal noise, while foreground residual has a minor impact. The robustness of our methodology and results to the foreground contamination is also evaluated by applying the trained NNs to test data sets accounting for different contributions of foreground residual, confirming that our NNs can be efficiently applied outside the training data. Furthermore, we find that the accuracy of the cosmological parameters predictions is sensitive to the redshift range, which is also determined by the summary statistic feeding the NN. The usage of the full range of redshift is confirmed to provide the better predictions. Using clean 21 cm simulations, we also investigate how a four times large sky coverage than BINGO's, e.g., as the SKA footprint, can improve cosmological constrains. We find that, under the \(\Lambda\)CDM scenario, the \(\Omega_{c}-h\) plane has its \(1\sigma\) error ellipse reduced by a factor of 3. Also, increasing the sky fraction seems to allow the combination of the summary statistics to provide a tighter \(1\sigma\) contour compared to their individual usage, as observed for most of the parameters planes evaluated here, in particular for \(\Omega_{c}-h\) predictions. Moreover, although the suboptimal cosmological simulations employed here may also have contributed to the less effective constraining power of the MFs compared to the APS, or their very similar performance in some cases, the FLASHK mocks have proved to be enough for our purpose. They allowed us to assess the cosmological constraining power of our method, showing the usage of NN fed by summary statistics from 21 cm simulations to be a very promising alternative to likelihood based analysis. In addition, they have also allowed to demonstrate how simple can be the combination of summary statistics to feed the NN and predict cosmological parameters, preventing technical problems commonly faced by standard analysis, as, e.g., when calculating covariance matrices. 
In fact, such simulations, although enough to evaluate the role and efficiency of the APS as features, are suboptimal and may be preventing us of exploring the full potential of the MFs, expected to carry a larger amount of information with respect to the APS due to their sensitivity to higher order information. Fore this reason, we believe to be able to improve our results by using, for example, N-body or hydrodynamic simulations, which we leave for future work. More suitable simulations are expected to allow for better predictions using the MFs and more expressive improvements from the combination with the APS and other summary statistics. It is worth emphasising that, in addition to the 21 cm simulations, some aspects of our analyses also deserve improvements, left for a future extension of this work. In particular, the improvement of BINGO simulations, making them more realistic by accounting for possible signal remaining after cleaning the \(1/f\) noise (Bigot-Sazy et al., 2015) and the contamination by polarisation leakage (Cunnington et al., 2021). Even using suboptimal simulations, our results report the success of the NNs in mapping the summary statistics into the cosmological parameters. In particular, they indicate that future low redshift 21 cm data should be able to provide important contributions, in special to help investigate the Hubble tension and to study the dark sector. Finally, we emphasise that the methodology described here is not restricted to the BINGO case, but can be explored and optimised to use different data sets, such as from other 21 cm experiments, galaxy surveys, CMB, or even a given combination of them. ## Acknowledgements C.P.N. thanks Serrapilheira and Sao Paulo Research Foundation (FAPESP; grant 2019/06040-0) for financial support. M.R. acknowledges support from the CSIC programme 'Ayuda a la Incorporacion de Cientificos Titulares' provided under the project 2022501159. This research made use of astropy (Price-Whelan et al., 2018), healpy (Zonca et al., 2019), numpy (Van Der Walt et al., 2011), scipy (Virtanen et al., 2020) and matplotlib (Hunter, 2007). ## Data Availability The data underlying this article, as well as NN models and algorithm, are available upon reasonable request to the corresponding author.
2309.09845
Recovery of a time-dependent potential in hyperbolic equations on conformally transversally anisotropic manifolds
We study an inverse problem of determining a time-dependent potential appearing in the wave equation in conformally transversally anisotropic manifolds of dimension three or higher. These are compact Riemannian manifolds with boundary that are conformally embedded in a product of the real line and a transversal manifold. Under the assumption of the attenuated geodesic ray transform being injective on the transversal manifold, we prove the unique determination of time-dependent potentials from the knowledge of a certain partial Cauchy data set.
Boya Liu, Teemu Saksala, Lili Yan
2023-09-18T14:59:56Z
http://arxiv.org/abs/2309.09845v1
Recovery of a time-dependent potential in hyperbolic equations on conformally transversally anisotropic manifolds ###### Abstract. We study an inverse problem of determining a time-dependent potential appearing in the wave equation in conformally transversally anisotropic manifolds of dimension three or higher. These are compact Riemannian manifolds with boundary that are conformally embedded in a product of the real line and a transversal manifold. Under the assumption of the attenuated geodesic ray transform being injective on the transversal manifold, we prove the unique determination of time-dependent potentials from the knowledge of a certain partial Cauchy data set. ## 1. Introduction and Statement of Results Let \((M,g)\) be a smooth, compact, oriented Riemannian manifold of dimension \(n\geq 3\) with smooth boundary \(\partial M\). Throughout this paper we denote \(Q=(0,T)\times M^{int}\) with \(0<T<\infty\), \(\overline{Q}\) being the closure of \(Q\), and \(\Sigma=(0,T)\times\partial M\) the lateral boundary of \(Q\). We introduce the Laplace-Beltrami operator \(\Delta_{g}\) of the metric \(g\), and for a given smooth and strictly positive function \(c(x)\) on \(M\), we consider the wave operator \[\square_{c,g}=c(x)^{-1}\partial_{t}^{2}-\Delta_{g} \tag{1.1}\] with time-independent coefficients. In this paper we study an inverse problem for the linear hyperbolic partial differential operator \[\mathcal{L}_{c,g,q}=\square_{c,g}+q(t,x),\quad(t,x)\in Q, \tag{1.2}\] with a time-dependent coefficient \(q\in C(\overline{Q})\) called the _potential_. We shall make two geometric assumptions, of which the first one is the following: **Definition 1.1**.: _A Riemannian manifold \((M,g)\) of dimension \(n\geq 3\) with boundary \(\partial M\) is called conformally transversally anisotropic (CTA) if \(M\) is a compact subset of a manifold \(\mathbb{R}\times M_{0}^{int}\) and \(g=c(e\oplus g_{0})\). Here \((\mathbb{R},e)\) is the real line, \((M_{0},g_{0})\) is a smooth compact \((n-1)\)-dimensional Riemannian manifold with smooth boundary called the transversal manifold, and \(c\in C^{\infty}(\mathbb{R}\times M_{0})\) is a strictly positive function._ Examples of CTA manifolds include precompact smooth proper subsets of Euclidean, spherical, and hyperbolic spaces, see [10] for some more examples of CTA manifolds. The global product structure of \(M\) allows us to write every point \(x\in M\) as \(x=(x_{1},x^{\prime})\), where \(x_{1}\in\mathbb{R}\) and \(x^{\prime}\in M_{0}\). In particular, the projection \(\varphi(x)=x_{1}\) is a _limiting Carleman weight_. The existence of a limiting Carleman weight implies that a conformal multiple of the metric \(g\) admits a parallel unit vector field, and the converse holds for simply connected manifolds, see [9, Theorem 1.2]. The latter condition holds if and only if the manifold \((M,g)\) is locally isometric to the product of an interval and some \((n-1)\)-dimensional Riemannian manifold \((M_{0},g_{0})\). In addition to the product structure of the ambient space \(\mathbb{R}\times M_{0}\) of the manifold \((M,g)\), we need to also assume the injectivity of certain geodesic ray transforms on the transversal manifold \((M_{0},g_{0})\). This type of assumption has been implemented to solve many important inverse problems on CTA manifolds, see for instance [7, 10, 20, 23, 34] and the references therein. Let us now recall some definitions related to geodesic ray transforms on Riemannian manifolds with boundary. 
Geodesics of \((M_{0},g_{0})\) can be parametrized (non-uniquely) by points on the unit sphere bundle \(SM_{0}=\{(x,\xi)\in TM_{0}:|\xi|=1\}\). We denote by \[\partial_{\pm}SM_{0}=\{(x,\xi)\in SM_{0}:x\in\partial M_{0},\,\pm\langle\xi,\nu(x) \rangle>0\}\] the incoming (-) and outgoing (+) boundaries of \(SM_{0}\), corresponding to the geodesics touching the boundary. Here \(\langle\cdot,\cdot\rangle\) is the Riemannian inner product of \((M_{0},g_{0})\), and \(\nu\) is the outward unit normal vector to \(\partial M_{0}\) with respect to the metric \(g_{0}\). For any \((x,\xi)\in\partial_{-}SM_{0}\), we let \(\gamma=\gamma_{x,\xi}\) be a geodesic of \(M_{0}\) with initial conditions \((\gamma(0),\dot{\gamma}(0))=(x,\xi)\). Then \(\tau_{\rm exit}(x,\xi)>0\) stands for the first time at which \(\gamma\) meets \(\partial M_{0}\), with the convention that \(\tau_{\rm exit}(x,\xi)=+\infty\) if \(\gamma(\tau)\in M_{0}^{\rm int}\) for all \(\tau>0\). We say that a unit speed geodesic segment \(\gamma:[0,\tau_{\rm exit}(x,\xi)]\to M_{0}\), \(0<\tau_{\rm exit}(x,\xi)<\infty\), is _non-tangential_ if \(\dot{\gamma}(0)\) and \(\dot{\gamma}(\tau_{\rm exit}(x,\xi))\) are non-tangential vectors to \(\partial M_{0}\), and \(\gamma(\tau)\in M_{0}^{\rm int}\) for all \(0<\tau<\tau_{\rm exit}(x,\xi)\). Given a continuous function \(\alpha\) on \(M_{0}\), the attenuated geodesic ray transform of a function \(f\colon M_{0}\to\mathbb{R}\) is given by \[I^{\alpha}(f)(x,\xi)=\int_{0}^{\tau_{\rm exit}(x,\xi)}\exp\bigg{[}\int_{0}^{t} \alpha(\gamma_{x,\xi}(s))ds\bigg{]}f(\gamma_{x,\xi}(t))dt,\quad(x,\xi)\in \partial_{-}SM_{0}\setminus\Gamma_{-}, \tag{1.3}\] where \(\Gamma_{-}=\{(x,\xi)\in\partial_{-}SM_{0}:\tau_{\rm exit}(x,\xi)=+\infty\}\). The attenuated geodesic ray transform is the mathematical basis for the medical imaging method SPECT (single-photon emission computed tomography), which is commonly used to diagnose and monitor heart problems as well as bone and brain disorders. Inversion of an attenuated geodesic ray transform is a crucial part of solving the Calderón problem on CTA manifolds [9]. The second geometric assumption we make in this paper is as follows. **Assumption 1.** There exists \(\varepsilon>0\) such that for any smooth attenuation \(\alpha\) on \(M_{0}\) with \(\|\alpha\|_{L^{\infty}(M_{0})}<\varepsilon\), the respective attenuated geodesic ray transform \(I^{\alpha}\) on \((M_{0},g_{0})\) is injective over continuous functions \(f\) in the sense that if \(I^{\alpha}(f)(x,\xi)=0\) for all \((x,\xi)\in\partial_{-}SM_{0}\setminus\Gamma_{-}\) such that \(\gamma_{x,\xi}\) is a non-tangential geodesic, then \(f=0\) in \(M_{0}\). Injectivity of the attenuated geodesic ray transform on _simple_ manifolds for small attenuations \(\alpha\) was established in [9, Theorem 7.1]. A compact, simply connected Riemannian manifold with smooth boundary is said to be simple if its boundary is strictly convex, and no geodesic has conjugate points. When \(\alpha=0\), injectivity of the geodesic ray transform on simple manifolds is well-known, see [24, 31]. The attenuated geodesic ray transform \(I^{\alpha}\) is also known to be injective when some other geometric conditions are imposed. For instance, it was established in [8, Theorem 29] that \(I^{\alpha}\) is injective on spherically symmetric manifolds satisfying the Herglotz condition when the attenuation \(\alpha\) is radially symmetric and Lipschitz continuous. The attenuation is a constant in this paper.
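To make (1.3) concrete, here is a minimal numerical sketch of the transform along a single pre-traced geodesic. It is an illustration under stated assumptions, not part of the paper's analysis: the geodesic is supplied as a callable, the quadrature is a simple trapezoidal rule, and the toy check uses a straight chord of the unit disc (a Euclidean "geodesic").

```python
import numpy as np

def attenuated_ray_transform(f, alpha, gamma, ts):
    """Trapezoidal-rule evaluation of (1.3) along one geodesic.

    gamma : callable t -> point on M_0 (a pre-traced unit-speed geodesic,
            with ts[-1] playing the role of tau_exit).
    f, alpha : callables on M_0; alpha is constant in the paper's setting.
    """
    fv = np.array([f(gamma(t)) for t in ts])
    av = np.array([alpha(gamma(t)) for t in ts])
    dt = np.diff(ts)
    # cumulative attenuation integral int_0^t alpha(gamma(s)) ds
    att = np.concatenate([[0.0], np.cumsum(0.5 * (av[1:] + av[:-1]) * dt)])
    g = np.exp(att) * fv
    return float(np.sum(0.5 * (g[1:] + g[:-1]) * dt))

# Toy check: straight chord of the unit disc with f = 1, alpha = 0.5;
# the exact value is (exp(0.5 * 2) - 1) / 0.5 = 2 (e - 1).
ts = np.linspace(0.0, 2.0, 2001)
chord = lambda t: (t - 1.0, 0.0)
print(attenuated_ray_transform(lambda x: 1.0, lambda x: 0.5, chord, ts))
```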
The Herglotz condition is a special case of a manifold satisfying a convex foliation condition, and in [25] the injectivity of \(I^{\alpha}\) is verified on this type of manifold in dimension \(n\geq 3\). Some examples of manifolds satisfying the global foliation condition are the punctured Euclidean space \(\mathbb{R}^{n}\setminus\{0\}\) and the torus \(\mathbb{T}^{n}\). We refer readers to [25, Section 2] for more examples. The convex foliation condition does not forbid the existence of conjugate points in general. Finally, we discuss the "measurements" made in this paper before presenting our main result. We observe that the limiting Carleman weight \(\varphi(x)\) gives us a canonical way to define the front and back faces of \(\partial M\) and \(\partial Q\). Let \(\nu\) be the outward unit normal vector to \(\partial M\) with respect to the metric \(g\). We denote \(\partial M_{\pm}=\{x\in\partial M:\pm\partial_{\nu}\varphi(x)\geq 0\}\) and \(\Sigma_{\pm}=(0,T)\times\partial M_{\pm}^{\text{int}}\). Then we define \(U=(0,T)\times U^{\prime}\) and \(V=(0,T)\times V^{\prime}\), where \(U^{\prime},V^{\prime}\subset\partial M\) are open neighborhoods of \(\partial M_{+}\), \(\partial M_{-}\), respectively. The goal of this paper is to prove the unique determination of the time-dependent potential \(q(t,x)\), which appears in (1.2), from the following set of partial Cauchy data \[\mathcal{C}_{g,q}=\{(u|_{U},u|_{t=T},\partial_{t}u|_{t=0},\partial_{\nu}u|_{V} ):u\in L^{2}(Q),\,\mathcal{L}_{c,g,q}u=0,u|_{t=0}=0,\,\text{supp}\,\,u|_{ \Sigma}\subset U\}. \tag{1.4}\] The well-posedness of this set has been established in [18, Section 3]. From a physical perspective, as introduced in [17], the inverse problem considered in this paper can be interpreted as the determination of physical properties such as the time-evolving density of an inhomogeneous medium by probing it with disturbances generated on some parts of the boundary and at initial time, and by measuring the response on some parts of the boundary and at the end of the experiment. We highlight that in \(\mathcal{C}_{g,q}\) the Dirichlet value is measured and supported only on roughly half of the lateral boundary \(U\), and the Neumann data is measured on approximately the other half of the lateral boundary \(V\). Measurements are also made at the initial time \(t=0\) and the end time \(t=T\). It follows from the domain of dependence arguments given in [17, Subsection 1.1] that we can only hope to recover general time-dependent coefficients in the optimal set \[\mathcal{D}:=\{(t,x)\in Q:\text{dist}(x,\partial M)<t<T-\text{dist}(x,\partial M )\}\] when only the lateral boundary data \[\mathcal{C}_{g,q}^{\text{lat}}=\{(u|_{\Sigma},\partial_{\nu}u|_{\Sigma}):u\in L ^{2}(Q),\,\mathcal{L}_{c,g,q}u=0,\,u|_{t=0}=\partial_{t}u|_{t=0}=0\} \tag{1.5}\] is given. Hence, even for a large measurement time \(T>0\), global unique recovery of general time-dependent coefficients of the hyperbolic operator (1.2) requires information at the beginning \(\{t=0\}\) and at the end \(\{t=T\}\) of the measurement. The main result of this paper is as follows. **Theorem 1.2**.: _Suppose that \((M,g)\) is a CTA manifold of dimension \(n\geq 3\) and that Assumption 1 holds for the transversal manifold \((M_{0},g_{0})\). Let \(T>0\) and \(q_{i}\in C(\overline{Q})\), \(i=1,2\)._
If \(q_{1}=q_{2}\) on \(\partial Q\), then \(\mathcal{C}_{g,q_{1}}=\mathcal{C}_{g,q_{2}}\) implies that \(q_{1}=q_{2}\) in \(Q\)._ **Remark 1.3**.: _Theorem 1.2 can be viewed as an extension of [17] from the Euclidean space, as well as [18] from CTA manifolds with a simple transversal manifold \(M_{0}\), to more general CTA manifolds. Due to the partial Dirichlet data assumption in \(\mathcal{C}_{g,q}\), Theorem 1.2 does not follow from our recent work [23]._ **Remark 1.4**.: _Theorem 1.2 states the unique determination of continuous potentials from the set of partial Cauchy data \(\mathcal{C}_{g,q}\). This is attributed to the technique presented in this paper since the concentration property of Gaussian beam quasimodes (Proposition 2.2) requires continuity._ **Remark 1.5**.: _Assumption 1 of this paper is different from the literature concerning inverse problems for elliptic operators on CTA manifolds, see for instance [10, 20]. These works assume the invertibility of the geodesic ray transform. In the case of elliptic operators, where there is only one Euclidean direction \(x_{1}\), the authors reduced the problem to the geodesic ray transform and recovered the Taylor expansion of the unknown function by differentiating an expression similar to (4.8) with respect to the variable \(\lambda\) at zero. However, this approach is not applicable in our case as the mapping \((\lambda,\beta)\mapsto-\lambda(\beta,1)\), appearing in (4.8), is a diffeomorphism only if \(\lambda\neq 0\). Thus, computing \(\lambda\) and \(\beta\)-derivatives of (4.8) at \(\lambda=0\) will not give us the Taylor expansion of the unknown potential at the origin._ ### Previous literature In this section we only review some literature concerning the recovery of time-dependent coefficients appearing in hyperbolic equations from boundary measurements. There is also a vast amount of literature about the time-independent case, which has been discussed for instance in [23]. Most of the time-dependent results rely on the use of geometric optics (GO) solutions to the hyperbolic equation. This approach was first implemented in [32] to determine time-dependent coefficients of hyperbolic equations from the knowledge of scattering data by using properties of the light-ray transform. In the Euclidean setting, recovery of a time-dependent potential \(q\) from the full lateral boundary data \(\mathcal{C}^{\rm lat}_{q}\) on the infinite cylinder \(\mathbb{R}\times\Omega\), where \(\Omega\) is a bounded domain, was established in [29]. On a finite cylinder \((0,T)\times\Omega\) with \(T>{\rm diam}(\Omega)\), it was proved in [26] that \(\mathcal{C}^{\rm lat}_{q}\) determines \(q\) uniquely in the optimal subset \(\mathcal{D}\) of \((0,T)\times\Omega\). A uniqueness result for determining a general time-dependent potential \(q\) from the set of partial Cauchy data \(\mathcal{C}_{g,q}\) was established in [17]. Going beyond the Euclidean setting, global unique determination of a time-dependent potential \(q\) from both full and partial boundary measurements was proved in [18] on a CTA manifold \((M,g)\) with a simple transversal manifold \(M_{0}\). In other classes of manifolds, it was recently established in [1] that a set of full Cauchy data determines \(q\) uniquely in Lorentzian manifolds that satisfy certain two-sided curvature bounds and some other geometric assumptions. This curvature bound was weakened in [2] near Minkowski geometry. 
The proof of [1] is based on a new optimal unique continuation theorem and a generalization of the Boundary Control Method, originally developed in [5], to the cases when the dependence of coefficients on time is not analytic. Indeed, the Boundary Control Method, which is a powerful tool to prove uniqueness results for time-independent coefficients appearing in hyperbolic equations [5, 6, 13, 14, 21], is not applicable to recover time-dependent coefficients in general, since it relies on an application of the unique continuation theorem analogous to [33], which may fail without the aforementioned real analyticity assumption, see [3, 4]. Aside from uniqueness results concerning only the potential, there is also some literature about determining time-dependent first order perturbations appearing in hyperbolic equations from boundary measurements. It was established in [16] that the boundary data \(\mathcal{C}_{g,q}\), with \(U=\Sigma\), determines time-dependent damping coefficients and potentials uniquely in the Euclidean setting. Very recently the authors extended this result to the setting of CTA manifolds in [23]. If a full time-dependent vector field perturbation appears in the hyperbolic equation, similar to the magnetic Schrödinger operator, it is only possible to recover the vector field up to a differential of a test function in \(Q\). A global uniqueness result was proved in [11] when the dependence of coefficients on the time variable is real-analytic. This analyticity assumption was removed in [30], which proved a uniqueness result on an infinite cylinder \(\mathbb{R}\times\Omega\), where \(\Omega\) is a bounded domain in \(\mathbb{R}^{n}\). We refer readers to [19] for a global uniqueness result from a partial Dirichlet-to-Neumann map on a finite cylinder \([0,T]\times\Omega\) with \(T>{\rm diam}(\Omega)\). In the Riemannian setting, it was established in [12] that the lateral boundary data \(\mathcal{C}^{\text{lat}}_{g,q}\) determines the first order and the zeroth order perturbations up to the described gauge invariance on a certain non-optimal subset of \(Q\). This result was obtained by reducing the problem to the inversion of the light-ray transform of the Lorentzian metric \(-dt^{2}+g(x)\). The authors of [12] also showed that the light-ray transform is invertible whenever the respective geodesic ray transform on the spatial manifold is invertible. To the best of our knowledge, the global (optimal) recovery of a one-form and a potential function, appearing in a hyperbolic operator, from a set of partial Cauchy data \(\mathcal{C}_{g,q}\) (1.4) (lateral boundary data \(\mathcal{C}^{\text{lat}}_{g,q}\) (1.5)) is still an open problem.

### Outline for the proof of Theorem 1.2

The two main ingredients of the proof are the integral identity (4.2), which was derived in [17, 18] from the set of partial Cauchy data \(\mathcal{C}_{g,q}\), and the construction of complex geometric optics (CGO) solutions.
Specifically, we shall construct a family of exponentially decaying solutions \(u_{1}\) to the equation \(\mathcal{L}^{*}_{c,g,q}u_{1}=0\) of the form \[u_{1}(t,x)=e^{-s(\beta t+\varphi(x))}(v_{s}(t,x)+r_{1}(t,x)),\quad(t,x)\in Q.\] On the other hand, due to the restrictions \(\text{supp }u|_{\Sigma}\subset U\) and \(u|_{t=0}=0\) in \(\mathcal{C}_{g,q}\), we need to construct a family of exponentially growing solutions \(u_{2}\) to the equation \(\mathcal{L}_{c,g,q}u_{2}=0\), which look like \[u_{2}(t,x)=e^{s(\beta t+\varphi(x))}(w_{s}(t,x)+r_{2}(t,x)),\quad(t,x)\in Q,\] and satisfy these two boundary conditions. Here \(s=\frac{1}{h}+i\lambda\) is a complex number, \(h\in(0,1)\) is a semiclassical parameter, \(\lambda\in\mathbb{R}\) and \(\beta\in(\frac{1}{\sqrt{3}},1)\) are some fixed numbers, \(v_{s}\) and \(w_{s}\) are Gaussian beam quasimodes, \(r_{1}\) and \(r_{2}\) are correction terms that vanish in the limit \(h\to 0\), and the function \(\varphi(x)=x_{1}\) is a limiting Carleman weight on \(M\). We choose the values of \(\beta\) as above because the construction of \(r_{1}\) relies on an application of an interior Carleman estimate [23, Proposition 3.6]. This is derived from a boundary Carleman estimate [23, Proposition 3.1], which is valid for \(\beta\in(\frac{1}{\sqrt{3}},1)\). For the construction of \(r_{2}\), we may take \(\beta\in[\frac{1}{2},1]\), see [18, Theorem 4.1]. Since the transversal manifold \((M_{0},g_{0})\) is not necessarily simple, the approach based on global GO solutions is not applicable. In Proposition 2.1 we construct Gaussian beam quasimodes for every non-tangential geodesic in the transversal manifold \(M_{0}\) by using techniques originally developed in solving inverse problems for elliptic operators, see for instance [7, 10, 20, 34], as well as [23] for hyperbolic operators. The quasimodes concentrate on the geodesic in the semiclassical limit \(h\to 0\), as we shall explain in Proposition 2.2. The construction of the remainder terms \(r_{1}\) and \(r_{2}\) is given in Section 3. Here \(r_{1}\) needs to have a stronger decay property, namely, being \(\mathcal{O}(h^{1/2})\) with respect to the semiclassical \(H^{1}\)-norm. This is achieved with an interior Carleman estimate [23, Proposition 3.6]. In order to find a remainder \(r_{2}\) such that \(u_{2}\) satisfies the required boundary conditions, we follow a different approach, which was developed in [17, 18]. To complete the proof of Theorem 1.2, we shall substitute the CGO solutions (3.1) and (3.5) into the integral identity (4.2) and pass to the limit \(h\to 0\). Lemma 4.1 implies that the right-hand side of (4.2) vanishes in the limit \(h\to 0\). The proof of this lemma requires a decay in \(H^{1}\)-norm (3.2) for \(r_{1}\). Meanwhile, estimates (2.5), (3.2), and (3.6), in conjunction with the concentration property of Gaussian beam quasimodes (Proposition 2.2), yield that the left-hand side of (4.2) converges to the attenuated geodesic ray transform involving the function \(q_{1}-q_{2}\) in the limit \(h\to 0\). This is the reason why we need Assumption 1 to complete the proof. The paper is organized as follows. We begin with the construction of Gaussian beam quasimodes in Section 2. In Section 3 we construct both exponentially decaying and growing CGO solutions. Finally, we present the proof of Theorem 1.2 in Section 4.

### Acknowledgments

We would like to express our gratitude to Katya Krupchyk, Joonas Ilmavirta, and Hanming Zhou for valuable discussions and suggestions. T.S.
is partially supported by the National Science Foundation (DMS 2204997). L.Y. is partially supported by the National Science Foundation (DMS 2109199).

## 2. Construction of Gaussian Beam Quasimodes

Let \((M,g)\) be a CTA manifold given by Definition 1.1 and \(T>0\). The goal of this section is to construct Gaussian beam quasimodes with desirable concentration properties. Gaussian beam quasimodes have been utilized extensively to solve inverse problems on Riemannian manifolds. We refer readers to [7, 9, 10, 20, 34] for applications to elliptic operators and [12, 14, 23] for hyperbolic operators. To streamline the construction, we first note that due to the conformal properties of the Laplace-Beltrami operator explained in [10], we have \[c^{\frac{n+2}{4}}(-\Delta_{g})(c^{-\frac{n-2}{4}}u)=-(\Delta_{\widetilde{g}}-\big{(}c^{\frac{n+2}{4}}\Delta_{g}(c^{-\frac{n-2}{4}})\big{)})u. \tag{2.1}\] Also, since \(c\) is independent of the time variable \(t\), we get \[c^{\frac{n+2}{4}}\partial_{t}^{2}(c^{-\frac{n-2}{4}}u)=c\partial_{t}^{2}u. \tag{2.2}\] Thus, equations (2.1) and (2.2) yield the following identity for the operator \(\mathcal{L}_{c,g,q}\): \[c^{\frac{n+2}{4}}\circ\mathcal{L}_{c,g,q}\circ c^{-\frac{n-2}{4}}=\mathcal{L}_{\widetilde{g},\widetilde{q}}, \tag{2.3}\] where \[\widetilde{g}=e\oplus g_{0}\quad\text{and}\quad\widetilde{q}=c(q-c^{\frac{n-2}{4}}\Delta_{g}(c^{-\frac{n-2}{4}})). \tag{2.4}\] Hence, by replacing the metric \(g\) and coefficient \(q\) with \(\widetilde{g}\) and \(\widetilde{q}\), respectively, we may assume that the conformal factor \(c=1\). In this section we shall use this assumption and consider the leading order wave operator \(\square_{e\oplus g_{0}}=\partial_{t}^{2}-\Delta_{e\oplus g_{0}}\). For simplicity, let us write \(\mathcal{L}_{g,q}\) for \(\mathcal{L}_{c,g,q}\) with \(c=1\). Also, throughout the rest of this paper we shall denote by \(\mathcal{L}_{g,q}^{*}=\mathcal{L}_{g,\overline{q}}\) the formal \(L^{2}\)-adjoint of the operator \(\mathcal{L}_{g,q}\). We are now ready to state and prove the main result of this section.

**Proposition 2.1**.: _Let \((M,g)\) be a smooth CTA manifold with boundary, \(T>0\), and let \(s=\frac{1}{h}+i\lambda\), \(0<h\ll 1\), \(\lambda\in\mathbb{R}\), and \(\beta\in(0,1)\) fixed. Let \(q\in C(\overline{Q})\). Then for every unit speed non-tangential geodesic \(\gamma\) of the transversal manifold \((M_{0},g_{0})\), there exists a one-parameter family of Gaussian beam quasimodes \(v_{s}\in C^{\infty}(M_{0})\) such that the estimates_ \[\begin{split}&\|v_{s}\|_{L^{2}(M_{0})}=\mathcal{O}(1),\quad\|e^{s(\beta t+x_{1})}h^{2}\mathcal{L}_{g,q}^{*}e^{-s(\beta t+x_{1})}v_{s}\|_{L^{2}(Q)}=\mathcal{O}(h^{3/2}),\\ &\|e^{-s(\beta t+x_{1})}h^{2}\mathcal{L}_{g,q}e^{s(\beta t+x_{1})}v_{s}\|_{L^{2}(Q)}=\mathcal{O}(h^{3/2})\end{split} \tag{2.5}\] _hold as \(h\to 0\)._

Proof.: Let \(L>0\) be the length of the geodesic \(\gamma=\gamma(\tau)\). By [22, Example 9.32], we may embed \((M_{0},g_{0})\) into a larger closed manifold \((\widehat{M}_{0},g_{0})\) of the same dimension. Also, we extend \(\gamma\) as a unit speed geodesic in \(\widehat{M}_{0}\). Since \(\gamma\) is non-tangential, we can choose \(\varepsilon>0\) such that \(\gamma(\tau)\in\widehat{M}_{0}\setminus M_{0}\) and \(\gamma\) does not self-intersect for \(\tau\in[-2\varepsilon,0)\cup(L,L+2\varepsilon]\). Our goal is to construct Gaussian beam quasimodes near \(\gamma([-\varepsilon,L+\varepsilon])\).
We start by fixing a point \(z_{0}=\gamma(\tau_{0})\) on \(\gamma([-\varepsilon,L+\varepsilon])\) and construct the quasimode locally near \(z_{0}\). Let \((\tau,y)\in\Omega:=\{(\tau,y)\in\mathbb{R}\times\mathbb{R}^{n-2}:|\tau-\tau_{0 }|<\delta,\,|y|<\delta^{\prime}\}\), \(\delta,\delta^{\prime}>0\), be Fermi coordinates near \(z_{0}\), see [15, Lemma 7.4]. We may assume that the coordinates \((\tau,y)\) extend smoothly to a neighborhood of \(\overline{\Omega}\). We observe that near \(z_{0}=\gamma(\tau_{0})\) the trace of the geodesic \(\gamma\) is given by the set \(\Gamma=\{(\tau,0):|\tau-\tau_{0}|<\delta\}\), and in these Fermi coordinates we have \[g_{0}^{jk}(\tau,0)=\delta^{jk}\quad\text{and}\quad\partial_{y_{l}}g_{0}^{jk}( \tau,0)=0. \tag{2.6}\] Hence, it follows from Taylor's theorem that for small \(|y|\) we can write \[g_{0}^{jk}(\tau,y)=\delta^{jk}+\mathcal{O}(|y|^{2}). \tag{2.7}\] We first construct quasimodes \(v_{s}\) for the conjugated operator \(e^{s(\beta t+x_{1})}\mathcal{L}_{g,q}^{*}e^{-s(\beta t+x_{1})}\). To that end, let us consider a Gaussian beam ansatz \[v_{s}(\tau,y;h)=e^{is\Theta(\tau,y)}b(\tau,y;h). \tag{2.8}\] Compared to [23], the quasimode \(v_{s}\) constructed in this paper is independent of the Euclidean variables \((t,x_{1})\). Our goal is to find a phase function \(\Theta\in C^{\infty}(\Omega,\mathbb{C})\) such that \[\text{Im}\Theta\geq 0,\quad\text{Im}\Theta|_{\Gamma}=0,\quad\text{Im}\Theta( \tau,y)\sim|y|^{2}, \tag{2.9}\] as well as an amplitude \(b\in C^{\infty}(\Omega,\mathbb{C})\) such that \(\text{supp }(b(\tau,\cdot))\subset\{|y|<\delta^{\prime}/2\}\). We shall follow the ideas originally presented in [14, 28]. Since \(\Theta\) is independent of the Euclidean variables \((t,x_{1})\) and \(g=e\oplus g_{0}\), we have \[e^{-is\Theta}\partial_{t}^{2}(e^{is\Theta}b)=0 \tag{2.10}\] and \[e^{-is\Theta}(-\Delta_{g})e^{is\Theta}b=-\Delta_{g_{0}}b-is[2\langle\nabla_{g_ {0}}\Theta,\nabla_{g_{0}}b\rangle_{g_{0}}+(\Delta_{g_{0}}\Theta)b]+s^{2} \langle\nabla_{g_{0}}\Theta,\nabla_{g_{0}}\Theta\rangle_{g_{0}}b. \tag{2.11}\] Therefore, we obtain from (2.10) and (2.11) that \[\begin{split} e^{s(\beta t+x_{1})}h^{2}\mathcal{L}_{g,q}^{*}e^{-s (\beta t+x_{1})}v_{s}=& h^{2}e^{is\Theta}[s^{2}(\langle\nabla_{g_{0}} \Theta,\nabla_{g_{0}}\Theta\rangle_{g_{0}}-(1-\beta^{2}))b\\ &+s(-2i\langle\nabla_{g_{0}}\Theta,\nabla_{g_{0}}b\rangle_{g_{0} }-i(\Delta_{g_{0}}\Theta)b)\\ &+(-\Delta_{g_{0}}+\overline{q})b].\end{split} \tag{2.12}\] From the computation above, we see that in order to verify the estimates in (2.5), we need to find a phase function \(\Theta\) and an amplitude \(b\) such that they approximately solve the eikonal and transport equations appearing on right-hand side of (2.12) as multipliers of the terms \(s^{2}\) and \(s\), respectively. Following similar arguments in [10, 14, 20, 27, 28], we aim to find \(\Theta(\tau,y)\in C^{\infty}(\Omega,\mathbb{C})\) such that \[\langle\nabla_{g_{0}}\Theta,\nabla_{g_{0}}\Theta\rangle_{g_{0}}-(1-\beta^{2}) =\mathcal{O}(|y|^{3}),\quad y\to 0, \tag{2.13}\] and \[\text{Im}\Theta\geq d|y|^{2} \tag{2.14}\] for some constant \(d>0\) that depends on \(\beta\). As in [10, 27, 28], we choose \[\Theta(\tau,y)=\sqrt{1-\beta^{2}}(\tau+\frac{1}{2}H(\tau)y\cdot y), \tag{2.15}\] where the smooth complex-valued symmetric matrix \(H(\tau)\) is the unique solution of the initial value problem for the matrix Riccati equation \[\dot{H}(\tau)+H(\tau)^{2}=F(\tau),\quad H(\tau_{0})=H_{0},\quad\text{ for }\tau\in\mathbb{R}. 
\tag{2.16}\] Here \(\text{Im}H(\tau)\) is positive definite, \(H_{0}\) is a complex symmetric matrix such that \(\text{Im}(H_{0})\) is positive definite, and \(F(\tau)\) is a suitable symmetric matrix. We refer readers to [14, Lemma 2.56] for details. We next look for an amplitude \(b\) of the form \[b(\tau,y;h)=h^{-\frac{n-2}{4}}b_{0}(\tau)\chi(y/\delta^{\prime}), \tag{2.17}\] where \(b_{0}\in C^{\infty}([\tau_{0}-\delta,\tau_{0}+\delta])\) depends only on the travel time \(\tau\) and satisfies the approximate transport equation \[-2i\langle\nabla_{g_{0}}\Theta,\nabla_{g_{0}}b_{0}\rangle_{g_{0}}-i(\Delta_{g_{0}}\Theta)b_{0}=\mathcal{O}(|y|), \tag{2.18}\] and the cut-off function \(\chi\in C^{\infty}_{0}(\mathbb{R}^{n-2})\) is such that \(\chi=1\) for \(|y|\leq 1/4\) and \(\chi=0\) for \(|y|\geq 1/2\). In order to find \(b_{0}\) that satisfies (2.18), we first compute \(\langle\nabla_{g_{0}}\Theta,\nabla_{g_{0}}b_{0}\rangle_{g_{0}}\). To this end, we deduce from (2.15) that \[\partial_{\tau}\Theta(\tau,y)=\sqrt{1-\beta^{2}}+\mathcal{O}(|y|^{2}). \tag{2.19}\] Therefore, we get from (2.7) that \[\langle\nabla_{g_{0}}\Theta,\nabla_{g_{0}}b_{0}\rangle_{g_{0}}=\sqrt{1-\beta^{2}}\partial_{\tau}b_{0}+\mathcal{O}(|y|^{2})\partial_{\tau}b_{0}. \tag{2.20}\] We next compute \(\Delta_{g_{0}}\Theta\) near the geodesic \(\gamma\). To that end, we deduce from (2.7) and (2.15) that \[(\Delta_{g_{0}}\Theta)(\tau,0)=\sqrt{1-\beta^{2}}\delta^{jk}H_{jk}=\sqrt{1-\beta^{2}}\text{tr}\,H(\tau).\] This implies that \[(\Delta_{g_{0}}\Theta)(\tau,y)=\sqrt{1-\beta^{2}}\text{tr}\,H(\tau)+\mathcal{O}(|y|). \tag{2.21}\] To achieve (2.18), we require that \(b_{0}(\tau)\) satisfies \[\partial_{\tau}b_{0}=-\frac{1}{2}\text{tr}\,H(\tau)b_{0}. \tag{2.22}\] Hence, we have \[b_{0}(\tau)=e^{f_{1}(\tau)},\quad\text{where}\quad\partial_{\tau}f_{1}(\tau)=-\frac{1}{2}\text{tr}\,H(\tau).\] Finally, we get (2.18) from (2.19)-(2.22) due to the \(y\)-independence of \(b_{0}\). We next prove the estimates in (2.5) for the quasimode \[v_{s}(\tau,y;h)=e^{is\Theta(\tau,y)}b(\tau,y;h)=e^{is\Theta(\tau,y)}h^{-\frac{n-2}{4}}b_{0}(\tau)\chi(y/\delta^{\prime}) \tag{2.23}\] locally in \(\Omega\), where \(\Omega\subset M_{0}\) is the domain of Fermi coordinates near the point \(z_{0}=\gamma(\tau_{0})\). To proceed, we shall need the following estimate for any \(k\in\mathbb{R}\): \[\begin{split}\|h^{-\frac{n-2}{4}}|y|^{k}e^{-\frac{\text{Im}\Theta}{h}}\|_{L^{2}(|y|\leq\delta^{\prime}/2)}&\leq\|h^{-\frac{n-2}{4}}|y|^{k}e^{-\frac{d}{h}|y|^{2}}\|_{L^{2}(|y|\leq\delta^{\prime}/2)}\\ &\leq\bigg{(}\int_{\mathbb{R}^{n-2}}h^{k}|z|^{2k}e^{-2d|z|^{2}}dz\bigg{)}^{1/2}=\mathcal{O}(h^{k/2}),\quad h\to 0.\end{split} \tag{2.24}\] Here we applied estimate (2.14) and the change of variable \(z=h^{-1/2}y\). Then it follows from (2.14) and (2.24) with \(k=0\) that \[\begin{split}\|v_{s}\|_{L^{2}(\Omega)}&\leq\|b_{0}\|_{L^{\infty}([\tau_{0}-\delta,\tau_{0}+\delta])}\|e^{is\Theta}h^{-\frac{n-2}{4}}\chi(y/\delta^{\prime})\|_{L^{2}(\Omega)}\\ &\leq\mathcal{O}(1)\|h^{-\frac{n-2}{4}}e^{-\frac{d}{h}|y|^{2}}\|_{L^{2}(|y|\leq\delta^{\prime}/2)}=\mathcal{O}(1),\quad h\to 0.\end{split} \tag{2.25}\] We now proceed to estimate \(\|e^{s(\beta t+x_{1})}\mathcal{L}_{g,\overline{q}}e^{-s(\beta t+x_{1})}v_{s}\|_{L^{2}(\Omega)}\), which requires estimating each term on the right-hand side of (2.12).
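Before carrying out these term-by-term estimates, let us record why the Riccati equation (2.16) produces the approximate eikonal equation (2.13); the following is only a sketch, in which the quadratic Taylor coefficients of \(g_{0}^{jk}\) along \(\Gamma\) are regarded as absorbed into the choice of the symmetric matrix \(F(\tau)\), in the spirit of [14, Lemma 2.56]. From (2.15) we compute \[\partial_{\tau}\Theta=\sqrt{1-\beta^{2}}\Big{(}1+\frac{1}{2}\dot{H}(\tau)y\cdot y\Big{)},\qquad\partial_{y_{j}}\Theta=\sqrt{1-\beta^{2}}(H(\tau)y)_{j},\] so that, by (2.7) and the symmetry of \(H(\tau)\), \[\langle\nabla_{g_{0}}\Theta,\nabla_{g_{0}}\Theta\rangle_{g_{0}}-(1-\beta^{2})=(1-\beta^{2})\big{(}(\dot{H}(\tau)+H(\tau)^{2})y\cdot y\big{)}+(g_{0}^{jk}-\delta^{jk})\partial_{j}\Theta\,\partial_{k}\Theta+\mathcal{O}(|y|^{3}).\] By (2.16) the first term on the right equals \((1-\beta^{2})F(\tau)y\cdot y\), and \(F(\tau)\) is chosen precisely so that this cancels the quadratic part of the metric term, leaving the \(\mathcal{O}(|y|^{3})\) error in (2.13). In the model case when \(g_{0}\) is flat along \(\Gamma\) one may take \(F=0\), and the Riccati equation is solved explicitly by \(H(\tau)=(H_{0}^{-1}+(\tau-\tau_{0})I)^{-1}\); for instance, \(H_{0}=iI\) gives \(\text{Im}\,H(\tau)=(1+(\tau-\tau_{0})^{2})^{-1}I>0\), illustrating (2.14).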
For the first term, by utilizing (2.13), (2.14), and (2.24) with \(k=3\), we obtain \[\begin{split}& h^{2}\|e^{is\Theta}s^{2}(\langle\nabla_{g_{0}}\Theta,\nabla_{g_{0}}\Theta\rangle_{g_{0}}-(1-\beta^{2}))b\|_{L^{2}(\Omega)}\\ &=h^{2}\|e^{is\Theta}s^{2}h^{-\frac{n-2}{4}}(\langle\nabla_{g_{0}}\Theta,\nabla_{g_{0}}\Theta\rangle_{g_{0}}-(1-\beta^{2}))b_{0}\chi(y/\delta^{\prime})\|_{L^{2}(\Omega)}\\ &\leq\mathcal{O}(1)\|h^{-\frac{n-2}{4}}|y|^{3}e^{-\frac{d}{h}|y|^{2}}\|_{L^{2}(|y|\leq\delta^{\prime}/2)}=\mathcal{O}(h^{3/2}),\quad h\to 0.\end{split} \tag{2.26}\] We next estimate the second term on the right-hand side of (2.12). From a direct computation, we get \[|e^{is\Theta}|=e^{-\frac{1}{h}\text{Im}\Theta}e^{-\lambda\text{Re}\Theta}=e^{-\frac{\sqrt{1-\beta^{2}}}{2h}\text{Im}H(\tau)y\cdot y}e^{-\lambda\sqrt{1-\beta^{2}}\tau}e^{-\lambda\mathcal{O}(|y|^{2})}.\] We observe that \(e^{-\frac{1}{h}}=\mathcal{O}(h^{\infty})\). Here we say that \(f=\mathcal{O}(h^{\infty})\) if \(f=\mathcal{O}(h^{n})\) for every \(n\in\mathbb{N}\). Therefore, it follows from (2.14) that on the support of \(\nabla_{g_{0}}\chi(y/\delta^{\prime})\) we have \[|e^{is\Theta}|\leq e^{-\frac{\widetilde{d}}{h}}\quad\text{for some }\widetilde{d}>0.\] Thus, using equation (2.18), estimate (2.24) with \(k=1\), along with the triangle inequality, we have \[\begin{split}& h^{2}\|e^{is\Theta}s(-2i\langle\nabla_{g_{0}}\Theta,\nabla_{g_{0}}b\rangle_{g_{0}}-i(\Delta_{g_{0}}\Theta)b)\|_{L^{2}(\Omega)}\\ &\leq\mathcal{O}(h)\|e^{is\Theta}h^{-\frac{n-2}{4}}[|y|\chi(y/\delta^{\prime})-2i\langle\nabla_{g_{0}}\Theta,\nabla_{g_{0}}\chi(y/\delta^{\prime})\rangle_{g_{0}}]\|_{L^{2}(\Omega)}\\ &\leq\mathcal{O}(h)\|h^{-\frac{n-2}{4}}|y|e^{-\frac{d}{h}|y|^{2}}\|_{L^{2}(|y|\leq\delta^{\prime}/2)}+\mathcal{O}(e^{-\frac{\widetilde{d}}{h}})\\ &=\mathcal{O}(h^{3/2}),\quad h\to 0.\end{split} \tag{2.27}\] Lastly, we estimate the third term on the right-hand side of (2.12). Since the amplitude \(b\) is independent of \(t\), it suffices to estimate the term involving \(\Delta_{g}\) and the lower order term. To that end, we apply estimate (2.24) with \(k=0\) to get \[h^{2}\|e^{is\Theta}(-\Delta_{g}b)\|_{L^{2}(\Omega)}\leq\mathcal{O}(h^{2})\|h^{-\frac{n-2}{4}}e^{-\frac{d}{h}|y|^{2}}\|_{L^{2}(|y|\leq\delta^{\prime}/2)}=\mathcal{O}(h^{2}),\quad h\to 0. \tag{2.28}\] For the lower order term, it follows from (2.24) with \(k=0\) that \[h^{2}\|e^{is\Theta}\overline{q}b\|_{L^{2}(\Omega)}=\mathcal{O}(h^{2}),\quad h\to 0. \tag{2.29}\] Therefore, by combining estimates (2.26)-(2.29), we conclude from (2.12) that \[\|e^{s(\beta t+x_{1})}h^{2}\mathcal{L}_{g,q}^{*}e^{-s(\beta t+x_{1})}v_{s}\|_{L^{2}(\Omega)}=\mathcal{O}(h^{3/2}),\quad h\to 0. \tag{2.30}\] This completes the verification of (2.5) locally in the set \(\Omega\). To complete the construction of the quasimode \(v_{s}\) on the transversal manifold \(M_{0}\), we glue together the quasimodes defined along small pieces of the geodesic \(\gamma\).
Since \(\widehat{M}_{0}\) is compact and \(\gamma(\tau):(-2\varepsilon,L+2\varepsilon)\to\widehat{M}_{0}\) is a unit speed non-tangential geodesic that is not a loop, we get from [15, Lemma 7.2] that \(\gamma|_{(-2\varepsilon,L+2\varepsilon)}\) self-intersects at times \(\tau_{j}\), where \(j\in\{1,\ldots,N\}\), and \[-\varepsilon=\tau_{0}<\tau_{1}<\cdots<\tau_{N}<\tau_{N+1}=L+\varepsilon.\] By [15, Lemma 7.4], there exists an open cover \(\{(\Omega_{j},\kappa_{j})_{j=0}^{N+1}\}\) of \(\gamma([-\varepsilon,L+\varepsilon])\) consisting of coordinate neighborhoods that have the following properties:

1. \(\kappa_{j}(\Omega_{j})=I_{j}\times B\), where \(I_{j}\) are open intervals and \(B=B(0,\delta^{\prime})\) is an open ball in \(\mathbb{R}^{n-2}\). Here \(\delta^{\prime}>0\) can be taken arbitrarily small and the same for each \(\Omega_{j}\).
2. \(\kappa_{j}(\gamma(\tau))=(\tau,0)\) for \(\tau\in I_{j}\).
3. \(\tau_{j}\) only belongs to \(I_{j}\) and \(\overline{I_{j}}\cap\overline{I_{k}}=\emptyset\) unless \(|j-k|\leq 1\).
4. \(\kappa_{j}=\kappa_{k}\) on \(\kappa_{j}^{-1}((I_{j}\cap I_{k})\times B)\).

As explained in [15, Lemma 7.4], the intervals \(I_{j}\) can be chosen as \[I_{0}=(-2\varepsilon,\tau_{1}-\widetilde{\delta}),\quad I_{j}=(\tau_{j}-2\widetilde{\delta},\tau_{j+1}-\widetilde{\delta}),\,j=1,\ldots,N,\quad I_{N+1}=(\tau_{N+1}-2\widetilde{\delta},L+2\varepsilon)\] for some \(\widetilde{\delta}>0\) small enough. When \(\gamma\) does not self-intersect, there is a single coordinate neighborhood of \(\gamma|_{[-\varepsilon,L+\varepsilon]}\) such that (1) and (2) are satisfied.

We proceed as follows to construct the quasimode \(v_{s}\). Suppose first that \(\gamma\) does not self-intersect at \(\tau=0\). By following the arguments from the earlier part of this proof, we find a quasimode \[v_{s}^{(0)}(\tau,y;h)=h^{-\frac{n-2}{4}}e^{is\Theta^{(0)}(\tau,y)}e^{f_{1}(\tau)}\chi(y/\delta^{\prime})\] in \(\Omega_{0}\) with some fixed initial conditions at \(\tau=-\varepsilon\) for the Riccati equation (2.16) determining \(\Theta^{(0)}\). Next we choose some \(\tau_{0}^{\prime}\) such that \(\gamma(\tau_{0}^{\prime})\in\Omega_{0}\cap\Omega_{1}\) and let \[v_{s}^{(1)}(\tau,y;h)=h^{-\frac{n-2}{4}}e^{is\Theta^{(1)}(\tau,y)}e^{f_{1}(\tau)}\chi(y/\delta^{\prime})\] be a quasimode in \(\Omega_{1}\) by choosing the initial conditions for (2.16) such that \(\Theta^{(1)}(\tau_{0}^{\prime})=\Theta^{(0)}(\tau_{0}^{\prime})\). Here we have used the same function \(f_{1}\) in both \(v_{s}^{(0)}\) and \(v_{s}^{(1)}\) since \(f_{1}\) is globally defined for all \(\tau\in(-2\varepsilon,L+2\varepsilon)\) and does not depend on \(y\). Moreover, since the equations determining the phase functions \(\Theta^{(0)}\) and \(\Theta^{(1)}\) have the same initial data in \(\Omega_{0}\) and in \(\Omega_{1}\), and the local coordinates \(\kappa_{0}\) and \(\kappa_{1}\) coincide on \(\kappa_{0}^{-1}((I_{0}\cap I_{1})\times B)\), we get \(\Theta^{(1)}=\Theta^{(0)}\) in \(I_{0}\cap I_{1}\). Therefore, we conclude that \(v_{s}^{(0)}=v_{s}^{(1)}\) in the overlapped region \(\Omega_{0}\cap\Omega_{1}\). Continuing in this way, we obtain quasimodes \(v_{s}^{(2)},\ldots,v_{s}^{(N+1)}\) such that \[v_{s}^{(j)}=v_{s}^{(j+1)}\quad\text{in}\quad\Omega_{j}\cap\Omega_{j+1}. \tag{2.31}\] If \(\gamma\) self-intersects at \(\tau=0\), we start the construction from \(v_{s}^{(1)}\) by fixing initial conditions for (2.16) at \(\tau=\tau_{0}^{\prime}\in I_{1}\) and find \(v_{s}^{(0)}\) by going backwards.
Let \(\chi_{j}(\tau)\) be a partition of unity subordinate to \(\{I_{j}\}_{j=0}^{N+1}\). We denote \(\widetilde{\chi}_{j}(\tau,y)=\chi_{j}(\tau)\) and define \[v_{s}=\sum_{j=0}^{N+1}\widetilde{\chi}_{j}v_{s}^{(j)}.\] Then we get \(v_{s}\in C^{\infty}(M_{0})\). Let \(z_{1},\ldots,z_{R}\in M_{0}\) be distinct self-intersection points of \(\gamma\), and let \(0\leq\tau_{1}<\cdots<\tau_{N}\), \(R\leq N\), be the times of self-intersections. Let \(V_{j}\) be a small neighborhood in \(\widehat{M}_{0}\) centered at \(z_{j}\), \(j=1,\ldots,R\). Following the steps in [15], for \(\delta^{\prime}\) sufficiently small we can pick a finite cover \(W_{1},\ldots,W_{S}\) of the remaining points on the geodesic such that \(W_{k}\subset\Omega_{l(k)}\) for some index \(l(k)\) and \[\text{supp }v_{s}\cap M_{0}\subset(\cup_{j=1}^{R}V_{j})\cup(\cup_{k=1}^{S}W_{k}).\] Moreover, the quasimode restricted to \(V_{j}\) and \(W_{k}\) is of the form \[v_{s}|_{V_{j}}=\sum_{l:\gamma(\tau_{l})=z_{j}}v_{s}^{(l)} \tag{2.32}\] and \[v_{s}|_{W_{k}}=v_{s}^{(l(k))}, \tag{2.33}\] respectively. Since \(v_{s}\) is a finite sum of \(v_{s}^{(l)}\) in each case, the first and second estimates in (2.5) follow from the corresponding local considerations (2.25) and (2.30) for each of the \(v_{s}^{(l)}\), respectively.

We next construct a Gaussian beam quasimode for the operator \(e^{-s(\beta t+x_{1})}\mathcal{L}_{g,q}e^{s(\beta t+x_{1})}\) of the form \(w_{s}(\tau,y;h)=e^{is\Theta(\tau,y)}B(\tau,y;h)\) with the same phase function \(\Theta\in C^{\infty}(\Omega,\mathbb{C})\) that satisfies (2.9), where \(B(\tau,y)\in C^{\infty}(\Omega)\) is supported near \(\Gamma\). By similar computations as in (2.12), we have \[\begin{split} e^{-s(\beta t+x_{1})}\mathcal{L}_{g,q}e^{s(\beta t+x_{1})}w_{s}=& e^{is\Theta}[s^{2}(\langle\nabla_{g_{0}}\Theta,\nabla_{g_{0}}\Theta\rangle_{g_{0}}-(1-\beta^{2}))B\\ &+s(-2i\langle\nabla_{g_{0}}\Theta,\nabla_{g_{0}}B\rangle_{g_{0}}-i(\Delta_{g_{0}}\Theta)B)\\ &+(-\Delta_{g_{0}}+q)B].\end{split} \tag{2.34}\] Notice that the eikonal and transport equations for \(B\) coincide with the corresponding equations for \(b\). Therefore, we get \(B(\tau,y;h)=b(\tau,y;h)\). Furthermore, since the phase function \(\Theta\) is the same for \(v_{s}\) and \(w_{s}\), we have \(w_{s}=v_{s}\). Finally, we obtain the third estimate in (2.5) by arguing similarly as in the verification of the second estimate in (2.5). This completes the proof of Proposition 2.1.

We want the Gaussian beam quasimodes to concentrate along the geodesic as \(h\to 0\). By following the same arguments as in the proof of [10, Proposition 3.1], we have the following result.

**Proposition 2.2**.: _Let \(s=\frac{1}{h}+i\lambda\), \(0<h\ll 1\), \(\lambda\in\mathbb{R}\) fixed, and \(\beta\in(\frac{1}{\sqrt{3}},1)\). Let \(\gamma\colon[0,L]\to M_{0}\) be a non-tangential geodesic in \((M_{0},g_{0})\) as in Proposition 2.1. Let \(v_{s}\) be the quasimode from Proposition 2.1. Then for each \(\psi\in C(M_{0})\) and \((t^{\prime},x_{1}^{\prime})\in[0,T]\times\mathbb{R}\) we have_ \[\lim_{h\to 0}\int_{\{t^{\prime}\}\times\{x_{1}^{\prime}\}\times M_{0}}|v_{s}|^{2}\psi dV_{g_{0}}=\int_{0}^{L}e^{-2\sqrt{1-\beta^{2}}\lambda\tau}\psi(\gamma(\tau))d\tau. \tag{2.35}\]
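Although we do not reproduce the proof of Proposition 2.2 here, the mechanism behind the limit (2.35) can be sketched in a few lines; this is a heuristic computation in the local Fermi coordinates of (2.23), with the full details in [10, Proposition 3.1]. Since \(|e^{is\Theta}|=e^{-\frac{1}{h}\text{Im}\Theta}e^{-\lambda\text{Re}\Theta}\) and \(\text{Re}\,\Theta=\sqrt{1-\beta^{2}}\tau+\mathcal{O}(|y|^{2})\), we have \[|v_{s}|^{2}=h^{-\frac{n-2}{2}}|b_{0}(\tau)|^{2}\chi^{2}(y/\delta^{\prime})e^{-2\sqrt{1-\beta^{2}}\lambda\tau}e^{-\frac{\sqrt{1-\beta^{2}}}{h}\text{Im}H(\tau)y\cdot y}(1+\mathcal{O}(|y|^{2})),\] and after the change of variables \(y=h^{1/2}z\) the Gaussian factor integrates out, \[\int_{M_{0}}|v_{s}|^{2}\psi\,dV_{g_{0}}\to\int_{0}^{L}e^{-2\sqrt{1-\beta^{2}}\lambda\tau}\psi(\gamma(\tau))|b_{0}(\tau)|^{2}\bigg{(}\int_{\mathbb{R}^{n-2}}e^{-\sqrt{1-\beta^{2}}\,\text{Im}H(\tau)z\cdot z}\,dz\bigg{)}d\tau,\quad h\to 0.\] One can check, using the transport equation (2.22) together with the Riccati equation (2.16), that \(|b_{0}(\tau)|^{2}\det(\text{Im}H(\tau))^{-1/2}\) is constant in \(\tau\), so after a harmless normalization of \(b_{0}\) the product of \(|b_{0}(\tau)|^{2}\) and the factor in parentheses is identically one, and (2.35) follows.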
## 3. Construction of Complex Geometric Optics Solutions

In this section we construct a family of exponentially decaying solutions \(u_{1}\in H^{1}(Q)\) of the form \[u_{1}(t,x)=e^{-s(\beta t+\varphi(x))}(v_{s}(x^{\prime})+r_{1}(t,x)),\quad(t,x)\in Q,\] as well as a family of exponentially growing solutions \(u_{2}\in H_{\square_{c,g}}(Q):=\{u\in L^{2}(Q):\square_{c,g}u\in L^{2}(Q)\}\) given by \[u_{2}(t,x)=e^{s(\beta t+\varphi(x))}(v_{s}(x^{\prime})+r_{2}(t,x)),\quad(t,x)\in Q,\] satisfying \(\text{supp }u_{2}|_{\Sigma}\subset U\) and \(u_{2}|_{t=0}=0\). Here \(s=\frac{1}{h}+i\lambda\) with \(\lambda\in\mathbb{R}\) fixed, \(\varphi(x)=x_{1}\) is a limiting Carleman weight, \(v_{s}\) is the Gaussian beam quasimode given in Proposition 2.1, and \(r_{j}\), \(j=1,2\), are correction terms that vanish as \(h\to 0\). These two types of solutions will play different roles in the proof of Theorem 1.2, and the proofs for their existence are also somewhat different. The construction of the exponentially decaying solution \(u_{1}\) follows from an interior Carleman estimate [23, Proposition 3.6]. For the exponentially growing solution \(u_{2}\), we shall follow the approach introduced in [17, 18].

In the following proposition, which shows the existence of exponentially decaying solutions \(u_{1}\), we equip \(Q\) with a semiclassical Sobolev norm \[\|u\|_{H^{1}_{\mathrm{scl}}(Q)}^{2}=\|u\|_{L^{2}(Q)}^{2}+\|h\partial_{t}u\|_{L^{2}(Q)}^{2}+\|h\nabla_{g}u\|_{L^{2}(Q)}^{2}.\]

**Proposition 3.1**.: _Let \(q\in C(\overline{Q})\), \(\beta\in(\frac{1}{\sqrt{3}},1)\), and let \(s=\frac{1}{h}+i\lambda\) with \(\lambda\in\mathbb{R}\) fixed. For all \(h>0\) sufficiently small, there exists a solution \(u_{1}\in H^{1}(Q)\) to \(\mathcal{L}^{*}_{c,g,q}u_{1}=0\) of the form_ \[u_{1}=e^{-s(\beta t+x_{1})}c^{-\frac{n-2}{4}}(v_{s}+r_{1}), \tag{3.1}\] _where \(v_{s}\in C^{\infty}(M_{0})\) is the Gaussian beam quasimode given in Proposition 2.1, and \(r_{1}\in H^{1}_{\mathrm{scl}}(Q^{\mathrm{int}})\) satisfies the estimate_ \[\|r_{1}\|_{H^{1}_{\mathrm{scl}}(Q^{\mathrm{int}})}=\mathcal{O}(h^{1/2}),\quad h\to 0. \tag{3.2}\]

Proof.: Since \((M,g)\) is a CTA manifold, the computations in Section 2 yield \[c^{\frac{n+2}{4}}\circ\mathcal{L}_{c,g,q}\circ c^{-\frac{n-2}{4}}=\mathcal{L}_{\widetilde{g},\widetilde{q}},\] where \(\widetilde{g}=e\oplus g_{0}\) and \(\widetilde{q}=c(q-c^{\frac{n-2}{4}}\Delta_{g}(c^{-\frac{n-2}{4}}))\). Hence, we see that if \(\widetilde{u}\) is a solution to the equation \(\mathcal{L}^{*}_{\widetilde{g},\widetilde{q}}\widetilde{u}=0\), then the function \(u=c^{-\frac{n-2}{4}}\widetilde{u}\) solves \(\mathcal{L}^{*}_{c,g,q}u=0\). Thus, it suffices to look for solutions to the equation \(\mathcal{L}^{*}_{\widetilde{g},\widetilde{q}}\widetilde{u}=0\) of the form \(\widetilde{u}=e^{-s(\beta t+x_{1})}(v_{s}+r_{1}).\) This is equivalent to finding a function \(r_{1}\) that solves the equation \[e^{s(\beta t+x_{1})}h^{2}\mathcal{L}^{*}_{\widetilde{g},\widetilde{q}}e^{-s(\beta t+x_{1})}r_{1}=-e^{s(\beta t+x_{1})}h^{2}\mathcal{L}^{*}_{\widetilde{g},\widetilde{q}}e^{-s(\beta t+x_{1})}v_{s}. \tag{3.3}\] From here we use estimate (2.5) and apply an interior Carleman estimate [23, Proposition 3.6] to deduce that there exists \(r_{1}\in H^{1}_{\mathrm{scl}}(Q^{\mathrm{int}})\) that solves (3.3) and satisfies estimate (3.2). Lastly, we recall that the interior Carleman estimate utilized in this proof requires the assumption \(\beta\in(\frac{1}{\sqrt{3}},1)\) on the parameter \(\beta\).
This completes the proof of Proposition 3.1.

We now turn to the construction of exponentially growing solutions \(u_{2}\) vanishing on part of \(\partial Q\). We emphasize that the earlier construction of \(u_{1}\) requires an extension of the domain due to the interior Carleman estimate. Therefore, we have no control over the traces of the solutions to the wave equation on \(\partial Q\) if we consider solutions on the extended domain. Thus, we need a different approach. For every \(\varepsilon>0\) we set \[\partial M_{\varepsilon,-}=\{x\in\partial M:\partial_{\nu}\varphi(x)<-\varepsilon\},\quad\partial M_{\varepsilon,+}=\{x\in\partial M:\partial_{\nu}\varphi(x)\geq-\varepsilon\},\] and \(\Sigma_{\varepsilon,\pm}=(0,T)\times\partial M_{\varepsilon,\pm}\). To find \(u_{2}\), we use the following result, which was originally proved in [18, Theorem 5.4]. For the convenience of the reader, we re-prove this result.

**Proposition 3.2**.: _Let \(q\in C(\overline{Q})\), \(\beta\in[\frac{1}{2},1]\), and let \(s=\frac{1}{h}+i\lambda\) with \(\lambda\in\mathbb{R}\) fixed. For all \(h>0\) small enough, there exists a solution \(u_{2}\in H_{\square_{c,g}}(Q)\) to the initial boundary value problem_ \[\begin{cases}\mathcal{L}_{c,g,q}u_{2}=0\quad\text{in}\quad Q,\\ u_{2}(0,x)=0\quad\text{in}\quad M,\\ u_{2}=0\quad\text{on}\quad\Sigma_{\varepsilon,-},\end{cases} \tag{3.4}\] _of the form_ \[u_{2}=e^{s(\beta t+x_{1})}c^{-\frac{n-2}{4}}(v_{s}+r_{2}). \tag{3.5}\] _Here \(v_{s}\in C^{\infty}(M_{0})\) is the Gaussian beam quasimode given in Proposition 2.1, and \(r_{2}\in L^{2}(Q)\) satisfies the estimate_ \[\|r_{2}\|_{L^{2}(Q)}=\mathcal{O}(h^{1/2}),\quad h\to 0. \tag{3.6}\]

Since \(U^{\prime}\subset\partial M\) is an open neighborhood of \(\partial M_{+}=\{x\in\partial M:\partial_{\nu}\varphi(x)\geq 0\}\), we can choose \(\varepsilon>0\) sufficiently small such that \(\Sigma_{\varepsilon,+}\subset U=(0,T)\times U^{\prime}\). Thus, the claims \(\operatorname{supp}\,u_{2}|_{\Sigma}\subset U\) and \(u_{2}|_{t=0}=0\) follow from Proposition 3.2.

Proof of Proposition 3.2.: Arguing similarly as in the proof of Proposition 3.1, we may assume that the conformal factor \(c=1\) and look for solutions to the equation \(\mathcal{L}_{\widetilde{g},\widetilde{q}}\widetilde{u}=0\) of the form \(\widetilde{u}=e^{s(\beta t+x_{1})}(v_{s}+r_{2})\) such that \(\widetilde{u}(0,x)=0\) in \(M\) and \(\widetilde{u}=0\) on \(\Sigma_{\varepsilon,-}\). Here \(\widetilde{g}\) and \(\widetilde{q}\) are given by (2.4).
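For clarity, we record the elementary conjugation identity behind the reduction that follows: for any function \(w\), \[h^{2}\mathcal{L}_{\widetilde{g},\widetilde{q}}\big{(}e^{s(\beta t+x_{1})}w\big{)}=e^{s(\beta t+x_{1})}\Big{(}e^{-s(\beta t+x_{1})}h^{2}\mathcal{L}_{\widetilde{g},\widetilde{q}}e^{s(\beta t+x_{1})}\Big{)}w,\] so \(\mathcal{L}_{\widetilde{g},\widetilde{q}}\widetilde{u}=0\) holds for \(\widetilde{u}=e^{s(\beta t+x_{1})}(v_{s}+r_{2})\) precisely when the conjugated operator annihilates \(v_{s}+r_{2}\).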
This is equivalent to finding a function \(r_{2}\) that satisfies \[\begin{cases}e^{-s(\beta t+x_{1})}h^{2}\mathcal{L}_{\widetilde{g},\widetilde{q}}e^{s(\beta t+x_{1})}r_{2}=-e^{-s(\beta t+x_{1})}h^{2}\mathcal{L}_{\widetilde{g},\widetilde{q}}e^{s(\beta t+x_{1})}v_{s}=:f\quad\text{in}\quad Q,\\ r_{2}(0,x)=-v_{s}(x^{\prime})=:r_{0}\quad\text{in}\quad M,\\ r_{2}(t,x)=-v_{s}(x^{\prime})=:r_{-}\quad\text{on}\quad\Sigma_{\varepsilon,-}.\end{cases} \tag{3.7}\] This can be achieved by solving the following initial boundary value problem \[e^{-\frac{1}{h}(\beta t+x_{1})}\mathcal{L}_{\widetilde{g},\widetilde{q}}e^{\frac{1}{h}(\beta t+x_{1})}\widetilde{r}=-e^{i\lambda(\beta t+x_{1})}e^{-s(\beta t+x_{1})}\mathcal{L}_{\widetilde{g},\widetilde{q}}e^{s(\beta t+x_{1})}v_{s}=:\widetilde{f}\quad\text{in}\quad Q, \tag{3.8a}\] \[\widetilde{r}(0,x)=-e^{i\lambda x_{1}}v_{s}(x^{\prime})=:\widetilde{r}_{0}\quad\text{in}\quad M, \tag{3.8b}\] \[\widetilde{r}(t,x)=-e^{i\lambda(\beta t+x_{1})}v_{s}(x^{\prime})\psi(x)=:\widetilde{r}_{-}\quad\text{on}\quad\Sigma_{-}, \tag{3.8c}\] and setting \(r_{2}:=e^{-i\lambda(\beta t+x_{1})}\widetilde{r}\). Here \(\psi\in C^{\infty}_{0}(M)\) is a cut-off function such that \(0\leq\psi\leq 1\), \(\operatorname{supp}\,\psi\cap\partial M\subset\partial M_{\varepsilon/2,-}\), and \(\psi=1\) on \(\partial M_{\varepsilon,-}\). In order to verify the existence of \(\widetilde{r}\), we need to derive a boundary Carleman estimate for the operator \(\mathcal{L}_{\widetilde{g},\widetilde{q}}\). To that end, let us introduce the space \[\mathcal{D}:=\left\{u\in C^{\infty}(\overline{Q}):u|_{\Sigma}=u|_{t=T}=\partial_{t}u|_{t=T}=u|_{t=0}=0\right\}.\] By replacing \(t\) by \(T-t\) and \(x_{1}\) by \(-x_{1}\) in the boundary Carleman estimate of [18, Lemma 4.2], we see that the following estimate for the wave operator \(\Box_{\widetilde{g}}\) is valid for any \(u\in\mathcal{D}\): \[\begin{split}& h^{1/2}\|\partial_{t}u(0,\cdot)\|_{L^{2}(M)}+h^{1/2}\|\sqrt{|\partial_{\nu}\varphi|}\partial_{\nu}u\|_{L^{2}(\Sigma_{-})}+\|u\|_{L^{2}(Q)}\\ &\leq\mathcal{O}(h)\|e^{-\frac{1}{h}(\beta t+x_{1})}\Box_{\widetilde{g}}e^{\frac{1}{h}(\beta t+x_{1})}u\|_{L^{2}(Q)}+\mathcal{O}(h^{1/2})\|\sqrt{\partial_{\nu}\varphi}\partial_{\nu}u\|_{L^{2}(\Sigma_{+})}.\end{split} \tag{3.9}\] To establish a boundary Carleman estimate for the operator \(\mathcal{L}_{\widetilde{g},\widetilde{q}}\), we first apply the triangle inequality to obtain \[\|e^{-\frac{1}{h}(\beta t+x_{1})}\Box_{\widetilde{g}}e^{\frac{1}{h}(\beta t+x_{1})}u\|_{L^{2}(Q)}\leq\|e^{-\frac{1}{h}(\beta t+x_{1})}\mathcal{L}_{\widetilde{g},\widetilde{q}}e^{\frac{1}{h}(\beta t+x_{1})}u\|_{L^{2}(Q)}+\|\widetilde{q}u\|_{L^{2}(Q)}.\] Furthermore, we have \[\|\widetilde{q}u\|_{L^{2}(Q)}\leq\|\widetilde{q}\|_{L^{\infty}(Q)}\|u\|_{L^{2}(Q)}.\] Therefore, by absorbing the term \(h\|\widetilde{q}\|_{L^{\infty}(Q)}\|u\|_{L^{2}(Q)}\) into the left-hand side of (3.9), we obtain the following boundary Carleman estimate for \(\mathcal{L}_{\widetilde{g},\widetilde{q}}\): \[\begin{split}& h^{1/2}\|\partial_{t}u(0,\cdot)\|_{L^{2}(M)}+h^{1/2}\|\sqrt{|\partial_{\nu}\varphi|}\partial_{\nu}u\|_{L^{2}(\Sigma_{-})}+\|u\|_{L^{2}(Q)}\\ &\leq\mathcal{O}(h)\|e^{-\frac{1}{h}(\beta t+x_{1})}\mathcal{L}_{\widetilde{g},\widetilde{q}}e^{\frac{1}{h}(\beta t+x_{1})}u\|_{L^{2}(Q)}+\mathcal{O}(h^{1/2})\|\sqrt{\partial_{\nu}\varphi}\partial_{\nu}u\|_{L^{2}(\Sigma_{+})}.\end{split} \tag{3.10}\] Let us now recall the following estimates, which follow immediately from Proposition 2.1:
\[\|\widetilde{f}\|_{L^{2}(Q)}=\mathcal{O}(h^{-1/2}),\quad\|\widetilde{r}_{0}\|_{L^{2}(M)}=\mathcal{O}(1). \tag{3.11}\] Also, by utilizing the same arguments as in the proof of [23, Lemma 5.1], we obtain the estimate \(\|v_{s}\|_{L^{2}(\Sigma_{\varepsilon/2,-})}=\mathcal{O}(1)\), which implies that \[\||\partial_{\nu}\varphi|^{-1/2}\widetilde{r}_{-}\|_{L^{2}(\Sigma_{-})}\leq\||\partial_{\nu}\varphi|^{-1/2}e^{i\lambda(\beta t+x_{1})}v_{s}\|_{L^{2}(\Sigma_{\varepsilon/2,-})}=\mathcal{O}(\varepsilon^{-1/2}). \tag{3.12}\] We shall now closely follow the proof of [17, Lemma 5.1] to verify the existence of \(\widetilde{r}\in H_{\square_{\widetilde{g}}}(Q)\) satisfying (3.8a)-(3.8c). We introduce the space \[\mathcal{M}:=\left\{(e^{\frac{1}{h}(\beta t+x_{1})}\mathcal{L}_{\widetilde{g},\widetilde{q}}^{*}e^{-\frac{1}{h}(\beta t+x_{1})}u,\partial_{\nu}u|_{\Sigma_{+}}):u\in\mathcal{D}\right\}\] and equip it with the norm \[\|(g_{1},g_{2})\|_{\mathcal{M}}=\|g_{1}\|_{L^{2}(Q)}+\|h^{-1/2}|\partial_{\nu}\varphi|^{1/2}g_{2}\|_{L^{2}(\Sigma_{+})}.\] Then \(\mathcal{M}\) can be viewed as a subspace of \(L^{2}(Q)\times L^{2}_{\varphi,h,+}(\Sigma_{+})\), where \[L^{2}_{\varphi,h,+}(\Sigma_{+}):=\{u:\|h^{-1/2}|\partial_{\nu}\varphi|^{1/2}u\|_{L^{2}(\Sigma_{+})}=\mathcal{O}(1)\}.\] If \(u\in\mathcal{D}\), by the boundary Carleman estimate (3.10) and the Cauchy-Schwarz inequality, we have \[\begin{split}&\left|\langle u,\widetilde{f}\rangle_{L^{2}(Q)}-\langle\partial_{t}u(0,\cdot),\widetilde{r}_{0}\rangle_{L^{2}(M)}-\langle\partial_{\nu}u,\widetilde{r}_{-}\rangle_{L^{2}(\Sigma_{-})}\right|\\ &\leq\|u\|_{L^{2}(Q)}\|\widetilde{f}\|_{L^{2}(Q)}+\|\partial_{t}u(0,\cdot)\|_{L^{2}(M)}\|\widetilde{r}_{0}\|_{L^{2}(M)}+\||\partial_{\nu}\varphi|^{1/2}\partial_{\nu}u\|_{L^{2}(\Sigma_{-})}\||\partial_{\nu}\varphi|^{-1/2}\widetilde{r}_{-}\|_{L^{2}(\Sigma_{-})}\\ &\leq\mathcal{O}(1)\|(e^{\frac{1}{h}(\beta t+x_{1})}\mathcal{L}_{\widetilde{g},\widetilde{q}}^{*}e^{-\frac{1}{h}(\beta t+x_{1})}u,\partial_{\nu}u|_{\Sigma_{+}})\|_{\mathcal{M}}\\ &\qquad\times\left(h\|\widetilde{f}\|_{L^{2}(Q)}+h^{1/2}\|\widetilde{r}_{0}\|_{L^{2}(M)}+h^{1/2}\||\partial_{\nu}\varphi|^{-1/2}\widetilde{r}_{-}\|_{L^{2}(\Sigma_{-})}\right).\end{split}\] Hence, we may define a linear functional \(\mathcal{S}\) on \(\mathcal{M}\) by setting \[\mathcal{S}(e^{\frac{1}{h}(\beta t+x_{1})}\mathcal{L}_{\widetilde{g},\widetilde{q}}^{*}e^{-\frac{1}{h}(\beta t+x_{1})}u,\partial_{\nu}u|_{\Sigma_{+}}):=\langle u,\widetilde{f}\rangle_{L^{2}(Q)}-\langle\partial_{t}u(0,\cdot),\widetilde{r}_{0}\rangle_{L^{2}(M)}-\langle\partial_{\nu}u,\widetilde{r}_{-}\rangle_{L^{2}(\Sigma_{-})},\quad u\in\mathcal{D}.\] By the Hahn-Banach theorem, we can extend the operator \(\mathcal{S}\) to a continuous linear form \(\widetilde{\mathcal{S}}\) on \(L^{2}(Q)\times L^{2}_{\varphi,h,+}(\Sigma_{+})\) without increasing the norm. Hence, it follows from estimates (3.11) and (3.12) that \[\|\widetilde{\mathcal{S}}\|=\|\mathcal{S}\|\leq\mathcal{O}(1)\left(h\|\widetilde{f}\|_{L^{2}(Q)}+h^{1/2}\|\widetilde{r}_{0}\|_{L^{2}(M)}+h^{1/2}\||\partial_{\nu}\varphi|^{-1/2}\widetilde{r}_{-}\|_{L^{2}(\Sigma_{-})}\right)=\mathcal{O}(h^{1/2}).
\tag{3.13}\] By the Riesz representation theorem, there exists \((\widetilde{r},\widetilde{r}_{+})\in L^{2}(Q)\times L^{2}_{\varphi,h,-}(\Sigma_{+})\), where \(L^{2}_{\varphi,h,-}(\Sigma_{+}):=\{u:\|h^{1/2}|\partial_{\nu}\varphi|^{-1/2}u\|_{L^{2}(\Sigma_{+})}=\mathcal{O}(1)\}\), such that \[\|\widetilde{r}\|_{L^{2}(Q)}+\|h^{1/2}|\partial_{\nu}\varphi|^{-1/2}\widetilde{r}_{+}\|_{L^{2}(\Sigma_{+})}=\|\mathcal{S}\|=\mathcal{O}(h^{1/2}),\] and \[\widetilde{\mathcal{S}}(g_{1},g_{2})=\langle g_{1},\widetilde{r}\rangle_{L^{2}(Q)}+\langle g_{2},\widetilde{r}_{+}\rangle_{L^{2}(\Sigma_{+})},\quad(g_{1},g_{2})\in L^{2}(Q)\times L^{2}_{\varphi,h,+}(\Sigma_{+}).\] Therefore, for all \(u\in\mathcal{D}\), we get \[\begin{split}&\left\langle e^{\frac{1}{h}(\beta t+x_{1})}\mathcal{L}_{\widetilde{g},\widetilde{q}}^{*}e^{-\frac{1}{h}(\beta t+x_{1})}u,\widetilde{r}\right\rangle_{L^{2}(Q)}+\left\langle\partial_{\nu}u|_{\Sigma_{+}},\widetilde{r}_{+}\right\rangle_{L^{2}(\Sigma_{+})}\\ &=\langle u,\widetilde{f}\rangle_{L^{2}(Q)}-\left\langle\partial_{t}u(0,\cdot),\widetilde{r}_{0}\right\rangle_{L^{2}(M)}-\left\langle\partial_{\nu}u|_{\Sigma_{-}},\widetilde{r}_{-}\right\rangle_{L^{2}(\Sigma_{-})}.\end{split} \tag{3.14}\] We now verify that \(\widetilde{r}\) satisfies equations (3.8a)-(3.8c). By taking \(u\in C_{0}^{\infty}(Q)\) in (3.14) and using the fact that \(C_{0}^{\infty}(Q)\) is dense in \(L^{2}(Q)\), we see that equation (3.8a) holds. Furthermore, (3.8a) implies \(\widetilde{r}\in H_{\square_{\widetilde{g}}}(Q)\). Thus, by [17, Proposition A.1], we can define the traces \(\widetilde{r}|_{\Sigma}\in H^{-3}(0,T;H^{-1/2}(\partial M))\) and \(\widetilde{r}|_{t=0}\in H^{-2}(M)\). Furthermore, the density result [17, Theorem A.1], in conjunction with integration by parts, implies that for all \(u\in\mathcal{D}\), we have \[\begin{split}&\left\langle e^{\frac{1}{h}(\beta t+x_{1})}\mathcal{L}_{\widetilde{g},\widetilde{q}}^{*}e^{-\frac{1}{h}(\beta t+x_{1})}u,\widetilde{r}\right\rangle_{L^{2}(Q)}+\left\langle\partial_{\nu}u|_{\Sigma_{+}},\widetilde{r}|_{\Sigma_{+}}\right\rangle_{H^{3}(0,T;H^{1/2}(\Sigma_{+})),H^{-3}(0,T;H^{-1/2}(\Sigma_{+}))}\\ &=\left\langle u,\widetilde{f}\right\rangle_{L^{2}(Q)}-\left\langle\partial_{t}u(0,\cdot),\widetilde{r}|_{t=0}\right\rangle_{H^{2}(M),H^{-2}(M)}-\left\langle\partial_{\nu}u,\widetilde{r}|_{\Sigma_{-}}\right\rangle_{H^{3}(0,T;H^{1/2}(\Sigma_{-})),H^{-3}(0,T;H^{-1/2}(\Sigma_{-}))}.\end{split}\] Finally, we compare the equality above with (3.14) and take \(u\in\mathcal{D}\) to be arbitrary to conclude that \(\widetilde{r}|_{t=0}=\widetilde{r}_{0}\) and \(\widetilde{r}|_{\Sigma_{-}}=\widetilde{r}_{-}\). Therefore, we have verified (3.8b) and (3.8c). This completes the proof of Proposition 3.2.

## 4. Proof of Theorem 1.2

Let \(u_{1}\in H^{1}(Q)\) be an exponentially decaying CGO solution of the form (3.1) satisfying \(\mathcal{L}_{c,g,q_{1}}^{*}u_{1}=0\) in \(Q\), and let \(u_{2}\in H_{\square_{c,g}}(Q)\) be an exponentially growing CGO solution of \(\mathcal{L}_{c,g,q_{2}}u_{2}=0\) in \(Q\) given by (3.5) such that \(u_{2}|_{t=0}=0\) and \(\operatorname{supp}\,u_{2}|_{\Sigma}\subset U\).
Due to the assumption \(\mathcal{C}_{g,q_{1}}=\mathcal{C}_{g,q_{2}}\), by [18, Proposition 3.1], there exists a function \(v\in H_{\square_{c,g}}(Q)\) that satisfies the equations \(\mathcal{L}_{c,g,q_{1}}v=0\) and \[(u_{2}-v)|_{U}=(u_{2}-v)|_{t=0}=(u_{2}-v)|_{t=T}=\partial_{t}(u_{2}-v)|_{t=0}=\partial_{\nu}(u_{2}-v)|_{V}=0.\] Then the function \(u:=u_{2}-v\in H_{\square_{c,g}}(Q)\) is a solution to the equation \[\mathcal{L}_{c,g,q_{1}}u=qu_{2}\quad\text{in}\quad Q,\quad u|_{\Sigma}=u|_{t=0}=u|_{t=T}=\partial_{t}u|_{t=0}=\partial_{\nu}u|_{V}=0. \tag{4.1}\] Here, and in what follows, we use the notation \(q:=q_{1}-q_{2}\). After arguing similarly as in [18, Section 6], we obtain the following integral identity \[\int_{Q}qu_{2}\overline{u_{1}}dV_{g}dt=\int_{M}c^{-1}\partial_{t}u(T,x)\overline{u_{1}(T,x)}dV_{g}-\int_{\Sigma\setminus V}\partial_{\nu}u\overline{u_{1}}dS_{g}dt. \tag{4.2}\]

We next substitute the CGO solutions (3.1) and (3.5) into (4.2) and pass to the limit \(h\to 0\). To analyze the limit of the terms on the right-hand side of (4.2), we have the following lemma, which states that both terms on the right-hand side of (4.2) vanish as \(h\to 0\). Its proof is the same as the proof of [23, Lemma 5.1], which mainly relies on an application of a boundary Carleman estimate [18, Theorem 4.1], as well as estimates (2.5) and (3.2). This lemma is the reason why we need the \(H^{1}\)-norm decay (3.2) for \(r_{1}\).

**Lemma 4.1**.: _Let \(u_{1}\) and \(u\) be the functions described above. Then the following estimates hold as \(h\to 0\):_ \[\int_{M}c^{-1}\partial_{t}u(T,x)\overline{u_{1}(T,x)}dV_{g}=\mathcal{O}(h^{1/2}) \tag{4.3}\] _and_ \[\int_{\Sigma\setminus V}\partial_{\nu}u\overline{u_{1}}dS_{g}dt=\mathcal{O}(h^{1/2}). \tag{4.4}\]

To investigate the left-hand side of the integral identity (4.2), we compute from the respective CGO solutions (3.1) and (3.5) for \(u_{1}\) and \(u_{2}\) that \[u_{2}\overline{u_{1}}=e^{2i\lambda(\beta t+x_{1})}c^{-\frac{n-2}{2}}(|v_{s}|^{2}+\overline{v_{s}}r_{2}+\overline{r_{1}}v_{s}+\overline{r_{1}}r_{2}).\] By estimates (2.5), (3.2), (3.6), and the Cauchy-Schwarz inequality, we obtain the estimate \[\bigg{|}\int_{Q}e^{2i\lambda(\beta t+x_{1})}c^{-\frac{n-2}{2}}(\overline{v_{s}}r_{2}+\overline{r_{1}}v_{s}+\overline{r_{1}}r_{2})dV_{g}dt\bigg{|}=\mathcal{O}(h^{1/2}),\quad h\to 0. \tag{4.5}\] On the other hand, since \(q_{1},q_{2}\in C(\overline{Q})\) and \(q_{1}=q_{2}\) on the boundary \(\partial Q\), we may extend \(q\) continuously by zero to \((\mathbb{R}^{2}\times M_{0})\setminus Q\) and denote the extension by the same letter. Then from the equality \(dV_{g}=c^{\frac{n}{2}}dV_{g_{0}}dx_{1}\), Fubini's theorem, the dominated convergence theorem, and the concentration property (2.35), we obtain the following limit as \(h\to 0\): \[\int_{Q}e^{2i\lambda(\beta t+x_{1})}c^{-\frac{n-2}{2}}q|v_{s}|^{2}dV_{g}dt=\int_{\mathbb{R}}\int_{\mathbb{R}}\int_{M_{0}}e^{2i\lambda(\beta t+x_{1})}cq|v_{s}|^{2}dV_{g_{0}}dx_{1}dt \tag{4.6}\] \[\to\int_{0}^{L}\int_{\mathbb{R}}\int_{\mathbb{R}}e^{2i\lambda(\beta t+x_{1})-2\sqrt{1-\beta^{2}}\lambda\tau}(cq)(t,x_{1},\gamma(\tau))dx_{1}dtd\tau.\] Hence, by replacing \(2\lambda\) with \(\lambda\), we deduce from (4.2), (4.5), (4.6), and Lemma 4.1 that the identity \[\int_{0}^{L}\int_{\mathbb{R}}\int_{\mathbb{R}}e^{i\lambda(\beta t+x_{1})-\sqrt{1-\beta^{2}}\lambda\tau}(cq)(t,x_{1},\gamma(\tau))dx_{1}dtd\tau=0 \tag{4.7}\] holds for every non-tangential geodesic \(\gamma\) in the transversal manifold \((M_{0},g_{0})\).
We are ready to utilize Assumption 1, the invertibility of the attenuated geodesic ray transform on \((M_{0},g_{0})\). To that end, we denote by \(\mathcal{F}_{(t,x_{1})\to(\xi_{1},\xi_{2})}\) the Fourier transform in the two Euclidean variables \((t,x_{1})\) and define \[f(x^{\prime},\beta,\lambda):=\int_{\mathbb{R}}\int_{\mathbb{R}}e^{i\lambda(\beta t+x_{1})}(cq)(t,x_{1},x^{\prime})dx_{1}dt=\mathcal{F}_{(t,x_{1})\to(\xi_{1},\xi_{2})}(cq)|_{(\xi_{1},\xi_{2})=-\lambda(\beta,1)},\] where \(x^{\prime}\in M_{0}\), \(\beta\in(\frac{1}{\sqrt{3}},1)\), and \(\lambda\in\mathbb{R}\). Since \(q\in C(\overline{Q})\), the function \(f(\cdot,\beta,\lambda)\) is continuous on \(M_{0}\). Since \(\gamma\) is an arbitrary non-tangential geodesic, it follows from (4.7) that the following attenuated geodesic ray transform \[I^{-\sqrt{1-\beta^{2}}\lambda}(f(\cdot,\beta,\lambda))(x,\xi)=\int_{0}^{\tau_{\text{ext}}(x,\xi)}e^{-\sqrt{1-\beta^{2}}\lambda\tau}f(\gamma_{x,\xi}(\tau),\beta,\lambda)d\tau,\quad(x,\xi)\in\partial_{-}SM_{0}\setminus\Gamma_{-}, \tag{4.8}\] vanishes. By Assumption 1, there exists \(\varepsilon>0\) such that \(f(\cdot,\beta,\lambda)=0\) in \(M_{0}\) whenever \(\sqrt{1-\beta^{2}}|\lambda|<\varepsilon\). Hence, there exist \(\beta_{0}\in(\frac{1}{\sqrt{3}},1)\), \(\lambda_{0}>0\), and \(\delta>0\) such that for every \((\lambda,\beta)\in\mathbb{R}^{2}\) satisfying \(|\beta-\beta_{0}|<\delta\), \(|\lambda-\lambda_{0}|<\delta\), and \(\lambda\neq 0\), we have \(\sqrt{1-\beta^{2}}|\lambda|<\varepsilon\). Thus, we see that \(\mathcal{F}_{(t,x_{1})\to(\xi_{1},\xi_{2})}(cq)=0\) in an open subset of \(\mathbb{R}^{2}\). Furthermore, since \(q\) is compactly supported, we get from the Paley-Wiener theorem that \(\mathcal{F}(cq)\) is real analytic. Therefore, we conclude that \(cq=0\) in \(Q\). Since \(c\) is a positive function, we have \(q=q_{1}-q_{2}=0\). This completes the proof of Theorem 1.2.
2309.08583
ICLEF: In-Context Learning with Expert Feedback for Explainable Style Transfer
While state-of-the-art large language models (LLMs) can excel at adapting text from one style to another, current work does not address the explainability of style transfer models. Recent work has explored generating textual explanations from larger teacher models and distilling them into smaller student models. One challenge with such approach is that LLM outputs may contain errors that require expertise to correct, but gathering and incorporating expert feedback is difficult due to cost and availability. To address this challenge, we propose ICLEF, a novel human-AI collaboration approach to model distillation that incorporates scarce expert human feedback by combining in-context learning and model self-critique. We show that our method leads to generation of high-quality synthetic explainable style transfer datasets for formality (e-GYAFC) and subjective bias (e-WNC). Via automatic and human evaluation, we show that specialized student models fine-tuned on our datasets outperform generalist teacher models on the explainable style transfer task in one-shot settings, and perform competitively compared to few-shot teacher models, highlighting the quality of the data and the role of expert feedback. In an extrinsic task of authorship attribution, we show that explanations generated by smaller models fine-tuned on e-GYAFC are more predictive of authorship than explanations generated by few-shot teacher models.
Arkadiy Saakyan, Smaranda Muresan
2023-09-15T17:41:14Z
http://arxiv.org/abs/2309.08583v2
# ICLEF: In-Context Learning with Expert Feedback for Explainable Style Transfer

###### Abstract

While state-of-the-art language models excel at the style transfer task, current work does not address explainability of style transfer systems. Explanations could be generated using large language models such as GPT-3.5 and GPT-4, but the use of such complex systems is inefficient when smaller, widely distributed, and transparent alternatives are available. We propose a framework to augment and improve a formality style transfer dataset with explanations via model distillation from ChatGPT. To further refine the generated explanations, we propose a novel way to incorporate scarce expert human feedback using in-context learning (ICLEF: In-Context Learning from Expert Feedback) by prompting ChatGPT to act as a critic to its own outputs. We use the resulting dataset of 9,960 explainable formality style transfer instances (e-GYAFC) to show that current openly distributed instruction-tuned models (and, in some settings, ChatGPT) perform poorly on the task, and that fine-tuning on our high-quality dataset leads to significant improvements as shown by automatic evaluation. In human evaluation, we show that models much smaller than ChatGPT fine-tuned on our data align better with expert preferences. Finally, we discuss two potential applications of models fine-tuned on the explainable style transfer task: interpretable authorship verification and interpretable adversarial attacks on AI-generated text detectors.

## 1 Introduction

Attribute style transfer is the task of transforming a given text along a particular style dimension, such as changing its formality, bias, or level of offensiveness (Lample et al., 2019; Sudhakar et al., 2019; Jin et al., 2022). Current state-of-the-art approaches to style transfer typically utilize pre-trained language models (Krishna et al., 2020; Reif et al., 2022; Jin et al., 2022). While highly effective, these methods lack interpretability (Belinkov and Glass, 2019). Interpretable tools should improve people's understanding of the AI model and help recognize model uncertainty (Wang and Yin, 2022). Recent progress has shown that large language models (LLMs) are capable of generating faithful natural language explanations comprehensible to an end-user (Camburu et al., 2018; Majumder et al., 2022; Wiegreffe et al., 2021; Lyu et al., 2023). Besides helping the user, the generated explanations could act as a defense against spurious correlations (Ludan et al., 2023; Camburu et al., 2018) and annotation artifacts (McCoy et al., 2019; Poliak et al., 2018).

Figure 1: Proposed method to augment the formality style transfer dataset GYAFC (Rao and Tetreault, 2018) with structured natural language explanations. ChatGPT is asked to generate the informal attributes of the input sentence, a formal paraphrase, and the formal attributes of the resulting sentence. Expert feedback is provided in the form of a few-shot prompt, and a critic model is asked to imitate a human annotator in order to improve the generated quality of the informal attributes.

We define the task of _explainable_ style transfer to improve the explainability of pre-trained language models applied to style transfer. Along with the paraphrase, the model needs to provide natural language
While LLMs like GPT-4 (OpenAI, 2023) and ChatGPT may be capable of producing plausible explanations for style transfer, this approach is inefficient due to these models' large parameter count1. Moreover, it is difficult to fine-tune these models to incorporate specialized expert feedback as they are only available through an API. Even if available publicly, fine-tuning a model of such scale is highly impractical. While elaborate prompt strategies could improve performance (Wei et al., 2022; Kojima et al., 2022, _inter alia_), this leads to increase in context length and corresponding variable costs that can be avoided by fine-tuning the model once. Footnote 1: We do not know the exact parameter count of ChatGPT, but for this work we will assume it is in the order of 175B. A popular solution is to train smaller models on outputs from large models like GPT-4, PALM (Google, 2022), and ChatGPT; this approach is known as distillation (Ho et al., 2022; Magister et al., 2023). We propose to use distillation as a means to augment existing style transfer corpora with natural language explanations. Our approach is visualized in Figure 1. We start with the GYAFC (Rao and Tetreault, 2018) formality style transfer dataset, which is a parallel dataset of informal and formal sentences. We use ChatGPT to generate semi-structured natural language explanations elaborating on the attributes of source and target styles. Due to the high chance of hallucinations (confabulations) (Ji et al., 2023), we ask expert linguists to evaluate the explanations and provide feedback. However, due to scarcity and high cost of expert feedback, it would not be possible to annotate the entire corpora in such a way. Inspired by the self-critique ability of LLMs (Madaan et al., 2023; Bai et al., 2022; Saunders et al., 2022; Scheurer et al., 2023, _inter alia_), we prompt ChatGPT with a small number of expert corrections and ask it to act as an annotator criticizing its own outputs. We use this approach (ICLEF: In-Context Learning from Expert Feedback) to improve the generated instances before incorporating them in our explainable GYAFC dataset (e-GYAFC). We validate the quality of e-GYAFC with expert evaluation. We then use this dataset to compare instruction-tuned models (in one-shot setting), ChatGPT, and fine-tuned models in their ability to perform the explainable style transfer task. Our investigation shows that task-specific fine-tuned models can outperform models significantly larger in size or fine-tuned on thousands of instructions. Finally, we explore the applications of our fine-tuned models capable of providing explanations, i.e. self-rationalizing models (Wiegreffe et al., 2021; Alvarez Melis and Jaakkola, 2018), in two settings. First, we show that informality explanations could provide interpretable features for the author verification task. Second, we show that our formal-to-informal explainable style transfer model produces an interpretable adversarial attack on AI-generated text detection. To summarize, our contributions are: * A high-quality dataset for explainable formality style transfer (e-GYAFC) containing explanations for style attributes as well as improved formality paraphrases (SS3). * A framework to augment a style transfer dataset with semi-structured natural language explanations and improved paraphrases (ICLEF). Our method leverages model distillation and incorporates scarce yet high-quality expert feedback via in-context learning and self-critique (SS3.2). 
* Rigorous evaluation of current instruction-tuned models on the resulting dataset (§4). Via both automatic and human evaluation, we show that a smaller, specialized, fine-tuned model outperforms one-shot instruction-tuned models larger in size (§5).
* A discussion of the practical importance of models fine-tuned on the explainable style transfer task in two downstream applications (§6).

We will release the data, models, and scripts to encourage further research in explainability, learning from human feedback, and style transfer.

Footnote 2: github.com/asaakyan/explain-st

## 2 Related Work

**Model distillation.** Model or knowledge distillation is a process of fine-tuning a smaller student model to imitate the behaviour of a more competent teacher model (Beyer et al., 2022; Buciluǎ et al., 2006; Hinton et al., 2015). With the advance of large language models such as GPT-3 (Brown et al., 2020), knowledge distillation became an especially popular technique, making it possible to generate datasets of similar quality to crowd-sourced data West et al. (2022), especially when combined with a model-in-the-loop approach Wiegreffe et al. (2022); Bartolo et al. (2022); Chakrabarty et al. (2022). Unlike these approaches, we do not rely on a large number of crowdworkers but instead incorporate scarce expert feedback. Recent work explores learning from natural language explanations Wang et al. (2023); Ho et al. (2022); Magister et al. (2023), showing that large language models are capable of generating acceptable enough reasoning steps and explanations to fine-tune improved smaller models. We take this further by providing a task-specific detailed prompt to facilitate correct explanation generations.

**Human feedback.** Reinforcement learning from human feedback (RLHF) Stiennon et al. (2020), while effective, is difficult to implement and requires a lot of annotated data. Alternative approaches, such as Chain-of-Hindsight Liu et al. (2023), Sequence Likelihood Calibration Zhao et al. (2023), and Direct Preference Optimization Rafailov et al. (2023), also require large corpora of paired feedback to fine-tune on. Imitation learning from human feedback (ILF) Scheurer et al. (2023) utilizes human feedback to improve model-generated instances, and then fine-tunes on that data. In this work, we focus on incorporating _expert_ feedback, which is naturally scarce and expensive to collect. We propose to do this by teaching a language model to generate synthetic feedback based on a few examples of expert feedback. Motivated by the success of self-critiquing approaches Madaan et al. (2023); Bai et al. (2022); Saunders et al. (2022), _inter alia_, we condition the model on expert corrections and prompt it to criticize its outputs.

**Explainability.** Natural language explanations have been utilized for a variety of tasks, such as natural language inference Camburu et al. (2018), commonsense Rajani et al. (2019); Aggarwal et al. (2021), figurative language understanding Chakrabarty et al. (2022), and social norm entailment CH-Wang et al. (2023) (see more in Wiegreffe and Marasovic (2021)). In this work, we focus on creating natural language explanations for the style transfer task, which to our knowledge has not been addressed. Recently, there has been an increase in the number of works focusing on structured explanations Wiegreffe and Marasovic (2021).
Structured explanations provide well-defined templates for the model as opposed to free-text explanations, for which it is hard to collect reliable human annotations and compare against them in automatic evaluation. Since the inherent structure of the style transfer task allows it, we collect semi-structured explanations. See more in §3.

**Style transfer.** Style transfer approaches range from instruction-based methods Reif et al. (2022) and paraphrasing Krishna et al. (2020), to approaches focused on learning in low-resource settings Patel et al. (2022). Much of style transfer work focuses on style representations that decouple style and content Wegmann et al. (2022); Wegmann and Nguyen (2021); however, most of these methods are not interpretable. Interpretable approaches rely on constructing interpretable embeddings, such as LIWC Tausczik and Pennebaker (2010) or LISA Patel et al. (2023). Unlike these approaches, we propose to use natural language explanations to further enhance model interpretability.

## 3 e-GYAFC: An Explainable Style Transfer Dataset

We build an explainable style transfer dataset by first augmenting GYAFC with natural language explanations generated by an LLM (§3.1), and then improving the generated data with ICLEF: in-context learning from expert feedback (§3.2).

### Augmenting Style Transfer Datasets with Natural Language Explanations

The GYAFC Rao and Tetreault (2018) formality style transfer dataset contains parallel formal and informal expressions. The informal sentences are collected from Yahoo Answers, and formal paraphrases were crowdsourced using Amazon Mechanical Turk (AMT). Our initial goal was to generate structured natural language explanations for formality and informality attributes absent from the dataset. However, we noticed that the formal paraphrase generated by ChatGPT rivals or sometimes exceeds the quality of the crowdsourced paraphrase in the GYAFC data (see the paraphrase in Figure 1). Hence, we also use the model to generate a paraphrase and use it as part of our dataset. Inspired by the observation that LLMs can generate intermediate reasoning steps Wei et al. (2022); Kojima et al. (2022), we formulate the following multi-step generation task: given a GYAFC informal sentence \(s_{i}\), generate a structured explanation of its informal attributes \(e_{i}\), then generate a formal paraphrase \(s_{f}\) based on these attributes, then generate the formal attributes of the resulting paraphrase \(e_{f}\) jointly. In other words, we are modelling the conditional probability \(p(e_{i},s_{f},e_{f}|s_{i})\). We chose the Informal\(\rightarrow\)Formal direction for data generation for two reasons. First, the informal sentences in GYAFC are not crowdsourced with AMT but are user-generated. Second, we hypothesize that it is easier for ChatGPT to generate formal text rather than diverse informal text that typically would have high perplexity. We use a semi-structured format for the explanations. Specifically, we ask the model to generate a list of attributes, each followed by an excerpt from the sentence as evidence: attribute ("evidence"); see examples in Figure 1. These explanations are more helpful to the user than traditional free-text explanations since they have a consistent format with textual evidence, making them easier to verify. Moreover, it is easier to evaluate structured explanations using automatic metrics since there is less variation in possible answers compared to free-text explanations.
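To make the format concrete, the following is a minimal sketch (ours, not part of the released codebase) of a parser and a cheap groundedness check for such semi-structured explanations; the exact serialization used in our prompts may differ:

```python
import re
from dataclasses import dataclass

@dataclass
class StyleAttribute:
    name: str        # e.g., "slang"
    evidence: str    # e.g., "kick them out"

# One attribute per item, in the semi-structured format described above:
#   attribute ("evidence")
ATTR_PATTERN = re.compile(r'(?P<name>[^,(]+?)\s*\(\s*"(?P<evidence>[^"]*)"\s*\)')

def parse_explanation(explanation: str) -> list[StyleAttribute]:
    """Parse a generated explanation such as:
    'slang ("kick them out"), contraction ("don\\'t")'."""
    return [StyleAttribute(m.group("name").strip(), m.group("evidence"))
            for m in ATTR_PATTERN.finditer(explanation)]

def evidence_is_grounded(attrs: list[StyleAttribute], sentence: str) -> bool:
    """Check that every quoted evidence span occurs in the source sentence.
    A cheap hallucination filter that the structured format makes possible."""
    return all(a.evidence.lower() in sentence.lower() for a in attrs)

if __name__ == "__main__":
    expl = 'slang ("kick them out"), contraction ("don\'t")'
    attrs = parse_explanation(expl)
    print(attrs)
    print(evidence_is_grounded(attrs, "If he lies, don't wait to kick them out."))
```

Because each claimed attribute is tied to a verbatim span, both users and automatic metrics can check it directly, which is the advantage over free-text explanations discussed above.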
We design a prompt that takes advantage of ChatGPT's long context. The prompt explains the task in detail and provides examples of formality and informality style attributes. In addition, we leverage the existing instances in the GYAFC dataset and provide "candidate" informality attributes by appending the set difference of words between the informal and formal GYAFC sentences. This helps the model identify the informal attributes more accurately, but does not bias it toward generating a formal sentence similar to the one in GYAFC. Instead, we encourage the model to faithfully generate a formal sentence based on the informal attributes it identifies. Our prompt is 3,391 tokens long; details can be found in Appendix A. The resulting data allows us to train models in two directions: Formal\(\rightarrow\)Informal (given \(s_{f}\), generate \(e_{f},s_{i},e_{i}\)) and Informal\(\rightarrow\)Formal (given \(s_{i}\), generate \(e_{i},s_{f},e_{f}\)).

### 3.2 In-Context Learning from Expert Feedback (ICLEF)

We observe that ChatGPT generations can contain hallucinations and inaccuracies (e.g., the generated style attribute "abbreviated language" with the evidence "I would" in Figure 1). To improve the quality of the data, we turn to expert feedback, since previous work has found that crowdworkers on platforms such as Amazon Mechanical Turk can be unreliable for open-ended generation tasks (Karpinska et al., 2021) and recently even rely on ChatGPT to provide their answers (Veselovsky et al., 2023), amplifying low-quality generations.

To improve the generated explanations, we turn to expert linguists (defined as having a formal educational background in linguistics). We hire three experts on Upwork3: one with a bachelor's degree in linguistics (E1), one with a bachelor's degree in linguistics and a master's degree in education (E2), and one with bachelor's and master's degrees in linguistics (E3). We pay the experts 15-30 USD per hour, depending on their asking rate and qualifications. Our annotation protocol provides a non-exhaustive reference of formality and informality attributes (also used in the ChatGPT prompt) and asks the annotators to be very critical and provide feedback on which attributes in \(e_{i},e_{f}\) are missing or incorrect.

Footnote 3: Upwork.com

We provide 50 generated examples for annotation, which takes each expert around 3 hours. In our preliminary investigations, we found that providing feedback from multiple experts hurts the identification of all the incorrect attributes, as the model learns to imitate a particular expert's style instead of the content of the annotation. Due to the objective nature of the task, there is little concern about expert bias (the attributes are either correct from a linguistics perspective or not), so we select a single expert's feedback. Qualitatively, we select the expert with the most thorough feedback; quantitatively, the one who provided the largest number of corrections. We selected expert E3.

Given the high price and scarcity of expert feedback, we cannot use traditional fine-tuning pipelines or human feedback techniques such as reinforcement learning from human feedback (RLHF) (Stiennon et al., 2020), Chain-of-Hindsight (Liu et al., 2023), Sequence Likelihood Calibration (Zhao et al., 2023), Direct Preference Optimization (Rafailov et al., 2023), or Imitation Learning from Language Feedback (Scheurer et al., 2023), all of which rely on large-scale non-expert annotations. Instead, inspired by recent observations that models can self-critique their outputs (Bai et al., 2022; Saunders et al., 2022), we provide the model with the expert feedback corrections in-context and prompt it to act as an annotator on new instances (see Appendix A). We refer to the resulting model as GPT-Critic.
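To make the self-critique step concrete, below is a minimal sketch of how GPT-Critic could be queried. The in-context examples, prompt wording, and the `llm` client are illustrative stand-ins of ours, not the released prompt from Appendix A:

```python
# Sketch of the ICLEF critique step: expert corrections are placed in-context,
# and the model is prompted to criticize and fix a generated explanation.
EXPERT_CORRECTIONS = [
    {
        "explanation": 'abbreviated language ("I would")',
        "feedback": '"I would" is not abbreviated language; remove this attribute.',
    },
    # ... a handful of further expert-annotated corrections go here ...
]

def build_critic_prompt(generated_explanation: str) -> str:
    shots = "\n\n".join(
        f'Explanation: {ex["explanation"]}\nExpert feedback: {ex["feedback"]}'
        for ex in EXPERT_CORRECTIONS
    )
    return (
        "You are an expert linguist reviewing style attribute explanations. "
        "Identify any incorrect attributes, then output a fixed explanation.\n\n"
        f"{shots}\n\nExplanation: {generated_explanation}\nExpert feedback:"
    )

def iclef_fix(llm, e_i: str) -> str:
    """Ask GPT-Critic to flag incorrect attributes in e_i and return a fixed version."""
    return llm(build_critic_prompt(e_i))
```

Conditioning on real corrections in this way lets a small amount of expert feedback scale to the whole dataset without any gradient updates.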
We observed that the rate of incorrectly generated attributes was higher for \(e_{i}\) than for \(e_{f}\) (\(\approx 29\%\) vs. \(\approx 15\%\)), perhaps because \(e_{f}\) is generated for the model's own paraphrase and it is easier for the model to "comprehend" formal text. We also find that it is easier for GPT-Critic to identify incorrect attributes than to generate missing ones, since generating novel attributes may introduce new hallucinations. We therefore query the model to identify incorrect attributes in \(e_{i}\), if there are any, and then ask it to provide a fixed version \(\hat{e_{i}}\). In this way, we fixed \(\approx 30\%\) of the generated data (2,853 instances). The resulting data, which we refer to as e-GYAFC, contains 9,960 \(s_{i}\leftrightarrow s_{f}\) instances with corresponding \(\hat{e_{i}},e_{f}\) attribute explanations. We randomly split the data into 8,000 training instances and 1,960 held-out test instances.

### 3.3 Dataset quality

**Automatic evaluation.** We estimate paraphrase quality automatically using the Mutual Implication Score (MIS) and the Formality Score (see §4.2), comparing our formal paraphrases with the ones in GYAFC. We find that our paraphrases are of comparable quality, with a MIS of 81.32 vs. 83.08 for GYAFC, yet achieve a much higher formality score of 91.62 vs. 80.14 for GYAFC (see the example in Figure 1, where the GYAFC formal instance contains _kick them out_).

**Human evaluation.** We re-hire the two expert annotators who performed the feedback annotations (A1 and A3), as well as an independent expert annotator with a master's degree in linguistics who did not perform the feedback annotations (A2). We ask for their preferences on 100 randomly sampled instances with respect to the explanations (generated \(e_{i}\) vs. fixed \(\hat{e_{i}}\)) and the paraphrases (GYAFC \(s_{f}\) vs. generated \(s_{f}\)). In addition, we ask for acceptability judgments for the preferred instance and, separately, for \(e_{f}\). We report preference-or-equal rates (pref.)4 and acceptability rates (accept.) in Table 1. Overall, we found that our dataset instances are considered acceptable (the average acceptability rates for \(e_{i},s_{f},e_{f}\) are 87%, 77%, and 98%, respectively), that the generated paraphrases are generally preferred to the ones in the GYAFC corpus (77% on average), and that the fixed explanations are preferred to or judged equal in quality to the original generations in 90% of cases on average. Annotator A2 expressed concerns that the paraphrases may sound unnatural due to excessive formality (we believe this stems from the context in which the informal expression would be uttered) and that the explanations sometimes miss punctuation errors (which, while important, is not critical for model-generated explanations). Due to the high prevalence of the positive class, there is a high chance of random agreement; hence, instead of inter-rater agreement, we provide a more granular look into the expert annotations in Table 1. We computed pairwise accuracy between annotator responses for all categories of the e-GYAFC evaluation and found that it averages 81% across all categories.

Footnote 4: We compute preference or equal preference among acceptable instances. For acceptability, we count dispreferred instances as unacceptable.

| | \(e_{i}\) pref. | \(e_{i}\) accept. | \(s_{f}\) pref. | \(s_{f}\) accept. | \(e_{f}\) accept. |
|---|---|---|---|---|---|
| A1 | 91% | 95% | 64% | 64% | 98% |
| A2 | 87% | 84% | 76% | 75% | 96% |
| A3 | 91% | 83% | 91% | 93% | 100% |

Table 1: Expert evaluation of e-GYAFC dataset quality. We report the percentage of time each item was preferred, as well as acceptability judgements.
## 4 Experiments

We evaluate several models on their ability to generate \(e_{f},s_{i},e_{i}\) given \(s_{f}\) (Formal\(\rightarrow\)Informal) and \(e_{i},s_{f},e_{f}\) given \(s_{i}\) (Informal\(\rightarrow\)Formal) on a held-out test set. We evaluate how closely the generated \(e_{i},e_{f}\) match the e-GYAFC explanations, and we evaluate the semantic closeness and paraphrase quality of \(s_{i},s_{f}\) with reference-free metrics. Note that we focus on evaluating performance on the _explainable_ style transfer task rather than on style transfer performance itself.

### 4.1 Models

We test general-purpose instruction-tuned models (in a one-shot scenario) of various sizes to show that they are not adequately equipped to deal with the explainable style transfer task. Additionally, we test models fine-tuned on our domain-specific data. See hyperparameters and instructions in Appendix B. We do not fine-tune larger models, as that contradicts the purpose of our paper: to introduce more efficient models for explainable style transfer.

**Instruction-tuned models.** All instruction-tuned models are provided with the same one-shot prompt (modulo special token requirements) and generation parameters.

* MPT-7B-Instruct: built by fine-tuning MPT-7B (Team, 2023) on a dataset derived from Databricks Dolly-15k (Databricks, 2023) and the Anthropic Helpful and Harmless (Bai et al., 2022) datasets.
* Vicuna-13B (Chiang et al., 2023): an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT5. It places first on the Huggingface Open LLM Leaderboard (Beeching et al., 2023), based on human and GPT-4 evaluation, as of the writing of this paper.
* Falcon-40B (Almazrouei et al., 2023): a causal decoder-only model trained on 1,000B tokens of RefinedWeb (Penedo et al., 2023) enhanced with curated corpora.
* Tulu-65B (Wang et al., 2023): a 65B LLaMA model fine-tuned on a mixture of instruction datasets (FLAN V2 (Longpre et al., 2023), CoT (Wei et al., 2022), Dolly (Databricks, 2023), Open Assistant 1 (Kopf et al., 2023), GPT4-Alpaca (Peng et al., 2023), Code-Alpaca (Chaudhary, 2023), and ShareGPT).

Footnote 5: sharegpt.com

**ChatGPT.** As most of our data was generated after March 1st, 2023, we use gpt-3.5-turbo-0301. However, automatic evaluation still favors this model due to architectural similarities. Note that we do not provide the long context, dataset hints, or human-feedback-based critiques in this setting.

**Fine-tuned models.** We fine-tune the models below on e-GYAFC. \(\rightarrow\) indicates fine-tuning two models, one per direction, and \(\leftrightarrow\) indicates fine-tuning on the combined data from both directions.

* FLAN-T5-XL\(_{\leftrightarrow}\) (Chung et al., 2022): an approximately 3B-parameter instruction-tuned model based on the T5 architecture (Raffel et al., 2020).
* LLaMA-7B\(_{\rightarrow}\) (Touvron et al., 2023): a model by Meta trained on 1 trillion tokens.
* Alpaca-7B\(_{\rightarrow,\leftrightarrow}\) (Taori et al., 2023): a model fine-tuned from LLaMA-7B on 52K instruction-following demonstrations. In addition, we test Alpaca-7B\(_{noexpl}\), a model fine-tuned for Formal\(\rightarrow\)Informal style transfer with no explanations provided in the fine-tuning data or in the output.

### 4.2 Automatic Metrics

* BLEU (Papineni et al., 2002): we want to measure the number of exactly matched formal and informal attributes and evidences between the generated structured explanation and the e-GYAFC reference, which makes BLEU a fitting metric.
* Mutual Implication Score (MIS) (Babakov et al., 2022): a symmetric measure of semantic similarity between texts, based on a RoBERTa (Liu et al., 2019) model fine-tuned for natural language inference and on a paraphrase detection dataset. We chose it over alternative metrics such as P-SP (Wieting et al., 2022) due to its ease of implementation in the huggingface library and its use in prior work (e.g., Patel et al., 2022).
* Formality/Informality Score6: a RoBERTa (Liu et al., 2019) model trained to predict whether English sentences are formal or informal, using GYAFC and the Online Formality Corpus (OFC) (Pavlick and Tetreault, 2016). This classifier achieves 0.97 AUC on GYAFC, making it highly reliable. We apply the sigmoid function to the logits to obtain a score between 0 and 1 (see the sketch after this list).

Footnote 6: huggingface.co/s-nlp/roberta-base-formality-ranker
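For concreteness, here is a minimal sketch of the Formality Score computation with the public checkpoint from footnote 6; treat the label-index choice as our assumption and verify it against the model card:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "s-nlp/roberta-base-formality-ranker"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def formality_score(sentence: str) -> float:
    """Sigmoid over the classifier logit, giving a formality score in [0, 1]."""
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Assumption: index 1 is the "formal" label; the scores in Table 2
    # appear to be this value scaled by 100.
    return torch.sigmoid(logits[0, 1]).item()

print(formality_score("If it is feasible, allow love to prevail."))
```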
### 4.3 Human Evaluation: Preference Judgments

We select outputs from the best fine-tuned model, the best instruction-tuned model, and ChatGPT (100 random samples) and evaluate them in terms of human preferences. We hire an expert with a master's degree in linguistics (the same as A2 above) on Upwork and ask which of the three model generations they prefer in terms of correctness and completeness of the explanations, as well as semantic preservation of the paraphrase. We also perform a crowdworker evaluation on AMT, hiring three workers per instance. To mitigate ChatGPT usage, we require annotators to provide answer justifications and reject entries that clearly used ChatGPT (on Upwork, we also block copy-pasting with JavaScript).

## 5 Results and Analysis

**Automatic evaluation.** Table 2 shows the performance of models of various types (one-shot instruction-tuned, ChatGPT, and fine-tuned), sizes (7B-175B), and task direction combinations (\(\rightarrow\), \(\leftrightarrow\)). We find that the Formal\(\rightarrow\)Informal direction is especially challenging, with only fine-tuned models able to achieve informality scores above 50 (see the Vicuna and ChatGPT examples in Table 3). We hypothesize this is due to the high perplexity of user-generated informal speech. Interestingly, smaller model size does not prevent the 13B Vicuna model from outperforming other instruction-tuned models even as large as 40B and 65B, indicating the importance of instruction-tuning data for downstream performance. Moreover, we find that fine-tuning helps smaller models reach and surpass the performance of the teacher model (our best model achieves an average score of 55.13, while ChatGPT is at 47.53), demonstrating the benefits of fine-tuning domain-specific models on data with incorporated expert feedback. The model fine-tuned without explanations (Alpaca\(_{noexpl}\)) achieves comparable performance, indicating that generating explanations does not significantly hurt performance on the standard style transfer task.

Many of the errors of instruction-tuned models are caused by their inability to follow a complex instruction that requires three steps to perform the task. For example, for all instruction-tuned models except Vicuna and ChatGPT, performance drops by 85-99% from Formality Attribute identification (step 1 of the task) to Informality Attribute identification (step 3 of the task). This can be explained by the model not fully following the instruction and not generating all three steps, as well as by compounding errors due to autoregressive generation. Meanwhile, for fine-tuned models this drop is only 48-52%; for Vicuna and ChatGPT, the drop is in line with fine-tuned models at 53% and 48%, respectively. Based on these findings, we posit that fine-tuning instruction-tuned models for better understanding of multi-step instructions could be an interesting area of future work.

**Human evaluation.** Alpaca\(_{\leftrightarrow}\) generations (the best model fine-tuned on our data according to average scores) are preferred to ChatGPT and Vicuna 53% of the time by the expert linguist (ChatGPT is preferred 42% of the time) and 36% of the time by the AMT crowdworkers (compared to 34% for ChatGPT), indicating that fine-tuned models are more aligned with both expert and crowdworker preferences. See a qualitative example in Table 3.

## 6 Discussion: downstream applications of models fine-tuned for explainable style transfer

**Informality attribute explanations provide interpretable features for authorship verification.** We turn to the task of Authorship Verification (Martindale and McKenzie, 1995; Coulthard, 2004; Neal et al., 2017), utilizing PAN 2022 (Bevendorff et al., 2022) data. This is a binary classification task of deciding whether two texts belong to the same author. We run the Alpaca\(_{\text{IF}\rightarrow\text{F}}\) model and extract explanations containing informality attributes and evidence (see Table 4). In a preliminary evaluation of the usefulness of these features, we compute the similarity between authors by measuring the percentage of overlapping attributes, considering at most 15 sentences per author, and use this percentage as a classification score (a minimal sketch of this scheme follows below). We find that this simplistic approach, which does not even consider the evidences, achieves a classification performance of 0.60 AUC, indicating a non-random signal. This points to a potential application of the explanations generated by our model as interpretable authorship features to be explored in future work.

| Attribute | Evidence |
|---|---|
| Colloquialism | "assumed they all started off low!?", "typing it out" |
| Textese | "xx" |
| Informal Vocabulary | "give you a call", "arrange something" |
| Informal Tone | "hoping to borrow a couple of charging leads" |

Table 4: Informality features could be used for authorship identification: on the left, informality attributes identified by our model; on the right, textual evidence from the author's text provided by the model.
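As a concrete illustration of the scoring scheme above, the sketch below computes attribute-overlap scores and an AUC on toy data. The Jaccard normalization is our assumption, since the exact definition of the overlap percentage is not pinned down here:

```python
from sklearn.metrics import roc_auc_score

def attribute_overlap(attrs_a: set[str], attrs_b: set[str]) -> float:
    """Share of informality attributes two authors' texts have in common
    (Jaccard overlap; one plausible reading of 'percentage of overlap')."""
    if not attrs_a or not attrs_b:
        return 0.0
    return len(attrs_a & attrs_b) / len(attrs_a | attrs_b)

# Toy verification pairs: (author-1 attributes, author-2 attributes, same-author?).
pairs = [
    ({"colloquialism", "textese"}, {"textese", "informal tone"}, 1),
    ({"informal vocabulary"}, {"colloquialism", "informal tone"}, 0),
]
scores = [attribute_overlap(a, b) for a, b, _ in pairs]
labels = [label for _, _, label in pairs]
print(roc_auc_score(labels, scores))  # overlap score used directly as the classifier
```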
**Explainable formal\(\rightarrow\)informal style transfer is an interpretable adversarial attack on AI-generated text detection methods, including retrieval.** Krishna et al. (2023) established that paraphrasing easily evades the detection of AI-generated text, and proposed a retrieval-based defense. However, we hypothesize that retrieval-based metrics will degrade as the similarity between generations becomes more ambiguous, as is the case for formality style transfer. For example, an adversarial agent might generate a post containing misinformation in the typical "formal" language produced by a language model like ChatGPT. Such text is relatively detectable by current classifiers and 100% detectable by retrieval-based methods. However, the agent might apply a style transfer model to lower the formality of the message. Alarmingly, not only does this accomplish the goal of spreading the AI-generated message more effectively, since the result looks more like user-generated text, but, as we show, it also decreases the chances of the text being detected as AI-generated by current methods.

We test this in the following setting: we use an online dataset of political tweets7 and sample 30 of them. We ask ChatGPT to generate a political commentary post on the topic of each tweet (GPT-F), as well as an informal paraphrase of said post (GPT-Inf). We manually annotate the resulting posts and select those that look like legitimate political messages posted on social media and have valid paraphrases. We then use our Alpaca\(_{\text{F}\rightarrow\text{IF}}\) model to generate an informal paraphrase of the GPT-F posts sentence by sentence. We also verify that these paraphrases are semantically valid and close to the original GPT-F post, and select 24 high-quality generations. We choose a relatively small sample because we manually verify that each paraphrase stays close to the original sentence, to ensure semantic control for the experiment.

Footnote 7: kaggle.com

We report detection scores8 from four methods surveyed by Krishna et al. (2023): GPTZero (Tian, 2023), the OpenAI classifier (OpenAI, 2023b), DetectGPT (Mitchell et al., 2023), and their proposed retrieval methods based on BM25 (Robertson and Zaragoza, 2009) or P-SP (Wieting et al., 2022) retrievers.

| Model | Size | F→IF Form.Attrs. BLEU | F→IF MIS | F→IF Informality | F→IF Inform.Attrs. BLEU | IF→F Inform.Attrs. BLEU | IF→F MIS | IF→F Formality | IF→F Form.Attrs. BLEU | Average |
|---|---|---|---|---|---|---|---|---|---|---|
| MPT | 7B | 24.59 | 51.84 | 12.35 | 2.10 | 23.26 | 46.26 | 53.33 | 0.86 | 26.82 |
| Vicuna | 13B | 23.16 | 83.24 | 36.52 | 10.97 | 27.31 | 61.22 | **92.26** | 9.88 | 43.07 |
| Falcon | 40B | 8.38 | 28.12 | 12.78 | 1.23 | 20.80 | 38.01 | 58.40 | 7.13 | 21.86 |
| Tulu | 65B | 24.90 | 19.60 | 6.68 | 0.02 | 27.76 | 26.69 | 25.50 | 0.28 | 16.43 |
| ChatGPT | 175B* | 25.48 | **86.85** | 43.69 | 13.18 | 30.12 | 77.21 | 92.06 | 11.62 | 47.53 |
| FLAN-T5\(_{\rightarrow}\) | 3B | 0.00 | 8.54 | 0.02 | 0.00 | 0.00 | 9.82 | 1.05 | 0.00 | 2.43 |
| LLaMA\(_{\rightarrow}\) | 7B | 39.64 | 85.31 | 56.10 | 19.86 | 38.02 | 81.80 | 90.91 | 25.10 | 54.59 |
| Alpaca\(_{noexpl}\) | 7B | - | 85.34 | 54.75 | - | - | **83.20** | 91.10 | - | - |
| Alpaca\(_{\rightarrow}\) | 7B | 39.98 | 84.70 | 56.74 | 19.22 | **40.56** | 81.69 | 91.12 | 24.71 | 54.84 |
| Alpaca\(_{\leftrightarrow}\) | 7B | **40.42** | 81.76 | **60.38** | **21.11** | 40.34 | 79.43 | 91.81 | **25.75** | **55.13** |

Table 2: Performance of different models across subdivisions of Formal\(\rightarrow\)Informal (F→IF) and Informal\(\rightarrow\)Formal (IF→F) style transfer. The top four rows are instruction-tuned models evaluated in a one-shot setting, row five is ChatGPT (which was used to generate the data), and the bottom rows are fine-tuned models. Best overall performance is in bold, best performance among non-GPT instruction models is underlined.

| Model | Output |
|---|---|
| **Formal (Input)** | If it is feasible, allow love to prevail. |
| **Vicuna** | Formal Attributes: passive voice ("it is feasible", "allow love to prevail"), use of verb "to be" ("is feasible", "prevail"). Informal: It's okay to let love rule. Informal Attributes: active voice ("it's okay", "let love rule"), informal verb use ("rule") |
| **ChatGPT** | Formal Attributes: conditional clause ("if it is feasible"), elevated vocabulary ("allow love to prevail"). Informal: If it's possible, just go with love. Informal Attributes: simple language ("if it's possible"), imperative form ("just go"), contraction ("it's") |
| **Alpaca\(_{\leftrightarrow}\)** | Formal Attributes: precise language ("feasible"), use of conditional clause ("if it is feasible"), lexical sophistication ("prevail"). Informal: let love win if it's possible. Informal Attributes: contraction ("it's"), casual tone ("let love win") |

Table 3: Comparison between generations from one-shot instruction-tuned models (Vicuna, ChatGPT) and our best fine-tuned model. Vicuna outputs wrong attributes; the ChatGPT paraphrase does not sound natural.

As can be seen in Table 5, the formal-to-informal transfer model significantly decreases the detection scores of all AI-generated text detection methods, including the retrieval-based ones (despite the fact that the retrieval corpus is significantly smaller than it would be in the real world). Interestingly, for the BM25 retrieval method the ChatGPT paraphrases are slightly harder to detect than those of Alpaca\(_{\text{F}\rightarrow\text{IF}}\), whereas for all other methods it is the opposite.
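To illustrate why informal paraphrases weaken the retrieval defense, here is a minimal, self-contained sketch of BM25-based detection using the rank_bm25 package; the corpus, queries, and max-score readout are toy stand-ins of ours for the setup of Krishna et al. (2023):

```python
from rank_bm25 import BM25Okapi

# Index of texts the API has generated; a real deployment would log all outputs.
generated_corpus = [
    "The proposed policy will substantially improve public transit funding.",
    "It is imperative that citizens participate in the upcoming election.",
]
bm25 = BM25Okapi([doc.lower().split() for doc in generated_corpus])

def detection_score(candidate: str) -> float:
    """Best BM25 match against the generation index; high means likely AI-generated."""
    return max(bm25.get_scores(candidate.lower().split()))

formal = "It is imperative that citizens participate in the upcoming election."
informal = "hey guys u really gotta go vote, ok??"  # style-transferred version
print(detection_score(formal), detection_score(informal))
# The informal paraphrase shares few tokens with the indexed formal text,
# so its retrieval score drops sharply.
```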
Since we used ChatGPT to generate the original posts, we could not use the watermarking methods of Kirchenbauer et al. (2023), but this can be explored in future work. This result highlights the need to investigate new methods of detecting style-transferred AI-generated text. As formality style transfer remains an effective attack, the informality features produced by our model could help improve such classifiers. We leave this investigation for future work.

## 7 Conclusion and Future Work

We propose a framework to augment a formality style transfer dataset with semi-structured natural language explanations. We leverage long context and existing data cues to guide the teacher model to produce high-quality generations. To further improve quality and incorporate expert feedback, we propose ICLEF (In-Context Learning from Expert Feedback), a method that incorporates a small amount of expert feedback via in-context learning and self-critique. We provide a rigorous evaluation of contemporaneous instruction-tuned models on the resulting dataset and identify promising areas of future work, such as multi-step instruction following. Future work could also use the generated explanations for authorship verification and AI-generated text detection.

## Limitations

The GYAFC dataset does not contain all types of informal and formal language; it mostly covers interpersonal relationships (the subset used for this paper) and entertainment. Future work could extend our approach to other style transfer datasets, including ones covering a broader range of formality. As OpenAI may deprecate the model at any time, some ChatGPT results may not be fully reproducible in the future; however, we have preserved the generations used for the experiments. While our methods are intended to produce faithful explanations, there can still be instances where a model does not rely on the attributes to complete the paraphrase. We also observed that hallucinations can still be present in our fine-tuned models' explanations, and we hope that future work will address these issues. One limitation of our method is that we used a relatively small number of experts to conduct our study. However, we believe this setting mirrors real-life conditions, where experts are usually scarce. We hope our approach provides a general framework for incorporating expert feedback that can be adjusted to experts' needs (e.g., a forensic linguist may require a different style transfer explanation than a literary critic). Fine-tuning and running inference on large models requires expensive computational resources. However, we hope that our study presents a convincing argument that fine-tuning a smaller model once may be more efficient and accurate than running a large general-purpose model with elaborate long-context prompts.

## Ethics Statement

Our model is intended for explainable style transfer. In our study, we show how style transfer can be used to evade AI-generated text detectors. Similarly to Krishna et al. (2023), we reiterate that the goal is not to provide a way to attack such systems, but to bring to the community's awareness that current detectors are easy to evade. Moreover, we draw attention to the issue of detecting text to which style-transfer paraphrasing has been applied. We hope that future work develops systems capable of defending against such attacks, perhaps utilizing the explanations generated by our system.
| Models | GPTZero | OpenAI | DetectGPT | BM25 | P-SP |
|---|---|---|---|---|---|
| GPT-F | 85.92 | 70.64 | 104.88 | 100 | 100 |
| GPT-Inf | 69.58 | 54.24 | 65.42 | **48.15** | 74.99 |
| F\(\rightarrow\)IF | **6.11** | **44.86** | **54.92** | 58.68 | **74.08** |

Table 5: Performance of various AI-generated text detectors on informal paraphrases from our model. Even retrieval methods perform poorly in this setting.

## Acknowledgements

This research is supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via the HIATUS Program contract #2022-22072200005. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.